Selected Publications


Empowering Language Models with Knowledge Graph Reasoning for Open-Domain Question Answering
The Conference on Empirical Methods in Natural Language Processing (EMNLP 2022)
We propose a novel symbolic Knowledge Graph (KG) reasoning layer that can be flexibly plugged into most existing Language Models (LMs), allowing LMs to interact with the KG and unifying retrieval and reasoning in an end-to-end framework. OREO-LM improves RoBERTa and T5 on various QA tasks, and the generated reasoning paths help interpret the model's decisions.
Improving Multi-Task Generalization via Regularizing Spurious Correlation
The Conference on Neural Information Processing Systems (NeurIPS 2022)
We point out the unique challenges of the spurious correlation problem in the multi-task setting that influence generalization. We propose the Multi-Task Causal Representation Learning (MT-CRL) framework to 1) learn disentangled neural modules, 2) learn a task-to-module causal graph, and 3) regularize spurious correlations over the learned causal graph.
Zero-shot Transfer Learning within a Heterogeneous Graph via Knowledge Transfer Networks
The Conference on Neural Information Processing Systems (NeurIPS 2022)
We propose a zero-shot transfer learning module for heterogeneous graph neural networks that transfers knowledge from label-abundant node types to zero-labeled node types through rich relational information given in a single heterogeneous graph.
Fuzzy Logic based Logical Query Answering on Knowledge Graph
AAAI Conference on Artificial Intelligence (AAAI 2022)
We propose FuzzQE, a fuzzy logic based logical query embedding framework for answering FOL queries over KGs. FuzzQE defines logical operators in a principled and learning-free manner, allowing it to be trained on the KG alone, without any complex queries.
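As an illustrative sketch of what "learning-free" logical operators can look like, the standard product fuzzy logic defines conjunction, disjunction, and negation as fixed functions over membership scores in [0, 1]; FuzzQE's exact operator choices and embedding space may differ:

```python
import numpy as np

# Product fuzzy logic operators over entity membership scores in [0, 1].
# Generic fuzzy-logic sketch; not FuzzQE's exact formulation.

def f_and(x, y):
    """Fuzzy conjunction (product t-norm)."""
    return x * y

def f_or(x, y):
    """Fuzzy disjunction (probabilistic sum t-conorm)."""
    return x + y - x * y

def f_not(x):
    """Fuzzy negation."""
    return 1.0 - x

# Hypothetical scores of three candidate entities for two sub-queries.
p = np.array([0.9, 0.2, 0.7])
q = np.array([0.8, 0.5, 0.1])

conj = f_and(p, q)   # [0.72, 0.10, 0.07]
disj = f_or(p, q)    # [0.98, 0.60, 0.73]
neg = f_not(p)       # [0.10, 0.80, 0.30]
```

Because these operators contain no trainable parameters, only the entity and relation embeddings that produce the scores need to be learned, which is why plain KG triples suffice as training data.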
Relation-Guided Pre-Training for Open-Domain Question Answering
The Conference on Empirical Methods in Natural Language Processing (EMNLP-Finding 2021)
We propose RGPT-QA, which synthesizes QA pairs from relation triplets in Wikidata and Wikipedia to pre-train an open-domain QA model, improving QA performance, especially on questions with long-tail relations.
Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning
The Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
We construct a Geo-Diverse Visual Commonsense Reasoning dataset (GD-VCR) to test Vision-Language models' ability to understand cultural and geo-location-specific commonsense. We find that the performance of SOTA VL models on non-Western regions (e.g., East Asia, South Asia, and Africa) is significantly lower than on Western regions.
GPT-GNN: Generative Pre-Training of Graph Neural Networks
The Conference on Knowledge Discovery and Data Mining (KDD 2020)
We introduce a self-supervised graph generation task to pre-train GNNs. We factorize the likelihood of graph generation into two components, 1) attribute generation and 2) edge generation, without losing their mutual dependency.
Heterogeneous Graph Transformer
The Web Conference (WWW 2020)
We present the Heterogeneous Graph Transformer (HGT) architecture for modeling Web-scale heterogeneous (nodes and edges have multiple types) and dynamic graphs. HGT automatically learns important meta-paths for different downstream tasks.
Improving Neural Language Generation with Spectrum Control
The International Conference on Learning Representations (ICLR 2020)
We propose a novel spectrum control approach to address the representation degeneration problem in neural language generation. The core idea of our method is to directly guide the spectrum of the output embedding matrix during training with a slow-decaying singular value prior distribution through a reparameterization framework.
Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks
The Conference on Neural Information Processing Systems (NeurIPS 2019)
We propose LAyer-Dependent ImportancE Sampling (LADIES). Based on the nodes sampled in the upper layer, LADIES selects their neighborhood nodes, computes the importance probability accordingly, and samples a fixed number of nodes from them.
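The per-layer sampling step above can be sketched as follows; this is a minimal illustration with a toy adjacency matrix and hypothetical names (`ladies_sample_layer`, `upper_nodes`), omitting details such as reweighting the sampled adjacency:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric adjacency matrix of a small random graph (assumed example).
A = (rng.random((10, 10)) < 0.3).astype(float)
A = np.maximum(A, A.T)

def ladies_sample_layer(A, upper_nodes, n_sample, rng):
    """Sample lower-layer nodes conditioned on the upper layer's nodes.

    Each candidate node v gets importance probability proportional to the
    squared norm of its adjacency column restricted to upper_nodes, so only
    neighbors of the upper layer can be drawn.
    """
    scores = np.square(A[upper_nodes, :]).sum(axis=0)
    probs = scores / scores.sum()
    n_sample = min(n_sample, np.count_nonzero(probs))
    return rng.choice(len(probs), size=n_sample, replace=False, p=probs)

upper = np.array([0, 3, 7])  # nodes already sampled for the layer above
lower = ladies_sample_layer(A, upper, n_sample=4, rng=rng)
```

Because candidates outside the upper layer's neighborhood receive zero probability, every sampled node is connected to the layer above, which keeps the sampled computation graph dense while bounding its per-layer size.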
Few-Shot Representation Learning for Out-Of-Vocabulary Words
The Conference of the Association for Computational Linguistics (ACL 2019)
We formulate the learning of OOV embeddings as a few-shot regression problem: predicting an oracle embedding vector (defined as an embedding trained with abundant observations) based on only K contexts. Specifically, we use Model-Agnostic Meta-Learning (MAML) to adapt a hierarchical Transformer to the new corpus quickly and robustly.
Unbiased LambdaMART: An Unbiased Pairwise Learning-to-Rank Algorithm
The Web Conference (WWW 2019)
We propose a novel framework for pairwise learning-to-rank. Our algorithm, Unbiased LambdaMART, can jointly estimate the biases at click positions and at unclick positions, and learn an unbiased ranker.
Emoji-Powered Representation Learning for Cross-Lingual Sentiment Classification
The Web Conference (WWW 2019, Best Full Paper Award)
We employ the emoji-prediction task as an instrument to learn both cross-language and language-specific sentiment patterns across different languages.
Listening to Chaotic Whispers: A Deep Learning Framework for News-oriented Stock Trend Prediction
The Conference on Web Search and Data Mining (WSDM 2018)
We design a Hybrid Attention Network (HAN) to predict stock trends from sequences of recent related news, with a self-paced learning mechanism to guide efficient learning.

Contact