Selected Publications


Heterogeneous Graph Transformer
The Web Conference (WWW 2020)
Most GNNs are designed for homogeneous graphs, in which all nodes and edges share the same feature space and representation distribution, making them ill-suited to representing evolving heterogeneous structures. In this paper, we present the Heterogeneous Graph Transformer (HGT) architecture for modeling Web-scale heterogeneous and dynamic graphs such as the Microsoft Academic Graph.
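To make the idea concrete, here is a minimal PyTorch sketch (illustrative names only, not the released implementation) of HGT's core mechanism: attention whose key/query/value projections depend on the node types and whose interaction matrix depends on the edge type. The full model also includes relative temporal encoding and heterogeneous mini-batch sampling, both omitted here.

    import torch
    import torch.nn as nn

    class TypedAttention(nn.Module):
        """One head of node/edge-type-dependent attention (HGT-style sketch)."""
        def __init__(self, dim, node_types, edge_types):
            super().__init__()
            # Separate projections per node type ...
            self.key = nn.ModuleDict({t: nn.Linear(dim, dim) for t in node_types})
            self.query = nn.ModuleDict({t: nn.Linear(dim, dim) for t in node_types})
            self.value = nn.ModuleDict({t: nn.Linear(dim, dim) for t in node_types})
            # ... and a learnable interaction matrix per edge type.
            self.w_att = nn.ParameterDict({e: nn.Parameter(torch.eye(dim))
                                           for e in edge_types})
            self.dim = dim

        def forward(self, h_nbr, nbr_type, h_tgt, tgt_type, edge_type):
            # h_nbr: (n_neighbors, dim) features of same-typed neighbors;
            # h_tgt: (dim,) the target node being updated.
            k = self.key[nbr_type](h_nbr)
            q = self.query[tgt_type](h_tgt)
            v = self.value[nbr_type](h_nbr)
            score = (k @ self.w_att[edge_type] @ q) / self.dim ** 0.5
            att = torch.softmax(score, dim=0)      # attention over neighbors
            return att @ v                         # aggregated message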
Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks
The Conference on Neural Information Processing Systems (NeurIPS 2019)
Full-batch GCN training requires computing the representations of all nodes in the graph, which incurs a high computation cost. To address this, we propose LAyer-Dependent ImportancE Sampling (LADIES). Based on the nodes sampled in the upper layer, LADIES selects their neighborhood nodes, computes importance probabilities over them, and samples a fixed number of nodes from that neighborhood. We show, both theoretically and experimentally, that LADIES outperforms previous sampling methods in time, memory, and accuracy.
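As a rough illustration, the sketch below (NumPy, dense matrices for readability; the real implementation works on sparse matrices, and all names are ours) performs one layer-dependent sampling step: restrict the normalized adjacency to the rows of the already-sampled upper-layer nodes, sample columns proportionally to their squared norms, and reweight for unbiasedness.

    import numpy as np

    def ladies_step(adj_norm, upper_nodes, n_samples, rng=None):
        # adj_norm: (N, N) normalized adjacency; upper_nodes: indices kept
        # in the layer above. Assumes the neighborhood has >= n_samples nodes.
        rng = rng or np.random.default_rng()
        # Only the neighborhood of the upper layer gets probability mass.
        sub = adj_norm[upper_nodes, :]                   # (|upper|, N)
        # Layer-dependent importance: squared column norms (variance-reducing).
        p = (sub ** 2).sum(axis=0)
        p /= p.sum()
        # Sample a fixed number of nodes for the layer below.
        lower_nodes = rng.choice(len(p), size=n_samples, replace=False, p=p)
        # Rescale sampled columns so the layer-wise estimator stays unbiased.
        weights = 1.0 / (n_samples * p[lower_nodes])
        return lower_nodes, sub[:, lower_nodes] * weights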
Few-Shot Representation Learning for Out-Of-Vocabulary Words
The Conference of the Association for Computational Linguistics (ACL 2019)
When a word occurs only a few times in a corpus, existing embedding techniques fail to learn an accurate representation for it. In this paper, we formulate the learning of OOV embeddings as a few-shot regression problem: we fit a representation function to predict an oracle embedding vector (defined as the embedding trained with abundant observations) from limited contexts. Specifically, we propose a hierarchical attention network to serve as the neural regression function, in which the context information of a word is encoded and aggregated from K observations. Furthermore, we propose to use Model-Agnostic Meta-Learning (MAML) to adapt the learned model to a new corpus quickly and robustly.
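A hedged PyTorch sketch of this setup: `encoder` stands in for the paper's hierarchical attention network, the loss fits its output to the oracle vector, and one first-order MAML inner step adapts the parameters to a new corpus. All names here are illustrative.

    import torch

    def oov_loss(encoder, contexts, oracle_vec):
        # contexts: (K, ctx_len, dim) -- the K observed sentences of one word.
        pred = encoder(contexts)                 # aggregate contexts -> (dim,)
        return 1 - torch.cosine_similarity(pred, oracle_vec, dim=0)

    def maml_inner_step(encoder, support_set, lr_inner=1e-2):
        # One inner-loop update on the new corpus: a few such steps are
        # enough to specialize the meta-learned regression function.
        loss = sum(oov_loss(encoder, c, v) for c, v in support_set)
        grads = torch.autograd.grad(loss, list(encoder.parameters()))
        return [p - lr_inner * g for p, g in zip(encoder.parameters(), grads)]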
Unsupervised Pre-Training of Graph Convolutional Networks
Training an accurate GCN model requires a large collection of labeled data and expressive features, which may not be available. In this paper, we show that a pre-trained GCN model can capture generic structural information of graphs and benefit downstream applications. We further explore three unsupervised tasks for building the pre-trained GCN model without human annotations: 1) denoising graph reconstruction, 2) centrality score ranking, and 3) cluster detection.
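As an illustration of the first task, the sketch below (PyTorch, names ours) corrupts a graph by dropping edges and trains the GCN to reconstruct the original adjacency from the corrupted input.

    import torch
    import torch.nn.functional as F

    def denoising_reconstruction_loss(gcn, x, adj, drop_prob=0.2):
        # adj: (N, N) binary adjacency. Randomly delete edges to obtain a
        # noisy input graph.
        mask = (torch.rand_like(adj) > drop_prob).float()
        z = gcn(x, adj * mask)                   # (N, dim) node embeddings
        # Inner products between embeddings predict the true edges.
        logits = z @ z.t()
        return F.binary_cross_entropy_with_logits(logits, adj)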
Unbiased LambdaMART: An Unbiased Pairwise Learning-to-Rank Algorithm
The Web Conference (WWW 2019)
Recently, a number of algorithms have been proposed under the theme of 'unbiased learning-to-rank', which reduce position bias and train a high-performance ranker from click data. In this paper, we propose a novel framework for pairwise learning-to-rank. Our algorithm, Unbiased LambdaMART, jointly estimates the biases at click positions and at unclick positions, and learns an unbiased ranker.
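The sketch below gives the flavor of the joint estimation (the paper's exact fixed-point update differs in detail; the names and normalization are ours): pairwise losses accumulated under the current ranker drive a propensity update, and inverse propensities weight each click/unclick pair.

    import numpy as np

    def update_propensities(pair_loss, p=1.0):
        # pair_loss[i, j]: accumulated pairwise loss between clicks at
        # position i and unclicks at position j under the current ranker.
        # Fixed-point update normalized to position 0; p is a regularizer.
        t_click = (pair_loss.sum(axis=1) / pair_loss[0].sum()) ** (1 / (p + 1))
        t_unclick = (pair_loss.sum(axis=0) / pair_loss[:, 0].sum()) ** (1 / (p + 1))
        return t_click, t_unclick

    def pair_weight(t_click, t_unclick, i, j):
        # Inverse-propensity weight for LambdaMART's lambda of a
        # (clicked at position i, unclicked at position j) pair.
        return 1.0 / (t_click[i] * t_unclick[j])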
Emoji-Powered Representation Learning for Cross-Lingual Sentiment Classification
The Web Conference (WWW 2019, Best Paper Award)
In this paper, we employ emojis, ubiquitous and emotional language units, as an instrument to learn both the cross-language and language-specific sentiment patterns of different languages. We propose a novel representation learning method that uses emoji prediction as a pretext task to learn a sentiment-aware representation for each language.
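A minimal sketch of the pretext task, assuming PyTorch (the encoder and dimensions are placeholders): a shared text encoder is trained to predict which emoji accompanied a post, and is then reused for downstream sentiment classification.

    import torch.nn as nn

    class EmojiPretrainer(nn.Module):
        def __init__(self, encoder, hidden_dim, n_emojis=64):
            super().__init__()
            self.encoder = encoder               # e.g. an LSTM per language
            self.head = nn.Linear(hidden_dim, n_emojis)

        def forward(self, tokens):
            h = self.encoder(tokens)             # (batch, hidden_dim)
            # Emoji logits; train with cross-entropy against the emoji
            # actually used, then reuse the encoder for sentiment tasks.
            return self.head(h)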
Listening to Chaotic Whispers: A Deep Learning Framework for News-oriented Stock Trend Prediction
The International Conference on Web Search and Data Mining (WSDM 2018)
Precise stock trend prediction is difficult because of the highly volatile and non-stationary nature of the stock market, and the quality, trustworthiness, and comprehensiveness of online content about it vary drastically. To address these challenges, we design a Hybrid Attention Network (HAN) that predicts the stock trend from the sequence of recent related news, with a self-paced learning mechanism to guide efficient learning.
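A minimal PyTorch sketch of the two attention levels (names ours; the self-paced learning schedule is omitted): attend over the news items within each day, then over the sequence of recent days, before classifying the trend.

    import torch
    import torch.nn as nn

    class HybridAttention(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.news_att = nn.Linear(dim, 1)   # scores each news item in a day
            self.gru = nn.GRU(dim, dim, batch_first=True)
            self.day_att = nn.Linear(dim, 1)    # scores each day in the window
            self.out = nn.Linear(dim, 3)        # up / flat / down

        def forward(self, news):                # (batch, days, n_news, dim)
            a = torch.softmax(self.news_att(news), dim=2)
            day_vec = (a * news).sum(dim=2)     # (batch, days, dim)
            seq, _ = self.gru(day_vec)
            b = torch.softmax(self.day_att(seq), dim=1)
            market = (b * seq).sum(dim=1)       # (batch, dim)
            return self.out(market)             # trend logits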

Contact