Selected Publications


Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks
Original full-batch GCN training requires computing the representations of all nodes in the graph at every layer, which incurs high computation costs. To address this, we propose LAyer-Dependent ImportancE Sampling (LADIES). Based on the nodes sampled in the upper layer, LADIES selects their neighborhood nodes, constructs a bipartite subgraph, and computes the importance probabilities accordingly. It then samples a fixed number of nodes according to these probabilities, and recursively repeats this procedure layer by layer to construct the whole computation graph. We show, both theoretically and experimentally, that our proposed sampling algorithm outperforms previous sampling methods in terms of time, memory, and accuracy. A minimal sketch of the sampling step is shown below.
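The following NumPy sketch illustrates the layer-by-layer sampling procedure described above. The dense toy adjacency matrix, the sample sizes, and the variable names are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def ladies_sample(adj, output_nodes, samples_per_layer):
    """Sample a layered computation graph top-down, one bipartite block per layer."""
    layers = []
    upper = np.asarray(output_nodes)
    for num_samples in samples_per_layer:
        block = adj[upper, :]                   # adjacency rows of the sampled upper layer
        prob = np.square(block).sum(axis=0)     # layer-dependent importance per candidate node
        prob = prob / prob.sum()
        k = min(num_samples, int(np.count_nonzero(prob)))
        lower = np.random.choice(len(prob), size=k, replace=False, p=prob)
        # Keep only the bipartite block between the two layers, rescaled by the
        # sampling probabilities so the aggregation stays unbiased in expectation.
        bipartite = block[:, lower] / (prob[lower] * k)
        layers.append((lower, bipartite))
        upper = lower                           # the sampled nodes feed the next layer down
    return layers

# Toy usage on a random graph with self-loops.
rng = np.random.default_rng(0)
A = (rng.random((100, 100)) < 0.05).astype(float) + np.eye(100)
A = A / A.sum(axis=1, keepdims=True)            # row-normalize
for nodes, blk in ladies_sample(A, [0, 1, 2], [16, 16]):
    print(nodes.shape, blk.shape)
```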
Few-Shot Representation Learning for Out-Of-Vocabulary Words
The Conference of the Association for Computational Linguistics (ACL 2019)
When a word occurs only a few times in a corpus, existing embedding techniques cannot learn an accurate representation for it. In this paper, we formulate learning OOV embeddings as a few-shot regression problem: we fit a representation function that predicts an oracle embedding vector (defined as the embedding trained with abundant observations) from limited contexts. Specifically, we propose a hierarchical attention network to serve as the neural regression function, in which the context information of a word is encoded and aggregated from K observations. Furthermore, we use Model-Agnostic Meta-Learning (MAML) to adapt the learned model to a new corpus quickly and robustly.
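Below is a minimal PyTorch sketch of the few-shot regression setup: a small attention-based model maps K contexts of a rare word to a predicted oracle embedding. The dimensions, the aggregation scheme, and the loss are simplified assumptions, not the paper's exact hierarchical attention network or MAML procedure.

```python
import torch
import torch.nn as nn

class ContextRegressor(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.context_encoder = nn.GRU(dim, dim, batch_first=True)
        self.attn = nn.Linear(dim, 1)      # scores each of the K contexts
        self.out = nn.Linear(dim, dim)     # maps the aggregate to an embedding

    def forward(self, contexts):
        # contexts: (K, seq_len, dim) word vectors of K sentences containing the OOV word
        _, h = self.context_encoder(contexts)      # (1, K, dim) per-context encodings
        h = h.squeeze(0)                           # (K, dim)
        weights = torch.softmax(self.attn(h), 0)   # attention over the K contexts
        pooled = (weights * h).sum(0)              # aggregated context representation
        return self.out(pooled)                    # predicted oracle embedding

# One training episode: regress toward the oracle embedding of the same word.
model = ContextRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
contexts = torch.randn(8, 12, 64)      # K=8 contexts, 12 tokens each (toy data)
oracle = torch.randn(64)               # oracle embedding trained with abundant data
loss = nn.functional.cosine_embedding_loss(
    model(contexts).unsqueeze(0), oracle.unsqueeze(0), torch.ones(1))
loss.backward(); opt.step()
```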
Unsupervised Pre-Training of Graph Convolutional Networks
Training an accurate GCN model often requires a large collection of labeled data and expressive features, which might not be accessible for certain applications. In this paper, we show that a pre-trained GCN model can capture generic structural information of graphs and benefit downstream applications. We further explore three unsupervised tasks: 1) denoising graph reconstruction, 2) centrality score ranking, and 3) cluster detection, for building the pre-trained GCN model without human annotations.
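As a rough illustration, the sketch below derives pre-training targets for the three tasks from the graph itself, with no human labels. The noise rate, the choice of degree centrality, and the clustering step are stand-in assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                       # undirected toy graph

# 1) Denoising graph reconstruction: corrupt edges, train the GCN to recover A.
flip = np.triu(rng.random(A.shape) < 0.1, 1)
A_noisy = np.where(flip | flip.T, 1 - A, A)
reconstruction_target = A                            # predict the original adjacency

# 2) Centrality score ranking: rank nodes by (degree) centrality; the model is
#    trained to preserve the relative order of node pairs.
centrality_rank = np.argsort(np.argsort(-A.sum(axis=1)))

# 3) Cluster detection: pseudo-labels from a cheap clustering of the adjacency
#    rows (a stand-in for an off-the-shelf graph clustering algorithm).
cluster_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(A)

print(A_noisy.shape, centrality_rank[:5], cluster_labels[:5])
```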
Unbiased LambdaMART: An Unbiased Pairwise Learning-to-Rank Algorithm
The Web Conference 2019 (WWW 2019)
Recently, a number of algorithms under the theme of "unbiased learning-to-rank" have been proposed, which reduce position bias and allow training a high-performance ranker from click data. In this paper, we propose a novel framework for pairwise learning-to-rank. Our algorithm, Unbiased LambdaMART, jointly estimates the biases at click positions and the biases at unclick positions, and learns an unbiased ranker.
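The core idea can be sketched as reweighting each (clicked, unclicked) document pair by the inverse of its estimated position biases before pairwise training. The bias values and the toy weighting function below are illustrative assumptions; in the paper the biases are estimated jointly with the ranker rather than assumed given.

```python
import numpy as np

# Assumed examination biases per rank position (toy values).
click_bias = np.array([1.0, 0.8, 0.6, 0.45, 0.35])    # bias at click positions
unclick_bias = np.array([1.0, 0.9, 0.8, 0.7, 0.6])    # bias at unclick positions

def debiased_pairwise_weight(pos_clicked, pos_unclicked):
    """Inverse-propensity weight for a (clicked, unclicked) document pair."""
    return 1.0 / (click_bias[pos_clicked] * unclick_bias[pos_unclicked])

# A click at rank 3 paired with a skip at rank 1 gets up-weighted, compensating
# for the fact that low-ranked documents are rarely examined.
print(debiased_pairwise_weight(2, 0))   # ~1.67
print(debiased_pairwise_weight(0, 2))   # ~1.25
```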
Emoji-Powered Representation Learning for Cross-Lingual Sentiment Classification
The Web Conference 2019 (WWW 2019, Best Paper Award)
In this paper, we employ emojis, ubiquitous and emotion-rich language units, as an instrument to learn both cross-language and language-specific sentiment patterns. Specifically, we propose a novel representation learning method that uses emoji prediction as an auxiliary task to learn sentiment-aware representations for each language.
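A minimal PyTorch sketch of the idea: a shared text encoder is first trained to predict which emoji a sentence used (a label that comes for free from unlabeled text), and the resulting sentiment-aware representation is then reused by a small sentiment classifier. The sizes and encoder architecture are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID, N_EMOJI = 5000, 64, 128, 64

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
    def forward(self, tokens):                   # tokens: (batch, seq_len)
        _, (h, _) = self.rnn(self.emb(tokens))
        return h.squeeze(0)                      # (batch, HID) sentence vector

encoder = Encoder()
emoji_head = nn.Linear(HID, N_EMOJI)             # pre-training task: which emoji?
sentiment_head = nn.Linear(HID, 2)               # downstream task: positive / negative

# Phase 1: emoji prediction on unlabeled text (the emoji is the "free" label).
tweets = torch.randint(0, VOCAB, (32, 20))
emoji_labels = torch.randint(0, N_EMOJI, (32,))
loss_pretrain = nn.functional.cross_entropy(emoji_head(encoder(tweets)), emoji_labels)

# Phase 2: reuse the same encoder for sentiment classification in each language.
labeled = torch.randint(0, VOCAB, (8, 20))
sentiment = torch.randint(0, 2, (8,))
loss_sentiment = nn.functional.cross_entropy(sentiment_head(encoder(labeled)), sentiment)
```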
Listening to Chaotic Whispers: A Deep Learning Framework for News-oriented Stock Trend Prediction
The International Conference on Web Search and Data Mining (WSDM 2018)
Precise stock trend prediction is very difficult due to the highly volatile and non-stationary nature of the stock market. The quality, trustworthiness, and comprehensiveness of online content related to the stock market vary drastically, and a large portion consists of low-quality news, comments, or even rumors. To address this challenge, we design a Hybrid Attention Network (HAN) to predict the stock trend based on the sequence of recent related news, with a self-paced learning mechanism to guide efficient learning.
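Below is a hedged PyTorch sketch of a news-driven trend predictor with two attention levels (over news items within a day, then over recent days), in the spirit of a hybrid attention design. The dimensions, the encoder, and the loss are simplified assumptions, not the paper's architecture, and the self-paced weighting is only hinted at in a comment.

```python
import torch
import torch.nn as nn

class NewsTrendModel(nn.Module):
    def __init__(self, dim=64, n_classes=3):     # e.g. down / flat / up
        super().__init__()
        self.news_attn = nn.Linear(dim, 1)       # attention over news within one day
        self.day_rnn = nn.GRU(dim, dim, batch_first=True)
        self.day_attn = nn.Linear(dim, 1)        # attention over the day sequence
        self.cls = nn.Linear(dim, n_classes)

    def forward(self, news):                     # news: (days, news_per_day, dim)
        a = torch.softmax(self.news_attn(news), dim=1)
        day_vecs = (a * news).sum(dim=1)         # (days, dim) daily summaries
        seq, _ = self.day_rnn(day_vecs.unsqueeze(0))
        b = torch.softmax(self.day_attn(seq), dim=1)
        market_vec = (b * seq).sum(dim=1)        # (1, dim) summary of recent days
        return self.cls(market_vec)              # (1, n_classes) trend logits

model = NewsTrendModel()
news_embeddings = torch.randn(10, 15, 64)        # 10 days, 15 news items per day (toy)
logits = model(news_embeddings)
# Self-paced flavor: low-loss (easy) samples can be emphasized early in training.
loss = nn.functional.cross_entropy(logits, torch.tensor([2]))
print(logits.shape, float(loss))
```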
Aladdin: Automating Release of Deep-Link APIs on Android
The Web Conference 2018 (WWW 2018)
Recently, deep links have been advocated by major companies to enable targeting and opening a specific page of an app externally through an accessible uniform resource identifier (URI). In this paper, we propose the Aladdin approach and its supporting tool to release deep links that access arbitrary locations of existing apps, including a novel cooperative framework that synthesizes static and dynamic analysis while requiring minimal developer input and configuration.

Contact