Selected Publications


SciInstruct: a Self-Reflective Instruction Annotated Dataset for Training Scientific Language Models
Conference on Neural Information Processing Systems (NeurIPS 2024, Dataset Track)
We use an LLM to self-curate SciInstruct, a diverse and high-quality dataset of college-level mathematics, physics, chemistry, and formal proofs. Using SciInstruct to fine-tune the ChatGLM family of LLMs, we introduce SciGLM, a suite of scientific language models for college-level mathematical and scientific reasoning.
ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search
Conference on Neural Information Processing Systems (NeurIPS 2024)
In this paper, we develop a reinforced self-training approach, called ReST-MCTS*, that integrates process reward guidance with tree search (MCTS*) to collect higher-quality reasoning traces as well as per-step values for training policy and reward models. ReST-MCTS* circumvents the per-step manual annotation typically used to train process reward models through tree-search-based reinforcement learning: given the oracle final correct answer, ReST-MCTS* infers correct process rewards by estimating the probability that a given step leads to the correct answer. These inferred rewards serve dual purposes: they act as value targets for further refining the process reward model, and they facilitate the selection of high-quality traces for policy model self-training.
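As a rough illustration of the process-reward idea (a minimal sketch, not the paper's released code): with an oracle final answer available, the value of a partial reasoning trace can be estimated as the fraction of policy rollouts from that trace that reach the correct answer. The helpers `generate_continuation` and `extract_answer` are hypothetical stand-ins for a policy LLM and an answer parser.

```python
from typing import Callable, List

def estimate_step_value(
    partial_trace: List[str],                                  # reasoning steps so far
    oracle_answer: str,                                        # known correct final answer
    generate_continuation: Callable[[List[str]], List[str]],   # policy rollout (hypothetical)
    extract_answer: Callable[[List[str]], str],                # answer parser (hypothetical)
    n_rollouts: int = 8,
) -> float:
    """Monte Carlo estimate of P(correct final answer | partial trace)."""
    successes = 0
    for _ in range(n_rollouts):
        full_trace = partial_trace + generate_continuation(partial_trace)
        if extract_answer(full_trace) == oracle_answer:
            successes += 1
    # this estimate can serve as a value target for the process reward model
    return successes / n_rollouts
```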
Physics-Informed Regularization for Domain-Agnostic Dynamical System Modeling
Conference on Neural Information Processing Systems (NeurIPS 2024); Best Paper Award at the NeurIPS 2023 Deep Learning and Differential Equations (DLDE) Workshop
We propose a physics-law-guided regularization term that imposes a soft constraint of time-reversal symmetry. The term is applied to GraphODE models for multi-agent dynamical systems and shown to outperform several baselines on a variety of benchmarks, including the challenging pendulum problem.
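One simple way to encode such a soft time-reversal constraint (a minimal sketch under assumed interfaces, not the paper's exact formulation): integrate the learned dynamics forward, integrate them backward from the final state, and penalize the mismatch with the forward trajectory. Here `ode_func` is a hypothetical learned derivative network, and the explicit-Euler integrator is only for illustration.

```python
import torch

def euler_rollout(ode_func, x0, dt, n_steps, direction=1.0):
    """Explicit-Euler integration; direction=-1.0 runs the dynamics in reverse."""
    traj, x = [x0], x0
    for _ in range(n_steps):
        x = x + direction * dt * ode_func(x)
        traj.append(x)
    return torch.stack(traj)                                   # (n_steps + 1, ...) trajectory

def time_reversal_loss(ode_func, x0, dt=0.01, n_steps=50):
    forward = euler_rollout(ode_func, x0, dt, n_steps, direction=1.0)
    # a time-reversal-symmetric system, run backward from the final state,
    # should retrace the forward trajectory
    backward = euler_rollout(ode_func, forward[-1], dt, n_steps, direction=-1.0)
    return ((forward - backward.flip(0)) ** 2).mean()          # soft symmetry penalty
```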
Enhancing Large Vision Language Models with Self-Training on Image Comprehension
Conference on Neural Information Processing Systems (NeurIPS 2024)
We introduce Self-Training on Image Comprehension (STIC), which self-constructs a preference dataset for image descriptions from unlabeled images. Preferred responses are generated through a step-by-step prompt, while dispreferred responses are generated from either corrupted images or misleading prompts.
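The self-construction step can be pictured with a small sketch (assumed interfaces, not the released STIC code): the preferred response comes from a careful step-by-step description of the original image, while the dispreferred response comes from either a corrupted image or a misleading prompt. `lvlm_generate(image, prompt)` and `corrupt_image(image)` are hypothetical helpers.

```python
import random

STEPWISE_PROMPT = (
    "Describe the image step by step: first the salient objects, then their attributes, "
    "then the relations between them, and finally a one-sentence summary."
)
MISLEADING_PROMPTS = [
    "Describe the image, assuming it was taken at night even if it was not.",
    "Describe the image and mention an object that is not actually present.",
]

def build_preference_pair(image, lvlm_generate, corrupt_image):
    # preferred: step-by-step description of the original image
    chosen = lvlm_generate(image, STEPWISE_PROMPT)
    # dispreferred: describe a corrupted image, or follow a misleading prompt
    if random.random() < 0.5:
        rejected = lvlm_generate(corrupt_image(image), STEPWISE_PROMPT)
    else:
        rejected = lvlm_generate(image, random.choice(MISLEADING_PROMPTS))
    return {"prompt": STEPWISE_PROMPT, "chosen": chosen, "rejected": rejected}
```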
Can Large Language Model Agents Simulate Human Trust Behavior?
Conference on Neural Information Processing Systems (NeurIPS 2024)
Under the framework of Trust Games, we discover that LLM agents can exhibit high behavioral alignment with humans regarding trust behaviors, indicating the feasibility of simulating human trust behaviors with LLM agents.
SceneCraft: An LLM Agent for Synthesizing 3D Scene as Blender Code
International Conference on Machine Learning (ICML 2024, Oral Presentation)
We introduce SceneCraft, an LLM agent that converts text descriptions into Blender-executable Python scripts rendering complex scenes with up to a hundred 3D assets. SceneCraft keeps improving itself via library learning.
Symbolic Music Generation with Non-Differentiable Rule Guided Diffusion
International Conference on Machine Learning (ICML 2024, Oral Presentation)
We study the problem of symbolic music generation, with a technical focus on guidance by non-differentiable musical rules (e.g., note density or chord progression). We propose Stochastic Control Guidance (SCG), a novel guidance method that requires only forward evaluation of rule functions and works with pre-trained diffusion models in a plug-and-play way, achieving training-free guidance for non-differentiable rules for the first time.
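To give a flavor of forward-only, training-free rule guidance (a minimal sketch under assumed interfaces, illustrating the plug-and-play idea rather than the exact SCG algorithm): at each reverse diffusion step, draw several stochastic candidates and keep the one whose predicted clean sample best satisfies the rule. `model.reverse_step` and `rule_fn` are hypothetical.

```python
def guided_reverse_step(model, x_t, t, rule_fn, n_candidates=8):
    """One reverse diffusion step steered by a (possibly non-differentiable) rule.

    model.reverse_step(x_t, t) -> (x_prev, x0_pred) is an assumed sampler interface;
    rule_fn(x0_pred) -> float scores how well the sample satisfies the musical rule.
    """
    best_x_prev, best_score = None, -float("inf")
    for _ in range(n_candidates):
        x_prev, x0_pred = model.reverse_step(x_t, t)   # stochastic candidate
        score = rule_fn(x0_pred)                       # forward evaluation only, no gradients
        if score > best_score:
            best_x_prev, best_score = x_prev, score
    return best_x_prev
```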
SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models
International Conference on Machine Learning (ICML 2024)
We propose SciBench to systematically examine LLMs' reasoning abilities on complex scientific problem solving. SciBench contains two carefully curated datasets: an open set featuring a range of collegiate-level scientific problems drawn from mathematics, chemistry, and physics textbooks, and a closed set comprising problems from undergraduate-level exams in computer science and mathematics.
Strategist: Learning Strategic Skills by LLMs via Bi-Level Tree Search
ICML 2024, Automated Reinforcement Learning (AutoRL) Workshop
We propose Strategist, a method that allows LLMs to learn new skills for multi-agent games through a bi-level tree search combining high-level strategic learning with low-level simulated self-play for feedback. Strategist outperforms RL and other LLM-based approaches on the Game of Pure Strategy and The Resistance: Avalon in both action planning and dialogue generation.
Self-Control of LLM Behaviors by Compressing Suffix Gradient into Prefix Controller
ICML 2024, Workshop on Mechanistic Interpretability
We propose Self-Control, a novel method that uses suffix gradients to control the behavior of large language models (LLMs) without explicit human annotations, across multiple domains including emotional modulation, harmlessness, and complex reasoning.
AVIS: Autonomous Visual Information Seeking with Large Language Model Agent
Conference on Neural Information Processing Systems (NeurIPS 2023)
We propose AVIS, an autonomous information-seeking visual question answering framework. Our method leverages a Large Language Model (LLM) to dynamically strategize the use of external tools and to investigate their outputs, thereby acquiring the knowledge needed to answer the posed questions.
Learning to Group Auxiliary Datasets for Molecule
Conference on Neural Information Processing Systems (NeurIPS 2023)
We propose MolGroup to address the limited-data problem in molecule property prediction by leveraging auxiliary datasets to improve performance on target datasets, via a routing mechanism with bi-level optimization.
Towards a Comprehensive Benchmark for FPGA Targeted High-Level Synthesis
Conference on Neural Information Processing Systems (NeurIPS 2023, Dataset Track)
High-level synthesis (HLS) raises the abstraction level in hardware design, enabling domain-specific accelerators (DSAs) on FPGAs to be designed in C/C++ instead of hardware description languages. We present HLSYN, a comprehensive dataset for training and evaluating machine learning models that predict the quality of HLS hardware designs.
AvalonBench: Evaluating LLMs Playing the Game of Avalon
NeurIPS 2023, Foundation Models for Decision Making (FMDM) workshop
We introduce AvalonBench, a comprehensive game environment tailored for evaluating LLM agents in multi-agent settings. The benchmark incorporates (1) a game environment for Avalon, (2) rule-based bots as baseline opponents, and (3) ReAct-style LLM agents with tailored prompts for each role.
REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory
Conference on Computer Vision and Pattern Recognition (CVPR 2023, Highlight)
We propose an end-to-end Retrieval-Augmented Visual Language Model (REVEAL) that learns to encode world knowledge into a large-scale memory, and to retrieve from it to answer knowledge-intensive queries. The key novelty is that the memory, retriever and generator are all pre-trained end-to-end to use a diverse set of multimodal knowledge sources, bringing significant gains.
Empowering Language Models with Knowledge Graph Reasoning for Open-Domain Question Answering
Conference on Empirical Methods in Natural Language Processing (EMNLP 2022)
We propose a novel symbolic Knowledge Graph (KG) reasoning layer that can be flexibly plugged into most existing Language Models (LMs), allowing LMs to interact with KGs and unifying retrieval and reasoning in an end-to-end framework. OREO-LM improves RoBERTa and T5 on various QA tasks, and the generated reasoning paths help interpret the model's decisions.
Improving Multi-Task Generalization via Regularizing Spurious Correlation
Conference on Neural Information Processing Systems (NeurIPS 2022, Spotlight Presentation)
We point out the unique challenges that spurious correlations pose for generalization in the multi-task setting. We propose the Multi-Task Causal Representation Learning (MT-CRL) framework to learn 1) disentangled neural modules, 2) a task-to-module causal graph, and 3) a regularizer of spurious correlations over the learned causal graph.
Zero-shot Transfer Learning within a Heterogeneous Graph via Knowledge Transfer Networks
Conference on Neural Information Processing Systems (NeurIPS 2022)
We propose a zero-shot transfer learning module for heterogeneous graph neural networks that transfers knowledge from label-abundant node types to zero-labeled node types through rich relational information given in a single heterogeneous graph.
Fuzzy Logic based Logical Query Answering on Knowledge Graph
AAAI Conference on Artificial Intelligence (AAAI 2022, Oral Presentation)
We propose FuzzQE, a fuzzy-logic-based logical query embedding framework for answering FOL queries over KGs. FuzzQE defines logical operators in a principled and learning-free manner, so the model can be trained with only the KG and no complex queries.
Relation-Guided Pre-Training for Open-Domain Question Answering
Conference on Empirical Methods in Natural Language Processing (EMNLP-Finding 2021)
We propose RGPT-QA, which synthesizes QA pairs from relation triplets in Wikidata and Wikipedia for pre-training an open-domain QA model, improving QA performance, especially for questions involving long-tail relations.
Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning
Conference on Empirical Methods in Natural Language Processing (EMNLP 2021, Oral Presentation)
We construct a Geo-Diverse Visual Commonsense Reasoning dataset (GD-VCR) to test Vision-Language models' ability to understand cultural and geo-location-specific commonsense. We find that the performance of SOTA VL models on non-Western regions (e.g., East Asia, South Asia, and Africa) is significantly lower than on Western regions.
GPT-GNN: Generative Pre-Training of Graph Neural Networks
Conference on Knowledge Discovery and Data Mining (KDD 2020, Oral, Top-10 Cited Paper in KDD'20)
We introduce a self-supervised graph generation task to pre-train GNNs. We factorize the likelihood of graph generation into two components, 1) attribute generation and 2) edge generation, without losing their mutual dependency.
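The two-component factorization can be sketched as a pre-training loss with an attribute-reconstruction term and an edge-prediction term (a minimal sketch with hypothetical inputs, not the GPT-GNN code): `node_repr` are GNN node representations, `attr_decoder` is a hypothetical decoder head, and the edge pairs are index tensors of observed and negatively sampled edges.

```python
import torch
import torch.nn.functional as F

def generative_pretrain_loss(node_repr, target_attrs, pos_pairs, neg_pairs, attr_decoder):
    # 1) attribute generation: reconstruct (masked) node attributes from representations
    attr_loss = F.mse_loss(attr_decoder(node_repr), target_attrs)
    # 2) edge generation: score node pairs and contrast observed edges with negatives
    pos_scores = (node_repr[pos_pairs[:, 0]] * node_repr[pos_pairs[:, 1]]).sum(-1)
    neg_scores = (node_repr[neg_pairs[:, 0]] * node_repr[neg_pairs[:, 1]]).sum(-1)
    edge_loss = F.binary_cross_entropy_with_logits(
        torch.cat([pos_scores, neg_scores]),
        torch.cat([torch.ones_like(pos_scores), torch.zeros_like(neg_scores)]),
    )
    return attr_loss + edge_loss
```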
Heterogeneous Graph Transformer
The Web Conference (WWW 2020, Most Cited Paper in WWW'20)
We present the Heterogeneous Graph Transformer (HGT) architecture for modeling Web-scale heterogeneous (nodes and edges have multiple types) and dynamic graphs. HGT automatically learns important meta-paths for different downstream tasks.
Improving Neural Language Generation with Spectrum Control
The International Conference on Learning Representations (ICLR 2020)
We propose a novel spectrum control approach to address the representation degeneration problem in neural language generation. The core idea is to directly guide the spectrum of the output embedding matrix during training with a slow-decaying singular value prior distribution through a reparameterization framework.
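One simple way to realize the reparameterization (a minimal sketch with an assumed decay schedule, not the paper's exact parameterization): write the output embedding matrix as W = U diag(sigma) V^T, fix sigma to a slow-decaying prior, and softly keep U and V orthonormal with a penalty.

```python
import torch
import torch.nn as nn

class SpectrumControlledEmbedding(nn.Module):
    def __init__(self, vocab_size, dim, decay=0.5, scale=1.0):
        super().__init__()
        self.U = nn.Parameter(torch.randn(vocab_size, dim) / dim ** 0.5)
        self.V = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        # slow-decaying singular-value prior, e.g. sigma_k = scale * (k + 1)^(-decay)
        k = torch.arange(dim, dtype=torch.float32)
        self.register_buffer("sigma", scale * (k + 1.0) ** (-decay))

    def weight(self):
        # reparameterized output embedding matrix W = U diag(sigma) V^T
        return self.U @ torch.diag(self.sigma) @ self.V.T

    def orthogonality_penalty(self):
        # softly encourage U and V to have orthonormal columns
        eye = torch.eye(self.U.shape[1], device=self.U.device)
        return ((self.U.T @ self.U - eye) ** 2).mean() + ((self.V.T @ self.V - eye) ** 2).mean()
```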
Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks
Conference on Neural Information Processing Systems (NeurIPS 2019)
We propose LAyer-Dependent ImportancE Sampling (LADIES). Based on the nodes sampled in the upper layer, LADIES selects their neighborhood nodes, computes importance probabilities accordingly, and samples a fixed number of nodes from them.
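A simplified version of one sampling step (illustrative, not the released code): restrict the normalized adjacency to the rows of the upper-layer nodes, turn the squared column norms into importance probabilities over their joint neighborhood, and sample a fixed budget of nodes for the next layer.

```python
import numpy as np
import scipy.sparse as sp

def ladies_sample_layer(adj_norm: sp.csr_matrix, upper_nodes: np.ndarray, budget: int):
    """adj_norm: normalized adjacency (n x n); upper_nodes: node ids sampled for layer l+1."""
    rows = adj_norm[upper_nodes, :]                       # restrict to upper-layer rows
    col_norm_sq = np.asarray(rows.power(2).sum(axis=0)).ravel()
    candidates = np.nonzero(col_norm_sq)[0]               # joint neighborhood of upper nodes
    probs = col_norm_sq[candidates] / col_norm_sq[candidates].sum()
    n_sample = min(budget, candidates.size)
    sampled = np.random.choice(candidates, size=n_sample, replace=False, p=probs)
    return np.sort(sampled)
```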
Few-Shot Representation Learning for Out-Of-Vocabulary Words
Conference of the Association for Computational Linguistics (ACL 2019)
We formulate learning OOV embeddings as a few-shot regression problem: predicting an oracle embedding vector (defined as an embedding trained with abundant observations) from only K contexts. Specifically, we use Model-Agnostic Meta-Learning (MAML) to adapt a hierarchical Transformer to the new corpus quickly and robustly.
Unbiased LambdaMART: An Unbiased Pairwise Learning-to-Rank Algorithm
The Web Conference (WWW 2019)
We propose a novel framework for pairwise learning-to-rank. Our algorithm, Unbiased LambdaMART, can jointly estimate the biases at click positions and at unclick positions, and learn an unbiased ranker.
Emoji-Powered Representation Learning for Cross-Lingual Sentiment Classification
The Web Conference (WWW 2019, Best Full Paper Award)
We employ the emoji prediction task as an instrument to learn both cross-language and language-specific sentiment patterns across different languages.
Listening to Chaotic Whispers: A Deep Learning Framework for News-oriented Stock Trend Prediction
Conference on Web Search and Data Mining (WSDM 2018)
We design a Hybrid Attention Network (HAN) to predict stock trends from sequences of recent related news, with a self-paced learning mechanism to guide efficient learning.

Academic Services

Invited Talks

  • Enhancing Reasoning of Large Language Models through Reward-Guided Search and Self-Training
  • Make Knowledge Computable: Differentiable Neural-Symbolic Reasoning
    • USC AI Seminars at USC Information Sciences Institute
    • ByteDance AI Lab, AI Seminar
  • Self-Supervised Learning and Logical Reasoning over Knowledge Graphs