Hi, I’m now a first-year PhD student at Georgia Tech. I work at the intersection of HCI and ML, especially on where LLMs are fragile and how to fix them. I collaborate with Prof. Xiang Anthony Chen and Prof. Sherry Wu. I was previously a member of Microsoft Research, working with Dr. Jiang Bian on LLM agents. Subscribe to my newsletter for research updates.
I care about promoting open, collaborative, and reproducible research. I do not limit myself to specific techniques but instead always look for better solutions and good problems. I am looking for internship opportunities for summer 2026.
In my spare time, I have keen interests in visualization, communication design, and VI systems. I believe everything can be expressed or explained in vivid, simple ways. I also love anime, movies, and music. My goal is to become a great professor for students in places like this. Yes, I love teaching and sharing knowledge.
Undergraduate in Artificial Intelligence, 2021.09 - 2025.06
South China University of Technology
PhD Student in Machine Learning, 2025.08 - 2029.05
Georgia Institute of Technology
Working with Prof. Xiang Anthony Chen (UCLA) and Prof. Sherry Wu (CMU). Investigating over-reliance on LLMs through a large-scale user study we designed, using quantitative and statistical methods, custom websites, and a browser extension.
One paper submitted to CSCW 2025.
Working with Dr. Jiang Bian at Microsoft Research Asia. Research on automatic research and development (R&D) and quantitative finance strategies. Imagine a world where the R&D process is fully automated: an LLM automatically generates analysis results for every proposed idea and proposes new ideas. The project we led is open-sourced at GitHub/RD-Agent. I also received the “Star of Tomorrow” award, the highest honor given by MSRA.
One paper accepted at the ICLR 2024 AGI Workshop.
Advised by Prof. Danyang Wu. Research on graph neural networks, geometric representation learning, and their applications to RNA, protein, and brain networks.
Two papers accepted at WWW 2024; two papers pending.
Responsible for CTR/CVR prediction and Smart Bidding; awarded Project of the Year. Achievements include:
A comprehensive evaluation benchmark for assessing the privacy awareness of large language models in physical environments, spanning four evaluation tiers and revealing significant gaps when privacy is grounded in real-world contexts.
With increasing demands for efficiency, information retrieval has developed the branch of sparse retrieval, which is advancing further toward inference-free retrieval, where documents are encoded at indexing time and no model inference is performed for queries. Existing sparse retrieval models rely on FLOPS regularization for sparsification; however, this mechanism was originally designed for Siamese encoders and is suboptimal in the asymmetric, inference-free setting. Previous attempts to adapt FLOPS to inference-free scenarios have been limited to rule-based methods, leaving the potential of sparsification approaches for inference-free retrieval models largely unexplored. In this paper, we explore an ℓ0-inspired sparsification approach for inference-free retrievers. Through comprehensive out-of-domain evaluation on the BEIR benchmark, our method achieves state-of-the-art performance among inference-free sparse retrieval models and is comparable to leading Siamese sparse retrieval models. Furthermore, we provide insights into the trade-off between retrieval effectiveness and computational efficiency, demonstrating practical value for real-world applications.
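For intuition, here is a minimal sketch contrasting the FLOPS regularizer with an ℓ0-style surrogate on document-side term weights. This is my own illustration under simplifying assumptions (the surrogate form, `beta`, the penalty weight, and the vocabulary size are placeholders), not the paper's exact formulation.

```python
import torch

def flops_penalty(doc_weights: torch.Tensor) -> torch.Tensor:
    # FLOPS regularizer: square of each vocabulary term's mean activation
    # over the batch, summed across the vocabulary.
    # doc_weights: (batch_size, vocab_size), non-negative term weights.
    return (doc_weights.mean(dim=0) ** 2).sum()

def l0_style_penalty(doc_weights: torch.Tensor, beta: float = 10.0) -> torch.Tensor:
    # A smooth surrogate for the L0 norm: 1 - exp(-beta * w) saturates to 1 for any
    # clearly non-zero weight, so the sum approximates the number of active terms
    # per document. The surrogate form and beta are illustrative choices.
    return (1.0 - torch.exp(-beta * doc_weights)).sum(dim=1).mean()

# Inference-free setting: only documents pass through the encoder, so the
# sparsity penalty is applied to document representations alone.
doc_weights = torch.relu(torch.randn(8, 30522))  # stand-in for encoder outputs over a BERT-sized vocab
loss = 0.01 * l0_style_penalty(doc_weights)      # would be added to the ranking loss during training
```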
Proposed a cross-view approximation on the Grassmann manifold (CAGM) model to address inconsistencies among multi-view adjacency matrices, feature matrices, and cross-view combinations from the two sources.
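As background on the Grassmann-manifold view of this problem, the sketch below shows one standard way to quantify inconsistency between two views: represent each view by the subspace spanned by the leading eigenvectors of its adjacency matrix and compare the subspaces with the projection metric. This is a generic illustration, not the CAGM method itself; the toy data and the choice of k are placeholders.

```python
import numpy as np

def view_subspace(adjacency: np.ndarray, k: int) -> np.ndarray:
    # Represent a view by the span of the k leading eigenvectors of its
    # symmetric adjacency matrix: a point on the Grassmann manifold Gr(k, n).
    _, eigvecs = np.linalg.eigh(adjacency)
    return eigvecs[:, -k:]  # orthonormal columns

def projection_distance(U1: np.ndarray, U2: np.ndarray) -> float:
    # Projection metric on the Grassmann manifold: d^2 = k - ||U1^T U2||_F^2.
    k = U1.shape[1]
    return float(np.sqrt(max(k - np.linalg.norm(U1.T @ U2, "fro") ** 2, 0.0)))

# Two noisy views of the same toy 5-node graph.
rng = np.random.default_rng(0)
A = rng.random((5, 5)); A = (A + A.T) / 2
A1 = A + 0.01 * rng.standard_normal((5, 5)); A1 = (A1 + A1.T) / 2
A2 = A + 0.01 * rng.standard_normal((5, 5)); A2 = (A2 + A2.T) / 2
U1, U2 = view_subspace(A1, k=2), view_subspace(A2, k=2)
print(projection_distance(U1, U2))  # small distance -> the views are nearly consistent
```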
Proposed an explainable stock-earnings framework based on a news factor analysis model: news is formalized into a semantic graph whose learned embedding serves as a more expressive factor. Stock-earnings forecasting and an explainable earnings module are built by aggregating the news factor with numeric stock factors, achieving SOTA performance on an A-share dataset. A detailed report is then generated with the help of an LLM.
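As a rough illustration of the fusion step (the module names, dimensions, and MLP head below are my assumptions, not the project's actual architecture), a news-graph embedding can be treated as one factor and combined with numeric stock factors for forecasting:

```python
import torch
import torch.nn as nn

class NewsFactorForecaster(nn.Module):
    # Illustrative fusion model: a pre-computed news-graph embedding (e.g., from a
    # graph encoder over the semantic graph of news) is concatenated with numeric
    # stock factors and passed to an MLP that forecasts an earnings/return signal.
    def __init__(self, news_dim: int = 128, factor_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(news_dim + factor_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, news_emb: torch.Tensor, stock_factors: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([news_emb, stock_factors], dim=-1)
        return self.mlp(fused)

model = NewsFactorForecaster()
news_emb = torch.randn(16, 128)        # semantic-graph embeddings for 16 stocks (placeholder)
stock_factors = torch.randn(16, 32)    # numeric factors, e.g., momentum or valuation (placeholder)
pred = model(news_emb, stock_factors)  # (16, 1) forecasted earnings signal
```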
I have been fortunate to work with many talented and dedicated individuals who have generously shared their time and expertise with me. I am grateful for their support and encouragement.
I also thank my friends in the anime club, who always bring me joy and emotional support.
Reach me by email if you have any questions.