Hello! I am a third-year PhD student in the Language Analysis Group at HIT-SCIR, under the supervision of Prof. Wanxiang Che and Assoc. Prof. Qingfu Zhu. Currently, I’m a research intern at StepFun, focusing on the code aspects of LLM pretraining.
My primary research interests are:
- Code Intelligence: code generation and using code to assist other tasks.
- Inference Acceleration: speculative decoding.
If you are interested in my research or potential collaborations, please feel free to reach out at xzluo@ir.hit.edu.cn! 🎉
I am also interested in algorithmic competitions. During my undergraduate years, I participated in various programming contests and served as president of the Programming and Algorithms Association and vice president of the Federation of Student Associations.
🔥 News
- 2025.06: 🎉 Our Token Recycling and OpenCoder are selected for oral presentations at ACL 2025! See you in Vienna! 🇦🇹
- 2025.05: 🎉 Our Token Recycling, ChartCoder, and OpenCoder are accepted to ACL 2025! Our ChartEdit is accepted to the Findings of ACL 2025, and Tool-MVRL is accepted to KDD 2025! Congratulations to all our collaborators!
- 2024.09: 🎉 Our MultiPoT and Make Some Noise are accepted to EMNLP 2024! Congratulations to all our collaborators!
- 2024.09: 🔥 We release Abacus, a 2.7B Code LLM, complete with open weights and detailed training documentation!
📝 Publications
- Preprint Success is in the Details: Evaluate and Enhance Details Sensitivity of Code LLMs through Counterfactuals, Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Mingzheng Xu, Tianhao Cheng, Yixuan Wang, Zheng Chu, Shijie Xuyang, Zhiyuan Ma, YuanTao Fan, Wanxiang Che.
- Preprint Is Compression Really Linear with Code Intelligence?, Xianzhen Luo†, Shijie Xuyang†, Tianhao Cheng, Zheng Chu, Houyi Li, Ziqi Wang, Siming Huang, Qingfu Zhu, Qiufeng Wang, Xiangyu Zhang, Shuigeng Zhou, Wanxiang Che.
- ACL 2025 Oral Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling, Xianzhen Luo, Yixuan Wang, Qingfu Zhu, Zhiming Zhang, Xuanyu Zhang, Qing Yang, Dongliang Xu.
- ACL 2025 ChartCoder: Advancing Multimodal Large Language Model for Chart-to-Code Generation, Xuanle Zhao†, Xianzhen Luo†, Qi Shi, Chi Chen, Shuo Wang, Zhiyuan Liu, Maosong Sun.
- ACL 2025 Oral OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models, Siming Huang, Tianhao Cheng, Jason Klein Liu, Weidi Xu, Jiaran Hao, Liuyihan Song, Yang Xu, Jian Yang, Jiaheng Liu, Chenchen Zhang, Linzheng Chai, Ruifeng Yuan, Xianzhen Luo, Qiufeng Wang, YuanTao Fan, Qingfu Zhu, Zhaoxiang Zhang, Yang Gao, Jie Fu, Qian Liu, Houyi Li, Ge Zhang, Yuan Qi, Xu Yinghui, Wei Chu, Zili Wang.
- ACL 2025 (Findings) ChartEdit: How Far Are MLLMs From Automating Chart Analysis? Evaluating MLLMs’ Capability via Chart Editing, Xuanle Zhao, Xuexin Liu, Yang Haoyue, Xianzhen Luo, Fanhu Zeng, Jianling Li, Qi Shi, Chi Chen.
- KDD 2025 Advancing Tool-Augmented Large Language Models via Meta-Verification and Reflection Learning, Zhiyuan Ma, Jiayu Liu, Xianzhen Luo, Zhenya Huang, Qingfu Zhu, Wanxiang Che.
- EMNLP 2024 Python is Not Always the Best Choice: Embracing Multilingual Program of Thoughts, Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Libo Qin, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che.
- EMNLP 2024 Make Some Noise: Unlocking Language Model Parallel Inference Capability through Noisy Training, Yixuan Wang†, Xianzhen Luo†, Fuxuan Wei, Yijun Liu, Qingfu Zhu, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che.
- LREC-COLING 2024 A Survey on Natural Language Processing for Programming, Qingfu Zhu, Xianzhen Luo, Fang Liu, Cuiyun Gao, Wanxiang Che.
- Preprint Semi-Instruct: Bridging Natural-Instruct and Self-Instruct for Code Large Language Models, Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che.
- ACL 2022 (Findings) Inverse is Better! Fast and Accurate Prompt for Few-shot Slot Tagging, Yutai Hou, Cheng Chen, Xianzhen Luo, Bohan Li, Wanxiang Che.
- AI Open 2022 Augmented and Challenging Datasets with Multi-step Reasoning and Multi-span Questions for Chinese Judicial Reading Comprehension, Qingye Meng, Ziyue Wang, Hang Chen, Xianzhen Luo, Baoxin Wang, Zhipeng Chen, Yiming Cui, Dayong Wu, Zhigang Chen, Shijin Wang.
† indicates equal contribution.
🎖 Honors and Awards
- 2022.06 Outstanding Graduate.
- 2021.04 International Collegiate Programming Contest Asia-East Continent Final Contest: Bronze Medal.
- 2020.12 National Encouragement Scholarship.
- 2020.12 International Collegiate Programming Contest Asia Shanghai Regional Contest: Silver Medal.
- 2020.11 China Collegiate Programming Contest Mianyang Site: Silver Medal.
- 2020.10 Northeast Collegiate Programming Contest: First Prize.
- 2019.12 National Scholarship.
- 2019.12 International Collegiate Programming Contest Asia-East Continent Final Contest: Bronze Medal.
- 2019.11 International Collegiate Programming Contest Asia Shenyang Regional Contest: Silver Medal.
📖 Education
- 2022.09 - present, Ph.D. student, Harbin Institute of Technology.
- 2018.09 - 2022.07, Undergraduate, Harbin Engineering University.
💬 Invited Talks
- 2024.03, I was invited to give a talk at Qiyuan Lab on the training and application of code large language models.
💻 Internships
- 2024.12 - 2025.06, StepFun AI, China.
- 2023.11 - 2024.09, Du Xiaoman (Beijing) Science Technology Co., Ltd., China.
- 2022.03 - 2022.08, Joint Laboratory of HIT and iFLYTEK Research (HFL), China.