Hello! I am a fourth-year PhD student in the Language Analysis Group at HIT-SCIR, advised by Prof. Wanxiang Che and Assoc. Prof. Qingfu Zhu. Currently, I am a KStar Research Intern at Kuaishou Technology.

My primary research interest is Code Intelligence. I focus on identifying and addressing bottlenecks across the full pipeline: pretraining, post-training, application, and inference acceleration.

If you are interested in my research or potential collaborations, please feel free to reach out to me at xzluo@ir.hit.edu.cn πŸŽ‰

I am also interested in algorithm competitions. During my undergraduate years, I participated in various programming contests and served as the president of the Programming and Algorithms Association and vice president of the Federation of Student Associations.

πŸ”₯ News

πŸ“ Publications

Pretrain

Post-Train

Inference

Survey

Others

† indicates equal contribution.

πŸŽ– Honors and Awards

  • 2025.10 Merit Student (δΈ‰ε₯½ε­¦η”Ÿ) of Heilongjiang Province.
  • 2025.10 (PhD Student) National Scholarship.
  • 2025.07 ACL Outstanding Paper.
  • 2022.06 Outstanding Graduate.
  • 2021.04 International Collegiate Programming Contest Asia-East Continent Final Contest: Bronze Medal.
  • 2020.12 National Encouragement Scholarship.
  • 2020.12 International Collegiate Programming Contest Asia Shanghai Regional Contest: Silver Medal.
  • 2020.11 China Collegiate Programming Contest Mianyang Site: Silver Medal.
  • 2020.10 Northeast Collegiate Programming Contest: First Prize.
  • 2019.12 (Undergraduate) National Scholarship.
  • 2019.12 International Collegiate Programming Contest Asia-East Continent Final Contest: Bronze Medal.
  • 2019.11 International Collegiate Programming Contest Asia Shenyang Regional Contest: Silver Medal.

πŸ“– Education

  • 2022.09 - now, Ph.D. student, Harbin Institute of Technology.
  • 2018.09 - 2022.07, Undergraduate, Harbin Engineering University.

πŸ’¬ Invited Talks

  • 2025.08, I was invited to give a talk at Alibaba International Consumer Business Unit to share and discuss our paper Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling.
  • 2024.03, I was invited to give a talk at Qiyuan Lab about the Training and Application of Code Large Language Models.

πŸ’» Internships

🌍 Visitors