I am Pingzhi Tang, an undergraduate student in the General Artificial Intelligence Experimental Program (Tong Class) at Peking University.
With a major GPA of 3.92/4.00, my research centers on making foundation models scalable and accessible by improving their computational efficiency. I’m particularly interested in next-gen model architectures, efficient inference, reasoning & reinforcement learning, and continual adaptation.
Outside academia, I am a photography enthusiast 📸 and film lover 🎞️.
Tong Class.
Major GPA: 3.92/4.00 (Rank: 1/39).
* See my CV for a full list of awards.
Advisor: Prof. Muhan Zhang.
Working on efficient inference, model architecture, PEFT, LLM reasoning, and continual adaptation.
Advisor: Prof. Meng Li.
Worked on acceleration frameworks for efficient inference of Diffusion Language Models.
Addressed KV cache overhead in tensor parallelism for MLA (Multi-head Latent Attention), achieving 2x inference acceleration.
* equal contribution
* equal contribution
* equal contribution
* equal contribution
* equal contribution
* equal contribution
* equal contribution
* equal contribution
For a complete list of publications, please see my Google Scholar profile or CV.
A blog post accompanying our ACL 2026 paper on Parametric Skill Transfer (PaST) — injecting RL skills for continual adaptation of large language models.
By Yiding Wang & Pingzhi Tang