Select Publications

Preprints

Codefuse, Ling Team: Cai W; Cao Y; Chen C; Chen C; Chen S; Cui Q; Di P; Fang J; Gong Z; Guo T; He Z; Huang Y; Li C; Li J; Li Z; Lian S; Liu B; Luo S; Mao S; Shen M; Wu J; Yang J; Yang W; Ye T; Yu H; Zhang W; Zhang Z; Zhao H; Zheng X; Zhou J, 2025, Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM, http://dx.doi.org/10.48550/arxiv.2503.17793

Di P; Liu B; Gao Y, 2024, MicroFuzz: An Efficient Fuzzing Framework for Microservices, http://dx.doi.org/10.1145/3639477.3639723

Liu X; Wang J; Sun J; Yuan X; Dong G; Di P; Wang W; Wang D, 2023, Prompting Frameworks for Large Language Models: A Survey, http://dx.doi.org/10.48550/arxiv.2311.12785

Fan G; Xie X; Zheng X; Liang Y; Di P, 2023, Static Code Analysis in the AI Era: An In-depth Exploration of the Concept, Function, and Potential of Intelligent Code Analysis Agents, http://arxiv.org/abs/2310.08837v1

Di P; Li J; Yu H; Jiang W; Cai W; Cao Y; Chen C; Chen D; Chen H; Chen L; Fan G; Gong J; Gong Z; Hu W; Guo T; Lei Z; Li T; Li Z; Liang M; Liao C; Liu B; Liu J; Liu Z; Lu S; Shen M; Wang G; Wang H; Wang Z; Xu Z; Yang J; Ye Q; Zhang G; Zhang Y; Zhao Z; Zheng X; Zhou H; Zhu L; Zhu X, 2023, CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model, http://dx.doi.org/10.1145/3639477.3639719

Liu J; Liu J; Di P; Wu D; Zheng H; Liu A; Xue J, 2022, Hybrid Inlining: A Compositional and Context Sensitive Static Analysis Framework, http://arxiv.org/abs/2210.14436v1
