I am Linyi Li, PhD in Computer Science at the University of Illinois Urbana-Champaign, advised by Prof. Bo Li and co-advised by Prof. Tao Xie.
My research lies at the intersection of machine learning, security, and software engineering. Specifically, I focus on: (1) building certifiably trustworthy deep learning systems, with certifiable robustness against noise perturbations [IJCAI 2019] [ICLR 2022a] [ICML 2022a] [SP 2023], semantic perturbations [CCS 2021] [ICML 2022b], poisoning attacks [ICLR 2022b], and distributional shift [ICML 2022c], as well as certifiable fairness [NeurIPS 2022] and certifiable numerical reliability [ICSE 2023], among other directions; and (2) data-centric and systematic evaluation of large language models. Previously, I worked on the robustness of ensemble models [NeurIPS 2021], black-box attacks against deep learning [ICML 2021] [AISTATS 2021], and applications of machine learning to software testing [FSE 2020 Industry].

I have been awarded the Rising Stars in Data Science, the AdvML Rising Star Award, and the Wing Kai Cheng Fellowship, and was a finalist for the 2022 Qualcomm Innovation Fellowship and the 2022 Two Sigma PhD Fellowship. I received my bachelor's degree from the Department of Computer Science and Technology, Tsinghua University, in 2018, where I did research on automated testing of Web APIs, advised by Prof. Xiaoying Bai.
Interested in trustworthy machine learning or large language models? Feel free to reach out for a chat!
For undergraduate and graduate students: please send me an email with the subject "[seek for (position/collaboration)]" to [email protected]. For those with an aligned research background, academic and industry positions in North America may be recommended or offered.