Sijia Liu - CSE@MSU
Prospective Students
I am always looking for highly motivated students (RA/TA/externship/internship/visiting students). Interested candidates are strongly encouraged to contact me by email with a resume and transcripts.

Short Bio
Sijia Liu is currently an Assistant Professor in the CSE department of Michigan State University, an Affiliated Professor at IBM Research, and an affiliated PI of the MIT-IBM Watson AI Lab. He received his Ph.D. degree (with the All-University Doctoral Prize) in Electrical and Computer Engineering from Syracuse University, NY, USA, in 2016. He was a Postdoctoral Research Fellow at the University of Michigan, Ann Arbor, in 2016-2017, and a Research Staff Member at the MIT-IBM Watson AI Lab in 2018-2020. His research interests include scalable and trustworthy AI, e.g., scalable optimization for deep models, machine unlearning for vision and language models, AI robustness, and data-model efficiency. In 2024, he received the NSF Faculty Early Career Development (CAREER) Award. He also received the Best Student Paper Award at the 42nd IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2017) and the Best Paper Runner-Up Award at the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022). Dr. Liu has published over 70 papers at top-tier machine learning and computer vision conferences, including NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, AISTATS, and AAAI (please refer to the CS ranking). He is a Senior Member of IEEE, a Technical Committee (TC) Member of Machine Learning for Signal Processing (MLSP) in the IEEE Signal Processing Society, and an Associate Editor for IEEE Transactions on Signal Processing and IEEE Transactions on Aerospace and Electronic Systems. He has also organized a series of Adversarial ML workshops at NeurIPS’24, ICML (’22, ’23), and KDD (’19-’22), and has given tutorials on Trustworthy and Scalable ML at AAAI (’23, ’24), NeurIPS’22, and CVPR (’20, ’23, ’24).
Research Interests
My research spans machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory for robust and explainable artificial intelligence (AI). These themes provide a solid foundation for my long-term research objective: making AI systems safe and scalable. As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks become increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. Thus, robustness and scalability underscore my current and future research, and these two goals are intertwined. More broadly, research on robust and scalable AI can advance machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. I aim to explore new learning frontiers where current algorithms become infeasible, and to formalize the foundations of secure learning. Please refer to Projects and our OPTML group for research highlights.

Representative Publications
News
* Honored to be one of the 10 recipients of the Amazon Research Award for Spring and Winter 2024.
* Check out the OPTML Menu of Innovations @ NeurIPS 2024!
* Thrilled to announce that our position paper, “Rethinking Machine Unlearning for LLMs”, has been accepted for publication in Nature Machine Intelligence. Congratulations to the team and our amazing collaborators for achieving this milestone!
* Grateful for receiving the NAIRR Pilot Award in the field of Artificial Intelligence and Intelligent Systems.
* One paper in WACV’25: Can Adversarial Examples Be Parsed to Reveal Victim Model Information?
* Six papers in NeurIPS’24, including one dataset benchmark. Congrats to Yihua Zhang, Yuguang Yao, Jinghan Jia, and Yimeng Zhang for their outstanding leadership!
* One paper in EMNLP’24: SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning.
* Grateful to receive the Amazon Research Award for AI in Information Security–Spring 2024.
* The 3rd AdvML-Frontiers Workshop is now live and will be co-located with NeurIPS’24! Submit your papers by Aug 30.
* Dr. Liu has received the prestigious NSF Faculty Early Career Development (CAREER) Award.
* Congratulations to Yihua for receiving the 2024 MLCommons Rising Stars Award.
* Two papers in ECCV’24: (1) Exploring adversarial robustness of safety-driven concept-unlearned diffusion models through a diffusion classifier perspective; (2) Challenging forgets to unveil when and why machine unlearning could be more challenging than common beliefs.
* Two papers in ICML’24: (1) Benchmarking zeroth-order optimization for memory-efficient LLM fine-tuning; (2) Why does graph transformer generalize? A theoretical dive into self-attention and positional encoding.
* [Feature Article@IEEE SPM] We are thrilled to share that our tutorial article titled “An Introduction to Bilevel Optimization: Foundations and Applications in Signal Processing and Machine Learning” has been published in the IEEE Signal Processing Magazine as a Feature Article.
* [New Preprints] We are pleased to announce the release of new papers on arXiv.
* We are thrilled to share that our research paper titled “Reverse Engineering Deceptions in Machine- and Human-Centric Attacks” has been officially published in Foundations and Trends® in Privacy and Security.
* [Launch of the MSU-UM-ARO Project Website] The “Lifelong Multimodal Fusion by Cross Layer Distributed Optimization” project receives funding from the Army Research Office (ARO).
* Tutorial “Machine Unlearning in Computer Vision: Foundations and Applications” is accepted for presentation at CVPR 2024. See you in Seattle!
* Four papers in ICLR’24: (1) Machine unlearning for safe image generation; (2) DeepZero: Training neural networks from scratch using only forward passes; (3) Backdoor data sifting; (4) Visual prompting automation.
* [New Preprints] We are pleased to announce the release of new papers on arXiv.
* Tutorial on “Zeroth-Order Machine Learning: Fundamental Principles and Emerging Applications in Foundation Models” is accepted by ICASSP’24 and AAAI’24.
* NeurIPS 2023: 3 papers accepted – 1 spotlight and 2 posters. Congratulations to Jinghan, Jiancheng, and Yuguang for their spotlight acceptance with “Model Sparsity Simplifies Machine Unlearning,” and kudos to Yihua, Yimeng, Aochuan, Jinghan, and Jiancheng for their poster acceptance with “Selectivity Boosts Transfer Learning Efficiency.”
* Grateful to receive a grant from the Army Research Office (ARO) as the PI.
* Our paper on Adversarial Training for MoE has been chosen for an Oral Presentation at ICCV’23.
* Grateful to receive gift funding from Cisco Research as the PI.
* Call for participation in the 2nd AdvML-Frontiers Workshop@ICML’23.
* One paper in ICCV’23 on Adversarial Robustness of Mixture-of-Experts.
* Grateful to receive a CPS Medium Grant Award from NSF as a co-PI.
* Slides of our CVPR’23 tutorial on Reverse Engineering of Deceptions (RED) are now available at the tutorial page. [link]
* Our paper “Visual Prompting for Adversarial Robustness” received the Top 3% Paper Recognition at ICASSP 2023. Congrats to Aochuan, Peter (internship at OPTML in 2022), Yuguang, and Pin-Yu (IBM Research)!
* Grateful to be elected as an Associate Editor of IEEE Transactions on Aerospace and Electronic Systems.
* Two papers in ICML’23 and CFP for the 2nd AdvML-Frontiers Workshop.
* A new arXiv paper is released: Model Sparsification Can Simplify Machine Unlearning! [Paper] [Code]
* Grateful to receive a grant from Lawrence Livermore National Laboratory.
* Call for Papers and AdvML Rising Star Award applications for the AdvML-Frontiers workshop at ICML’23.
* A new arXiv paper is released: Adversarial attacks can be parsed to reveal victim model information! [Paper]
* The 2nd Workshop on New Frontiers in Adversarial Machine Learning has been accepted by ICML’23.
* Grateful to receive a grant from DSO National Laboratories.
* Two papers in CVPR’23.
* Three papers in ICASSP’23.
* CVPR’23 tutorial on Reverse Engineering of Deceptions: Foundations and Applications is accepted and will be given with Xiaoming Liu (MSU) and Xue Lin (Northeastern).
* AAAI’23 tutorial on Bi-level Optimization in ML: Foundations and Applications is now available at link.
* Four papers in ICLR’23: Issues and Fixes in IRM; TextGrad: Differentiable Solution to NLP Attack Generation; Provable Benefits of Sparse GNN; Sample Complexity Analysis of ViT.
* One paper in ASP-DAC’23.
* One paper in SANER 2023: Towards Both Robust and Accurate Code Models; equally contributed by Jinghan Jia (MSU) and Shashank Srikant (MIT).
* Grateful to be selected as a presenter in the AAAI 2023 New Faculty Highlight Program.
* Tutorial on Foundational Robustness of Foundation Models will be given at NeurIPS’22.
* Tutorial on Bi-level Machine Learning will be given at AAAI’23.
* Two papers in NeurIPS’22.
* Grateful to receive a Robust Intelligence (RI) Core Small Grant Award from NSF as the PI.
* Grateful to receive the Best Paper Runner-Up Award at UAI 2022 in recognition of our work “Distributed Adversarial Training to Robustify Deep Neural Networks at Scale”.
* One paper in UAI’22 (oral presentation).
* Five papers in ICML’22: Bi-level adversarial training; Winning lottery tickets from robust pretraining; Pruning helps certified robustness; Contrastive learning theory; Generalization theory of GCN.
* One paper in NAACL’22.
* One paper in IJCAI’22.
* CFP: 1st Workshop on New Frontiers in Adversarial Machine Learning at ICML’22 (AdvML-Frontiers@ICML’22).
* Grateful to receive gift funding from Cisco Research as the PI.
* Congratulations to Yihua Zhang for his first CVPR paper.
* Two papers in CVPR 2022.
* Congratulations to Yimeng Zhang, Yuguang Yao, and Jinghan Jia for their first ICLR papers.
* Five papers in ICLR 2022: Reverse Engineering of Adversaries; Black-Box Defense (spotlight); Learning to Optimize; Self-Training Theory; Distributed Learning.
* Our work on interpreting and advancing adversarial training via bi-level optimization is now available on arXiv; equally contributed by Yihua Zhang (MSU) and Guanhua Zhang (UCSB).
* Grateful to receive a DARPA IP2 AIE Grant as a Co-PI.
* Five papers in NeurIPS 2021.
* Our MSU-NEU team (with PI Xiaoming Liu and co-PI Xue Lin) entered Phase 2 of DARPA AIE RED.
* One paper in ICML 2021.
* MIT News article “Toward deep-learning models that can reason about code more like humans” covers our ICLR’21 work on Adversarial Programs [paper, code].
* Two papers in CVPR 2021.
* Two papers in AISTATS 2021.
* Four papers in ICLR 2021.
* Three papers in AAAI 2021.
* Grateful to receive a DARPA RED AIE Grant as a Co-PI.