
Yixin Liu (刘奕鑫)

Email: yila22 [AT] lehigh [.] edu

I am a 3rd-year CSE Ph.D. student working on machine learning at Lehigh University, advised by Prof. Lichao Sun. Previously, I obtained my B.E. in Software Engineering from South China University of Technology with honors in 2022. Feel free to contact me for any discussion or collaboration!

Research Interest

I am broadly interested in trustworthy and explainable generative AI and frontier models. My thesis research focuses on data-centric approaches that safeguard users' data from unauthorized exploitation and provide source verification:

  • Proactive Learnability Control [MetaCloak (CVPR'24 Oral), GraphCloak (Preprint), MUE (ICML'24 Workshop), SEM (AAAI'24 Oral), EditShield (ECCV'24)]: Nowadays, users' private content is being exploited at scale for model training. Meanwhile, a fundamental vulnerability of neural networks is that they are not robust to even small changes in the input. By exploiting this property, we seek to safeguard data from unauthorized model training without compromising its utility.
  • Watermarking and Verification [TextMarker (Preprint)]: With the growth of generative models, the boundary between real and fake content is becoming increasingly blurred. To enable AI content attribution and training data source attribution, we seek to design robust watermarking and verification methods.
Moreover, I am also interested in explainable AI, focusing on improving the stability and faithfulness of explainability tools under adversarial manipulation [SEAT (AAAI'23 Oral), FViT (ICML'24 Spotlight)].

Reviewer Service

NeurIPS'23, KDD'23, CVPR'24, ICML'24, ECCV'24 (Outstanding Reviewer), NeurIPS'24, ICLR'25, ICASSP'25, CVPR'25.

Publications

Topics: Unauthorized Exploitation / NLP Safety / Explainable AI / Model Compression / Applications (*/† indicates equal contribution.)

Toward Robust Imperceptible Perturbation against Unauthorized Text-to-image Diffusion-based Synthesis
Yixin Liu, Chenrui Fan, Yutong Dai, Xun Chen, Pan Zhou, Lichao Sun

[CVPR 2024 Oral]

Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking
Weixiang Sun, Yixin Liu, Zhiling Yan, Kaidi Xu, Lichao Sun

[ICML 2024 Next Gen AI Safety Workshop]

Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts
Yuanwei Wu, Xiang Li, Yixin Liu, Pan Zhou, Lichao Sun

[Preprint]

Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise
Yixin Liu, Kaidi Xu, Xun Chen, Lichao Sun

[AAAI 2024]

Improving Faithfulness for Vision Transformers
Lijie Hu*, Yixin Liu*, Ninghao Liu, Mengdi Huai, Lichao Sun and Di Wang

[ICML 2024 Spotlight]

GraphCloak: Safeguarding Graph-structured Data from Unauthorized Exploitation
Yixin Liu, Chenrui Fan, Xun Chen, Pan Zhou, and Lichao Sun

[Preprint]

Watermarking Classification Dataset for Copyright Protection
Yixin Liu*, Hongsheng Hu*, Xuyun Zhang, Lichao Sun

[Preprint]

BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT
Jiawen Shi, Yixin Liu, Pan Zhou and Lichao Sun

[NDSS 2023 Poster]

Securing Biomedical Images from Unauthorized Training with Anti-Learning Perturbation
Yixin Liu, Haohui Ye, Lichao Sun

[NDSS 2023 Poster]

SEAT: Stable and Explainable Attention
Lijie Hu*, Yixin Liu*, Ninghao Liu, Mengdi Huai, Lichao Sun and Di Wang

[AAAI 2023 Oral]

Conditional Automated Channel Pruning for Deep Neural Networks
Yixin Liu, Yong Guo, Jiaxin Guo, Luoqian Jiang, Jian Chen

[IEEE Signal Processing Letters]

Meta-Pruning with Reinforcement Learning
Yixin Liu; Advisor: Jian Chen

[Bachelor Thesis]

Priority Prediction of Sighting Report Using Machine Learning Methods
Yixin Liu, Jiaxin Guo, Jieyang Dong, Luoqian Jiang, Haoyuan Ouyang; Advisor: Han Huang

[IEEE SEAI 2021; Finalist Award in MCM/ICM 2021]