
Yang Song (宋飏)

Leading the Strategic Explorations team at OpenAI

Incoming Assistant Professor,
Electrical Engineering and Computing + Mathematical Sciences,
California Institute of Technology (Caltech).

Research: My goal is to build powerful AI models capable of understanding, generating, and reasoning with high-dimensional data across diverse modalities. I am currently focused on developing transferable techniques to improve generative models, including their training methodologies, architecture design, alignment, robustness, evaluation, and inference efficiency. I am also interested in generative modeling as a tool for scientific discovery. I invented many foundational concepts and techniques in (score-based) diffusion models; you can find more in a blog post and a Quanta Magazine article.

Previously: I received my Ph.D. in Computer Science from Stanford University, advised by Stefano Ermon. I was a research intern at Google Brain, Uber ATG, and Microsoft Research. I obtained my Bachelor’s degree in Mathematics and Physics from Tsinghua University, where I worked with Jun Zhu, Raquel Urtasun, and Richard Zemel.

news

Dec 10, 2023 The Strategic Explorations team at OpenAI is recruiting! Contact me if you are a hardcore researcher/engineer with a passion for advancing fundamental methodologies in the training and inference of diffusion models, consistency models, or large language models.
Dec 10, 2023 I have postponed my starting date as an Assistant Professor at Caltech EE & CMS until January 2026. I will not take on new students, visitors, or postdocs in the interim.

selected publications [full list]

(*) denotes equal contribution

  1. ICLR (Oral)
    Improved Techniques for Training Consistency Models
    Yang Song, and Prafulla Dhariwal
    In the 12th International Conference on Learning Representations, 2024.
    Oral Presentation [top 1.2%]
  2. ICML
    Consistency Models
    Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever
    In the 40th International Conference on Machine Learning, 2023.
  3. Thesis
    Learning to Generate Data by Estimating Gradients of the Data Distribution
    Yang Song
    Stanford University, 2022.
  4. ICLR
    Solving Inverse Problems in Medical Imaging with Score-Based Generative Models
    Yang Song*, Liyue Shen*, Lei Xing, and Stefano Ermon
    In the 10th International Conference on Learning Representations, 2022. Abridged in the NeurIPS 2021 Workshop on Deep Learning and Inverse Problems.
  5. NeurIPS (Spotlight)
    Maximum Likelihood Training of Score-Based Diffusion Models
    Yang Song*, Conor Durkan*, Iain Murray, and Stefano Ermon
    In the 35th Conference on Neural Information Processing Systems, 2021.
    Spotlight Presentation [top 3%]
  6. ICML
    Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving
    Yang Song, Chenlin Meng, Renjie Liao, and Stefano Ermon
    In the 38th International Conference on Machine Learning, 2021.
  7. ICLR (Oral, Award)
    Score-Based Generative Modeling through Stochastic Differential Equations
    Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole
    In the 9th International Conference on Learning Representations, 2021.
    Outstanding Paper Award
  8. NeurIPS
    Improved Techniques for Training Score-Based Generative Models
    Yang Song, and Stefano Ermon
    In the 34th Conference on Neural Information Processing Systems, 2020.
  9. NeurIPS (Oral)
    Generative Modeling by Estimating Gradients of the Data Distribution
    Yang Song, and Stefano Ermon
    In the 33rd Conference on Neural Information Processing Systems, 2019.
    Oral Presentation [top 0.5%]
  10. UAI (Oral)
    Sliced Score Matching: A Scalable Approach to Density and Score Estimation
    Yang Song*, Sahaj Garg*, Jiaxin Shi, and Stefano Ermon
    In the 35th Conference on Uncertainty in Artificial Intelligence, 2019.
    Oral Presentation [top 8.7%]
  11. NeurIPS
    Constructing Unrestricted Adversarial Examples with Generative Models
    Yang Song, Rui Shu, Nate Kushman, and Stefano Ermon
    In the 32nd Conference on Neural Information Processing Systems, 2018.
  12. ICLR
    PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
    Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman
    In the 6th International Conference on Learning Representations, 2018.