Difan Zou
2020 – today
- 2024
- [c39] Xunpeng Huang, Difan Zou, Hanze Dong, Yi-An Ma, Tong Zhang: Faster Sampling without Isoperimetry via Diffusion-based Monte Carlo. COLT 2024: 2438-2493
- [c38] Miao Lu, Beining Wu, Xiaodong Yang, Difan Zou: Benign Oscillation of Stochastic Gradient Descent with Large Learning Rate. ICLR 2024
- [c37] Junwei Su, Difan Zou, Chuan Wu: PRES: Toward Scalable Memory-Based Dynamic Graph Neural Networks. ICLR 2024
- [c36] Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Peter L. Bartlett: How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? ICLR 2024
- [c35] Xingwu Chen, Difan Zou: What Can Transformer Learn with Varying Depth? Case Studies on Sequence Learning Tasks. ICML 2024
- [c34] Yujin Han, Difan Zou: Improving Group Robustness on Spurious Correlation Requires Preciser Group Inference. ICML 2024
- [c33] Xunpeng Huang, Difan Zou, Hanze Dong, Yian Ma, Tong Zhang: Faster Sampling via Stochastic Gradient Proximal Sampler. ICML 2024
- [c32] Xuran Meng, Difan Zou, Yuan Cao: Benign Overfitting in Two-Layer ReLU Convolutional Neural Networks for XOR Data. ICML 2024
- [i45] Xunpeng Huang, Difan Zou, Hanze Dong, Yian Ma, Tong Zhang: Faster Sampling without Isoperimetry via Diffusion-based Monte Carlo. CoRR abs/2401.06325 (2024)
- [i44] Junwei Su, Difan Zou, Chuan Wu: PRES: Toward Scalable Memory-Based Dynamic Graph Neural Networks. CoRR abs/2402.04284 (2024)
- [i43] Junwei Su, Difan Zou, Zijun Zhang, Chuan Wu: Towards Robust Graph Incremental Learning on Evolving Graphs. CoRR abs/2402.12987 (2024)
- [i42] Xunpeng Huang, Hanze Dong, Difan Zou, Tong Zhang: An Improved Analysis of Langevin Algorithms with Prior Diffusion for Non-Log-Concave Sampling. CoRR abs/2403.06183 (2024)
- [i41] Junwei Su, Difan Zou, Chuan Wu: Improving Implicit Regularization of SGD with Preconditioning for Least Square Problems. CoRR abs/2403.08585 (2024)
- [i40] Yifan Hao, Yong Lin, Difan Zou, Tong Zhang: On the Benefits of Over-parameterization for Out-of-Distribution Generalization. CoRR abs/2403.17592 (2024)
- [i39] Xingwu Chen, Difan Zou: What Can Transformer Learn with Varying Depth? Case Studies on Sequence Learning Tasks. CoRR abs/2404.01601 (2024)
- [i38] Kun Zhai, Yifeng Gao, Xingjun Ma, Difan Zou, Guangnan Ye, Yu-Gang Jiang: The Dog Walking Theory: Rethinking Convergence in Federated Learning. CoRR abs/2404.11888 (2024)
- [i37] Yujin Han, Difan Zou: Improving Group Robustness on Spurious Correlation Requires Preciser Group Inference. CoRR abs/2404.13815 (2024)
- [i36] Xunpeng Huang, Difan Zou, Hanze Dong, Yi Zhang, Yi-An Ma, Tong Zhang: Reverse Transition Kernel: A Flexible Framework to Accelerate Diffusion Inference. CoRR abs/2405.16387 (2024)
- [i35] Xunpeng Huang, Difan Zou, Yi-An Ma, Hanze Dong, Tong Zhang: Faster Sampling via Stochastic Gradient Proximal Sampler. CoRR abs/2405.16734 (2024)
- [i34] Chengxing Xie, Difan Zou: A Human-Like Reasoning Framework for Multi-Phases Planning Task with Large Language Models. CoRR abs/2405.18208 (2024)
- [i33] Hao Chen, Yujin Han, Diganta Misra, Xiang Li, Kai Hu, Difan Zou, Masashi Sugiyama, Jindong Wang, Bhiksha Raj: Slight Corruption in Pre-training Data Makes Better Diffusion Models. CoRR abs/2405.20494 (2024)
- [i32] Min Cai, Yuchen Zhang, Shichang Zhang, Fan Yin, Difan Zou, Yisong Yue, Ziniu Hu: Self-Control of LLM Behaviors by Compressing Suffix Gradient into Prefix Controller. CoRR abs/2406.02721 (2024)
- [i31] Chenyang Zhang, Difan Zou, Yuan Cao: The Implicit Bias of Adam on Separable Data. CoRR abs/2406.10650 (2024)
- [i30] Yunhao Chen, Xingjun Ma, Difan Zou, Yu-Gang Jiang: Extracting Training Data from Unconditional Diffusion Models. CoRR abs/2406.12752 (2024)
- [i29] Xingwu Chen, Lei Zhao, Difan Zou: How Transformers Utilize Multi-Head Attention in In-Context Learning? A Case Study on Sparse Linear Regression. CoRR abs/2408.04532 (2024)
- 2023
- [j8] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Benign Overfitting of Constant-Stepsize SGD for Linear Regression. J. Mach. Learn. Res. 24: 326:1-326:58 (2023)
- [c31] Yuan Cao, Difan Zou, Yuanzhi Li, Quanquan Gu: The Implicit Bias of Batch Normalization in Linear Models and Two-layer Linear Convolutional Neural Networks. COLT 2023: 5699-5753
- [c30] Difan Zou, Yuan Cao, Yuanzhi Li, Quanquan Gu: Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization. ICLR 2023
- [c29] Junwei Su, Difan Zou, Zijun Zhang, Chuan Wu: Towards Robust Graph Incremental Learning on Evolving Graphs. ICML 2023: 32728-32748
- [c28] Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Finite-Sample Analysis of Learning High-Dimensional Single ReLU Neuron. ICML 2023: 37919-37951
- [c27] Difan Zou, Yuan Cao, Yuanzhi Li, Quanquan Gu: The Benefits of Mixup for Feature Learning. ICML 2023: 43423-43479
- [i28] Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Learning High-Dimensional Single-Neuron ReLU Networks with Finite Samples. CoRR abs/2303.02255 (2023)
- [i27] Difan Zou, Yuan Cao, Yuanzhi Li, Quanquan Gu: The Benefits of Mixup for Feature Learning. CoRR abs/2303.08433 (2023)
- [i26] Xuran Meng, Yuan Cao, Difan Zou: Per-Example Gradient Regularization Improves Learning Signals from Noisy Data. CoRR abs/2303.17940 (2023)
- [i25] Yuan Cao, Difan Zou, Yuanzhi Li, Quanquan Gu: The Implicit Bias of Batch Normalization in Linear Models and Two-layer Linear Convolutional Neural Networks. CoRR abs/2306.11680 (2023)
- [i24] Xuran Meng, Difan Zou, Yuan Cao: Benign Overfitting in Two-Layer ReLU Convolutional Neural Networks for XOR Data. CoRR abs/2310.01975 (2023)
- [i23] Xu Luo, Difan Zou, Lianli Gao, Zenglin Xu, Jingkuan Song: Less is More: On the Feature Redundancy of Pretrained Models When Transferring to Few-shot Tasks. CoRR abs/2310.03843 (2023)
- [i22] Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Peter L. Bartlett: How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? CoRR abs/2310.08391 (2023)
- [i21] Miao Lu, Beining Wu, Xiaodong Yang, Difan Zou: Benign Oscillation of Stochastic Gradient Descent with Large Learning Rates. CoRR abs/2310.17074 (2023)
- 2022
- [b1] Difan Zou: Understanding the Role of Optimization Algorithms in Learning Over-parameterized Models. University of California, Los Angeles, USA, 2022
- [j7] Hong Qi, Difan Zou, Chen Gong, Zhengyuan Xu: Two-Dimensional Intensity Distribution and Adaptive Power Allocation for Ultraviolet Ad-Hoc Network. IEEE Trans. Green Commun. Netw. 6(1): 558-570 (2022)
- [c26] Spencer Frei, Difan Zou, Zixiang Chen, Quanquan Gu: Self-training Converts Weak Learners to Strong Learners in Mixture Models. AISTATS 2022: 8003-8021
- [c25] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. ICML 2022: 24280-24314
- [c24] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift. NeurIPS 2022
- [c23] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime. NeurIPS 2022
- [i20] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime. CoRR abs/2203.03159 (2022)
- [i19] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift. CoRR abs/2208.01857 (2022)
- 2021
- [j6] Bao Wang, Difan Zou, Quanquan Gu, Stanley J. Osher: Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo. SIAM J. Sci. Comput. 43(1): A26-A53 (2021)
- [c22] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Benign Overfitting of Constant-Stepsize SGD for Linear Regression. COLT 2021: 4633-4635
- [c21] Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu: How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks? ICLR 2021
- [c20] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu: Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate. ICLR 2021
- [c19] Difan Zou, Spencer Frei, Quanquan Gu: Provable Robustness of Adversarial Training for Learning Halfspaces with Noise. ICML 2021: 13002-13011
- [c18] Difan Zou, Quanquan Gu: On the Convergence of Hamiltonian Monte Carlo with Stochastic Gradients. ICML 2021: 13012-13022
- [c17] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade: The Benefits of Implicit Regularization from SGD in Least Squares Problems. NeurIPS 2021: 5456-5468
- [c16] Difan Zou, Pan Xu, Quanquan Gu: Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling. UAI 2021: 1152-1162
- [i18] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Benign Overfitting of Constant-Stepsize SGD for Linear Regression. CoRR abs/2103.12692 (2021)
- [i17] Difan Zou, Spencer Frei, Quanquan Gu: Provable Robustness of Adversarial Training for Learning Halfspaces with Noise. CoRR abs/2104.09437 (2021)
- [i16] Spencer Frei, Difan Zou, Zixiang Chen, Quanquan Gu: Self-training Converts Weak Learners to Strong Learners in Mixture Models. CoRR abs/2106.13805 (2021)
- [i15] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade: The Benefits of Implicit Regularization from SGD in Least Squares Problems. CoRR abs/2108.04552 (2021)
- [i14] Difan Zou, Yuan Cao, Yuanzhi Li, Quanquan Gu: Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization. CoRR abs/2108.11371 (2021)
- [i13] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. CoRR abs/2110.06198 (2021)
- 2020
- [j5] Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu: Gradient descent optimizes over-parameterized deep ReLU networks. Mach. Learn. 109(3): 467-492 (2020)
- [c15] Hong Qi, Difan Zou, Chen Gong, Zhengyuan Xu: Two-dimensional Intensity Distribution and Connectivity in Ultraviolet Ad-Hoc Network. ICC 2020: 1-6
- [c14] Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, Quanquan Gu: Improving Adversarial Robustness Requires Revisiting Misclassified Examples. ICLR 2020
- [c13] Difan Zou, Philip M. Long, Quanquan Gu: On the Global Convergence of Training Deep Linear ResNets. ICLR 2020
- [i12] Difan Zou, Philip M. Long, Quanquan Gu: On the Global Convergence of Training Deep Linear ResNets. CoRR abs/2003.01094 (2020)
- [i11] Difan Zou, Pan Xu, Quanquan Gu: Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling. CoRR abs/2010.09597 (2020)
- [i10] Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu: Direction Matters: On the Implicit Regularization Effect of Stochastic Gradient Descent with Moderate Learning Rate. CoRR abs/2011.02538 (2020)
2010 – 2019
- 2019
- [j4] Xiaona Liu, Chen Gong, Difan Zou, Zunaira Babar, Zhengyuan Xu, Lajos Hanzo: Signal Characterization and Achievable Transmission Rate of VLC Under Receiver Nonlinearity. IEEE Access 7: 137030-137039 (2019)
- [j3] Difan Zou, Chen Gong, Kun Wang, Zhengyuan Xu: Characterization on Practical Photon Counting Receiver in Optical Scattering Communication. IEEE Trans. Commun. 67(3): 2203-2217 (2019)
- [c12] Difan Zou, Pan Xu, Quanquan Gu: Sampling from Non-Log-Concave Distributions via Variance-Reduced Gradient Langevin Dynamics. AISTATS 2019: 2936-2945
- [c11] Difan Zou, Quanquan Gu: An Improved Analysis of Training Over-parameterized Deep Neural Networks. NeurIPS 2019: 2053-2062
- [c10] Difan Zou, Pan Xu, Quanquan Gu: Stochastic Gradient Hamiltonian Monte Carlo Methods with Recursive Variance Reduction. NeurIPS 2019: 3830-3841
- [c9] Difan Zou, Ziniu Hu, Yewen Wang, Song Jiang, Yizhou Sun, Quanquan Gu: Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks. NeurIPS 2019: 11247-11256
- [i9] Difan Zou, Quanquan Gu: An Improved Analysis of Training Over-parameterized Deep Neural Networks. CoRR abs/1906.04688 (2019)
- [i8] Bao Wang, Difan Zou, Quanquan Gu, Stanley J. Osher: Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo. CoRR abs/1911.00782 (2019)
- [i7] Difan Zou, Ziniu Hu, Yewen Wang, Song Jiang, Yizhou Sun, Quanquan Gu: Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks. CoRR abs/1911.07323 (2019)
- [i6] Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu: How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks? CoRR abs/1911.12360 (2019)
- 2018
- [j2] Difan Zou, Chen Gong, Zhengyuan Xu: Secrecy Rate of MISO Optical Wireless Scattering Communications. IEEE Trans. Commun. 66(1): 225-238 (2018)
- [j1] Difan Zou, Chen Gong, Zhengyuan Xu: Signal Detection Under Short-Interval Sampling of Continuous Waveforms for Optical Wireless Scattering Communication. IEEE Trans. Wirel. Commun. 17(5): 3431-3443 (2018)
- [c8] Difan Zou, Pan Xu, Quanquan Gu: Stochastic Variance-Reduced Hamilton Monte Carlo Methods. ICML 2018: 6023-6032
- [c7] Pan Xu, Jinghui Chen, Difan Zou, Quanquan Gu: Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization. NeurIPS 2018: 3126-3137
- [c6] Difan Zou, Pan Xu, Quanquan Gu: Subsampled Stochastic Variance-Reduced Gradient Langevin Dynamics. UAI 2018: 508-518
- [i5] Difan Zou, Pan Xu, Quanquan Gu: Stochastic Variance-Reduced Hamilton Monte Carlo Methods. CoRR abs/1802.04791 (2018)
- [i4] Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu: Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks. CoRR abs/1811.08888 (2018)
- 2017
- [c5] Difan Zou, Chen Gong, Kun Wang, Zhengyuan Xu: Characterization of a Practical Photon Counting Receiver in Optical Scattering Communication. GLOBECOM 2017: 1-6
- [i3] Difan Zou, Chen Gong, Zhengyuan Xu: Analysis on Practical Photon Counting Receiver in Optical Scattering Communication. CoRR abs/1702.06633 (2017)
- [i2] Yaodong Yu, Difan Zou, Quanquan Gu: Saving Gradient and Negative Curvature Computations: Finding Local Minima More Efficiently. CoRR abs/1712.03950 (2017)
- 2016
- [c4] Difan Zou, Chen Gong, Zhengyuan Xu: Optical wireless scattering communication system with a non-ideal photon-counting receiver. GlobalSIP 2016: 11-15
- [c3] Difan Zou, Zhengyuan Xu, Chen Gong: Performance of non-line-of-sight ultraviolet scattering communication under different altitudes. ICCC 2016: 1-5
- [c2] Kun Wang, Chen Gong, Difan Zou, Zhengyuan Xu: Turbulence channel modeling and non-parametric estimation for optical wireless scattering communication. ICCS 2016: 1-6
- [i1] Difan Zou, Chen Gong, Zhengyuan Xu: Signal Detection under Short-Interval Sampling of Continuous Waveforms for Optical Wireless Scattering Communication. CoRR abs/1612.04058 (2016)
- 2014
- [c1] Difan Zou, Shang-Bin Li, Zhengyuan Xu: Improving the NLOS optical scattering channel via beam reshaping. ACSSC 2014: 1372-1375