
Generalization Bounds for Noisy Iterative Algorithms Using Properties of Additive Noise Channels

Hao Wang, Rui Gao, Flavio P. Calmon; 24(26):1−43, 2023.

Abstract

Machine learning models trained by different optimization algorithms under different data distributions can exhibit distinct generalization behaviors. In this paper, we analyze the generalization of models trained by noisy iterative algorithms. We derive distribution-dependent generalization bounds by connecting noisy iterative algorithms to additive noise channels found in communication and information theory. Our generalization bounds shed light on several applications, including differentially private stochastic gradient descent (DP-SGD), federated learning, and stochastic gradient Langevin dynamics (SGLD). We demonstrate our bounds through numerical experiments, showing that they can help explain recent empirical observations about the generalization behavior of neural networks.
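As a point of reference for the setting studied here, noisy iterative algorithms such as SGLD and DP-SGD share a common template: a (stochastic) gradient step followed by additive Gaussian noise. The sketch below is illustrative only; the learning rate, noise scale, and quadratic loss are placeholder assumptions, not the paper's specification.

    import numpy as np

    def noisy_iterative_update(w, grad_fn, lr=0.01, noise_std=0.1, rng=None):
        """One step of a generic noisy iterative algorithm (SGLD/DP-SGD style):
        a gradient step perturbed by isotropic Gaussian noise.
        lr, noise_std, and grad_fn are placeholder choices for illustration."""
        rng = np.random.default_rng() if rng is None else rng
        g = grad_fn(w)                                      # (stochastic) gradient at w
        noise = rng.normal(0.0, noise_std, size=w.shape)    # additive Gaussian noise
        return w - lr * g + noise

    # Example: run noisy updates on a simple quadratic loss ||w||^2.
    w = np.zeros(5)
    grad_fn = lambda w: 2.0 * w
    for _ in range(100):
        w = noisy_iterative_update(w, grad_fn)

It is this additive Gaussian perturbation of each iterate that the paper relates to additive noise channels from communication and information theory in order to derive its generalization bounds.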

© JMLR 2023.