- demonstration, June 2024
Demo: Privacy-Preserving Decentralized Machine Learning Framework for Clustered Resource-Constrained Devices
MOBISYS '24: Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services, June 2024, Pages 612–613, https://doi.org/10.1145/3643832.3661843
We present a secure decentralized learning framework suitable for resource-constrained devices within a cluster environment. Our approach focuses on enhancing privacy preservation during model aggregation by utilizing Differential Privacy. This technique ...
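The abstract is truncated, but the mechanism it names, Differential Privacy applied at the aggregation step, typically means clipping each client's contribution and adding calibrated noise to the average. A minimal sketch of that pattern follows; the function name, clipping bound, and noise scale are illustrative assumptions, not taken from the paper:

```python
import random

def dp_aggregate(models, clip=1.0, sigma=0.5, rng=None):
    """Average client model vectors with a simple Gaussian mechanism:
    clip each vector's L2 norm to bound its influence, average, then
    add noise proportional to the per-client sensitivity (clip / n)."""
    rng = rng or random.Random(0)
    dim = len(models[0])
    clipped = []
    for m in models:
        norm = sum(x * x for x in m) ** 0.5
        scale = min(1.0, clip / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in m])
    avg = [sum(m[i] for m in clipped) / len(models) for i in range(dim)]
    # Noise stddev scales with the clipping bound (the sensitivity of
    # one client's contribution to the average).
    return [a + rng.gauss(0.0, sigma * clip / len(models)) for a in avg]

# With sigma=0 and a loose clip this reduces to a plain average.
agg = dp_aggregate([[1.0, 2.0], [3.0, 4.0]], clip=10.0, sigma=0.0)
```

The privacy/utility trade-off lives entirely in `clip` and `sigma`: tighter clipping and larger noise give stronger guarantees but a noisier aggregate.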
- research-article, May 2024
Multi-Confederated Learning: Inclusive Non-IID Data handling with Decentralized Federated Learning
SAC '24: Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing, April 2024, Pages 1587–1595, https://doi.org/10.1145/3605098.3636000
Federated Learning (FL) has emerged as a prominent privacy-preserving technique for enabling use cases like confidential clinical machine learning. FL operates by aggregating models trained by remote devices that own the data. Thus, FL enables the ...
- research-article, May 2024
Accelerating the Decentralized Federated Learning via Manipulating Edges
WWW '24: Proceedings of the ACM on Web Conference 2024, May 2024, Pages 2945–2954, https://doi.org/10.1145/3589334.3645509
Federated learning enables collaborative AI training across organizations without compromising data privacy. Decentralized federated learning (DFL) improves on this by offering enhanced reliability and security through peer-to-peer (P2P) model sharing. ...
- research-article, April 2024
FedGrid: Federated Model Aggregation via Grid Shifting
UCC '23: Proceedings of the IEEE/ACM 16th International Conference on Utility and Cloud Computing, December 2023, Article No.: 53, Pages 1–6, https://doi.org/10.1145/3603166.3632567
Federated Learning is a machine learning technique where independent devices (clients) cooperatively train a machine learning model by working on decentralized training data. A fundamental challenge in federated learning is client model aggregation. ...
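The snippet does not describe FedGrid's grid-shifting scheme itself, but the aggregation baseline such work builds on is weighted federated averaging (FedAvg): the server averages client model vectors, weighting each by its local dataset size. A minimal sketch (function and variable names are illustrative):

```python
def fed_avg(client_weights, client_sizes):
    """Weighted average of client model vectors, with weights
    proportional to each client's local dataset size, as in
    standard FedAvg aggregation."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Client 2 holds 3x the data of client 1, so it contributes 3x the weight.
global_model = fed_avg([[0.0, 1.0], [2.0, 3.0]], [1, 3])  # → [1.5, 2.5]
```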
- research-article, March 2024
Beyond spectral gap: the role of the topology in decentralized learning
The Journal of Machine Learning Research (JMLR), Volume 24, Issue 1, Article No.: 355, Pages 17074–17104
In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model: more accurate gradients allow them to use larger learning rates and optimize faster. In the decentralized setting, in which workers ...
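The spectral gap this title refers to is a property of the gossip (mixing) matrix of the communication topology: the gap between its largest eigenvalue (always 1 for a doubly stochastic matrix) and the magnitude of the second largest, which governs how fast decentralized averaging mixes. A small illustration for a ring topology, assuming the standard uniform-weight gossip matrix (the specific weights are an assumption, not from the paper):

```python
import numpy as np

def ring_gossip_matrix(n):
    """Doubly stochastic mixing matrix for an n-node ring: each worker
    averages with its two neighbours and itself, weight 1/3 each."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1.0 / 3.0
    return W

def spectral_gap(W):
    """1 - |second largest eigenvalue|: a larger gap means faster mixing."""
    eig = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - eig[1]

# A larger ring mixes more slowly: its spectral gap shrinks toward 0.
gap_small, gap_large = spectral_gap(ring_gossip_matrix(4)), spectral_gap(ring_gossip_matrix(16))
```

The paper's point, per the title, is that the spectral gap alone does not fully determine learning behaviour; this sketch only shows the classical quantity being measured.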
- research-article, October 2023
Communication Resources Limited Decentralized Learning with Privacy Guarantee through Over-the-Air Computation
MobiHoc '23: Proceedings of the Twenty-fourth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, October 2023, Pages 201–210, https://doi.org/10.1145/3565287.3610268
In this paper, we propose a novel decentralized learning algorithm, namely DLLR-OA, for resource-constrained over-the-air computation with a formal privacy guarantee. Theoretically, we characterize how the limited resources induced model-components ...
- research-article, October 2023
PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities
MobiHoc '23: Proceedings of the Twenty-fourth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, October 2023, Pages 191–200, https://doi.org/10.1145/3565287.3610267
Recently, min-max optimization problems have received increasing attention due to their wide range of applications in machine learning (ML). However, most existing min-max solution techniques are either single-machine or distributed algorithms ...
- research-article, June 2023
The effect of network topologies on fully decentralized learning: a preliminary investigation
NetAISys '23: Proceedings of the 1st International Workshop on Networked AI Systems, June 2023, Article No.: 5, Pages 1–6, https://doi.org/10.1145/3597062.3597280
In a decentralized machine learning system, data is typically partitioned among multiple devices or nodes, each of which trains a local model using its own data. These local models are then shared and combined to create a global model that can make ...
- research-article, June 2023
Incentive-Aware Decentralized Data Collaboration
Proceedings of the ACM on Management of Data (PACMMOD), Volume 1, Issue 2, Article No.: 158, Pages 1–27, https://doi.org/10.1145/3589303
Data collaboration enables multiple parties to pool data for deriving meaningful data insights. However, data misuse and unlawful data collection have led to precautionary measures being imposed by individual organizations to guard against data leakage ...
- research-article, May 2023
A First Look at the Impact of Distillation Hyper-Parameters in Federated Knowledge Distillation
EuroMLSys '23: Proceedings of the 3rd Workshop on Machine Learning and Systems, May 2023, Pages 123–130, https://doi.org/10.1145/3578356.3592590
Knowledge distillation is well known as a technique for model compression. It has recently been adopted in the distributed training domain, such as federated learning, as a way to transfer knowledge between already pre-trained models. Knowledge ...
- research-article, May 2023
Decentralized Learning Made Easy with DecentralizePy
EuroMLSys '23: Proceedings of the 3rd Workshop on Machine Learning and Systems, May 2023, Pages 34–41, https://doi.org/10.1145/3578356.3592587
Decentralized learning (DL) has gained prominence for its potential benefits in terms of scalability, privacy, and fault tolerance. It consists of many nodes that coordinate without a central server and exchange millions of parameters in the ...
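The core coordination pattern this abstract describes, nodes exchanging parameters with neighbours rather than with a central server, can be sketched as one synchronous gossip-averaging round. This is a generic illustration of the pattern, not DecentralizePy's API; all names here are illustrative:

```python
def gossip_round(params, neighbors):
    """One synchronous round of decentralized averaging: each node
    replaces its parameter with the mean over itself and its neighbours.
    `params` is a list of per-node values; `neighbors` maps node id to
    a list of neighbour ids. No central server is involved."""
    new = []
    for i, p in enumerate(params):
        group = [p] + [params[j] for j in neighbors[i]]
        new.append(sum(group) / len(group))
    return new

# A ring of 4 nodes: repeated rounds drive every node toward the
# global mean (4.0 here) using only neighbour-to-neighbour exchange.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
x = [0.0, 4.0, 8.0, 4.0]
for _ in range(30):
    x = gossip_round(x, ring)
```

In a real DL system each "parameter" is a multi-million-entry model vector and the exchange is asynchronous over the network, but the averaging structure is the same.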
- Article, March 2023
Decentralized Adaptive Clustering of Deep Nets is Beneficial for Client Collaboration
We study the problem of training personalized deep learning models in a decentralized peer-to-peer setting, focusing on the setting where data distributions differ between the clients and where different clients have different local learning ...
- research-article, January 2022
Communication-constrained distributed quantile regression with optimal statistical guarantees
The Journal of Machine Learning Research (JMLR), Volume 23, Issue 1, Article No.: 272, Pages 12456–12516
We address the problem of how to achieve optimal inference in distributed quantile regression without stringent scaling conditions. This is challenging due to the non-smooth nature of the quantile regression (QR) loss function, which invalidates the use ...
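The non-smoothness the abstract points to is the quantile ("pinball") loss itself: it is piecewise linear with a kink at zero residual, so standard smooth-optimization machinery does not directly apply. A minimal sketch of the loss (the function name is illustrative):

```python
def pinball_loss(residual, tau):
    """Quantile regression loss for a residual y - y_hat at target
    quantile tau in (0, 1). Asymmetric absolute error: the kink at
    residual == 0 is what makes the loss non-smooth."""
    return tau * residual if residual >= 0 else (tau - 1) * residual

# At tau = 0.9, under-prediction (positive residual) is penalised
# 9x more heavily than over-prediction of the same magnitude.
```

Minimizing the expected pinball loss over constant predictions recovers the tau-quantile of the data, which is why this loss defines quantile regression.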
- research-article, July 2021
GT-STORM: Taming Sample, Communication, and Memory Complexities in Decentralized Non-Convex Learning
MobiHoc '21: Proceedings of the Twenty-second International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, July 2021, Pages 271–280, https://doi.org/10.1145/3466772.3467056
Decentralized nonconvex optimization has received increasing attention in recent years in machine learning due to its advantages in system robustness, data privacy, and implementation simplicity. However, three fundamental challenges in designing ...
- research-article, January 2021
Model linkage selection for cooperative learning
The Journal of Machine Learning Research (JMLR), Volume 22, Issue 1, Article No.: 256, Pages 11601–11644
We consider the distributed learning setting where each agent or learner holds a specific parametric model and a data source. The goal is to integrate information across a set of learners and data sources to enhance the prediction accuracy of a given ...
- article, April 2018
Crowdsourcing Exploration
Management Science (MANS), Volume 64, Issue 4, April 2018, Pages 1727–1746, https://doi.org/10.1287/mnsc.2016.2697
Motivated by the proliferation of online platforms that collect and disseminate consumers' experiences with alternative substitutable products/services, we investigate the problem of optimal information provision when the goal is to maximize aggregate ...