- research-article, August 2024
Nebula: An Edge-Cloud Collaborative Learning Framework for Dynamic Edge Environments
ICPP '24: Proceedings of the 53rd International Conference on Parallel Processing, Pages 782–791, https://doi.org/10.1145/3673038.3673120
To bring the great power of modern DNNs into mobile computing and distributed systems, current practices primarily employ one of the two learning paradigms: cloud-based learning or on-device learning. Despite their distinct advantages, neither of these ...
- research-article, July 2024
Multi-agent Continuous Control with Generative Flow Networks
Abstract: Generative Flow Networks (GFlowNets) aim to generate diverse trajectories from a distribution in which the probabilities of the trajectories' final states are proportional to the reward, serving as a powerful alternative to reinforcement learning for ...
- research-article, March 2024
ODE: An Online Data Selection Framework for Federated Learning With Limited Storage
IEEE/ACM Transactions on Networking (TON), Volume 32, Issue 4, Pages 2794–2809, https://doi.org/10.1109/TNET.2024.3365534
Machine learning (ML) models have been deployed in mobile networks to deal with massive data from different layers to enable automated network management. To overcome high communication cost and severe privacy concerns of centralized ML, federated ...
- research-article, January 2024
Lattice based distributed threshold additive homomorphic encryption with application in federated learning
Abstract: In federated learning (FL), a parameter server needs to aggregate user gradients and a user needs to protect the value of their gradients. Among all the possible solutions to the problem, those based on additive homomorphic encryption (...
Highlights:
- A communication-efficient DTAHE scheme for groups with more than 26 entities.
- A ...
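The additive-homomorphic property this line of work relies on can be illustrated with a toy Paillier cryptosystem: ciphertexts multiply, plaintexts add, so a server can aggregate gradients it cannot read. This is a generic sketch of additive HE, not the paper's lattice-based DTAHE scheme, and the toy key sizes here are nowhere near secure.

```python
import math
import random

def keygen(p=1_000_003, q=1_000_033):
    """Toy Paillier key generation with tiny fixed primes (insecure, demo only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid because we use g = n + 1
    return (n,), (n, lam, mu)         # public key, private key

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)        # blinding factor, coprime to n w.h.p.
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    x = pow(c, lam, n * n)
    return (x - 1) // n * mu % n      # L(x) = (x - 1) / n, then unblind

def add(pk, c1, c2):
    (n,) = pk
    return c1 * c2 % (n * n)          # ciphertext product = plaintext sum

# The server sums encrypted (quantized) gradients without seeing any of them.
pk, sk = keygen()
agg = encrypt(pk, 0)
for g in [3, 5, 7]:
    agg = add(pk, agg, encrypt(pk, g))
print(decrypt(sk, agg))               # 15 = 3 + 5 + 7
```

Threshold variants additionally split the private key across users so that any sufficiently large subset can decrypt, which is what gives dropout resiliency.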
- research-article, November 2023
Source-free and black-box domain adaptation via distributionally adversarial training
Highlights:
- A practical new setting for domain adaptation where source models cannot be transferred to the target domain.
- A domain adaptation framework based on black-box probing that blocks the risk of privacy leakage in practical applications.
Source-free unsupervised domain adaptation is a class of practical deep learning methods that generalize to the target domain without transferring data from the source domain. However, existing source-free domain adaptation methods rely on source ...
- research-article, October 2023
Generalized Universal Domain Adaptation with Generative Flow Networks
MM '23: Proceedings of the 31st ACM International Conference on Multimedia, Pages 8304–8315, https://doi.org/10.1145/3581783.3612225
We introduce a new problem in unsupervised domain adaptation, termed Generalized Universal Domain Adaptation (GUDA), which aims to achieve precise prediction of all target labels including unknown categories. GUDA bridges the gap between label ...
- research-article, August 2023
Generative flow networks for precise reward-oriented active learning on graphs
IJCAI '23: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Article No.: 438, Pages 3939–3947, https://doi.org/10.24963/ijcai.2023/438
Many score-based active learning methods have been successfully applied to graph-structured data, aiming to reduce the number of labels and achieve better performance of graph neural networks based on predefined score functions. However, these algorithms ...
- research-article, September 2023
Learning From Your Neighbours: Mobility-Driven Device-Edge-Cloud Federated Learning
ICPP '23: Proceedings of the 52nd International Conference on Parallel Processing, Pages 462–471, https://doi.org/10.1145/3605573.3605643
Federated learning (FL) in large-scale wireless networks is implemented in a hierarchical way by introducing edge servers as relays between the cloud server and devices, where devices are dispersed within multiple clusters coordinated by edges. However, ...
- research-article, July 2023
A constrained Bayesian approach to out-of-distribution prediction
UAI '23: Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, Article No.: 210, Pages 2248–2258
Consider the problem of out-of-distribution prediction given data from multiple environments. While a sufficiently diverse collection of training environments will facilitate the identification of an invariant predictor, with an optimal generalization ...
- research-article, April 2023
To Store or Not? Online Data Selection for Federated Learning with Limited Storage
WWW '23: Proceedings of the ACM Web Conference 2023, Pages 3044–3055, https://doi.org/10.1145/3543507.3583426
Machine learning models have been deployed in mobile networks to deal with massive data from different layers to enable automated network management and intelligence on devices. To overcome high communication cost and severe privacy concerns of ...
- research-article, April 2024
Asymmetric temperature scaling makes larger networks teach well again
NIPS '22: Proceedings of the 36th International Conference on Neural Information Processing Systems, Article No.: 277, Pages 3830–3842
Knowledge Distillation (KD) aims at transferring the knowledge of a well-performed neural network (the teacher) to a weaker one (the student). A peculiar phenomenon is that a more accurate model doesn't necessarily teach better, and temperature ...
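The temperature mechanism this entry studies can be sketched with ordinary single-temperature distillation: dividing logits by a temperature before the softmax flattens the teacher's distribution so wrong-class probabilities carry signal. The asymmetric variant proposed in the paper (different treatment for correct and wrong classes) is not reproduced here; this is only the standard mechanism it builds on.

```python
import math

def softmax(logits, t=1.0):
    """Softmax with temperature t; larger t flattens the distribution."""
    z = [l / t for l in logits]
    m = max(z)                                   # subtract max for stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, t):
    """KL(teacher_t || student_t), the soft-label distillation loss.
    Standard KD multiplies by t**2 to keep gradient magnitude roughly constant."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return t * t * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [8.0, 2.0, 1.0]                        # a confident teacher
student = [3.0, 2.5, 1.0]
print(softmax(teacher, t=1.0))                   # sharp: nearly all mass on class 0
print(softmax(teacher, t=4.0))                   # softened: 'dark knowledge' appears
print(kd_loss(teacher, student, t=4.0))
```

Note how raising t shifts probability mass onto the non-target classes; the paper's observation is that for very accurate teachers a single shared temperature fails to recover that signal.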
- research-article, April 2024
Model-based offline reinforcement learning with pessimism-modulated dynamics belief
NIPS '22: Proceedings of the 36th International Conference on Neural Information Processing Systems, Article No.: 33, Pages 449–461
Model-based offline reinforcement learning (RL) aims to find a highly rewarding policy by leveraging a previously collected static dataset and a dynamics model. While the dynamics model is learned by reusing the static dataset, its generalization ...
- Article, October 2022
A Distributed Threshold Additive Homomorphic Encryption for Federated Learning with Dropout Resiliency Based on Lattice
Abstract: In federated learning, a parameter server needs to aggregate user gradients and a user needs to protect the privacy of their individual gradients. Among all the possible solutions, additive homomorphic encryption is a natural choice. As users may drop out ...
- research-article, August 2022
S2RL: Do We Really Need to Perceive All States in Deep Multi-Agent Reinforcement Learning?
KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 1183–1191, https://doi.org/10.1145/3534678.3539481
Collaborative multi-agent reinforcement learning (MARL) has been widely used in many practical applications, where each agent makes a decision based on its own observation. Most mainstream methods treat each local observation as an entirety when modeling ...
- research-article, April 2022
Exploring uncertainty in regression neural networks for construction of prediction intervals
Neurocomputing (NEUROC), Volume 481, Issue C, Pages 249–257, https://doi.org/10.1016/j.neucom.2022.01.084
Abstract: Deep learning has achieved impressive performance on many tasks in recent years. However, it has been found that point estimates alone are not enough for deep neural networks. For high-risk tasks, we need to assess ...
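A common baseline for the prediction intervals this entry concerns is an ensemble: fit several models on bootstrap resamples and turn the spread of their predictions into an interval. The sketch below uses a trivial linear regressor in place of a neural network and is a generic illustration of the idea, not the specific method proposed in the paper.

```python
import random
import statistics

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b on 1-D data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def ensemble_interval(xs, ys, x_new, members=50, z=1.96):
    """Bootstrap an ensemble of models; return (mean, lower, upper) at x_new."""
    preds = []
    for _ in range(members):
        idx = [random.randrange(len(xs)) for _ in xs]      # bootstrap resample
        a, b = fit_linear([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a * x_new + b)
    mu = statistics.mean(preds)
    sd = statistics.stdev(preds)                           # model (epistemic) spread
    return mu, mu - z * sd, mu + z * sd

random.seed(0)
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + random.gauss(0, 0.3) for x in xs]        # noisy line y = 2x + 1
mu, lo, hi = ensemble_interval(xs, ys, x_new=2.0)
print(f"prediction {mu:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

This interval captures only model uncertainty; a full predictive interval would also add the observation-noise variance, which is one of the distinctions such papers typically explore.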
- research-article, December 2021
Convergence Analysis and Design Principle for Federated Learning in Wireless Network
2021 IEEE Global Communications Conference (GLOBECOM), Pages 01–06, https://doi.org/10.1109/GLOBECOM46510.2021.9685504
Recently, federated learning (FL) has been treated as an important and promising learning scheme in IoT, enabling devices to jointly learn a model without sharing their data sets. Different from centralized training on some collected data sets, FL ...
- Article, September 2021
FedPHP: Federated Personalization with Inherited Private Models
Machine Learning and Knowledge Discovery in Databases. Research Track, Pages 587–602, https://doi.org/10.1007/978-3-030-86486-6_36
Abstract: Federated Learning (FL) generates a single global model via collaboration among distributed clients without leaking data privacy. However, the statistical heterogeneity of non-iid data across clients poses a fundamental challenge to the model ...
- research-article, January 2021
Bidirectional adversarial training for semi-supervised domain adaptation
IJCAI'20: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Article No.: 130, Pages 934–940
Semi-supervised domain adaptation (SSDA) is a branch of machine learning in which scarce labeled target examples are available, in contrast to unsupervised domain adaptation. To make effective use of these additional data so as to bridge the domain gap, ...
- research-article, December 2017
Demand-Side Response of Thermostatically Controlled Loads Using Stackelberg Game Method
ICACR 2017: Proceedings of the 2017 International Conference on Automation, Control and Robots, Pages 20–24, https://doi.org/10.1145/3175516.3175529
This paper focuses on the demand-side response of thermostatically controlled loads (TCLs) using a Stackelberg game method. First, we build an electricity market trading process to find the relationship between real-time pricing (RTP) and energy demand of ...
- research-article, December 2016
Efficient and dynamic double auctions for resource allocation
2016 IEEE 55th Conference on Decision and Control (CDC), Pages 6062–6067, https://doi.org/10.1109/CDC.2016.7799200
We formulate a class of divisible resource allocation problems among a collection of suppliers and demanders as double-sided auction games. The auction mechanism adopted in this paper inherits some properties of the VCG style auction mechanism, like the ...
2016 IEEE 55th Conference on Decision and Control (CDC)Pages 6062–6067https://doi.org/10.1109/CDC.2016.7799200We formulate a class of divisible resource allocation problems among a collection of suppliers and demanders as double-sided auction games. The auction mechanism adopted in this paper inherits some properties of the VCG style auction mechanism, like the ...