Volume 112, Issue 8, August 2023
research-article
FAC-fed: Federated adaptation for fairness and concept drift aware stream classification
Abstract

Federated learning is an emerging collaborative paradigm of machine learning involving distributed and heterogeneous clients. Enormous collections of continuously arriving heterogeneous data residing on distributed clients require ...
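For readers unfamiliar with the federated setting the abstract describes, the sketch below runs plain federated averaging (FedAvg) on a toy regression task with heterogeneous clients. It is a generic illustration only, not the FAC-fed algorithm; the client construction and helper names are invented.

# Minimal federated averaging (FedAvg) sketch on a toy linear-regression task.
# Illustrates the general federated setting, not the FAC-fed method itself;
# client data, model, and helper names are invented for this example.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n, shift):
    # Heterogeneous clients: each draws inputs from a shifted distribution.
    X = rng.normal(loc=shift, size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(50, s) for s in (-1.0, 0.0, 1.0)]

def local_sgd(w, X, y, lr=0.05, steps=20):
    # Each client refines the global model on its own data only.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for round_ in range(30):
    local = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # The server aggregates client models weighted by local dataset size.
    w_global = np.average(local, axis=0, weights=sizes)

print("recovered weights:", w_global)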

research-article
Decentralized Bayesian learning with Metropolis-adjusted Hamiltonian Monte Carlo
Abstract

Federated learning performed by decentralized networks of agents is becoming increasingly important with the prevalence of embedded software on autonomous devices. Bayesian approaches to learning benefit from offering more information as to the ...
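The base sampler named in the title is Metropolis-adjusted Hamiltonian Monte Carlo. The sketch below runs a single-chain HMC step (leapfrog integration plus a Metropolis accept/reject) on a standard 2-D Gaussian target; the decentralized, multi-agent aspects of the paper are omitted.

# Metropolis-adjusted Hamiltonian Monte Carlo on a standard 2-D Gaussian.
import numpy as np

rng = np.random.default_rng(1)

def U(x):            # potential energy = -log target (up to a constant)
    return 0.5 * x @ x

def grad_U(x):
    return x

def hmc_step(x, eps=0.2, n_leapfrog=10):
    p = rng.normal(size=x.shape)            # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(x_new)      # leapfrog integration
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new
        p_new -= eps * grad_U(x_new)
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(x_new)
    # Metropolis correction: accept or reject based on the change in total energy.
    h_old = U(x) + 0.5 * p @ p
    h_new = U(x_new) + 0.5 * p_new @ p_new
    return x_new if rng.random() < np.exp(h_old - h_new) else x

samples, x = [], np.zeros(2)
for _ in range(2000):
    x = hmc_step(x)
    samples.append(x)
samples = np.array(samples)
print("sample mean:", samples.mean(axis=0), "sample variance:", samples.var(axis=0))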

research-article
Differentiable learning of matricized DNFs and its application to Boolean networks
Abstract

Boolean networks (BNs) are well-studied models of genomic regulation in biology where nodes are genes and their state transition is controlled by Boolean functions. We propose to learn Boolean functions as Boolean formulas in disjunctive normal ...
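To make "learning Boolean functions differentiably" concrete, the snippet below evaluates a DNF formula softly, with clause/literal membership matrices and fuzzy AND/OR. It is a generic illustration of the idea, not the paper's exact matricized construction.

# Soft, differentiable evaluation of a DNF formula in NumPy.
import numpy as np

def soft_dnf(x, W_pos, W_neg):
    """x: (n_vars,) truth values in [0, 1];
    W_pos/W_neg: (n_clauses, n_vars) soft membership of positive/negated literals."""
    lit_pos = 1.0 - W_pos * (1.0 - x)              # ignored literal -> factor 1
    lit_neg = 1.0 - W_neg * x
    clauses = np.prod(lit_pos * lit_neg, axis=1)   # fuzzy AND within each clause
    return 1.0 - np.prod(1.0 - clauses)            # fuzzy OR across clauses

# Encode (x0 AND NOT x1) OR (x2) with hard memberships and check a few inputs.
W_pos = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
W_neg = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
for x in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    print(x, "->", soft_dnf(np.array(x, dtype=float), W_pos, W_neg))

Because the membership matrices take continuous values during training, gradients can flow through the fuzzy AND/OR and be discretized to a Boolean formula afterwards.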

research-article
Mirror variational transport: a particle-based algorithm for distributional optimization on constrained domains
Abstract

We consider the optimization problem of minimizing an objective functional that admits a variational form and is defined over probability distributions on a constrained domain, which poses challenges to both theoretical analysis and algorithmic ...
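The "mirror" ingredient in the title refers to the classical mirror-map device for optimizing under constraints. The sketch below runs entropic mirror descent on the probability simplex; it shows only that mechanism, not the particle-based variational-transport algorithm of the paper.

# Entropic mirror descent on the probability simplex.
import numpy as np

target = np.array([0.6, 0.3, 0.1])           # a point on the simplex

def grad(x):                                  # gradient of f(x) = 0.5 * ||x - target||^2
    return x - target

x = np.full(3, 1.0 / 3.0)                     # start at the uniform distribution
for t in range(200):
    # Exponentiated-gradient update: the negative-entropy mirror map keeps the
    # iterates strictly inside the simplex without any explicit projection.
    x = x * np.exp(-0.5 * grad(x))
    x = x / x.sum()

print("final iterate:", x)                    # approaches the target distribution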

research-article
Weighted neural tangent kernel: a generalized and improved network-induced kernel
Abstract

The neural tangent kernel (NTK) has recently attracted intense study, as it describes the evolution of an over-parameterized neural network (NN) trained by gradient descent. However, it is now well-known that gradient descent is not always a good ...
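As background for the kernel the abstract generalizes, the sketch below computes the empirical NTK of a small one-hidden-layer ReLU network from explicit parameter gradients. This is the standard (unweighted) NTK; the paper's weighted variant is not reproduced here.

# Empirical neural tangent kernel of a one-hidden-layer ReLU network.
import numpy as np

rng = np.random.default_rng(2)
d, m = 3, 512                      # input dimension, hidden width
W = rng.normal(size=(m, d))
a = rng.normal(size=m)

def param_grad(x):
    """Flattened gradient of f(x) = a^T relu(W x) / sqrt(m) w.r.t. (W, a)."""
    pre = W @ x
    act = np.maximum(pre, 0.0)
    d_a = act / np.sqrt(m)
    d_W = np.outer(a * (pre > 0), x) / np.sqrt(m)
    return np.concatenate([d_W.ravel(), d_a])

def empirical_ntk(X):
    G = np.stack([param_grad(x) for x in X])   # one gradient row per input
    return G @ G.T                              # K[i, j] = <grad f(x_i), grad f(x_j)>

X = rng.normal(size=(4, d))
print(np.round(empirical_ntk(X), 3))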

research-article
Generating probabilistic safety guarantees for neural network controllers
Abstract

Neural networks serve as effective controllers in a variety of complex settings due to their ability to represent expressive policies. The complex nature of neural networks, however, makes their output difficult to verify and predict, which limits ...
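A minimal way to attach a probabilistic statement to a black-box controller is Monte Carlo estimation with a concentration bound, sketched below: sample operating states, check a safety predicate on the network's output, and report a Hoeffding-style upper confidence bound on the violation rate. The network, the safety predicate, and the operating region are all hypothetical, and this baseline is for intuition only, not the paper's verification procedure.

# Monte Carlo safety check for a small neural-network controller.
import numpy as np

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def controller(x):                         # tiny ReLU policy network
    return (W2 @ np.maximum(W1 @ x + b1, 0.0) + b2)[0]

def is_safe(u, limit=5.0):                 # hypothetical actuation limit
    return abs(u) <= limit

n = 20000
states = rng.uniform(-1.0, 1.0, size=(n, 2))       # operating region of interest
violations = sum(not is_safe(controller(s)) for s in states)
p_hat = violations / n
# Hoeffding bound: with probability >= 1 - delta the true violation rate is below this.
delta = 1e-3
p_upper = p_hat + np.sqrt(np.log(1.0 / delta) / (2.0 * n))
print(f"estimated violation rate {p_hat:.4f}, upper confidence bound {p_upper:.4f}")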

research-article
Public Access
Diametrical Risk Minimization: theory and computations
Abstract

The theoretical and empirical performance of Empirical Risk Minimization (ERM) often suffers when loss functions are poorly behaved with large Lipschitz moduli and spurious sharp minimizers. We propose and analyze a counterpart to ERM called ...
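A rough sketch of the diametrical idea as stated in the abstract: instead of descending on the empirical risk at the current weights, descend on the worst risk found in a small neighborhood of them. Here the supremum is crudely approximated by a handful of random perturbations; the radius, sampling scheme, and step sizes are illustrative, not the paper's.

# Neighborhood-worst-case risk minimization on a toy logistic-loss problem.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) > 0).astype(float) * 2 - 1    # labels in {-1, +1}

def risk(w):
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins)))             # logistic loss

def risk_grad(w):
    margins = y * (X @ w)
    coef = -y / (1.0 + np.exp(margins))
    return X.T @ coef / len(y)

def drm_step(w, radius=0.3, n_dirs=16, lr=0.1):
    # Sample perturbations on a sphere of the given radius and take a
    # gradient step at the worst-case (highest-risk) perturbed point.
    dirs = rng.normal(size=(n_dirs, w.size))
    dirs = radius * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    worst = max((w + u for u in dirs), key=risk)
    return w - lr * risk_grad(worst)

w = np.zeros(5)
for _ in range(200):
    w = drm_step(w)
print("risk at final iterate:", risk(w))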

research-article
MapFlow: latent transition via normalizing flow for unsupervised domain adaptation
Abstract

Unsupervised domain adaptation (UDA) aims at enhancing the generalizability of the classification model learned from the labeled source domain to an unlabeled target domain. An established approach to UDA is to constrain the classifier on an ...
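For readers new to normalizing flows, the snippet below implements a single affine coupling layer (RealNVP-style): an invertible map with a cheap log-determinant. It illustrates the building block only; the MapFlow architecture and its use for domain adaptation are not reproduced.

# Minimal affine coupling layer with exact inverse and log-determinant.
import numpy as np

rng = np.random.default_rng(5)
D = 4
# Fixed random "conditioner" matrices standing in for a learned network.
A_s = 0.1 * rng.normal(size=(D // 2, D // 2))
A_t = 0.1 * rng.normal(size=(D // 2, D // 2))

def coupling_forward(x):
    x1, x2 = x[: D // 2], x[D // 2 :]
    s, t = np.tanh(A_s @ x1), A_t @ x1       # scale and shift depend on x1 only
    y2 = x2 * np.exp(s) + t
    log_det = s.sum()                        # log|det Jacobian| of the transform
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y):
    y1, y2 = y[: D // 2], y[D // 2 :]
    s, t = np.tanh(A_s @ y1), A_t @ y1
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2])

x = rng.normal(size=D)
y, log_det = coupling_forward(x)
print("reconstruction error:", np.max(np.abs(coupling_inverse(y) - x)))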

research-article
Unified SVM algorithm based on LS-DC loss
Abstract

Over the past two decades, support vector machines (SVMs) have become a popular supervised machine learning model, and many distinct algorithms have been designed separately based on different KKT conditions of the SVM model for classification/...
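For context on the common objective that these loss-based SVM formulations share, the sketch below trains a linear SVM with the classical hinge loss by subgradient descent (a Pegasos-style step size). The LS-DC loss and the unified algorithm of the paper are not implemented here.

# Subgradient descent on the hinge-loss linear SVM (toy data).
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)           # linearly separable labels

lam = 0.01
w = np.zeros(2)
for t in range(1, 2001):
    margins = y * (X @ w)
    # Subgradient of (lam/2)||w||^2 + mean(hinge): only margin violators contribute.
    mask = margins < 1.0
    grad = lam * w - (X[mask].T @ y[mask]) / len(y)
    w -= grad / (lam * t)                                 # Pegasos-style step size
print("training accuracy:", np.mean(np.sign(X @ w) == y))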

research-article
Lagrangian objective function leads to improved unforeseen attack generalization
Abstract

Recent improvements in deep learning models and their practical applications have raised concerns about the robustness of these models against adversarial examples. Adversarial training (AT) has been shown to be effective in reaching a robust model ...
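The sketch below shows the adversarial-training loop the abstract refers to, in its simplest FGSM form on logistic regression: each update first perturbs the inputs in the gradient-sign direction, then trains on the perturbed batch. It is a generic baseline for intuition; the Lagrangian objective proposed in the paper is not shown.

# FGSM-style adversarial training for logistic regression on toy data.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 2))
y = np.where(X[:, 0] - 0.5 * X[:, 1] > 0, 1.0, -1.0)

def loss_grad_w(w, X, y):
    margins = y * (X @ w)
    coef = -y / (1.0 + np.exp(margins))
    return X.T @ coef / len(y)

def fgsm(w, X, y, eps=0.1):
    # Per-example input gradient of the logistic loss, then one signed step.
    coef = (-y / (1.0 + np.exp(y * (X @ w))))[:, None]
    return X + eps * np.sign(coef * w[None, :])

w = np.zeros(2)
for _ in range(500):
    X_adv = fgsm(w, X, y)                  # craft adversarial examples on the fly
    w -= 0.5 * loss_grad_w(w, X_adv, y)    # train on the perturbed batch
print("clean accuracy:", np.mean(np.sign(X @ w) == y))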

research-article
Also for k-means: more data does not imply better performance
Abstract

Arguably, a desirable feature of a learner is that its performance gets better with an increasing amount of training data, at least in expectation. This issue has received renewed attention in recent years and some curious and surprising findings ...
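A small experiment in the spirit of the abstract: run Lloyd's k-means on growing subsamples of the same mixture and track the objective on a fixed held-out set, to see whether more training data reliably helps. The data and settings are synthetic and purely illustrative.

# Lloyd's k-means evaluated as a function of training-set size.
import numpy as np

rng = np.random.default_rng(8)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
data = np.concatenate([c + rng.normal(size=(2000, 2)) for c in centers])
rng.shuffle(data)
eval_set = data[:1500]

def kmeans(X, k=3, iters=50):
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return C

def cost(C, X):  # mean squared distance to the nearest centroid
    return np.min(((X[:, None] - C[None]) ** 2).sum(-1), axis=1).mean()

for n in (30, 100, 300, 1000, 4000):
    C = kmeans(data[1500:1500 + n])
    print(f"n={n:5d}  evaluation cost={cost(C, eval_set):.3f}")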
