- Research article, October 2024
Spiking neural networks in the Alexiewicz topology: A new perspective on analysis and error bounds
Abstract: In order to ease the analysis of error propagation in neuromorphic computing and to gain a better understanding of spiking neural networks (SNNs), we address the problem of mathematical analysis of SNNs as endomorphisms that map spike trains to ...
Highlights:
- Understanding leaky-integrate-and-fire (LIF) as a signal-to-spike-train quantization operator.
- A formula for the quantization error based on the Alexiewicz norm.
- Determination of a quasi-isometric relationship between incoming and ...
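The first highlight treats the LIF neuron as an operator that quantizes an input signal into a spike train. A minimal sketch of that view is given below; the parameter names (`threshold`, `leak`) and the reset-by-subtraction rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lif_quantize(signal, threshold=1.0, leak=0.9):
    """Hypothetical sketch: a leaky-integrate-and-fire neuron viewed as a
    signal-to-spike-train quantizer. Parameter names are illustrative."""
    v = 0.0
    spikes = np.zeros_like(signal, dtype=float)
    for i, x in enumerate(signal):
        v = leak * v + x           # leaky integration of the input
        if v >= threshold:         # fire when the membrane crosses threshold
            spikes[i] = 1.0
            v -= threshold         # reset by subtraction (carries the residue)
        elif v <= -threshold:      # symmetric negative threshold
            spikes[i] = -1.0
            v += threshold
    return spikes
```

Because the residual potential is carried over after each spike, the spike train tracks the accumulated input, which is what makes a quantization-error analysis (as in the paper, via the Alexiewicz norm) meaningful.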
- Research article, February 2024
Rethinking data augmentation for adversarial robustness
- Hamid Eghbal-zadeh,
- Werner Zellinger,
- Maura Pintor,
- Kathrin Grosse,
- Khaled Koutini,
- Bernhard A. Moser,
- Battista Biggio,
- Gerhard Widmer
Information Sciences: an International Journal (ISCI), Volume 654, Issue C. https://doi.org/10.1016/j.ins.2023.119838
Abstract: Recent work has proposed novel data augmentation methods to improve the adversarial robustness of deep neural networks. In this paper, we re-evaluate such methods through the lens of different metrics that characterize the augmented manifold, ...
Highlights:
- Augmentation methods for adversarial robustness are often not tested in isolation.
- They are often tested at only a single value of the augmentation probability.
- They improve robustness only when combined with classical augmentations.
- ...
- Survey, July 2023
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
- Antonio Emanuele Cinà,
- Kathrin Grosse,
- Ambra Demontis,
- Sebastiano Vascon,
- Werner Zellinger,
- Bernhard A. Moser,
- Alina Oprea,
- Battista Biggio,
- Marcello Pelillo,
- Fabio Roli
ACM Computing Surveys (CSUR), Volume 55, Issue 13s, Article No. 294, Pages 1–39. https://doi.org/10.1145/3585385
Abstract: The success of machine learning is fueled by the increasing availability of computing power and large training datasets. The training data is used to learn new models or update existing ones, assuming that it is sufficiently representative of the data ...
- Research article, May 2019
Robust unsupervised domain adaptation for neural networks via moment alignment
- Werner Zellinger,
- Bernhard A. Moser,
- Thomas Grubinger,
- Edwin Lughofer,
- Thomas Natschläger,
- Susanne Saminger-Platz
Information Sciences: an International Journal (ISCI), Volume 483, Issue C, Pages 174–191. https://doi.org/10.1016/j.ins.2019.01.025
Abstract: A novel approach for unsupervised domain adaptation for neural networks is proposed. It relies on metric-based regularization of the learning process. The metric-based regularization aims at domain-invariant latent feature ...
Highlights:
- A novel metric-based regularization for domain-invariant training of neural networks.
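The metric-based regularizer described above penalizes the distance between latent feature distributions of the source and target domains. The sketch below illustrates one simple moment-matching distance of this kind; the function name `moment_distance` and the choice of matching the first `k` central moments per feature are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def moment_distance(x, y, k=3):
    """Hypothetical sketch of a moment-alignment regularizer: sums the
    distances between the mean and the first k central moments of two
    feature samples (rows = samples, columns = features)."""
    mx, my = x.mean(axis=0), y.mean(axis=0)
    d = np.linalg.norm(mx - my)            # match the means (first moment)
    cx, cy = x - mx, y - my                # center each sample
    for j in range(2, k + 1):              # match higher central moments
        d += np.linalg.norm((cx ** j).mean(axis=0) - (cy ** j).mean(axis=0))
    return d
```

Added to the task loss as a penalty on the hidden activations of source and target batches, such a term pushes the network toward domain-invariant latent features, which is the stated goal of the regularization.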