Welcome to the GitHub repository for the paper "Leveraging Mixture of Experts for Improved Speech Deepfake Detection."
This repository contains the code and resources for a Mixture of Experts (MoE) architecture for speech deepfake detection. The proposed approach leverages the MoE framework to generalize better to unseen datasets and to adapt to the challenges posed by evolving deepfake generation techniques.
- Mixture of Experts Architecture: A modular approach to handle input variability and improve generalization.
- Gating Mechanism: A lightweight mechanism that dynamically selects experts to optimize detection performance (illustrated in the sketch after this list).
- Scalable Updates: The modular structure allows easy adaptation to new data and evolving deepfake detection methods.
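Since the code has not been released yet, the snippet below is only a minimal, hypothetical sketch of what a gating-based MoE detector of this kind might look like in PyTorch. All names, dimensions, and architectural choices (MLP experts over an utterance-level embedding, a single-layer soft gate) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEDeepfakeDetector(nn.Module):
    """Illustrative sketch: a gating network weights the scores of
    several expert classifiers (not the paper's implementation)."""

    def __init__(self, feat_dim=128, num_experts=4, hidden_dim=64):
        super().__init__()
        # Each expert is a small MLP scoring an utterance-level embedding.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(feat_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),
            )
            for _ in range(num_experts)
        ])
        # Lightweight gate: one linear layer producing a softmax
        # distribution over the experts.
        self.gate = nn.Linear(feat_dim, num_experts)

    def forward(self, x):
        # x: (batch, feat_dim) utterance-level embedding (assumed input)
        gate_weights = F.softmax(self.gate(x), dim=-1)                    # (batch, num_experts)
        expert_scores = torch.cat([e(x) for e in self.experts], dim=-1)  # (batch, num_experts)
        # Convex combination of expert scores -> single deepfake logit.
        return (gate_weights * expert_scores).sum(dim=-1)

# Usage: score a batch of 8 random embeddings
model = MoEDeepfakeDetector()
scores = model(torch.randn(8, 128))  # (8,) logits; apply sigmoid for probabilities
```

A soft gate like this one weighs every expert on every input; a top-k gate would instead activate only a subset of experts, trading some modeling flexibility for lower compute. Which variant the paper uses is best checked against the publication itself.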
This repository is currently under construction. The code for the Mixture of Experts model and its evaluation will be released soon. Stay tuned for updates!
If you use this code in your research, please cite the following paper:
Negroni, V., Salvi, D., Mezza, A. I., Bestagini, P., & Tubaro, S. (2024). Leveraging Mixture of Experts for Improved Speech Deepfake Detection. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
```bibtex
@inproceedings{Negroni2024,
  title={Leveraging Mixture of Experts for Improved Speech Deepfake Detection},
  author={Negroni, Viola and Salvi, Davide and Mezza, Alessandro Ilic and Bestagini, Paolo and Tubaro, Stefano},
  booktitle={IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2024}
}
```
For any inquiries or collaboration requests, please contact:
- Viola Negroni: viola.negroni@polimi.it
- Davide Salvi: davide.salvi@polimi.it
- Alessandro Ilic Mezza: alessandroilic.mezza@polimi.it
- Paolo Bestagini: paolo.bestagini@polimi.it
- Stefano Tubaro: stefano.tubaro@polimi.it