Voice Deepfakes and Adversarial Attacks Detection

Online
November 13, 2025, 13:00-16:00 CET
The proliferation of voice deepfakes and adversarial attacks presents an increasing challenge to the security and reliability of voice-based authentication and communication systems. While deepfake technology enables the generation of highly realistic synthetic speech, adversarial attacks exploit vulnerabilities in machine learning models to manipulate or evade detection systems. Both threats pose significant risks to biometric security and the overall trustworthiness of AI-driven applications.
Co-organized by the European Association for Biometrics (EAB) and EURECOM, this workshop will feature expert presentations exploring recent advances in detecting and mitigating both voice deepfakes and adversarial attacks. Discussions will focus on deep learning-based countermeasures, adversarial robustness techniques, generalization to unseen attacks, and the evolving nature of attack strategies.
Article Topics
biometrics | deepfake detection | deepfakes | EAB 2025 | voice biometrics