Modeling Fluctuations of Voiced Excitation for Speech Generation Based on Recursive Volterra Systems

Author(s):  
Karl Schnell ◽  
Arild Lacroix
IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Johanes Effendi ◽  
Sakriani Sakti ◽  
Satoshi Nakamura
2021 ◽  
pp. 127360
Author(s):  
Tassos Bountis ◽  
Zhanat Zhunussova ◽  
Karlygash Dosmagulova ◽  
George Kanellopoulos
2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Clara Borrelli ◽  
Paolo Bestagini ◽  
Fabio Antonacci ◽  
Augusto Sarti ◽  
Stefano Tubaro

Abstract: Several methods for synthetic audio speech generation have been developed in the literature over the years. With the great technological advances brought by deep learning, many novel synthetic speech techniques achieving remarkably realistic results have recently been proposed. As these methods generate convincing fake human voices, they can be used maliciously to harm today's society (e.g., people impersonation, fake news spreading, opinion formation). For this reason, the ability to detect whether a speech recording is synthetic or pristine is becoming an urgent necessity. In this work, we develop a synthetic speech detector. It takes an audio recording as input, extracts a series of hand-crafted features motivated by the speech-processing literature, and classifies them in either a closed-set or an open-set scenario. The proposed detector is validated on a publicly available dataset consisting of 17 synthetic speech generation algorithms, ranging from old-fashioned vocoders to modern deep learning solutions. Results show that the proposed method outperforms recently proposed detectors in the forensics literature.
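The pipeline the abstract describes (hand-crafted features from an audio recording, then closed-set or open-set classification) can be sketched as follows. This is a minimal illustrative assumption, not the paper's actual method: the features here (zero-crossing rate and short-term energy) and the nearest-centroid classifier with a distance threshold for the open-set case are stand-ins chosen for simplicity.

```python
# Hypothetical sketch of a hand-crafted-feature speech classifier.
# Feature choices and the open-set threshold rule are illustrative
# assumptions, not the detector proposed in the paper.
import math

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose sign changes."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def short_term_energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(x * x for x in frame) / len(frame)

def extract_features(frame):
    """Stack the hand-crafted features into one vector."""
    return [zero_crossing_rate(frame), short_term_energy(frame)]

def classify(features, centroids, open_set_threshold=None):
    """Nearest-centroid classification.

    Closed-set mode (threshold is None): always return the closest class.
    Open-set mode: return "unknown" when even the closest class centroid
    is farther away than the threshold.
    """
    best_label, best_dist = None, float("inf")
    for label, centroid in centroids.items():
        d = math.dist(features, centroid)
        if d < best_dist:
            best_label, best_dist = label, d
    if open_set_threshold is not None and best_dist > open_set_threshold:
        return "unknown"
    return best_label

# Toy usage: a slow sinusoid (low ZCR) vs. a sign-alternating signal (high ZCR).
tone = [math.sin(2 * math.pi * 3 * n / 200) for n in range(200)]
buzz = [(-1) ** n * 0.5 for n in range(200)]
centroids = {
    "pristine": extract_features(tone),
    "synthetic": extract_features(buzz),
}

print(classify(extract_features(tone), centroids))                 # closed-set
print(classify([10.0, 10.0], centroids, open_set_threshold=1.0))   # open-set
```

In the closed-set call the tone is assigned to its nearest class; in the open-set call a feature vector far from every centroid is rejected as "unknown", which is the essential difference between the two evaluation settings mentioned in the abstract.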


2009 ◽  
Vol 16 (3) ◽  
pp. 339-354 ◽  
Author(s):  
YIANNIS T. CHRISTODOULIDES ◽  
PANTELIS A. DAMIANOU
