Speaking Faces for Face-Voice Speaker Identity Verification

Girija Chetty, Michael Wagner

Research output: Conference contribution in a conference proceeding (peer-reviewed)

Abstract

In this paper, we describe an approach to synthesizing an animated speaking face and its application to modelling impostor/replay-attack scenarios for face-voice based speaker verification systems. The speaking face reported here learns the spatio-temporal relationship between speech acoustics and MPEG-4 compliant facial animation points. The influence of articulatory, perceptual, and prosodic acoustic features, along with auditory context, on prediction accuracy was examined. The results indicate that audiovisual identity verification systems are vulnerable to impostor/replay attacks using synthetic faces. The level of vulnerability depends on several factors, such as the type of audiovisual features, the fusion techniques used for the audio and video features, and their relative robustness. The success of the synthetic impostor also depends on the type of coarticulation models and acoustic features used for the audiovisual mapping in speaking face synthesis.
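
The audio-to-visual mapping sketched in the abstract can be illustrated with a minimal example. The snippet below is not the authors' implementation: the feature dimensions, the context width, the FAP count, and the choice of ridge regression are all illustrative assumptions, and the training data is synthetic. It only demonstrates the general idea of predicting MPEG-4 facial animation parameters (FAPs) from acoustic feature frames augmented with auditory context.

```python
# Minimal sketch (assumptions throughout): learn a linear map from
# context-windowed acoustic feature frames to MPEG-4 facial animation
# parameters (FAPs). Real training pairs would come from an audiovisual
# corpus; dimensions and the ridge regularizer are placeholders.
import numpy as np

rng = np.random.default_rng(0)

N_FRAMES = 2000     # number of training frames (assumed)
N_ACOUSTIC = 13     # acoustic features per frame, e.g. MFCC-like (assumed)
CONTEXT = 5         # frames of auditory context on each side (assumed)
N_FAPS = 18         # number of predicted animation parameters (assumed)

# Stand-in data in place of a real audiovisual corpus.
acoustic = rng.standard_normal((N_FRAMES, N_ACOUSTIC))
faps = rng.standard_normal((N_FRAMES, N_FAPS))

def stack_context(feats, context):
    """Concatenate each frame with its +/- `context` neighbouring frames."""
    padded = np.pad(feats, ((context, context), (0, 0)), mode="edge")
    windows = [padded[i : i + len(feats)] for i in range(2 * context + 1)]
    return np.hstack(windows)

X = stack_context(acoustic, CONTEXT)      # (N_FRAMES, (2*CONTEXT+1)*N_ACOUSTIC)
X = np.hstack([X, np.ones((len(X), 1))])  # bias column

# Ridge regression in closed form: W = (X^T X + lam I)^-1 X^T Y
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ faps)

predicted_faps = X @ W                    # FAP trajectory driving the animation
rmse = np.sqrt(np.mean((predicted_faps - faps) ** 2))
print(f"training RMSE: {rmse:.3f}")
```

The context stacking step is what gives the regressor access to auditory context; widening or narrowing that window is one way to probe its influence on prediction accuracy, in the spirit of the experiments the abstract describes.
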
Original language: English
Title of host publication: Proceedings of the 9th International Conference on Spoken Language Processing (Interspeech 2006 - ICSLP)
Editors: Carnegie Mellon
Place of publication: Germany
Publisher: International Speech Communication Association
Pages: 513-516
Number of pages: 4
ISBN (Print): 9781604234497
Publication status: Published - 2006
Event: 9th International Conference on Spoken Language Processing - Pittsburgh, United States
Duration: 17 Sept 2006 - 21 Sept 2006
