Face-voice authentication based on 3D face models

Girija Chetty, Michael Wagner

Research output: Conference contribution (chapter in book/proceedings), peer-reviewed

4 Citations (Scopus)


In this paper we propose the fusion of shape and texture information from 3D face models of persons with acoustic features extracted from spoken utterances, to improve performance against impostor and replay attacks. Experiments conducted on two multimodal speaking-face corpora, VidTIMIT and AVOZES, achieved equal error rates (EERs) of less than 2% for impostor attacks and less than 1% for type-1 replay attacks using multimodal fusion of acoustic, shape and texture features. For type-2 replay attacks, a more difficult type of spoofing attack, an EER of less than 7% was achieved.
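The abstract reports results in terms of EER, the operating point where the false-accept and false-reject rates coincide, for a fused acoustic/shape/texture feature set. The exact fusion scheme and classifier are not detailed in this record; the sketch below shows one common interpretation of feature-level fusion (plain concatenation) and a straightforward EER computation from genuine and impostor match scores. The function names and score distributions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_features(acoustic, shape, texture):
    """Feature-level fusion by concatenation (an illustrative choice;
    the paper's actual fusion scheme is not given in this record)."""
    return np.concatenate([acoustic, shape, texture])

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep thresholds and return the point where the false-accept
    rate (FAR) and false-reject rate (FRR) are closest to equal."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_diff, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor_scores >= t)  # impostors wrongly accepted
        frr = np.mean(genuine_scores < t)    # genuine users wrongly rejected
        if abs(far - frr) < best_diff:
            best_diff, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```

With well-separated synthetic score distributions, `equal_error_rate` returns a value near zero, mirroring how better-separated fused features drive the reported EERs down.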

Original language: English
Title of host publication: Computer Vision - ACCV 2006 - 7th Asian Conference on Computer Vision, Proceedings
Number of pages: 10
ISBN (Electronic): 9783540324331
ISBN (Print): 9783540312192
Publication status: Published - 15 Jun 2006
Event: 7th Asian Conference on Computer Vision, ACCV 2006 - Hyderabad, India
Duration: 13 Jan 2006 – 16 Jan 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3851 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 7th Asian Conference on Computer Vision, ACCV 2006


