Audio-Visual Mutual Dependency Models for Biometric Liveness Checks

Girija Chetty, Roland Goecke, Michael Wagner

Research output: Conference contribution (chapter in book/conference proceedings), peer-reviewed

1 Citation (Scopus)

Abstract

In this paper, we propose a liveness checking technique for
multimodal biometric authentication systems based on
audio-visual mutual dependency models. Liveness checking
ensures that the biometric cues are acquired from a live person
who is actually present at the time of capture for
authenticating the identity. The liveness check based on mutual
dependency models is performed by fusing acoustic and visual
speech features that measure the degree of synchrony between
the lips and the voice extracted from speaking-face video
sequences. Performance evaluation in terms of DET (Detection
Error Tradeoff) curves and EERs (Equal Error Rates) on publicly
available audio-visual speech databases shows a significant
improvement in performance for the proposed fusion of
face-voice features based on mutual dependency models.
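The abstract describes scoring the degree of synchrony between lip and voice features and reporting Equal Error Rates. As an illustrative sketch only (the paper's exact mutual dependency model is not specified in this record), one common way to score audio-visual dependency is the mean canonical correlation between paired acoustic and visual feature sequences, with the EER then computed from genuine and impostor score sets. All function names and the CCA-based scoring choice below are assumptions for illustration.

```python
import numpy as np

def synchrony_score(audio_feats: np.ndarray, visual_feats: np.ndarray) -> float:
    """Mutual-dependency score between time-aligned audio and visual
    feature streams (frames x dims): the mean canonical correlation.
    Illustrative stand-in, not the paper's exact model."""
    A = audio_feats - audio_feats.mean(axis=0)
    V = visual_feats - visual_feats.mean(axis=0)
    # Whiten each stream via SVD; the singular values of the product of
    # the whitened bases are the canonical correlations.
    Ua, _, _ = np.linalg.svd(A, full_matrices=False)
    Uv, _, _ = np.linalg.svd(V, full_matrices=False)
    corrs = np.linalg.svd(Ua.T @ Uv, compute_uv=False)
    return float(np.clip(corrs, 0.0, 1.0).mean())

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER: operating point where the false-accept rate (impostors
    accepted) equals the false-reject rate (genuines rejected)."""
    scores = np.concatenate([genuine, impostor])
    labels = np.concatenate([np.ones(len(genuine)), np.zeros(len(impostor))])
    order = np.argsort(-scores)          # sweep threshold from high to low
    labels = labels[order]
    far = np.cumsum(1 - labels) / max((labels == 0).sum(), 1)
    frr = 1.0 - np.cumsum(labels) / max((labels == 1).sum(), 1)
    idx = int(np.argmin(np.abs(far - frr)))
    return float((far[idx] + frr[idx]) / 2.0)
```

A live speaking face should yield a high synchrony score for its own audio track, while a replayed or substituted track should score low; genuine and impostor synchrony scores then feed `equal_error_rate` to produce the EER figures reported on the DET curve.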
Original language: English
Title of host publication: Proceedings of the 2009 Conference on Audio Visual Speech Processing
Editors: Barry-John Theobald, Richard Harvey
Place of publication: Norwich, UK
Publisher: University of East Anglia
Pages: 32-37
Number of pages: 6
Volume: 1
ISBN (Print): 9780956345202
Publication status: Published - 2009
Event: 2009 Conference on Audio Visual Speech Processing, AVSP 2009 - Norwich, United Kingdom
Duration: 10 Sept 2009 - 13 Sept 2009

Conference

Conference: 2009 Conference on Audio Visual Speech Processing, AVSP 2009
Country/Territory: United Kingdom
City: Norwich
Period: 10/09/09 - 13/09/09
