Investigating Feature Level Fusion for Checking Liveness in Face-Voice Authentication

Girija Chetty, Michael Wagner

Research output: Conference proceeding or chapter in book › Conference contribution › peer-review


Abstract

In this paper we propose a feature-level fusion approach for checking liveness in face-voice person authentication. Liveness verification experiments conducted on two audiovisual databases, VidTIMIT and UCBN, show that feature-level fusion is a powerful technique for checking liveness in systems that are vulnerable to replay attacks, as it preserves synchronisation between closely coupled modalities, such as voice and face, through the various stages of authentication. In the replay-attack experiments, fusing acoustic feature vectors with visual feature vectors from the lip region at the feature level reduces the error rate by 25-40% compared with the classical late-fusion approach.
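The core idea of feature-level fusion, as the abstract describes it, is to combine acoustic and lip-region visual features into a single joint vector per frame before classification, so that audio-visual synchrony survives the fusion step. The sketch below illustrates this with frame-synchronous concatenation; the function name, the linear interpolation used to align the two frame rates, and the feature dimensions are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def feature_level_fusion(audio_feats, visual_feats):
    """Concatenate frame-synchronised audio and visual feature vectors.

    audio_feats:  (T_a, D_a) array, e.g. MFCC vectors at the audio frame rate
    visual_feats: (T_v, D_v) array, e.g. lip-region features at the video rate

    The visual stream is linearly interpolated onto the audio time axis so
    each fused vector pairs acoustic and visual evidence from the same
    instant, preserving the audio-visual synchrony that late fusion loses.
    """
    T_a, _ = audio_feats.shape
    T_v, D_v = visual_feats.shape
    # Common normalised time axes for the two streams.
    t_audio = np.linspace(0.0, 1.0, T_a)
    t_video = np.linspace(0.0, 1.0, T_v)
    # Upsample each visual dimension to the audio frame rate.
    visual_up = np.stack(
        [np.interp(t_audio, t_video, visual_feats[:, d]) for d in range(D_v)],
        axis=1,
    )
    # Feature-level fusion: one joint audio-visual vector per audio frame.
    return np.concatenate([audio_feats, visual_up], axis=1)
```

A joint classifier trained on these concatenated vectors can then detect replay attacks in which the audio and video streams are not genuinely synchronous, whereas a late-fusion system scoring each modality separately cannot.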
Original language: English
Title of host publication: Proceedings of the Eighth International Symposium on Signal Processing and Applications
Place of publication: Piscataway, New Jersey, USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 66-69
Number of pages: 4
ISBN (Print): 0-7803-9244-2
DOIs
Publication status: Published - 2005
Event: ISSPA-2005 - Sydney, Australia
Duration: 28 Aug 2005 - 31 Aug 2005

Conference

Conference: ISSPA-2005
Country/Territory: Australia
City: Sydney
Period: 28/08/05 - 31/08/05

