Audio-Visual Multimodal Fusion for Biometric Person Authentication and Liveness Verification

Girija Chetty, Michael Wagner

Research output: A Conference proceeding or a Chapter in Book › Conference contribution

Abstract

In this paper we propose a multimodal fusion framework based on novel face-voice fusion techniques for biometric person authentication and liveness verification. Liveness checking guards the system against spoof/replay attacks by ensuring that the biometric data is captured from an authorised live person. The proposed framework, based on bi-modal feature fusion, cross-modal fusion and 3D shape-and-texture fusion techniques, allows a significant improvement in system performance against impostor attacks, type-1 replay attacks (still photo and pre-recorded audio) and challenging type-2 replay attacks (computer-animated video generated from a still photo and pre-recorded audio), as well as robustness to pose and illumination variations.
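The abstract names bi-modal feature fusion as one of the techniques. As a minimal illustrative sketch (not the authors' actual method, whose feature extractors and fusion weights are described in the paper itself), feature-level fusion is commonly realised by normalising and concatenating the audio and visual feature vectors before matching:

```python
import numpy as np

def feature_fusion(audio_feat: np.ndarray, visual_feat: np.ndarray) -> np.ndarray:
    """Generic feature-level fusion: normalise each modality so neither
    dominates, then concatenate into one joint feature vector."""
    a = audio_feat / (np.linalg.norm(audio_feat) + 1e-9)
    v = visual_feat / (np.linalg.norm(visual_feat) + 1e-9)
    return np.concatenate([a, v])

def match_score(probe: np.ndarray, template: np.ndarray) -> float:
    """Cosine similarity between a fused probe and an enrolled template."""
    denom = np.linalg.norm(probe) * np.linalg.norm(template) + 1e-9
    return float(np.dot(probe, template) / denom)

# Illustrative use with random stand-in features (e.g. 13 cepstral
# coefficients and a 32-dimensional face descriptor; sizes are assumptions).
rng = np.random.default_rng(0)
fused_probe = feature_fusion(rng.normal(size=13), rng.normal(size=32))
fused_template = feature_fusion(rng.normal(size=13), rng.normal(size=32))
score = match_score(fused_probe, fused_template)
```

The score would then be compared against a decision threshold to accept or reject the claimed identity; a liveness check such as audio-visual synchrony testing runs alongside this matching step.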

Original language: English
Title of host publication: MMUI '05: Proceedings of the 2005 NICTA
Subtitle of host publication: HCSNet Multimodal User Interaction Workshop - Volume 57
Editors: Fang Chen, Julien Epps
Place of publication: Australia
Publisher: Australian Computer Society
Pages: 17-24
Number of pages: 8
Volume: 57
ISBN (Print): 1920682392
Publication status: Published - 1 Apr 2006
Event: MMUI2005 - Sydney, Australia
Duration: 13 Sep 2005 - 14 Sep 2005

Conference

Conference: MMUI2005
Country: Australia
City: Sydney
Period: 13/09/05 - 14/09/05


  • Cite this

    Chetty, G., & Wagner, M. (2006). Audio-Visual Multimodal Fusion for Biometric Person Authentication and Liveness Verification. In F. Chen, & J. Epps (Eds.), MMUI '05: Proceedings of the 2005 NICTA: HCSNet Multimodal User Interaction Workshop - Volume 57 (Vol. 57, pp. 17-24). Australian Computer Society. https://dl.acm.org/doi/abs/10.5555/1151804.1151808