Audio-Video Person Authentication Based on 3D Facial Feature Warping

Girija Chetty, Michael Wagner

Research output: Conference contribution in conference proceedings (peer-reviewed)

3 Citations (Scopus)
43 Downloads (Pure)

Abstract

In this paper we propose a novel feature warping technique based on thin-plate-spline (TPS) analysis for 3D audio-video person authentication systems. The TPS warp features model information related to non-rigid variations on speaking faces, such as expression lines, gestures, and wrinkles, enhancing the robustness of the system against impostor and spoof attacks. Experiments with multimodal fusion of acoustic and TPS shape features on two speaking-face corpora, VidTIMIT and AVOZES, yielded equal error rates (EERs) of less than 0.5% for impostor attacks, less than 1% for type-1 replay attacks (still photo and pre-recorded audio), and less than 2% for more complex type-2 replay attacks (pre-recorded video or fake CG-animated video).
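A minimal sketch of the 2D thin-plate-spline fit underlying such warp features, written in Python with NumPy. The fit_tps helper, the toy landmark values, and the use of bending energy as the non-rigid deformation feature are illustrative assumptions for exposition, not the paper's exact pipeline.

import numpy as np

def tps_kernel(r2):
    # TPS radial basis U(r) = r^2 * log(r^2), with U(0) = 0 by convention
    out = np.zeros_like(r2)
    mask = r2 > 0
    out[mask] = r2[mask] * np.log(r2[mask])
    return out

def fit_tps(src, dst):
    """Fit a 2D thin-plate spline mapping src landmarks onto dst landmarks.

    src, dst: (n, 2) arrays of corresponding facial landmarks.
    Returns the (n+3, 2) warp coefficients and an approximate bending energy.
    """
    n = src.shape[0]
    # Pairwise squared distances between source landmarks
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(d2)                        # (n, n) radial-basis block
    P = np.hstack([np.ones((n, 1)), src])     # (n, 3) affine block
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    Y = np.zeros((n + 3, 2))
    Y[:n] = dst
    W = np.linalg.solve(L, Y)                 # warp coefficients
    # The non-affine weights capture local (non-rigid) deformation;
    # the quadratic form w^T K w is the TPS bending energy.
    w = W[:n]
    bending_energy = np.trace(w.T @ K @ w)
    return W, bending_energy

# Toy usage: neutral vs. "speaking" landmark sets (hypothetical values)
neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
speaking = neutral + np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.05, -0.03]])
_, energy = fit_tps(neutral, speaking)
print(f"TPS bending energy (non-rigid deformation feature): {energy:.5f}")

In this sketch, rigid or affine motion of the face contributes nothing to the bending energy, so the scalar feature responds only to non-rigid changes such as expressions, which is what makes TPS-style warp features useful against replayed still photos.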

Original language: English
Title of host publication: Proceedings of Digital Image Computing
Subtitle of host publication: Techniques and Applications, DICTA 2005
Editors: Brian Lovell, Anthony Maeder, Terry Caelli, Sebastian Ourselin
Place of publication: Piscataway, New Jersey, USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 399-406
Number of pages: 8
Volume: 2005
ISBN (Print): 0769524672, 9780769524672
DOIs
Publication status: Published - 2005
Event: Digital Image Computing: Techniques and Applications, DICTA 2005 - Cairns, Australia
Duration: 6 Dec 2005 - 8 Dec 2005

Conference

Conference: Digital Image Computing: Techniques and Applications, DICTA 2005
Country/Territory: Australia
City: Cairns
Period: 6/12/05 - 8/12/05
