Multimodal Fusion for Robust Identity Authentication: Role of Liveness Checks

Girija Chetty, Emdad Hossain

Research output: A Conference proceeding or a Chapter in Book › Chapter

Abstract

Most biometric identity authentication systems currently deployed model a person's identity from unimodal information, i.e. face, voice, or fingerprint features alone. Likewise, many interactive civilian remote human-computer interaction applications rely on speech-based voice features, whose performance degrades significantly in operating environments with low signal-to-noise ratios (SNR). Acoustic information alone has long served several automatic speech processing applications, such as automatic speech transcription and speaker authentication, while face identification systems based on visual information alone have proved equally successful. In adverse operating environments, however, the performance of either system can be suboptimal. Using both visual and audio information leads to better robustness, as the two modalities provide complementary secondary cues that aid the analysis of the primary biometric signals (Potamianos et al. (2004)). The joint analysis of acoustic and visual speech can improve the robustness of automatic speech recognition systems (Liu et al. (2002), Gurbuz et al. (2002)).
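
As an illustration of the fusion idea described in the abstract (and not the chapter's own method), the minimal Python sketch below combines the match scores of a hypothetical speaker verifier and face verifier at the score level, down-weighting the audio score as the estimated SNR drops; the function name, score ranges, SNR bounds, and decision threshold are assumptions chosen purely for the example.

    def snr_weighted_fusion(audio_score: float, visual_score: float, snr_db: float,
                            snr_low: float = 0.0, snr_high: float = 30.0) -> float:
        """Fuse unimodal match scores (assumed to lie in [0, 1]) at the score level.

        The audio weight ramps linearly from 0 at snr_low to 1 at snr_high,
        so the visual cue dominates when the acoustic channel is unreliable.
        """
        alpha = max(0.0, min(1.0, (snr_db - snr_low) / (snr_high - snr_low)))
        return alpha * audio_score + (1.0 - alpha) * visual_score

    # Example: noisy audio (5 dB SNR) -> the fused score leans on the face evidence.
    fused = snr_weighted_fusion(audio_score=0.42, visual_score=0.88, snr_db=5.0)
    accept = fused > 0.5  # illustrative acceptance threshold
    print(f"fused score = {fused:.2f}, accept = {accept}")

A weighted-sum rule like this is only one of several possible strategies (feature-level and decision-level fusion are common alternatives); it is shown here only to make the notion of complementary audio and visual cues concrete.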
Original language: English
Title of host publication: Advanced Biometric Technologies
Editors: Girija Chetty, Jucheng Yang
Place of publication: Croatia
Publisher: In-Tech
Pages: 3-20
Number of pages: 18
Edition: 1
ISBN (Print): 9789533074870
Keywords: biometric, vision, imaging
Publication status: Published - 2011

Cite this

Chetty, G., & Hossain, E. (2011). Multimodal Fusion for Robust Identity Authentication: Role of Liveness Checks. In G. Chetty & J. Yang (Eds.), Advanced Biometric Technologies (1st ed., pp. 3-20). Croatia: In-Tech.