Audio-Visual Mutual Dependency Models for Biometric Liveness Checks

Girija Chetty, Roland Goecke, Michael Wagner

    Research output: A Conference proceeding or a Chapter in Book › Conference contribution

    Abstract

    In this paper, we propose a liveness checking technique for multimodal biometric authentication systems based on audio-visual mutual dependency models. Liveness checking ensures that biometric cues are acquired from a live person who is actually present at the time of capture for authenticating the identity. The liveness check based on mutual dependency models is performed by fusing acoustic and visual speech features, which measure the degree of synchrony between the lips and the voice extracted from speaking-face video sequences. Performance evaluation in terms of DET (Detection Error Tradeoff) curves and EERs (Equal Error Rates) on publicly available audio-visual speech databases shows a significant improvement in performance for the proposed fusion of face-voice features based on mutual dependency models.
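
    The abstract summarizes the approach only at a high level and does not spell out the mutual dependency models themselves. Purely as an illustration of the kind of lip/voice synchrony scoring and EER evaluation it describes, the Python sketch below scores synchrony with a canonical correlation between pre-extracted, frame-aligned acoustic and visual feature streams and computes an EER from genuine and attack scores. The function names, the canonical-correlation choice, and the toy data are illustrative assumptions, not the paper's actual method.

    # Illustrative sketch (not the paper's actual mutual dependency model):
    # a canonical-correlation synchrony score between frame-aligned acoustic
    # and visual speech features, plus an EER computed from genuine/attack
    # liveness scores. Requires only NumPy; feature extraction (e.g. MFCCs,
    # lip-region descriptors) is assumed to happen elsewhere.
    import numpy as np

    def synchrony_score(audio_feats, visual_feats, reg=1e-6):
        """Top canonical correlation between two time-aligned feature
        streams (rows = frames). A live talking face should give higher
        lip/voice synchrony than replayed audio over a still or mismatched
        face, so a liveness check can threshold this score."""
        A = audio_feats - audio_feats.mean(axis=0)
        V = visual_feats - visual_feats.mean(axis=0)
        n = A.shape[0]
        Caa = A.T @ A / n + reg * np.eye(A.shape[1])
        Cvv = V.T @ V / n + reg * np.eye(V.shape[1])
        Cav = A.T @ V / n
        # Whiten each stream; the singular values of the whitened
        # cross-covariance are the canonical correlations.
        Wa = np.linalg.inv(np.linalg.cholesky(Caa))
        Wv = np.linalg.inv(np.linalg.cholesky(Cvv))
        corr = np.linalg.svd(Wa @ Cav @ Wv.T, compute_uv=False)
        return float(corr[0])

    def equal_error_rate(genuine_scores, attack_scores):
        """EER: the operating point where the false-accept rate on attack
        trials equals the false-reject rate on genuine trials, i.e. where
        the DET curve crosses the diagonal."""
        thresholds = np.sort(np.concatenate([genuine_scores, attack_scores]))
        eer, best_gap = 0.5, np.inf
        for t in thresholds:
            frr = np.mean(genuine_scores < t)    # live trials rejected
            far = np.mean(attack_scores >= t)    # attack trials accepted
            gap = abs(far - frr)
            if gap < best_gap:
                best_gap, eer = gap, (far + frr) / 2.0
        return float(eer)

    if __name__ == "__main__":
        # Toy, randomly generated features: the "live" pair shares a common
        # component, the "attack" pair does not (hypothetical data only).
        rng = np.random.default_rng(0)
        shared = rng.standard_normal((300, 3))
        audio = np.hstack([shared, rng.standard_normal((300, 10))])
        live_visual = np.hstack([shared + 0.3 * rng.standard_normal((300, 3)),
                                 rng.standard_normal((300, 5))])
        attack_visual = rng.standard_normal((300, 8))
        print("live synchrony:  ", synchrony_score(audio, live_visual))
        print("attack synchrony:", synchrony_score(audio, attack_visual))
        genuine = rng.normal(0.8, 0.1, 200)
        attacks = rng.normal(0.3, 0.1, 200)
        print("toy EER:", equal_error_rate(genuine, attacks))

    Canonical correlation is only one plausible stand-in for a mutual dependency measure; a mutual-information estimate or a cross-modal model likelihood would slot into the same thresholding and DET/EER evaluation.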
    Original language: English
    Title of host publication: Proceedings of the 2009 Conference on Audio Visual Speech Processing
    Editors: Barry-John Theobald, Richard Harvey
    Place of Publication: Norwich, UK
    Publisher: University of East Anglia
    Pages: 32-37
    Number of pages: 6
    Volume: 1
    ISBN (Print): 9780956345202
    Publication status: Published - 2009
    Event: 2009 Conference on Audio Visual Speech Processing, AVSP 2009 - Norwich, United Kingdom
    Duration: 10 Sep 2009 - 13 Sep 2009

    Conference

    Conference: 2009 Conference on Audio Visual Speech Processing, AVSP 2009
    Country: United Kingdom
    City: Norwich
    Period: 10/09/09 - 13/09/09


    Cite this

    Chetty, G., Goecke, R., & Wagner, M. (2009). Audio-Visual Mutual Dependency Models for Biometric Liveness Checks. In B-J. Theobald, & R. Harvey (Eds.), Proceedings of the 2009 Conference on Audio Visual Speech Processing (Vol. 1, pp. 32-37). Norwich, UK: University of East Anglia.
    Chetty, Girija ; Goecke, Roland ; Wagner, Michael. / Audio-Visual Mutual Dependency Models for Biometric Liveness Checks. Proceedings of the 2009 Conference on Audio Visual Speech Processing. editor / Barry-John Theobald ; Richard Harvey. Vol. 1 Norwich, UK : University of East Anglia, 2009. pp. 32-37
    @inproceedings{71110003808e420a87b94325081b9ad0,
    title = "Audio-Visual Mutual Dependency Models for Biometric Liveness Checks",
    abstract = "In this paper we propose liveness checking technique formultimodal biometric authentication systems based on audiovisual mutual dependency models. Liveness checking ensuresthat biometric cues are acquired from a live person who isactually present at the time of capture for authenticating theidentity. The liveness check based on mutual dependencymodels is performed by fusion of acoustic and visual speechfeatures which measure the degree of synchrony between thelips and the voice extracted from speaking face videosequences. Performance evaluation in terms of DET (DetectorError Tradeoff) curves and EERs(Equal Error Rates) onpublicly available audiovisual speech databases show asignificant improvement in performance of proposed fusion offace-voice features based on mutual dependency models.",
    keywords = "multimodal, face-voice, Speaker verification, ancillary speaker characteristics",
    author = "Girija Chetty and Roland Goecke and Michael Wagner",
    year = "2009",
    language = "English",
    isbn = "9780956345202",
    volume = "1",
    pages = "32--37",
    editor = "Barry-John Theobald and Richard Harvey",
    booktitle = "Proceedings of the 2009 Conference on Audio Visual Speech Processing",
    publisher = "University of East Anglia",

    }

    Chetty, G, Goecke, R & Wagner, M 2009, Audio-Visual Mutual Dependency Models for Biometric Liveness Checks. in B-J Theobald & R Harvey (eds), Proceedings of the 2009 Conference on Audio Visual Speech Processing. vol. 1, University of East Anglia, Norwich, UK, pp. 32-37, 2009 Conference on Audio Visual Speech Processing, AVSP 2009, Norwich, United States, 10/09/09.

    Audio-Visual Mutual Dependency Models for Biometric Liveness Checks. / Chetty, Girija; Goecke, Roland; Wagner, Michael.

    Proceedings of the 2009 Conference on Audio Visual Speech Processing. ed. / Barry-John Theobald; Richard Harvey. Vol. 1 Norwich, UK : University of East Anglia, 2009. p. 32-37.

    Research output: A Conference proceeding or a Chapter in Book › Conference contribution

    TY - GEN

    T1 - Audio-Visual Mutual Dependency Models for Biometric Liveness Checks

    AU - Chetty, Girija

    AU - Goecke, Roland

    AU - Wagner, Michael

    PY - 2009

    Y1 - 2009

    N2 - In this paper, we propose a liveness checking technique for multimodal biometric authentication systems based on audio-visual mutual dependency models. Liveness checking ensures that biometric cues are acquired from a live person who is actually present at the time of capture for authenticating the identity. The liveness check based on mutual dependency models is performed by fusing acoustic and visual speech features, which measure the degree of synchrony between the lips and the voice extracted from speaking-face video sequences. Performance evaluation in terms of DET (Detection Error Tradeoff) curves and EERs (Equal Error Rates) on publicly available audio-visual speech databases shows a significant improvement in performance for the proposed fusion of face-voice features based on mutual dependency models.

    AB - In this paper, we propose a liveness checking technique for multimodal biometric authentication systems based on audio-visual mutual dependency models. Liveness checking ensures that biometric cues are acquired from a live person who is actually present at the time of capture for authenticating the identity. The liveness check based on mutual dependency models is performed by fusing acoustic and visual speech features, which measure the degree of synchrony between the lips and the voice extracted from speaking-face video sequences. Performance evaluation in terms of DET (Detection Error Tradeoff) curves and EERs (Equal Error Rates) on publicly available audio-visual speech databases shows a significant improvement in performance for the proposed fusion of face-voice features based on mutual dependency models.

    KW - multimodal

    KW - face-voice

    KW - Speaker verification

    KW - ancillary speaker characteristics

    M3 - Conference contribution

    SN - 9780956345202

    VL - 1

    SP - 32

    EP - 37

    BT - Proceedings of the 2009 Conference on Audio Visual Speech Processing

    A2 - Theobald, Barry-John

    A2 - Harvey, Richard

    PB - University of East Anglia

    CY - Norwich, UK

    ER -

    Chetty G, Goecke R, Wagner M. Audio-Visual Mutual Dependency Models for Biometric Liveness Checks. In Theobald B-J, Harvey R, editors, Proceedings of the 2009 Conference on Audio Visual Speech Processing. Vol. 1. Norwich, UK: University of East Anglia. 2009. p. 32-37