Linear facial expression transfer with active appearance models

Miles De La Hunty, Akshay Asthana, Roland Goecke

    Research output: Conference contribution in book/proceedings (peer-reviewed)


    Abstract

    The issue of transferring facial expressions from one person’s face to another’s has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, with the proliferation of online image and video collections and web applications, such as Google Street View, the question of preserving privacy through face de-identification has gained interest in the computer vision community. In this paper, we focus on the problem of real-time dynamic facial expression transfer using an Active Appearance Model framework. We provide a theoretical foundation for a generalisation of two well-known expression transfer methods and demonstrate the improved visual quality of the proposed linear extrapolation transfer method on examples of face swapping and expression transfer using the AVOZES data corpus. Realistic talking faces can be generated in real-time at low computational cost.
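The linear extrapolation transfer described in the abstract can be illustrated with a short sketch: an expression is measured as a displacement of the source face's AAM parameters from its neutral state, and that displacement is added to the target face's neutral parameters. This is only a minimal illustration of the general idea, not the paper's exact formulation; the function name, the toy parameter vectors, and the optional `gain` scaling are assumptions introduced here.

```python
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_neutral, gain=1.0):
    """Illustrative linear extrapolation transfer in AAM parameter space.

    All arguments are AAM shape/appearance parameter vectors. The
    expression offset measured on the source face is added (optionally
    scaled by `gain`) to the target face's neutral parameters.
    """
    offset = src_expr - src_neutral      # expression displacement on the source
    return tgt_neutral + gain * offset   # apply the displacement to the target

# Toy 3-D vectors standing in for real AAM parameter vectors (hypothetical values).
src_neutral = np.array([0.0, 0.0, 0.0])
src_smile   = np.array([0.5, -0.2, 0.1])
tgt_neutral = np.array([0.1, 0.3, -0.1])

tgt_smile = transfer_expression(src_neutral, src_smile, tgt_neutral)
```

Because the operation is a single vector addition per frame, it is cheap enough to run in real time once AAM fitting provides the parameter vectors, which is consistent with the abstract's claim of low computational cost.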
    Original language: English
    Title of host publication: 2010 20th International Conference on Pattern Recognition (ICPR)
    Place of publication: Piscataway, NJ, U.S.A.
    Publisher: IEEE, Institute of Electrical and Electronics Engineers
    Pages: 3789-3792
    Number of pages: 4
    ISBN (Print): 9780769541099
    Publication status: Published - 2010
    Event: ICPR 2010: 20th International Conference on Pattern Recognition, Istanbul, Turkey
    Duration: 23 Aug 2010 - 26 Aug 2010

    Conference

    Conference: ICPR 2010: 20th International Conference on Pattern Recognition
    Country/Territory: Turkey
    City: Istanbul
    Period: 23/08/10 - 26/08/10
