Abstract
This paper presents a new method, text-to-visual synthesis with appearance models (TEVISAM), for generating videorealistic talking heads. In a first step, the system automatically learns a person-specific facial appearance model (PSFAM). The PSFAM allows all facial components (e.g. eyes, mouth) to be modeled independently and is used to animate the face dynamically from the input text. As reported by other researchers, one of the key aspects in visual synthesis is the coarticulation effect. To address this problem, we introduce a new interpolation method in the high-dimensional appearance space that makes it possible to create photorealistic and videorealistic avatars. In this work, preliminary experiments on synthesizing virtual avatars from text are reported. In summary, this paper introduces three novelties: first, we use color PSFAMs to animate virtual avatars; second, we introduce a non-linear high-dimensional interpolation to achieve videorealistic animations; finally, the method makes it possible to generate new expressions by modeling the different facial elements.
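To make the idea of interpolating in an appearance space concrete, the following is a minimal sketch, not the authors' implementation: a PCA-based person-specific appearance model and a smooth transition between two viseme key-frames in its coefficient space. The smoothstep easing stands in for the paper's unspecified non-linear interpolation, and all function names, array shapes, and parameters are illustrative assumptions.

```python
# Illustrative sketch only (assumed names/shapes), not the TEVISAM system:
# learn a linear appearance basis from aligned face crops, then interpolate
# between two viseme key-frames in the low-dimensional appearance space.
import numpy as np


def learn_appearance_model(images, n_components=20):
    """Learn a linear appearance basis from aligned, vectorized face images.

    images: (n_samples, n_pixels) array of flattened, aligned face crops.
    Returns the mean image and the top principal appearance modes.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data; rows of vt are the principal appearance modes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]                 # basis: (k, n_pixels)


def project(image, mean, basis):
    """Map an image into the appearance (coefficient) space."""
    return basis @ (image - mean)                  # (k,)


def reconstruct(coeffs, mean, basis):
    """Map appearance coefficients back to an image."""
    return mean + basis.T @ coeffs                 # (n_pixels,)


def interpolate_visemes(c_start, c_end, n_frames):
    """Ease between two viseme targets in appearance space.

    Smoothstep is a placeholder for the paper's non-linear
    high-dimensional interpolation, which the abstract does not specify.
    """
    t = np.linspace(0.0, 1.0, n_frames)
    s = 3 * t**2 - 2 * t**3                        # smoothstep easing
    return (1 - s)[:, None] * c_start + s[:, None] * c_end


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    faces = rng.random((50, 64 * 64))              # stand-in for aligned mouth crops
    mean, basis = learn_appearance_model(faces)
    c_a = project(faces[0], mean, basis)           # viseme A key-frame
    c_b = project(faces[1], mean, basis)           # viseme B key-frame
    frames = [reconstruct(c, mean, basis)
              for c in interpolate_visemes(c_a, c_b, 10)]
    print(len(frames), frames[0].shape)
```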
Original language | English |
---|---|
Title of host publication | IEEE International Conference on Image Processing |
Publisher | IEEE |
Pages | 237-240 |
Number of pages | 4 |
Volume | 1 |
ISBN (Print) | 9780780377508 |
DOIs | 10.1109/ICIP.2003.1246942 |
Publication status | Published - 14 Sep 2003 |
Externally published | Yes |
Event | Proceedings: 2003 International Conference on Image Processing, ICIP-2003 - Barcelona, Spain. Duration: 14 Sep 2003 → 17 Sep 2003 |
Conference
Conference | Proceedings: 2003 International Conference on Image Processing, ICIP-2003 |
---|---|
Country | Spain |
City | Barcelona |
Period | 14/09/03 → 17/09/03 |
Cite this
Text to visual synthesis with appearance models. / Melenchón, Javier; La Torre, Fernando De; Iriondo, Ignasi; Alías, Francesc; Martinez, Elisa; Vicent, Luis.
IEEE International Conference on Image Processing. Vol. 1. IEEE, 2003. p. 237-240. Research output: A Conference proceeding or a Chapter in Book › Conference contribution
TY - GEN
T1 - Text to visual synthesis with appearance models
AU - Melenchón, Javier
AU - La Torre, Fernando De
AU - Iriondo, Ignasi
AU - Alías, Francesc
AU - Martinez, Elisa
AU - Vicent, Luis
PY - 2003/9/14
Y1 - 2003/9/14
N2 - This paper presents a new method, text-to-visual synthesis with appearance models (TEVISAM), for generating videorealistic talking heads. In a first step, the system automatically learns a person-specific facial appearance model (PSFAM). The PSFAM allows all facial components (e.g. eyes, mouth) to be modeled independently and is used to animate the face dynamically from the input text. As reported by other researchers, one of the key aspects in visual synthesis is the coarticulation effect. To address this problem, we introduce a new interpolation method in the high-dimensional appearance space that makes it possible to create photorealistic and videorealistic avatars. In this work, preliminary experiments on synthesizing virtual avatars from text are reported. In summary, this paper introduces three novelties: first, we use color PSFAMs to animate virtual avatars; second, we introduce a non-linear high-dimensional interpolation to achieve videorealistic animations; finally, the method makes it possible to generate new expressions by modeling the different facial elements.
AB - This paper presents a new method, text-to-visual synthesis with appearance models (TEVISAM), for generating videorealistic talking heads. In a first step, the system automatically learns a person-specific facial appearance model (PSFAM). The PSFAM allows all facial components (e.g. eyes, mouth) to be modeled independently and is used to animate the face dynamically from the input text. As reported by other researchers, one of the key aspects in visual synthesis is the coarticulation effect. To address this problem, we introduce a new interpolation method in the high-dimensional appearance space that makes it possible to create photorealistic and videorealistic avatars. In this work, preliminary experiments on synthesizing virtual avatars from text are reported. In summary, this paper introduces three novelties: first, we use color PSFAMs to animate virtual avatars; second, we introduce a non-linear high-dimensional interpolation to achieve videorealistic animations; finally, the method makes it possible to generate new expressions by modeling the different facial elements.
KW - Appearance model
KW - Visual synthesis
UR - http://www.scopus.com/inward/record.url?scp=0345097527&partnerID=8YFLogxK
U2 - 10.1109/ICIP.2003.1246942
DO - 10.1109/ICIP.2003.1246942
M3 - Conference contribution
SN - 9780780377508
VL - 1
SP - 237
EP - 240
BT - IEEE International Conference on Image Processing
PB - IEEE
ER -