An objective method for obtaining consonant visemes for any given Spanish-speaking person is proposed. The speaker's face is recorded while reading a balanced set of sentences and stored as an audiovisual sequence. The visual and auditory modes are segmented into allophones, and a distance matrix is built to find allophones that are perceived as visually similar. The results show high correlation with earlier, tedious subjective evaluations, even though those were carried out for English. In addition, estimation between modes is also studied, revealing a tradeoff between the performances in the two modes: given a set of auditory groups and a set of visual groups for each grouping criterion, increasing the estimation performance of one mode decreases that of the other. Moreover, the tradeoff is very similar (< 7% between maximum and minimum values) across all observed examples.
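The grouping step described above can be illustrated with a minimal sketch: given a symmetric distance matrix over consonant allophones, cluster together those whose visual distance falls below a threshold. The abstract does not specify the clustering algorithm, so single-linkage merging via union-find is used here purely for illustration; the allophone labels, distance values, and threshold are all made up.

```python
# Illustrative sketch (not the paper's exact procedure): group consonant
# allophones into visemes by single-linkage clustering on a distance
# matrix, merging any pair whose visual distance falls below a threshold.
# Allophone labels and distances below are hypothetical.

allophones = ["p", "b", "m", "f", "t", "d"]

# Symmetric distance matrix: small values = perceived as visually similar.
D = [
    [0.00, 0.10, 0.20, 0.80, 0.90, 0.90],
    [0.10, 0.00, 0.15, 0.85, 0.90, 0.95],
    [0.20, 0.15, 0.00, 0.80, 0.95, 0.90],
    [0.80, 0.85, 0.80, 0.00, 0.70, 0.75],
    [0.90, 0.90, 0.95, 0.70, 0.00, 0.10],
    [0.90, 0.95, 0.90, 0.75, 0.10, 0.00],
]

THRESHOLD = 0.5  # hypothetical cut-off for "visually indistinguishable"

# Union-find: merge allophones connected by a sub-threshold distance.
parent = list(range(len(allophones)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

for i in range(len(allophones)):
    for j in range(i + 1, len(allophones)):
        if D[i][j] < THRESHOLD:
            parent[find(i)] = find(j)

# Collect the resulting viseme groups.
visemes = {}
for idx, phone in enumerate(allophones):
    visemes.setdefault(find(idx), []).append(phone)

print(sorted(visemes.values()))  # → [['f'], ['p', 'b', 'm'], ['t', 'd']]
```

With these toy distances, the bilabials {p, b, m} and the alveolars {t, d} each collapse into one viseme, while the labiodental f stays on its own.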
Published - 2007
2007 International Conference on Auditory-Visual Speech Processing, AVSP 2007 - Hilvarenbeek, Netherlands
Duration: 31 Aug 2007 → 3 Sept 2007