We present the first results from applying a recently proposed algorithm for the robust and reliable automatic extraction of lip feature points to an audio-video speech data corpus. This corpus comprises 10 native speakers uttering sequences that cover the range of phonemes and visemes in Australian English. The lip-tracking algorithm is based on stereo vision, which has the advantage that measurements are in real-world (3D) coordinates rather than image (2D) coordinates. Lip feature points on the inner lip contour, such as the lip corners and the mid-points of the upper and lower lips, are tracked automatically, and parameters describing the shape of the mouth are derived from these points. The results obtained so far show a correlation between the width and height of the mouth opening, as well as between the protrusion parameters of the upper and lower lips.
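The abstract describes deriving mouth-shape parameters (width, height, protrusion) from tracked 3D lip feature points. A minimal sketch of how such parameters could be computed is shown below; the point coordinates and the choice of the lip-corner plane as the protrusion reference are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical 3D lip feature points (x, y, z) in real-world coordinates,
# as a stereo-vision tracker might report them: the two lip corners and
# the mid-points of the upper and lower inner lip contour. Values are
# illustrative only.
left_corner  = np.array([-2.5,  0.0, 0.0])
right_corner = np.array([ 2.5,  0.0, 0.0])
upper_mid    = np.array([ 0.0,  1.2, 0.4])
lower_mid    = np.array([ 0.0, -1.1, 0.3])

# Mouth-shape parameters derived from the feature points.
mouth_width  = np.linalg.norm(right_corner - left_corner)  # corner-to-corner distance
mouth_height = np.linalg.norm(upper_mid - lower_mid)       # opening height

# Protrusion: forward (z) displacement of each lip mid-point relative to
# the plane of the lip corners (here z = 0) -- an assumed reference.
upper_protrusion = upper_mid[2]
lower_protrusion = lower_mid[2]

print(mouth_width, mouth_height, upper_protrusion, lower_protrusion)
```

Given per-frame parameter sequences over an utterance, the correlations the abstract reports (width vs. height, upper vs. lower protrusion) could then be measured with `np.corrcoef`.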
|Title of host publication||Proceedings of the 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing ICASSP 2001|
|Subtitle of host publication||Student Forum|
|Publisher||IEEE, Institute of Electrical and Electronics Engineers|
|Number of pages||4|
|Publication status||Published - 7 May 2001|
|Event||2001 IEEE International Conference on Acoustics, Speech, and Signal Processing - Salt Palace Convention Center, Salt Lake City, United States|
Duration: 7 May 2001 → 11 May 2001
|Conference||2001 IEEE International Conference on Acoustics, Speech, and Signal Processing|
|Abbreviated title||ICASSP 2001|
|City||Salt Lake City|
|Period||7/05/01 → 11/05/01|
Goecke, R., Millar, J. B., Zelinsky, A., & Robert-Ribes, J. (2001). Stereo Vision Lip-Tracking for Audio-Video Speech Processing. In Proceedings of the 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing ICASSP 2001: Student Forum (pp. 1-4). IEEE, Institute of Electrical and Electronics Engineers.