Putting the pieces together: Multimodal analysis of social attention in meetings

Ramanathan Subramanian, Jacopo Staiano, Kyriaki Kalimeri, Nicu Sebe, Fabio Pianesi

Research output: Conference proceeding / Chapter in Book › Conference contribution › peer-review

21 Citations (Scopus)

Abstract

This paper presents a multimodal framework employing eye-gaze, head-pose and speech cues to explain observed social attention patterns in meeting scenes. We first investigate several hypotheses concerning social attention and characterize meetings and individuals based on ground-truth data. We then replicate the ground-truth results through automated estimation of eye-gaze, head-pose and speech activity for each participant. Experimental results show that combining eye-gaze and head-pose estimates reduces the error in social attention estimation by over 26%.
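The key technical idea in the abstract is fusing eye-gaze and head-pose estimates when inferring each participant's attention target. The sketch below illustrates one simple way such a fusion could work, assuming per-frame direction estimates: a weighted combination of the gaze and head-pose directions, followed by assigning the participant closest to the fused direction as the attention target. The fusion weight alpha, the nearest-target rule, and all function names are illustrative assumptions, not the method described in the paper.

    import numpy as np

    def fuse_directions(gaze_dir, head_dir, alpha=0.6):
        """Weighted combination of gaze and head-pose direction vectors (alpha is an assumed weight)."""
        fused = alpha * np.asarray(gaze_dir, dtype=float) + (1.0 - alpha) * np.asarray(head_dir, dtype=float)
        return fused / np.linalg.norm(fused)

    def attention_target(subject_pos, fused_dir, participant_positions):
        """Return the index of the participant whose direction from the subject is closest to the fused direction."""
        best_idx, best_angle = None, np.inf
        for idx, pos in enumerate(participant_positions):
            to_target = np.asarray(pos, dtype=float) - np.asarray(subject_pos, dtype=float)
            to_target /= np.linalg.norm(to_target)
            # Angle between the fused attention direction and the direction to this participant.
            angle = np.arccos(np.clip(np.dot(fused_dir, to_target), -1.0, 1.0))
            if angle < best_angle:
                best_idx, best_angle = idx, angle
        return best_idx

    # Example: a subject at the origin looking roughly toward the participant at (1, -1).
    subject = [0.0, 0.0]
    others = [[1.0, 1.0], [1.0, -1.0], [-1.0, 0.5]]
    direction = fuse_directions(gaze_dir=[0.7, -0.7], head_dir=[0.9, -0.4])
    print(attention_target(subject, direction, others))  # -> 1

In practice the reported 26% error reduction suggests a more careful combination of the two cues (and the speech modality) than this fixed-weight average; the sketch only conveys the overall idea of cue fusion for visual focus-of-attention estimation.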

Original language: English
Title of host publication: MM'10 - Proceedings of the ACM Multimedia 2010 International Conference
Editors: Alberto del Bimbo, Shih-Fu Chang, Arnold Smeulders
Place of Publication: United States
Publisher: Association for Computing Machinery (ACM)
Pages: 659-662
Number of pages: 4
ISBN (Print): 9781605589336
DOIs
Publication status: Published - 2010
Externally published: Yes
Event: 18th ACM International Conference on Multimedia, ACM Multimedia 2010, MM'10 - Firenze, Italy
Duration: 25 Oct 2010 - 29 Oct 2010

Publication series

Name: MM'10 - Proceedings of the ACM Multimedia 2010 International Conference

Conference

Conference: 18th ACM International Conference on Multimedia, ACM Multimedia 2010, MM'10
Country/Territory: Italy
City: Firenze
Period: 25/10/10 - 29/10/10
