Abstract
We present GAZED, eye GAZe-guided EDiting for videos captured by a solitary, static, wide-angle, high-resolution camera. Eye gaze has been effectively employed in computational applications as a cue to capture interesting scene content; we employ gaze as a proxy to select shots for inclusion in the edited video. Given the original video, scene content and user eye-gaze tracks are combined to generate an edited video comprising cinematically valid actor shots and shot transitions, yielding an aesthetic and vivid representation of the original narrative. We model cinematic video editing as an energy minimization problem over shot selection, whose constraints capture cinematographic editing conventions. Gazed scene locations primarily determine the shots constituting the edited video. The effectiveness of GAZED against multiple competing methods is demonstrated via a psychophysical study involving 12 users and 12 performance videos.
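To make the energy-minimization formulation more concrete, the sketch below shows one common way such a problem can be solved: pick one shot per frame so as to minimize a unary gaze-derived cost plus a pairwise shot-transition cost, using dynamic programming. This is an illustrative assumption, not the authors' implementation; the names `edit_video`, `gaze_cost`, and `transition_cost`, and the flat per-cut penalty in the demo, are hypothetical stand-ins for the paper's gaze potentials and cinematographic editing constraints.

```python
# Hypothetical sketch (not the GAZED authors' code): shot selection as
# energy minimization, solved with Viterbi-style dynamic programming.
import numpy as np

def edit_video(gaze_cost, transition_cost):
    """Select one shot per frame minimizing unary gaze cost plus pairwise
    transition cost between consecutive frames.

    gaze_cost:       (T, S) array, cost of shot s at frame t
                     (low where the shot covers gazed scene locations).
    transition_cost: (S, S) array, cost of cutting from shot i to shot j
                     (stand-in for editing conventions, e.g. penalizing cuts).
    Returns a length-T list of selected shot indices.
    """
    T, S = gaze_cost.shape
    dp = np.zeros((T, S))              # dp[t, s]: best total cost ending in shot s at frame t
    back = np.zeros((T, S), dtype=int)  # best predecessor shot for backtracking
    dp[0] = gaze_cost[0]
    for t in range(1, T):
        # cand[p, s]: cost of being in shot p at t-1 and switching to shot s at t
        cand = dp[t - 1][:, None] + transition_cost
        back[t] = np.argmin(cand, axis=0)
        dp[t] = cand[back[t], np.arange(S)] + gaze_cost[t]
    # Backtrack the minimum-energy shot sequence.
    seq = [int(np.argmin(dp[-1]))]
    for t in range(T - 1, 0, -1):
        seq.append(int(back[t, seq[-1]]))
    return seq[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, S = 100, 4                               # 100 frames, 4 candidate shots (toy sizes)
    gaze_cost = rng.random((T, S))              # toy gaze potentials
    transition_cost = 0.5 * (1 - np.eye(S))     # flat penalty per cut discourages rapid cutting
    print(edit_video(gaze_cost, transition_cost)[:20])
```

In this form the constraints mentioned in the abstract would enter through the structure of the two cost terms; the dynamic program then returns the globally minimum-energy shot sequence in O(T·S²) time.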
Original language | English |
---|---|
Title of host publication | CHI 2020 - Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems |
Editors | Regina Bernhaupt, Florian Mueller, David Verweij, Josh Andres, Joanna McGrenere |
Place of Publication | United States |
Publisher | Association for Computing Machinery (ACM) |
Pages | 1-11 |
Number of pages | 11 |
ISBN (Electronic) | 9781450367080 |
DOIs | |
Publication status | Published - 21 Apr 2020 |
Externally published | Yes |
Event | International Conference on Human Factors in Computing Systems, CHI 2020 - Honolulu, United States Duration: 25 Apr 2020 → 30 Apr 2020 |
Publication series
Name | Conference on Human Factors in Computing Systems - Proceedings |
---|---|
Conference
Conference | International Conference on Human Factors in Computing Systems, CHI 2020 |
---|---|
Abbreviated title | CHI 2020 |
Country/Territory | United States |
City | Honolulu |
Period | 25/04/20 → 30/04/20 |