TY - GEN
T1 - Active online anomaly detection using Dirichlet process mixture model and Gaussian process classification
AU - Varadarajan, Jagannadan
AU - Subramanian, Ramanathan
AU - Ahuja, Narendra
AU - Moulin, Pierre
AU - Odobez, Jean Marc
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/5/11
Y1 - 2017/5/11
N2 - We present a novel anomaly detection (AD) system for streaming videos. Unlike prior methods that rely on unsupervised learning of clip representations, which are usually coarse in nature, and on batch-mode learning, we propose the combination of two non-parametric models for our task: i) Dirichlet process mixture model (DPMM) based modeling of object motion and directions in each cell, and ii) a Gaussian process based active learning paradigm involving labeling by a domain expert. Whereas conventional clip representation methods quantize only motion directions, leading to a lossy, coarse representation that is inadequate, our clip representation approach yields fine-grained clusters at each cell that model scene activities (both direction and speed) more effectively. For active anomaly detection, we adapt a Gaussian process framework to process incoming samples (video snippets) sequentially, seek labels for confusing or informative samples, and update the AD model online. Furthermore, we propose a novel query criterion for selecting informative samples to label that incorporates both exploration and exploitation, which, together with the proposed video representation, is found to outperform competing criteria on two challenging traffic scene datasets.
AB - We present a novel anomaly detection (AD) system for streaming videos. Unlike prior methods that rely on unsupervised learning of clip representations, which are usually coarse in nature, and on batch-mode learning, we propose the combination of two non-parametric models for our task: i) Dirichlet process mixture model (DPMM) based modeling of object motion and directions in each cell, and ii) a Gaussian process based active learning paradigm involving labeling by a domain expert. Whereas conventional clip representation methods quantize only motion directions, leading to a lossy, coarse representation that is inadequate, our clip representation approach yields fine-grained clusters at each cell that model scene activities (both direction and speed) more effectively. For active anomaly detection, we adapt a Gaussian process framework to process incoming samples (video snippets) sequentially, seek labels for confusing or informative samples, and update the AD model online. Furthermore, we propose a novel query criterion for selecting informative samples to label that incorporates both exploration and exploitation, which, together with the proposed video representation, is found to outperform competing criteria on two challenging traffic scene datasets.
UR - http://www.scopus.com/inward/record.url?scp=85020186679&partnerID=8YFLogxK
U2 - 10.1109/WACV.2017.74
DO - 10.1109/WACV.2017.74
M3 - Conference contribution
AN - SCOPUS:85020186679
SN - 9781509048236
T3 - Proceedings - 2017 IEEE Winter Conference on Applications of Computer Vision, WACV 2017
SP - 615
EP - 623
BT - Proceedings - 2017 IEEE Winter Conference on Applications of Computer Vision, WACV 2017
A2 - Medioni, Gerard
A2 - Michael, David
A2 - Sanderson, Conrad
A2 - Turk, Matthew
PB - IEEE, Institute of Electrical and Electronics Engineers
CY - United States
T2 - 17th IEEE Winter Conference on Applications of Computer Vision, WACV 2017
Y2 - 24 March 2017 through 31 March 2017
ER -