We present a novel anomaly detection (AD) system for streaming videos. Unlike prior methods that rely on unsupervised, batch-mode learning of clip representations, which are usually coarse in nature, we propose a combination of two non-parametric models for our task: i) Dirichlet process mixture model (DPMM) based modeling of object motion (direction and speed) in each cell, and ii) a Gaussian process based active learning paradigm involving labeling by a domain expert. Whereas conventional clip representation methods quantize only motion directions, yielding a lossy, coarse representation that is often inadequate, our approach produces fine-grained clusters at each cell that model scene activities (both direction and speed) more effectively. For active anomaly detection, we adapt a Gaussian process framework to process incoming samples (video snippets) sequentially, seek labels for confusing or informative samples, and update the AD model online. Furthermore, we propose a novel query criterion for selecting informative samples for labeling that combines exploration and exploitation, and show that, together with the proposed video representation, it outperforms competing criteria on two challenging traffic scene datasets.
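To make the two components concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation): it uses scikit-learn's `BayesianGaussianMixture` with a Dirichlet-process prior to cluster toy per-cell motion features (direction, speed), and a Gaussian process regressor whose query score adds predicted anomaly (exploitation) and predictive uncertainty (exploration). All data, feature choices, and the trade-off weight `beta` are invented for the example.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# --- DPMM over per-cell motion features (direction in rad, speed) ---
# Toy data: two dominant traffic flows observed in one grid cell.
flow_a = np.column_stack([rng.normal(0.0, 0.1, 200), rng.normal(5.0, 0.5, 200)])
flow_b = np.column_stack([rng.normal(np.pi, 0.1, 200), rng.normal(1.0, 0.3, 200)])
motion = np.vstack([flow_a, flow_b])

dpmm = BayesianGaussianMixture(
    n_components=10,  # truncation level for the DP stick-breaking prior
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(motion)
# Negative log-likelihood under the fitted mixture acts as an anomaly feature.
nll = -dpmm.score_samples(motion)

# --- GP-based active selection: exploitation + exploration ---
X_lab = motion[::40]                                # a few "expert-labeled" snippets
y_lab = (nll[::40] > np.median(nll)).astype(float)  # stand-in labels, 1 = anomalous
X_pool = motion[1::7]                               # unlabeled pool

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X_lab, y_lab)
mean, std = gp.predict(X_pool, return_std=True)

beta = 1.0                        # hypothetical exploration weight
query_scores = mean + beta * std  # high predicted anomaly OR high uncertainty
query_idx = int(np.argmax(query_scores))  # snippet to send to the domain expert
```

Here `beta` controls the exploration/exploitation balance; the paper's actual criterion may differ, but the additive form captures the idea of querying samples that are either likely anomalous or poorly explained by the current model.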