A smart fusion framework for multimodal object, activity and event detection

Girija Chetty, Mohammad Yamin

Research output: Conference contribution (a conference proceeding or a chapter in book)

Abstract

With the increasing diffusion of wearable technologies and mobile sensor systems, and the entrenchment of social media networks and crowdsourced information systems in every aspect of modern society, continuous, pervasive and ubiquitous sensing, monitoring, surveillance and detection of every type of object, activity, event and incident at a global scale has become an unavoidable reality. This rapid proliferation provides immense opportunities to use comprehensive information from a diverse array of multimodal, multi-view and multisensory data streams to develop efficient, robust and automated computer-based decision support systems. Further, with the availability of complementary and supplementary information in the form of auxiliary metadata from social networks, human experts and crowdsourced communities, it is possible to obtain better actionable intelligence from these systems. In this paper, we propose a novel computational framework to address this gap. The proposed smart fusion framework, with a particular focus on combining heterogeneous, multimodal, real-time big data streams carrying information from different types of sensors with auxiliary information drawn from human experts and opinion scores in the loop, allows synergistic fusion to be achieved, leading to better actionable intelligence from computer-based decision support systems. The implementation of this framework on a component-based software platform, msifStudio, and its evaluation on several use-case application scenarios are presented here.
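The abstract describes fusing classifier outputs from heterogeneous sensor streams with human opinion scores in the loop, but the page does not reproduce the msifStudio implementation. As a minimal, hypothetical sketch of what such score-level fusion could look like, the Python below combines per-class detection scores from several modality streams with an expert opinion vector; all function names, weights and numbers are illustrative assumptions, not the authors' API.

# Hypothetical sketch only: weighted score-level fusion of per-class detection
# scores from heterogeneous sensor streams with human expert opinion scores.
# Names, weights and numbers are illustrative assumptions, not the paper's API.
import numpy as np

def smart_fusion(stream_scores, expert_scores, stream_weights=None, expert_weight=0.3):
    """Return (winning class index, fused score vector).

    stream_scores : (n_streams, n_classes) normalized classifier scores,
                    one row per modality (e.g. video, audio, wearable sensor).
    expert_scores : (n_classes,) opinion scores in [0, 1] from experts/crowd.
    stream_weights: optional per-stream reliability weights summing to 1.
    expert_weight : how much the human-in-the-loop opinion counts.
    """
    stream_scores = np.asarray(stream_scores, dtype=float)
    expert_scores = np.asarray(expert_scores, dtype=float)
    if stream_weights is None:
        # No reliability prior: treat every stream as equally trustworthy.
        n = stream_scores.shape[0]
        stream_weights = np.full(n, 1.0 / n)
    machine = np.asarray(stream_weights) @ stream_scores   # fuse across streams
    fused = (1.0 - expert_weight) * machine + expert_weight * expert_scores
    return int(np.argmax(fused)), fused

# Example: video and audio classifiers scoring three event classes, with an
# expert opinion favouring class 1; the fused decision picks class 1.
label, fused = smart_fusion(
    stream_scores=[[0.6, 0.3, 0.1],   # video stream
                   [0.2, 0.5, 0.3]],  # audio stream
    expert_scores=[0.1, 0.8, 0.1],
)
print(label, fused)

In a real deployment the stream weights would presumably be learned from validation data rather than fixed, and the opinion scores would arrive asynchronously from the expert and crowdsourcing loop; this sketch only illustrates the convex-combination idea of "synergistic fusion".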

Original language: English
Title of host publication: Proceedings of the 10th INDIACom; 2016 3rd International Conference on Computing for Sustainable Global Development, INDIACom 2016
Editors: M. N. Hoda
Place of publication: New Delhi, India
Publisher: IEEE
Pages: 1417-1422
Number of pages: 6
Volume: 1
ISBN (electronic): 9789380544199
ISBN (print): 9789380544199
Publication status: Published - 2016
Event: 10th INDIACom; 2016 3rd International Conference on Computing for Sustainable Global Development, New Delhi, India
Duration: 16 Mar 2016 - 18 Mar 2016

Publication series

Name: Proceedings of the 10th INDIACom; 2016 3rd International Conference on Computing for Sustainable Global Development, INDIACom 2016

Conference

Conference: 10th INDIACom; 2016 3rd International Conference on Computing for Sustainable Global Development
Abbreviated title: INDIACom
Country: India
City: New Delhi
Period: 16/03/16 - 18/03/16

Keywords

multimodal, fusion, data science, event detection, incident response, smart, computational, information technology (IT), social networking sites, technology, online

Cite this

Chetty, G., & Yamin, M. (2016). A smart fusion framework for multimodal object, activity and event detection. In M. N. Hoda (Ed.), Proceedings of the 10th INDIACom; 2016 3rd International Conference on Computing for Sustainable Global Development, INDIACom 2016 (Vol. 1, pp. 1417-1422). [7724498]. New Delhi, India: IEEE.