Multi-modal Information Extraction and Fusion with Convolutional Neural Networks

Dinesh Kumar, Dharmendra Sharma

Research output: A Conference proceeding or a Chapter in Book › Conference contribution › peer-review

6 Citations (Scopus)

Abstract

Developing computational algorithms to model the biological vision system has challenged researchers in the computer vision field for several decades. As a result, state-of-the-art algorithms such as the Convolutional Neural Network (CNN) have emerged for image classification and recognition tasks with promising results. CNNs, however, remain view-specific, producing good results only when the variation between test and train data is small. Making CNNs learn invariant features, so that they can effectively recognise objects that undergo appearance changes as a result of transformations such as scaling, remains a technical challenge. Recent physiological studies of the visual system suggest new paradigms. Firstly, our visual system uses both local features and global features in its recognition function. Secondly, cells tuned to global features respond quickly to visual stimuli for recognising objects. Thirdly, information from modalities that handle local features, global features and color is integrated in the brain for performing recognition tasks. While CNNs rely on aggregation of local features for recognition, these theories point to using global features to solve transformation-invariance problems in CNNs. In this paper we realise these paradigms in a computational model, named Global features improved CNN (GCNN), and test it on classification of scaled images. We experiment with combining Histogram of Oriented Gradients (HOG) global features, CNN local features and color information, and test our technique on benchmark data sets. Our results show GCNN outperforms the traditional CNN on classification of scaled images, indicating the potential effectiveness of our model for improving scale invariance in CNN-based networks.
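The fusion the abstract describes — concatenating CNN local features, a HOG-style global descriptor, and color information — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the whole-image orientation histogram (real HOG uses cells and block normalisation), the per-channel color histogram, and the placeholder CNN feature vector are all assumptions for illustration.

```python
import numpy as np

def hog_global_descriptor(gray, n_bins=9):
    # Gradient-orientation histogram over the whole image: a crude,
    # whole-image stand-in for HOG (real HOG uses cells and block
    # normalisation).
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)  # L2-normalise

def color_histogram(rgb, bins_per_channel=8):
    # Per-channel intensity histogram as a simple color modality.
    feats = [np.histogram(rgb[..., c], bins=bins_per_channel,
                          range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(feats).astype(float)
    return h / (h.sum() + 1e-8)                  # normalise to a distribution

def fuse(cnn_features, gray, rgb):
    # Late fusion: concatenate the three modalities into one vector.
    return np.concatenate([cnn_features,
                           hog_global_descriptor(gray),
                           color_histogram(rgb)])

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(64, 64, 3))
gray = rgb.mean(axis=-1)
cnn_feat = rng.standard_normal(128)  # placeholder for pooled CNN activations
fused = fuse(cnn_feat, gray, rgb)
print(fused.shape)                   # (128 + 9 + 24,) = (161,)
```

Concatenation is only one of several possible fusion strategies; the classifier downstream then sees local, global and color evidence jointly.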

Original language: English
Title of host publication: Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN)
Editors: Anish Roy
Place of publication: United States
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 1-9
Number of pages: 9
ISBN (Electronic): 9781728169279
ISBN (Print): 9781728169262
DOIs
Publication status: Published - 30 Sept 2020
Event: 2020 International Joint Conference on Neural Networks (IJCNN) - Glasgow, United Kingdom
Duration: 19 Jul 2020 – 24 Jul 2020

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

Conference

Conference: 2020 International Joint Conference on Neural Networks (IJCNN)
Period: 19/07/20 – 24/07/20

