Feature Map Augmentation to Improve Scale Invariance in Convolutional Neural Networks

Dinesh Kumar, Dharmendra Sharma

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)
32 Downloads (Pure)


Introducing variation into the training dataset through data augmentation has been a popular technique for making Convolutional Neural Networks (CNNs) spatially invariant, but it increases dataset volume and computation cost. Instead of data augmentation, augmentation of feature maps is proposed to introduce variation into the features extracted by a CNN. To achieve this, a rotation transformer layer called the Rotation Invariance Transformer (RiT) is developed, which applies rotation transformations to augment CNN features. The RiT layer can be used to augment the output features of any convolution layer within a CNN; however, it is most effective when placed at the output of the final convolution layer. We test RiT in the context of scale invariance, where we attempt to classify scaled images from benchmark datasets. Our results show promising improvements in the network's ability to be scale invariant while keeping the model's computation cost low.
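The abstract describes augmenting feature maps (rather than input images) with rotation transforms. A minimal NumPy sketch of that idea is below; the function name, the channels-first layout, and the restriction to 90-degree rotations are illustrative assumptions, not the authors' RiT implementation:

```python
import numpy as np

def rit_augment(feature_maps, k=1):
    """Hypothetical sketch of feature-map rotation augmentation.

    feature_maps: array of shape (channels, height, width), e.g. the
                  output of a CNN's final convolution layer.
    k:            number of 90-degree counterclockwise rotations.
    """
    # Rotate every channel in the (height, width) plane; the channel
    # dimension is left untouched, mimicking a per-map transform.
    return np.rot90(feature_maps, k=k, axes=(1, 2))

# Example: a single 1-channel 2x2 feature map.
fmap = np.array([[[1, 2],
                  [3, 4]]])
rotated = rit_augment(fmap, k=1)  # 90-degree rotation of each channel
```

In a real network, such a layer would be inserted after a convolution block so that subsequent layers see rotated variants of the same features, instead of the dataset being enlarged with rotated images.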

Original language: English
Pages (from-to): 51-74
Number of pages: 24
Journal: Journal of Artificial Intelligence and Soft Computing Research
Issue number: 1
Publication status: Published - 1 Jan 2023


