Feature Map Augmentation to Improve Scale Invariance in Convolutional Neural Networks

Research output: Contribution to journal › Article › peer-review


Introducing variation in the training dataset through data augmentation has been a popular
technique for making Convolutional Neural Networks (CNNs) spatially invariant, but it
leads to increased dataset volume and computation cost. Instead of data augmentation,
augmentation of feature maps is proposed to introduce variations in the features extracted
by a CNN. To achieve this, a rotation transformer layer called the Rotation Invariance
Transformer (RiT) is developed, which applies a rotation transformation to augment CNN
features. The RiT layer can be used to augment the output features of any convolution
layer within a CNN; however, it is most effective when placed at the output of the final
convolution layer. We test RiT in the application of scale invariance, where we attempt
to classify scaled images from benchmark datasets. Our results show promising improvements
in the network's ability to be scale invariant whilst keeping the model computation
cost low.
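The abstract does not specify how the RiT layer rotates features, so the following is only a minimal sketch of the general idea of feature-map rotation augmentation, not the paper's actual method. It assumes a feature tensor of shape (C, H, W) and uses 90-degree rotations (which need no interpolation); the function name `rit_augment` and the choice of angles are hypothetical.

```python
import numpy as np

def rit_augment(feature_maps, angles=(90, 180, 270)):
    """Hypothetical sketch: augment conv features with rotated copies.

    feature_maps: array of shape (C, H, W) from a convolution layer.
    Returns the original maps stacked with rotated copies, rotating
    each channel in the spatial (H, W) plane.
    """
    augmented = [feature_maps]
    for angle in angles:
        k = angle // 90  # number of 90-degree rotations
        augmented.append(np.rot90(feature_maps, k=k, axes=(1, 2)))
    # Shape: (1 + len(angles), C, H, W)
    return np.stack(augmented)

# Example: a 2-channel 4x4 feature map yields four variants.
fmap = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
out = rit_augment(fmap)
```

Placing such an operation after the final convolution layer, as the abstract suggests, means each rotated variant shares the same learned filters, so feature variation is introduced without enlarging the training dataset.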
Original language: English
Pages (from-to): 51-74
Number of pages: 24
Journal: Journal of Artificial Intelligence and Soft Computing Research
Issue number: 1
Publication status: Published - 1 Jan 2023
