Abstract
Introducing variation in the training dataset through data augmentation has been a popular technique to make Convolutional Neural Networks (CNNs) spatially invariant, but it leads to increased dataset volume and computation cost. Instead of data augmentation, augmentation of feature maps is proposed to introduce variations in the features extracted by a CNN. To achieve this, a rotation transformer layer called Rotation Invariance Transformer (RiT) is developed, which applies rotation transformations to augment CNN features. The RiT layer can be used to augment output features from any convolution layer within a CNN. However, its maximum effectiveness is shown when it is placed at the output end of the final convolution layer. We test RiT in the application of scale invariance, where we attempt to classify scaled images from benchmark datasets. Our results show promising improvements in the network's ability to be scale invariant whilst keeping the model computation cost low.
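The abstract describes the RiT layer only at a high level. As a rough illustration of feature-map augmentation, the PyTorch sketch below rotates the output of the final convolution block rather than the input images. The class name `RotationTransformer`, the choice of 90-degree rotations, and the mean-pooled aggregation are all assumptions made for this sketch; they are not the authors' published implementation.

```python
import torch
import torch.nn as nn


class RotationTransformer(nn.Module):
    """Hypothetical sketch of an RiT-style feature-map augmentation layer.

    Rotates each feature map by fixed multiples of 90 degrees and averages
    the rotated copies, so downstream layers see rotated feature variants
    without any enlargement of the training dataset.
    """

    def __init__(self, ks=(0, 1, 2, 3)):
        super().__init__()
        self.ks = ks  # numbers of 90-degree rotations to apply

    def forward(self, x):
        # x: (batch, channels, height, width) feature maps from a conv layer.
        # Assumes square feature maps so all rotated copies share one shape.
        rotated = [torch.rot90(x, k=k, dims=(2, 3)) for k in self.ks]
        # Mean-pooling the rotated copies is an assumption of this sketch;
        # the paper's exact aggregation is not specified in the abstract.
        return torch.stack(rotated, dim=0).mean(dim=0)


# Placement after the final convolution block, as the abstract suggests:
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    RotationTransformer(),      # augment features, not input images
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)
```

Because the augmentation happens inside the network, the dataset itself is never duplicated, which is the computation-cost advantage the abstract claims over conventional input-level data augmentation.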
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 51-74 |
| Number of pages | 24 |
| Journal | Journal of Artificial Intelligence and Soft Computing Research |
| Volume | 13 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Jan 2023 |