TY - GEN
T1 - Feature Map Augmentation to Improve Rotation Invariance in Convolutional Neural Networks
AU - Kumar, Dinesh
AU - Sharma, Dharmendra
AU - Goecke, Roland
PY - 2020
AB - Whilst it is a trivial task for the human vision system to recognize and detect objects with good accuracy, making computer vision algorithms achieve the same feat remains an active area of research. The human vision system recognizes objects seen only once with high accuracy despite alterations to their appearance by transformations such as rotation, translation, scaling, distortion and occlusion, making it a state-of-the-art spatially invariant biological vision system. To make computer algorithms such as Convolutional Neural Networks (CNNs) spatially invariant, one popular practice is to introduce variations in the dataset through data augmentation. This achieves good results but comes with increased computation cost. In this paper, we address rotation transformation and, instead of using data augmentation, propose a novel method that improves the rotation invariance of CNNs by augmenting feature maps. This is achieved by creating a rotation transformer layer, called the Rotation Invariance Transformer (RiT), that can be placed at the output end of a convolution layer. Incoming features are rotated by a given set of rotation parameters and then passed to the next layer. We test our technique on the benchmark CIFAR10 and MNIST datasets in a setting where the RiT layer is placed between the feature extraction and classification layers of the CNN. Our results show promising improvements in the network's ability to be rotation invariant across classes, with no increase in model parameters.
KW - Convolutional Neural Network
KW - Data augmentation
KW - Deep learning
KW - Feature maps
KW - Rotation invariance
UR - http://www.scopus.com/inward/record.url?scp=85080950966&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/16a3a5db-38a8-3dcf-9461-7b0ef69cbbea/
DO - 10.1007/978-3-030-40605-9_30
M3 - Conference contribution
SN - 9783030406042
VL - 12002
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 348
EP - 359
BT - Advanced Concepts for Intelligent Vision Systems - 20th International Conference, ACIVS 2020, Proceedings
A2 - Blanc-Talon, Jacques
A2 - Delmas, Patrice
A2 - Philips, Wilfried
A2 - Popescu, Dan
A2 - Scheunders, Paul
PB - Springer
CY - Switzerland
T2 - 20th International Conference on Advanced Concepts for Intelligent Vision Systems
Y2 - 10 February 2020 through 14 February 2020
ER -