Image super-resolution as a defense against adversarial attacks

Aamir Mustafa, Salman H. Khan, Munawar Hayat, Jianbing Shen, Ling Shao

Research output: Contribution to journal › Article › peer-review

85 Citations (Scopus)


Convolutional Neural Networks have achieved significant success across multiple computer vision tasks. However, they are vulnerable to carefully crafted, human-imperceptible adversarial noise patterns, which constrains their deployment in critical security-sensitive systems. This paper proposes a computationally efficient image enhancement approach that provides a strong defense mechanism to effectively mitigate the effect of such adversarial perturbations. We show that deep image restoration networks learn mapping functions that can bring off-the-manifold adversarial samples onto the natural image manifold, thus restoring classification towards correct classes. A distinguishing feature of our approach is that, in addition to providing robustness against attacks, it simultaneously enhances image quality and retains the models' performance on clean images. Furthermore, the proposed method does not modify the classifier or require a separate mechanism to detect adversarial images. The effectiveness of the scheme has been demonstrated through extensive experiments, where it has proven to be a strong defense in gray-box settings. The proposed scheme is simple and has the following advantages: 1) it does not require any model training or parameter optimization, 2) it complements other existing defense mechanisms, 3) it is agnostic to the attacked model and attack type, and 4) it provides superior performance across all popular attack algorithms. Our code is publicly available at
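The core idea, an input-purification defense that runs a (possibly adversarial) image through a restoration/super-resolution step before handing it to an unmodified classifier, can be sketched as below. This is a minimal illustration, not the paper's implementation: plain bilinear interpolation stands in for the learned super-resolution network, and `classifier` is any hypothetical image-to-prediction function.

```python
import numpy as np

def bilinear_upscale(img, scale=2):
    """Stand-in for a learned super-resolution network.
    Plain bilinear interpolation keeps this sketch self-contained; the
    actual defense would apply a trained deep SR model at this step."""
    h, w, _ = img.shape
    H, W = h * scale, w * scale
    ys = np.linspace(0, h - 1, H)          # target rows in source coords
    xs = np.linspace(0, w - 1, W)          # target cols in source coords
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :, None]          # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def defend_and_classify(img, classifier, scale=2):
    """Purification pipeline: map the input back toward the natural-image
    manifold via a restoration step, then classify. The classifier itself
    is untouched -- no retraining or parameter changes are needed."""
    restored = bilinear_upscale(img, scale)  # restoration / SR step
    return classifier(restored)
```

Because the defense only transforms the input, it can be dropped in front of any existing model and composed with other defenses, which is what makes it model- and attack-agnostic.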

Original language: English
Article number: 8844865
Pages (from-to): 1711-1724
Number of pages: 14
Journal: IEEE Transactions on Image Processing
Publication status: Published - 2020

