Deep learning (DL) models have demonstrated impressive performance in sensitive medical applications such as disease diagnosis. However, a backdoor attack embedded in a clean dataset without poisoning the labels poses a severe threat to the integrity of Artificial Intelligence (AI) technology. Much of the existing literature on backdoor attacks against medical applications assumes that the labels of the poisoned samples are altered as well. This compromises the stealthiness of the attack, because poisoned samples can be identified by visual inspection through the mismatch between their content and their labels. In this paper, an elusive backdoor attack is proposed that makes the poisoned samples difficult to recognize: a backdoor signal is superimposed on a small portion of the clean dataset at training time, while the labels are left untouched. Moreover, a hybrid attack is proposed that further increases the Attack Success Rate (ASR). The approach is evaluated on a Convolutional Neural Network (CNN)-based system for Magnetic Resonance Imaging (MRI) brain tumor classification; the results demonstrate the effectiveness of the attacks and raise concern about the use of AI in sensitive applications.
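To make the poisoning step concrete, the sketch below illustrates a clean-label poisoning routine of the kind the abstract describes: a low-amplitude signal is superimposed on a small fraction of target-class training images while their labels stay unchanged. This is a minimal illustration under stated assumptions, not the paper's exact method; the sinusoidal signal shape and the parameter names (poison_rate, amplitude, frequency, target_class) are hypothetical choices.

```python
# Minimal sketch of clean-label poisoning via a superimposed backdoor signal.
# Assumptions (not from the paper): grayscale images as a (N, H, W) uint8/float
# array, a horizontal sinusoid as the signal, and illustrative parameter values.
import numpy as np

def backdoor_signal(height, width, amplitude=20.0, frequency=6):
    # Horizontal sinusoid spanning the image width; low amplitude keeps it
    # hard to notice on visual inspection.
    cols = np.arange(width, dtype=np.float32)
    row = amplitude * np.sin(2.0 * np.pi * frequency * cols / width)
    return np.tile(row, (height, 1))

def poison_clean_label(images, labels, target_class, poison_rate=0.1, seed=0):
    # Superimpose the signal on a fraction of target-class images.
    # Labels are NOT modified, which is what keeps the attack elusive.
    rng = np.random.default_rng(seed)
    poisoned = images.astype(np.float32).copy()
    candidates = np.flatnonzero(labels == target_class)
    n_poison = max(1, int(poison_rate * candidates.size))
    chosen = rng.choice(candidates, size=n_poison, replace=False)
    sig = backdoor_signal(images.shape[1], images.shape[2])
    poisoned[chosen] = np.clip(poisoned[chosen] + sig, 0.0, 255.0)
    return poisoned, chosen
```

In this scheme the model learns to associate the signal with the target class during training; at inference time, superimposing the same signal on any input tends to steer the prediction toward that class, while clean inputs remain unaffected.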
Publication details
2023, Foresti, G.L., Fusiello, A., Hancock, E. (eds) Image Analysis and Processing – ICIAP 2023. ICIAP 2023. Lecture Notes in Computer Science, vol 14234. Springer, Cham, pp. 332-344
BHAC-MRI: Backdoor and Hybrid Attacks on MRI Brain Tumor Classification Using CNN (02a Chapter or Article)
Imran M., Qureshi H. K., Amerini I.
ISBN: 978-3-031-43152-4; 978-3-031-43153-1
Research group: Computer Vision, Computer Graphics, Deep Learning