Author ORCID Identifier


Document Type


Date of Award


Degree Name

Doctor of Philosophy in Computing Sciences - (Ph.D.)

Department

Computer Science

First Advisor

Frank Y. Shih

Second Advisor

Zhi Wei

Third Advisor

Usman W. Roshan

Fourth Advisor

Dimitri Theodoratos

Fifth Advisor

William Graves


Machine learning techniques in medical imaging systems are accurate, but minor perturbations of the input data, known as adversarial attacks, can fool them. These attacks leave the systems vulnerable to fraud and deception and thus pose a significant challenge in practice. This dissertation presents gradient-free trained sign activation networks to detect and deter adversarial attacks on medical imaging AI (Artificial Intelligence) systems. Experimental results show that a higher distortion value is required to attack the proposed model than other state-of-the-art models on brain MRI (magnetic resonance imaging), chest X-ray, and histopathology image datasets. Moreover, the proposed models outperform the best existing models, in some cases by a factor of two. The average accuracy of our model in classifying adversarial examples is 88.89%, compared with 81.48% for MLP and LeNet and 33.89% for ResNet18. It is concluded that the sign network is a viable defense against adversarial attacks, owing to the high distortion it requires and its high accuracy on transferred adversarial examples. In addition, different models have different tolerances to adversarial attacks.
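To illustrate the kind of minor perturbation the abstract refers to, the following is a minimal sketch of a gradient-sign (FGSM-style) attack on a toy logistic classifier. The model, weights, and epsilon here are illustrative assumptions, not the dissertation's networks; gradient-based attacks like this are precisely what a gradient-free sign network is intended to resist.

```python
import numpy as np

# Toy linear classifier: p(y=1|x) = sigmoid(w @ x).
# FGSM-style attack: x_adv = x + eps * sign(d loss / d x).
# All names and values below are illustrative, not the dissertation's setup.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(w, x, y, eps):
    """Perturb x in the gradient-sign direction of the logistic loss."""
    p = sigmoid(w @ x)
    grad_x = (p - y) * w          # analytic gradient of the BCE loss w.r.t. x
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])    # hypothetical trained weights
x = np.array([0.3, -0.4, 0.8])    # clean input, classified y=1 (p ~ 0.82)
y = 1.0

x_adv = fgsm_attack(w, x, y, eps=0.5)
print(sigmoid(w @ x), sigmoid(w @ x_adv))  # confidence drops below 0.5
```

Even this small, uniformly bounded perturbation flips the toy model's decision, which is why the distortion required to force such a flip is a natural robustness metric.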

This dissertation also develops a novel detection module to defend against adversarial attacks proactively. The proposed module uses an adaptive noise-removal process to reconstruct the input and detect adversarial attacks without modifying the models. Experimental results show that the proposed models successfully remove most of the noise and obtain detection accuracies of 97.71% and 92.96%, respectively, by comparing the classification results on adversarial samples of the MNIST dataset and two subclasses of the ImageNet dataset. Furthermore, the proposed adaptive module can be used as part of an ensemble with different networks to achieve detection accuracies of 70.83% and 71.96%, respectively, on white-box adversarial attacks against ResNet18 and SCDO1MLP. The best accuracy, 62.5%, is obtained for both networks when dealing with black-box attacks.
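The detection idea above — reconstruct the input by removing noise, then flag a sample as adversarial when the classifier's predictions on the raw and reconstructed inputs disagree — can be sketched as follows. The median filter and threshold classifier are stand-ins chosen for the example; the dissertation's adaptive noise-removal process and networks are not reproduced here.

```python
import numpy as np

# Detection-by-reconstruction sketch: denoise the input, then flag the sample
# as adversarial when the classifier disagrees with itself on the raw vs. the
# reconstructed version. The median filter is a stand-in denoiser and the
# threshold classifier is hypothetical, chosen only for illustration.

def median_filter_1d(x, k=3):
    """Simple sliding-window median filter with edge padding."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def detect_adversarial(classify, x):
    """Return True when predictions on raw and denoised inputs differ."""
    return classify(x) != classify(median_filter_1d(x))

# Hypothetical classifier: class 1 when the mean intensity exceeds 0.5.
classify = lambda x: int(x.mean() > 0.5)

clean = np.full(8, 0.2)           # clean sample, class 0
noisy = clean.copy()
noisy[2] = noisy[5] = 3.0         # sparse spikes push the mean over 0.5

print(detect_adversarial(classify, clean))  # False: predictions agree
print(detect_adversarial(classify, noisy))  # True: reconstruction disagrees
```

Because only the inputs are filtered and compared, the detector wraps around any classifier unchanged, which matches the abstract's claim that the module works "without modifying the models."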


