Document Type

Dissertation

Date of Award

8-31-2023

Degree Name

Doctor of Philosophy in Electrical Engineering (Ph.D.)

Department

Electrical and Computer Engineering

First Advisor

Abdallah Khreishah

Second Advisor

Issa Khalil

Third Advisor

Ali N. Akansu

Fourth Advisor

Reza Curtmola

Fifth Advisor

Hai Nhat Phan

Sixth Advisor

Bipin Rajendran

Abstract

Neural network (NN) classifiers have gained significant traction in diverse domains such as natural language processing, computer vision, and cybersecurity, owing to their remarkable ability to approximate complex latent distributions from data. Nevertheless, the conventional assumption of an attack-free operating environment has been challenged by the emergence of adversarial examples. These perturbed samples, which are typically imperceptible to human observers, can cause NN classifiers to misclassify. Moreover, recent studies have shown that poisoned training data can produce Trojan-backdoored classifiers that misclassify any input containing a predefined trigger pattern.
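To make these two attack classes concrete, below is a minimal, illustrative PyTorch sketch, not drawn from the dissertation itself: a one-step FGSM perturbation (Goodfellow et al., 2015) for adversarial examples, and a corner-patch trigger stamp of the kind used in backdoor data poisoning. The `model`, tensor shapes, and hyperparameters here are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """One-step adversarial example (FGSM): nudge each input value by
    +/- epsilon in the direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

def stamp_trigger(x, target_label, patch_value=1.0, size=4):
    """Poison a training image by stamping a fixed patch (the 'trigger')
    in the bottom-right corner and relabeling it to the attacker's target,
    as in Trojan/backdoor data poisoning."""
    x_poisoned = x.clone()
    x_poisoned[..., -size:, -size:] = patch_value
    return x_poisoned, torch.tensor(target_label)
```

The FGSM perturbation is small enough to be imperceptible to a human observer yet can flip the model's prediction; the stamped trigger, once learned during training, causes misclassification whenever the patch appears at inference time.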

In recent years, significant research effort has been dedicated to uncovering the vulnerabilities of NN classifiers and developing defenses or mitigations against them. However, existing approaches still fall short of providing mature solutions to this ever-evolving problem. The widely adopted defense mechanisms against adversarial examples are computationally expensive and impractical for certain real-world applications. Likewise, practical black-box defenses against Trojan backdoors have yet to achieve state-of-the-art performance. More concerning is the limited exploration of these vulnerabilities in the context of cooperative attacks or federated learning, leaving NN classifiers exposed to unknown risks. This dissertation aims to address these critical gaps and refine our understanding of these vulnerabilities. The research conducted within this dissertation encompasses both the attack and defense perspectives, aiming to shed light on future research directions for vulnerabilities in NN classifiers.
