Date of Award

8-2025

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Automotive Engineering

Committee Chair/Advisor

Pierluigi Pisu

Committee Member

Jerome McClendon

Committee Member

Bing Li

Committee Member

Abolfazl Razi

Abstract

This dissertation addresses the critical challenge of adversarial robustness in deep learning systems, focusing on two fundamental domains: time-series prediction and object detection. As these AI systems are increasingly deployed in safety-critical applications, from power grid management to autonomous vehicles, their vulnerability to adversarial attacks poses significant risks to infrastructure and human safety.

The first contribution introduces a novel stealthy black-box False Data Injection (FDI) attack specifically designed for quasi-periodic time-series data. Unlike existing attacks that produce easily detectable anomalies, our method generates adversarial perturbations that preserve the underlying periodicity and statistical properties of the data, effectively bypassing traditional anomaly detection mechanisms. We rigorously evaluate LSTM models' vulnerability to this attack and develop real-time detection and mitigation strategies that maintain model accuracy on clean data while providing robust defense against sophisticated attacks.
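To make the underlying idea concrete, the following is a minimal, illustrative sketch of how a periodicity-preserving perturbation could be constructed; it is not the dissertation's exact black-box attack. The sketch confines the perturbation to the signal's dominant harmonics and bounds its amplitude, so the perturbed series remains quasi-periodic and statistically close to the original. The function name, hyper-parameters, and phase-shift scheme below are assumptions for illustration only.

    # Illustrative sketch (not the dissertation's attack): restrict the
    # perturbation's spectrum to the dominant harmonics of a quasi-periodic
    # series so its periodicity and gross statistics are roughly preserved.
    import numpy as np

    def stealthy_fdi(x, eps=0.02, n_harmonics=3, seed=0):
        """x: 1-D quasi-periodic series; returns a perturbed copy."""
        rng = np.random.default_rng(seed)
        X = np.fft.rfft(x)
        # pick the strongest harmonics (excluding DC) as the attack subspace
        mags = np.abs(X).copy()
        mags[0] = 0.0
        idx = np.argsort(mags)[-n_harmonics:]
        delta_spec = np.zeros_like(X)
        # inject small random phase shifts into the dominant harmonics only
        shifts = rng.uniform(-0.2, 0.2, n_harmonics)
        delta_spec[idx] = X[idx] * (np.exp(1j * shifts) - 1.0)
        delta = np.fft.irfft(delta_spec, n=len(x))
        # bound the perturbation relative to the signal's amplitude
        delta = np.clip(delta, -eps * np.abs(x).max(), eps * np.abs(x).max())
        return x + delta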

The second contribution presents a comprehensive evaluation of Detection Transformer (DETR) robustness against adversarial attacks. We extend three classic adversarial attacks, FGSM, PGD, and C&W, to the object detection domain and conduct extensive experiments on the MS COCO and KITTI datasets. Our findings reveal that DETR models exhibit significant vulnerabilities, with PGD attacks reducing average precision to as low as 0.023. We investigate both intra-network and cross-network transferability, discovering that attacks generated on complex models transfer more effectively to simpler architectures. Analysis of self-attention mechanisms demonstrates that adversarial perturbations successfully disrupt DETR's attention patterns, contradicting expectations of inherent transformer robustness.
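For readers unfamiliar with PGD, the sketch below shows the standard iterative L-infinity projected-gradient formulation adapted to an object detector. The model interface, detection loss function, and hyper-parameters are illustrative assumptions and do not reproduce the dissertation's experimental setup.

    # Standard L_inf PGD sketch against a detector (illustrative only;
    # model, loss_fn, and hyper-parameters are assumed placeholders).
    import torch

    def pgd_attack(model, loss_fn, images, targets,
                   eps=8/255, alpha=2/255, steps=10):
        adv = images.clone().detach()
        for _ in range(steps):
            adv.requires_grad_(True)
            loss = loss_fn(model(adv), targets)        # detection loss to maximize
            grad = torch.autograd.grad(loss, adv)[0]
            adv = adv.detach() + alpha * grad.sign()   # gradient-ascent step
            adv = images + torch.clamp(adv - images, -eps, eps)  # project to L_inf ball
            adv = adv.clamp(0.0, 1.0)                  # keep valid pixel range
        return adv.detach()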

The third contribution develops a comprehensive framework for adversarial detection in object detection transformers using data manifold theory. Through rigorous statistical analysis, we demonstrate that DETR's logit distributions violate Gaussian assumptions, establishing Kernel Density Estimation (KDE) as superior to parametric methods for modeling class-conditional distributions. We evolve from a fixed-threshold KDE detector achieving 75-92% detection rates to a sophisticated machine learning framework that reformulates adversarial detection as supervised binary classification. Our optimized logistic regression classifier, leveraging engineered features combining raw logits and KDE-derived statistics, achieves remarkable performance: 95.9% accuracy, 94.9% F1-score, and 96.6% AUC-ROC across combined attack scenarios, while maintaining computational efficiency for real-time deployment.
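A simplified sketch of this detection idea follows, assuming per-class KDEs fit on clean logits and a logistic regression classifier over raw logits plus a KDE log-density feature. The scikit-learn calls are standard, but the feature construction, bandwidth, and variable names are placeholders rather than the dissertation's full engineered-feature pipeline.

    # Illustrative sketch: per-class KDEs on clean logits feed a KDE score
    # that is combined with the raw logits as features for a binary
    # clean-vs-adversarial logistic regression classifier.
    import numpy as np
    from sklearn.neighbors import KernelDensity
    from sklearn.linear_model import LogisticRegression

    def fit_class_kdes(clean_logits, labels, bandwidth=0.5):
        # one KDE per class, fit on that class's clean logit vectors
        return {c: KernelDensity(bandwidth=bandwidth).fit(clean_logits[labels == c])
                for c in np.unique(labels)}

    def detection_features(logits, pred_class, kdes):
        # raw logits plus the KDE log-density under the predicted class
        log_density = kdes[pred_class].score_samples(logits.reshape(1, -1))[0]
        return np.concatenate([logits, [log_density]])

    # Training the detector (X built from clean and attacked samples,
    # y = 0 for clean, 1 for adversarial):
    #   X = np.stack([detection_features(l, c, kdes) for l, c in samples])
    #   clf = LogisticRegression(max_iter=1000).fit(X, y)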

Collectively, this dissertation advances the state-of-the-art in adversarial robustness by: (1) exposing vulnerabilities in both temporal and spatial deep learning models through novel attack methodologies, (2) providing comprehensive empirical evidence of transformer-based models' susceptibility to adversarial manipulation, and (3) developing practical, efficient defense mechanisms that balance security with operational performance. These contributions establish a foundation for more secure and reliable deep learning systems in adversarially challenging real-world environments.

Author ORCID Identifier

https://orcid.org/0000-0002-7396-2572

Available for download on Monday, August 31, 2026
