


Ph.D. Thesis Colloquium: CDS: ONLINE: “Efficient and Effective Algorithms for Improving the Robustness of Deep Neural Networks”

11 May @ 11:30 AM -- 12:30 PM


Ph.D. Thesis Colloquium


Speaker: Ms. Sravanti Addepalli

S.R. Number: 06-18-02-17-12-18-1-15587

Title: “Efficient and Effective Algorithms for Improving the Robustness of Deep Neural Networks”

Research Supervisor: Prof. Venkatesh Babu R

Date & Time: May 11, 2023 (Thursday) at 11:30 AM

Venue: The Thesis Colloquium will be held on MICROSOFT TEAMS.

Please click on the following link to join the colloquium.
Deep Neural Networks achieve near-human performance on several benchmark datasets, yet they are not as robust as the human visual system. Their success relies on the proximity of test samples to the distribution of training data, resulting in unexpected behavior under even minor distribution shifts during inference. Deep Networks are also known to be susceptible to adversarial attacks: carefully crafted, imperceptible perturbations to their inputs that can lead a classification model to confidently misclassify images into unrelated classes. The rapid adoption of Deep Networks in several critical applications makes it imperative to understand these failure modes and develop reliable risk-mitigation strategies. This thesis focuses on developing efficient and effective methods for improving the robustness of Deep Networks to both adversarial attacks and distribution shifts. The thesis is organized into four parts. In the first part, we develop Efficient Adversarial Defenses to overcome the large computational overhead of existing adversarial training methods. In the second part, we propose methods for Improving the Effectiveness of Adversarial Training by mitigating the associated robustness-accuracy trade-off that limits its performance and utility. In the third part, we propose efficient and effective algorithms for Self-Supervised Learning of Robust Representations. Finally, we propose methods for Improving Robustness to Distribution Shifts in Data.

Efficient Adversarial Defenses: State-of-the-art adversarial defenses are computationally expensive since they use multi-step adversarial training, where the training data is augmented with adversarially perturbed images that are typically generated using ten steps of optimization. To overcome this computational overhead, we first propose the Bit Plane Feature Consistency Regularizer (BPFC), which achieves robustness without generating adversarial attacks during training, by imposing consistency on the representations of differently quantized images. We further develop methods for improving robustness while maintaining a low computational cost by using single-step adversarial attacks for training. Single-step adversarial training is known to converge to a degenerate solution with sub-optimal robustness due to the obfuscation of gradients at the data samples, which leads to the generation of weaker attacks during training. To mitigate this, we propose Guided Adversarial Training (GAT) and Nuclear Norm Adversarial Training (NuAT), which explicitly enforce function smoothing in the vicinity of each data sample, thereby preventing obfuscated gradients and yielding improvements over existing single-step defenses as well as several multi-step defenses.
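The single-step attacks mentioned above can be illustrated with a minimal FGSM-style sketch: perturb the input by a budget ε in the sign of the input gradient of the loss. This is a generic illustration on a logistic-regression model in NumPy, not the GAT or NuAT objectives themselves; the weights, input, label, and ε below are assumed values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Single-step (FGSM-style) attack on a logistic-regression model.

    The gradient of the binary cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; the attack moves x by eps in the sign of
    that gradient to maximize the loss.
    """
    grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad_x)

# Illustrative model and sample (assumed values).
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, 0.5])
y = 1.0  # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)  # each coordinate moves by ±0.1
```

Multi-step attacks repeat this gradient step (typically ten times, with projection back onto the ε-ball), which is the source of the training overhead that these single-step defenses avoid.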


Improving the Effectiveness of Adversarial Training: While Adversarial Training significantly improves the robustness of Deep Networks, one of the key challenges that limits its practical use is the associated drop in natural or clean accuracy, referred to as the robustness-accuracy trade-off. To address this, we first propose the Feature Level Stochastic Smoothing (FLSS) based classifier, which introduces stochasticity in the network predictions and uses it to smooth decision boundaries and reject low-confidence predictions, thereby boosting the robustness and clean accuracy of the accepted samples. We further investigate the reasons for a higher robustness-accuracy trade-off at larger perturbation bounds, where some attacks change the perception of a human or an Oracle while other attacks do not. The proposed Oracle-Aligned Adversarial Training (OAAT) overcomes this trade-off by introducing specific attack and defense losses for Oracle-Sensitive and Oracle-Invariant adversarial examples. While the robustness-accuracy trade-off can be alleviated by using more diverse data for training, complex data augmentations have not been successful with Adversarial Training. We investigate the reasons for this trend and propose Diverse Augmentation based Joint Adversarial Training (DAJAT) to address it, using separate batch-normalization layers for simple and complex augmentations and a Jensen-Shannon divergence loss to encourage their joint learning.
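The reject-option idea behind an FLSS-style classifier can be sketched with a toy confidence threshold: low-confidence predictions are rejected, trading coverage for accuracy on the accepted samples. This is not the stochastic-smoothing procedure itself; the softmax outputs, labels, and threshold below are illustrative assumptions.

```python
import numpy as np

def predict_with_reject(probs, labels, tau):
    """Accept only predictions whose top-class probability reaches tau;
    return accuracy on the accepted subset and the coverage (fraction
    of samples accepted)."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    accepted = conf >= tau
    coverage = float(accepted.mean())
    acc = float((pred[accepted] == labels[accepted]).mean()) if accepted.any() else 0.0
    return acc, coverage

# Illustrative softmax outputs for 4 samples over 3 classes (assumed values):
# confident predictions happen to be correct, low-confidence ones wrong.
probs = np.array([
    [0.90, 0.05, 0.05],  # confident, correct
    [0.34, 0.33, 0.33],  # low confidence, wrong
    [0.10, 0.85, 0.05],  # confident, correct
    [0.40, 0.35, 0.25],  # low confidence, wrong
])
labels = np.array([0, 1, 1, 2])

acc_all, _ = predict_with_reject(probs, labels, tau=0.0)   # accuracy on everything
acc_sel, cov = predict_with_reject(probs, labels, tau=0.7)  # accuracy on accepted half
```

With the threshold raised to 0.7, only the two confident samples are accepted, so accuracy on the accepted set rises from 0.5 to 1.0 at a coverage of 0.5.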


Self-Supervised Learning of Robust Representations: Instance-discrimination based Self-Supervised Learning (SSL) methods have shown success in learning transferable representations without using labeled training data. However, these methods are more computationally expensive than supervised training. We investigate the reasons for their slow convergence and propose to accelerate it by combining them with pretext tasks such as rotation prediction, which reduce the noise in the training objective. We further utilize these pretrained SSL models in a teacher-student setting for training adversarially robust models without labels. We propose Projected Feature Adversarial Training (ProFeAT), where the pretrained SSL projector of the teacher is utilized to obtain significant gains in the clean accuracy of the adversarially trained student. The proposed attack loss, coupled with the use of strong data augmentations, leads to higher attack diversity, further improving the robustness-accuracy trade-off.
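A teacher-student feature-matching objective of the kind described above can be sketched as a cosine loss between student features and the teacher's projected features. This is a simplified stand-in for the ProFeAT training loss; the fixed linear projector and the feature values are assumptions for illustration.

```python
import numpy as np

def cosine_distill_loss(student_feats, teacher_feats, projector):
    """1 - cosine similarity between student features and the teacher's
    projected features, averaged over the batch."""
    t = teacher_feats @ projector
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    s = student_feats / np.linalg.norm(student_feats, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(s * t, axis=1)))

# Sanity check: perfectly matched features with an identity projector
# give zero loss (the student already aligns with the projected teacher).
feats = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
loss_same = cosine_distill_loss(feats, feats, np.eye(3))  # → 0.0
```

Minimizing such a loss pulls the student's representation of each (possibly adversarially perturbed) input toward the teacher's projected representation of the clean input, without needing labels.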

Improving Robustness to Distribution Shifts in Data: Deep Networks are known to be sensitive to even minor distribution shifts during inference. We put forth the (simple) Feature Replication Hypothesis to explain this behavior and propose the Feature Reconstruction Regularizer (FRR) to improve their robustness by ensuring that the learned features can be reconstructed back from the logits, thereby encouraging the use of more diverse features for classification. We further propose Diversify-Aggregate-Repeat Training (DART), which trains diverse models using different augmentations (or domains) to explore the loss basin, and further aggregates their weights repeatedly over training to combine their expertise and obtain improved generalization. We finally aim to utilize the superior generalization of black-box Vision-Language Models (VLMs) for better OOD generalization in vision tasks. We propose Vision-Language to Vision – Align, Distill, Predict (VL2V-ADiP), a teacher-student setting that first aligns and then distills the representations of the teacher to the pretrained student.
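The Diversify-Aggregate step of DART can be sketched as simple parameter averaging across models trained with different augmentations. This is a minimal sketch: real networks have many weight tensors per layer, and the weight vectors below are assumed values.

```python
import numpy as np

def aggregate_weights(model_weights):
    """Average the parameters of diversely trained models
    (one Diversify-Aggregate step)."""
    return np.mean(np.stack(model_weights), axis=0)

# Illustrative weight vectors from three models trained with
# different augmentations (assumed values).
w1 = np.array([1.0, 2.0])
w2 = np.array([3.0, 0.0])
w3 = np.array([2.0, 4.0])

w_avg = aggregate_weights([w1, w2, w3])  # → [2.0, 2.0]
```

DART interleaves such aggregation with further training (the "Repeat" step), so the diverse models keep exploring the same loss basin instead of drifting apart.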



