BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
X-WR-TIMEZONE:Asia/Kolkata
BEGIN:VEVENT
UID:70@cds.iisc.ac.in
DTSTART;TZID=Asia/Kolkata:20240812T110000
DTEND;TZID=Asia/Kolkata:20240812T120000
DTSTAMP:20240805T114329Z
URL:https://cds.iisc.ac.in/events/ph-d-thesis-defense-cds-12-august-2024-e
 fficient-and-effective-algorithms-for-improving-the-robustness-of-deep-neu
 ral-networks/
SUMMARY:Ph.D. Thesis Defense: CDS: 12 August 2024 "Efficient and Effectiv
 e Algorithms for Improving the Robustness of Deep Neural Networks"
DESCRIPTION:DEPARTMENT OF COMPUTATIONAL AND DATA SCIENCES\nPh.D. Thesis Def
 ense\n\n\n\nSpeaker : Ms. Sravanti Addepalli\nS.R. Number : 06-18-02-17-12
 -18-1-15587\nTitle : "Efficient and Effective Algorithms for Improving the
  Robustness of Deep Neural Networks"\nThesis examiner: Prof. Vineeth Balas
 ubramanian\, IIT Hyderabad\nResearch Supervisor: Prof. Venkatesh Babu R
 \nDate & Time : August 12\, 2024 (Monday) at 11:00 AM\nVenue : # 102 C
 DS Seminar Hall\n\n\n\nABSTRACT\nDeep neural networks (DNNs) have achieve
 d remarkable success across various domains\, yet their vulnerability to a
 dversarial attacks and distribution shifts remains a significant challenge
 . This thesis presents novel methodologies to enhance DNN robustness\, foc
 using on efficiency\, effectiveness\, and practical applicability.\n\nThe 
 first part of the thesis concentrates on developing computationally effici
 ent adversarial defenses. Traditional adversarial training methods are oft
 en computationally intensive due to the generation of adversarial examples
  through multiple optimization steps. To address this\, we introduce Bit P
 lane Feature Consistency (BPFC)\, a regularizer that promotes robustness w
 ithout requiring adversarial examples during training. Furthermore\, we pr
 opose Guided Adversarial Training (GAT) and Nuclear Norm Adversarial Train
 ing (NuAT) to mitigate the gradient masking issue prevalent in single-step
  adversarial training\, leading to improved robustness without sacrificing
  computational efficiency.\n\nThe second part focuses on improving the eff
 ectiveness of adversarial training. While adversarial training enhances ro
 bustness\, it comes at the cost of reduced accuracy on clean data. To addr
 ess this\, we introduce Feature Level Stochastic Smoothing (FLSS)\, a meth
 od that combines adversarial training with detection to boost robustness a
 nd accuracy. Additionally\, we propose Oracle-Aligned Adversarial Training
  (OAAT) to address the robustness-accuracy trade-off at large perturbation
  bounds. To further enhance adversarial training\, we explore the integrat
 ion of data augmentation techniques through Diverse Augmentation based Joi
 nt Adversarial Training (DAJAT).\n\nThe third part of the thesis focuses o
 n improving the efficiency and effectiveness of self-supervised training f
 or robust representation learning. We investigate the potential of combini
 ng the popular instance-discrimination task with auxiliary tasks such as r
 otation prediction to reduce noise in the training objective and improve t
 he quality of learned representations. We further utilize these self-super
 vised pretrained models in a teacher-student distillation setting for trai
 ning adversarially robust models without labels using the proposed method 
 Projected Feature Adversarial Training (ProFeAT).\n\nThe final part of the
  thesis addresses the brittleness of DNNs to distribution shifts. We propo
 se the Feature Replication Hypothesis (FRH) to explain the underlying caus
 es of vulnerability to distribution shifts. To mitigate this\, we introduc
 e the Feature Reconstruction Regularizer (FRR) that encourages the learnin
 g of diverse feature representations. Additionally\, Diversify-Aggregate-R
 epeat Training (DART) is proposed to improve generalization of DNNs by tra
 ining diverse models in parallel\, and aggregating their weights intermitt
 ently over training. We finally propose Vision-Language to Vision - Align\
 , Distill\, Predict (VL2V-ADiP)\, a teacher-student setting to utilize the
  superior generalization of Vision-Language Models (VLMs) for improving th
 e OOD generalization in vision tasks.\n\nThrough these contributions\, thi
 s thesis advances the state-of-the-art in DNN robustness by providing prac
 tical and effective solutions to address the challenges posed by adversari
 al attacks and distribution shifts. The proposed methods demonstrate signi
 ficant improvements in both robustness and accuracy\, paving the way for m
 ore reliable and resilient models.\n\n\n\nALL ARE WELCOME
CATEGORIES:Events,Thesis Defense
END:VEVENT
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
X-LIC-LOCATION:Asia/Kolkata
BEGIN:STANDARD
DTSTART:20230813T110000
TZOFFSETFROM:+0530
TZOFFSETTO:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR