BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
X-WR-TIMEZONE:Asia/Kolkata
BEGIN:VEVENT
UID:181@cds.iisc.ac.in
DTSTART;TZID=Asia/Kolkata:20260205T160000
DTEND;TZID=Asia/Kolkata:20260205T170000
DTSTAMP:20260205T125535Z
URL:https://cds.iisc.ac.in/events/seminar-cds-102-february-05th-0400-decep
 tive-plausibility-investigating-visual-blind-spots-and-llm-hallucinations/
SUMMARY:{Seminar} @ CDS: #102\, February 05th: 04:00: "Deceptive Plausibili
 ty: Investigating Visual Blind Spots and LLM Hallucinations."
DESCRIPTION:Department of Computational and Data Sciences\nDepartment Semin
 ar\n\n\n\nSpeaker : Mr. Gaurang Sriramanan\, PhD candidate at the Universi
 ty of Maryland\,\nTitle : Deceptive Plausibility: Investigating Visual Bli
 nd Spots and LLM Hallucinations\nDate & Time: February 05th\, 2026 (T
 hursday)\, 04:00 PM\nVenue : # 102\, CDS Seminar Hall\n\n\n\nABSTRACT\nAs 
 AI models are increasingly deployed in safety-critical sectors\, character
 izing their failure modes is essential for building trustworthy systems. T
 his talk explores two critical vulnerabilities: the under-sensitivity of c
 omputer vision models and the phenomenon of hallucinations in Large Langua
 ge Models (LLMs).\n\nFirst\, we analyze model under-sensitivity—the failu
 re of a system to recognize meaningful\, large-scale changes in its input.
  While most existing research focuses on oversensitivity to imperceptible 
 noise (adversarial attacks)\, we identify "blind spots" using a novel Leve
 l Set Traversal (LST) algorithm. By navigating the geometry of model level
  sets\, we reveal linearly connected paths between images that a human ora
 cle would deem extremely disparate\, yet the model maintains near-uniform 
 confidence\, exposing a fundamental gap in machine perception.\nSecond\, w
 e address hallucinations in LLMs—outputs that are fabricated yet appear 
 deceptively plausible. We present LLM-Check\, a suite of efficient techniq
 ues that detect these errors by leveraging internal hidden representations
 \, attention similarity maps\, and logit outputs. We demonstrate its effic
 acy across broad settings\, from zero-resource detection to scenarios wher
 e multiple model generations or external databases are available\, all wit
 hout incurring significant computational overhead.\n\nBIO: Gaurang Srirama
 nan is a CS PhD candidate at the University of Maryland\, advised by Prof.
  Soheil Feizi. His research focuses on AI safety and reliability by charac
 terizing model failure modes and developing robust mitigation strategies. 
 Previously a Student Researcher at Meta\, he holds an M.S. from UMD and a 
 B.S./M.Sc. in Mathematics from IISc.\n\nHost Faculty: Prof. Venkatesh Babu
 \n\n\n\nALL ARE WELCOME
CATEGORIES:Events,Talks
END:VEVENT
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
X-LIC-LOCATION:Asia/Kolkata
BEGIN:STANDARD
DTSTART:20250205T160000
TZOFFSETFROM:+0530
TZOFFSETTO:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR