BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
X-WR-TIMEZONE:Asia/Kolkata
BEGIN:VEVENT
UID:148@cds.iisc.ac.in
DTSTART;TZID=Asia/Kolkata:20251007T103000
DTEND;TZID=Asia/Kolkata:20251007T113000
DTSTAMP:20250916T082553Z
URL:https://cds.iisc.ac.in/events/cds-kiac-seminar-cds-102-07th-october-bu
 ilding-language-models-that-learn-remember-and-reason-like-experts/
SUMMARY:CDS-KIAC Seminar @ CDS #102\, 07th October: "Building Language Mode
 ls that Learn\, Remember and Reason like Experts"
DESCRIPTION:We welcome you to the CDS-KIAC talk on 07th October 2025 (Tuesd
 ay). The details are as follows:\n\n\n\nSpeaker: Dr. Niket Tandon\, Princi
 pal Re
 search Scientist at Microsoft Research\, Bangalore\nTitle: Building Langua
 ge Models that Learn\, Remember and Reason like Experts\nDate and Time: Oc
 tober 07\, 2025: 10:30 AM\nVenue: #102\, CDS Seminar Hall.\n\n\n\nAbstract
 : As language models become deeply integrated into our daily workflows\, o
 ur expectations are shifting. We no longer seek systems that merely predic
 t the next word—we want collaborators that can learn from us\, remember 
 what matters\, and reason with the rigor of a domain expert.\n\nThis talk 
 introduces a new class of language models by exploring two converging path
 s toward that future: systems that learn from the world and about the worl
 d.\n\nFirst\, we’ll examine how models can learn from the world—alongs
 ide us—by reflecting on past mistakes and adapting to human feedback. Dr
 awing inspiration from the psychological theory of recursive reminding\, w
 e present a memory architecture that enables models to avoid repeating err
 ors and improve through interaction. This is a step toward making language
  models not just responsive\, but reflective—and even self-reflective.\n
 \nSecond\, we’ll explore how models can learn about the world. Despite t
 heir scale\, today’s models often fail in high-stakes domains like law\, m
 edicine\, and finance because their knowledge is static and incomplete. T
 o address this\, we’ll discuss emerging memory-based strategies for injec
 ting curated knowledge directly into models without compromising trust—m
 oving beyond retrieval-augmented generation (RAG) to build systems that ca
 n robustly and efficiently integrate domain expertise while respecting pri
 va
 cy.\n\nTogether\, these approaches point to a new generation of language m
 odels: systems that learn from the world and about the world. Through case
  studies and early results\, we’ll explore how fusing memory and knowled
 ge can create agents that reason better\, fail less often\, and ultimately
  serve us more effectively.\n\nBio of Speaker: Niket Tandon is a Principa
 l Research Scientist at Microsoft Research India\, where he focuses on cus
 tomizing AI copilots with private data. Previously\, he was a Lead Researc
 h Scientist at the Allen Institute for AI\, Seattle\, working on feedback-
 guided reasoning in LLMs as part of the Aristo team—known for building A
 I that aced science exams. He earned his Ph.D. from the Max Planck Institu
 te for Informatics under Prof. Gerhard Weikum\, where he created WebChild\
 , then the largest automatically extracted commonsense knowledge graph. Hi
 s work has been recognized at top venues. Niket also founded PQRS Research
  to support undergraduates from underrepresented institutes and actively c
 ontributes to the NLP community through workshops\, tutorials\, and confer
 ence service.\n\nHost Faculty: Dr. Danish Pruthi\n\n\n\nALL ARE WELCOME
CATEGORIES:Events,Talks
END:VEVENT
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
X-LIC-LOCATION:Asia/Kolkata
BEGIN:STANDARD
DTSTART:19700101T000000
TZOFFSETFROM:+0530
TZOFFSETTO:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR