Department of Computational and Data Sciences
Department Seminar
Speaker : Dr. Tanya Goyal, Cornell University
Title : The case for Limitation-Aware LLMs
Date & Time: January 6th, 2026 (Tuesday), 11:00 AM
Venue : # 102, CDS Seminar Hall
ABSTRACT
As we increasingly integrate large language models (LLMs) into our workflows, it is important for LLMs to act as effective collaborators. This requires a multitude of skills: estimating their own capability and knowledge boundaries to decide when to seek help, estimating collaborator capability, and effectively using the signal collaborators return, among others. In this talk, I will discuss recent work from my lab that teaches LLMs to effectively abstain on questions outside their knowledge boundaries. We train LLMs to seek external help, e.g., via search tools, by rewarding accuracy while simultaneously penalizing each use of that help. I will present results showing that, by treating any search invocation as an abstention, LLMs trained with such a pay-per-search objective can effectively be turned into abstention models. Finally, I will discuss recent work that benchmarks LLMs' robustness when collaborating "off-trajectory", i.e., with out-of-distribution help, and outline future directions in this space.
BIO: Tanya Goyal is an assistant professor in the Computer Science department at Cornell University. Her research interests include building collaborative large language models (LLMs), developing reliable and sustainable evaluation frameworks for such LLMs, and understanding LLM behaviors as a function of training data and alignment strategies. Tanya completed her Ph.D. in Computer Science at UT Austin in 2023, where her thesis received UTCS's Bert Kay Dissertation Award. Her research is supported by the NSF and a research award from Google.
Host Faculty: Dr. Danish Pruthi
ALL ARE WELCOME