BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
TZID:Asia/Kolkata
X-WR-TIMEZONE:Asia/Kolkata
BEGIN:VEVENT
UID:189@cds.iisc.ac.in
DTSTART;TZID=Asia/Kolkata:20260407T110000
DTEND;TZID=Asia/Kolkata:20260407T120000
DTSTAMP:20260404T141956Z
URL:https://cds.iisc.ac.in/events/ph-d-thesis-defense-cds-april-07-2026-tr
 ansformer-neural-operators-for-learning-generalized-solutions-of-partial-d
 ifferential-equations-and-data-assimilation/
SUMMARY:Postponed: Ph.D. Thesis Defense: CDS: April 07\, 2026 "Transformer N
 eural Operators for Learning Generalized Solutions of Partial Differential
  Equations and Data Assimilation"
DESCRIPTION:Please Note: Mr. Boya Sumanth Kumar's Ph.D. Thesis Defense\, wh
 ich was originally scheduled for April 07 at 11:00 AM\, has been postponed
 . We regret the inconvenience caused. We will announce the detailed timin
 g and venue shortly.\n\n\n\nDEPARTMENT OF COMPUTATIONAL AND DATA SCIENCES\
 nPh.D. Thesis Defense\n\n\n\nSpeaker : Mr. Boya Sumanth Kumar\nS.R. Number
  : 06-18-01-10-12-20-1-18435\nTitle : "Transformer Neural Operators for Le
 arning Generalized Solutions of Partial Differential Equations and Data As
 similation"\nResearch Supervisor : Dr. Deepak Subramani\nThesis examiner :
  Prof. Anoop Krishnan\, IIT Delhi\nDate &amp\; Time : April 07\, 2026 (Tue
 sday) at 11:00 AM\nVenue : The Thesis Defense will be held on HYBRID Mode\
 n# 303 CDS Class Room /MICROSOFT TEAMS.\n\nPlease click on the following l
 ink to join the Thesis Defense\nMS Teams link\n\nABSTRACT\nScientific mach
 ine learning is transforming the way we solve physical dynamics governed b
 y partial differential equations (PDEs). Solving nonlinear PDEs across mul
 tiple initial and boundary conditions requires re-running traditional nume
 rical solvers\, and existing physics-informed neural networks require cost
 ly retraining for each new condition. Furthermore\, given the limited avai
 lability of observational data in meteorological and oceanic sciences\, da
 ta assimilation in forward simulations is crucial. The above challenges ar
 e addressed by proposing three neural operators with architectural innovat
 ions: (1) the Physics-Informed Transformer Neural Operator (PINTO) for sim
 ulation-free PDE solving that is trained using only physics loss and gener
 alizes to unseen initial and boundary conditions\, (2) the Physics-Guided 
 Transformer Neural Operator (PGNTO) for finding generalizable PDE solution
 s from simulation data in the absence of governing equations\, and (3) the
  Attention-based Coordinate Operator (ACO) for the data assimilation task o
 f reconstructing continuous fields from limited observational data. Colle
 ctively\, the above contributions push the boundaries of scientific machin
 e learning in solving PDEs and data assimilation.\n\nThe first part of thi
 s thesis introduces a novel Physics-Informed Transformer Neural Operator (
 PINTO). Current neural operator approaches\, though capable of learning fu
 nctional mappings between infinite-dimensional spaces\, suffer from two cr
 itical limitations: dependence on substantial simulation data and poor gen
 eralization to unseen conditions. PINTO addresses these fundamental challe
 nges by introducing a novel physics-informed framework that achieves effic
 ient generalization through simulation-free\, physics-only training. The c
 ore innovation of PINTO lies in the development of iterative kernel-integr
 al operator units that leverage cross-attention mechanisms to transform do
 main points into initial- and boundary-condition-aware representation vect
 ors. This attention-based architecture enables context-aware learning that
  fundamentally differs from existing neural operators\, allowing PINTO to 
 learn mappings from input functions (initial/boundary conditions) to compl
 ete PDE solution spaces through a single forward pass. The architecture co
 mprises three key stages: lifting layers that project the solution's domai
 n coordinates into a higher-dimensional representation space\, iterative k
 ernel integration layers for context-aware representation learning\, and p
 rojection layers that map the learned representation to the solution space
 . We demonstrate PINTO's superior performance across critical fluid mechan
 ics and engineering applications\, including linear advection\, the nonlin
 ear Burgers equation\, and the steady and unsteady Navier-Stokes equations
 \, spanning multiple flow scenarios. Under challenging unseen conditions\,
  PINTO achieves relative errors of merely 20 to 33% of those obtained by
  state-of-the-art physics-informed neural operator methods when compared w
 ith analytical and numerical solutions. Critically\, PINTO exhibits tempor
 al extrapolation capabilities absent in competing approaches\, accurately 
 solving the advection and Burgers equations at time steps not present duri
 ng training.\n\nIn the second part of the thesis\, we extend our framework
  to physics-guided transformer neural operators (PGNTO)\, trained on simul
 ation data rather than a physics loss\, to address scenarios where governi
 ng PDEs are unavailable and to eliminate the training instabilities that a
 rise from physics loss in high-dimensional PDEs. Our PGNTO successfully so
 lves standard PDE benchmarks\, including the nonlinear Burgers and airfoil
  problems\, and also scales to complex 3D turbulent flows\, specifically pr
 edicting wake dynamics behind wind turbines under varying inlet velocities
 . Across all test cases\, PGNTO maintains superior accuracy\, with relativ
 e errors that are only one-third of those of competing neural operators.\n
 \nFinally\, in the third part of the thesis\, we introduce the Attention-b
 ased Coordinate Operator (ACO) for neural data assimilation\, enabling the
  continuous reconstruction of fields from sparse observational data. By ap
 plying Gabor filters for coordinate lifting and Fourier filters for value 
 representation\, ACO outperforms existing implicit neural representation n
 etworks on reconstruction tasks. Notably\, a single ACO model generalizes 
 across varying sparsity levels\, eliminating the need for separate models 
 tailored to different levels of data availability. The performance of ACO i
 s demonstrated using four challenging datasets: (i) sea surface height (SSH
 )\, (ii) chlorophyll concentration (CHL)\, (iii) global surface temperatur
 e (GST)
 \, and (iv) sea surface temperature (SST). The ACO model is compared with 
 other leading deep neural models for physical field reconstruction.\n\nOve
 rall\, this thesis demonstrates that transformer-based neural operators ca
 n effectively address both physics-informed PDE solving and neural data as
 similation\, achieving unprecedented generalization\, computational effici
 ency\, and accuracy while requiring minimal training data. Our suite of tr
 ansformer neural operators opens new avenues for real-time prediction and 
 data assimilation in physics and engineering applications.\n\n\n\nALL ARE 
 WELCOME
CATEGORIES:Events,Thesis Defense
END:VEVENT
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
X-LIC-LOCATION:Asia/Kolkata
BEGIN:STANDARD
DTSTART:20250407T110000
TZOFFSETFROM:+0530
TZOFFSETTO:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR