CDS-KIAC Seminar @ CDS #102: 14th February: “Can We Make Machine Learning Safe for Safety-Critical Systems?”

When

14 Feb 2025
4:00 PM - 5:45 PM

We welcome you to the CDS-KIAC talk on 14th February 2025 (Friday). The details are as follows:


Speaker: Thomas G Dietterich, University Distinguished Professor Emeritus, School of Electrical Engineering and Computer Science, Oregon State University
Title: Can We Make Machine Learning Safe for Safety-Critical Systems?
Date and Time: February 14, 2025; 04:00 to 05:00 PM (lecture); 05:15 to 05:45 PM (high tea and informal discussions with lecture attendees)
Venue: #102, CDS Seminar Hall.


Abstract: The impressive new capabilities of systems created using deep learning are encouraging engineers to apply these techniques in safety-critical applications such as medicine, aeronautics, and self-driving cars. This talk will discuss the ways that machine learning methodologies are changing to operate in safety-critical systems. These changes include (a) building high-fidelity simulators for the domain, (b) adversarial collection of training data to ensure coverage of the so-called operational design domain (ODD) and, specifically, the hazardous regions within the ODD, (c) methods for verifying that the fitted models generalise well, and (d) methods for estimating the probability of harms in normal operation. There are many research challenges in achieving these goals.

But we must do more, because traditional safety engineering only addresses the known hazards. We must design our systems to detect novel hazards as well. We adopt Leveson’s view of safety as an ongoing hierarchical control problem in which controls are put in place to stabilise the system against disturbances. Disturbances include novel hazards but also management changes such as budget cuts, staff turnover, novel regulations, and so on. Traditionally, it has been the human operators and managers who have provided these stabilising controls. Are there ways that artificial intelligence (AI) methods, such as novelty detection, near-miss detection, diagnosis and repair, can be applied to help the human organisation manage these disturbances and maintain system safety?

Bio of Speaker: Thomas G Dietterich is University Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. Dietterich is one of the pioneers of the field of machine learning and has authored more than 220 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability.

Dietterich is the 2025 recipient of the Feigenbaum Prize for Applied AI and the 2024 recipient of the IJCAI Award for Research Excellence. Dietterich is also the recipient of the 2022 AAAI Distinguished Service Award and the 2020 ACML Distinguished Contribution Award, both recognising his many years of service to the research community. He is a former President of the Association for the Advancement of Artificial Intelligence and the Founding President of the International Machine Learning Society. Other major roles include Executive Editor of the journal Machine Learning, Co-founder of the Journal for Machine Learning Research, and Program Chair of AAAI 1990 and NIPS 2000. He currently chairs the Computer Science Section of arXiv.org.

Host Faculty: Prof. Jayant R Haritsa


ALL ARE WELCOME