[The 51st TrustML Young Scientist Seminar]

Mon, 30 Jan 2023 17:00 - 18:00 JST
Online Link visible to participants

Registration is closed


Free admission
- Passcode: 3pm9LXASXF
- Time Zone: JST
- Seats are available on a first-come, first-served basis.
- When the seats are fully booked, we may stop accepting applications.
- Simultaneous interpretation will not be available.


The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.

The TrustML YSS is a video series that features young scientists giving talks on their research and discoveries related to trustworthy machine learning.

Timetable for the TrustML YSS online seminars from Jan. to Feb. 2023.

For more information, please see the following site.

This network is funded by RIKEN-AIP's subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.

【The 51st Seminar】

Date and Time: January 30th, 5:00 pm - 6:00 pm (JST)

Venue: Zoom webinar

Language: English

Speaker: Olivia Wiles (DeepMind)
Title: Rigorous evaluation of machine learning models
Short Abstract
Despite achieving super-human accuracy on benchmarks like ImageNet, machine learning models are still susceptible to a number of issues that lead to poor performance in the real world. For example, models are prone to shortcut learning, relying on spurious correlations and therefore performing poorly under distribution shift. I will present two works we have done to expose the fragility of machine learning models. The first introduces a framework for defining different types of distribution shift and evaluates how methods degrade under varying amounts and types of shift. The second goes beyond requiring specific datasets to investigate shifts: instead, we surface human-interpretable failures in vision models automatically, in an open-ended manner. These works are steps along the path to building comprehensive evaluation tools for reliable AI.
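The shortcut-learning failure mode mentioned in the abstract can be illustrated with a small toy sketch (purely hypothetical, not from the talk): a classifier that latches onto a spurious feature looks accurate in-distribution but collapses once the spurious correlation is broken at test time. All data and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def make_split(spurious_corr):
    """Two features: x0 is weakly causal, x1 matches the label
    with probability `spurious_corr` (the spurious shortcut)."""
    y = rng.integers(0, 2, n)
    x0 = y + rng.normal(0.0, 1.0, n)            # noisy causal feature
    match = rng.random(n) < spurious_corr
    x1 = np.where(match, y, 1 - y) + rng.normal(0.0, 0.1, n)
    return np.stack([x0, x1], axis=1), y

X_train, y_train = make_split(spurious_corr=0.95)  # shortcut works in-distribution
X_test,  y_test  = make_split(spurious_corr=0.50)  # shortcut breaks under shift

def fit_threshold(X, y):
    """A 'shortcut learner': threshold whichever single feature
    best fits the training data."""
    best = None
    for j in range(X.shape[1]):
        t = X[:, j].mean()
        acc = ((X[:, j] > t) == y).mean()
        if best is None or acc > best[2]:
            best = (j, t, acc)
    return best[0], best[1]

j, t = fit_threshold(X_train, y_train)
train_acc = ((X_train[:, j] > t) == y_train).mean()
test_acc  = ((X_test[:, j] > t) == y_test).mean()
print(f"picked feature x{j}: train acc {train_acc:.2f}, shifted test acc {test_acc:.2f}")
```

The learner picks the spurious feature because it fits the training split best, so its accuracy drops to roughly chance once the correlation is removed; evaluating only in-distribution would never reveal this failure.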

Olivia Wiles is a Senior Researcher at DeepMind working on robustness in machine learning, focussing on how to detect and mitigate failures arising from spurious correlation and distribution shift. Prior to this, she was a PhD student at Oxford with Andrew Zisserman studying self-supervised representations for 3D and spent a summer at FAIR working on view synthesis with Justin Johnson, Georgia Gkioxari and Rick Szeliski.

All participants are required to agree to the AIP Seminar Series Code of Conduct.
Please see the URL below.

RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.

About this community



Public events of RIKEN Center for Advanced Intelligence Project (AIP)