[The 53rd TrustML Young Scientist Seminar]

Mon, 06 Feb 2023 17:00 - 18:00 JST
Online Link visible to participants

Registration is closed

Free admission
- Passcode: 3pm9LXASXF
- Time Zone: JST
- Seats are available on a first-come, first-served basis.
- When the seats are fully booked, we may stop accepting applications.
- Simultaneous interpretation will not be available.

Description

The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.

The TrustML YSS is a video series that features young scientists giving talks on their discoveries in relation to Trustworthy Machine Learning.

The timetable for the TrustML YSS online seminars from January to February 2023, along with further information, is available at the following site:
TrustML YSS

This network is funded by RIKEN AIP's subsidy and by JST ACT-X Grant Number JPMJAX21AF, Japan.


【The 53rd Seminar】


Date and Time: February 6th, 2023, 5:00 pm - 6:00 pm (JST)

Venue: Zoom webinar

Language: English

Speaker: Seong Joon Oh (University of Tübingen)
Title: Scalable Trustworthy AI -- Beyond "what", towards "how"
Short Abstract:
ML models are often not trustworthy because they focus too much on "what" rather than "how". That is, they care only about whether they are solving the task at hand ("what"), not so much about solving it right ("how"). Having recognised this issue, the ML field has been shifting its focus from "what" to "how" over the last five years. Arguably, the most common approach to addressing "how" is to extend the familiar benchmarking approach that worked well for the "what" phase: build a benchmark dataset and perform "fair" comparisons by fixing the allowed ingredients. This encourages ever more complex tricks that are likely to simply overfit to the given benchmark (e.g. ImageNet). However, I believe it is more important to look for new types of ingredients for the "how" problem. This will make fair comparison harder, but I believe it is the only way to make the "how" problem solvable at all. I will give an overview of my previous search for such ingredients, which make models more explainable and more robust to distribution shifts. I will then discuss exciting future sources of such ingredients.

Bio:
In 2022, I started as an independent group leader at the University of Tübingen, leading the group on Scalable Trustworthy AI (STAI). I am interested in training trustworthy models (e.g. explainable, robust, and probabilistic models) and in obtaining the necessary human supervision and guidance in a cost-effective way. I was a research scientist at Naver AI Lab (2018-2022). I did my PhD in computer vision and machine learning at MPI-INF with Bernt Schiele and Mario Fritz (2014-2018). I did my master's and bachelor's studies in Maths at the University of Cambridge (2010-2014).


All participants are required to agree to the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.


About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)