
[The 57th TrustML Young Scientist Seminar]

Wed, 22 Feb 2023 09:00 - 10:00 JST
Online Link visible to participants

Registration is closed


Free admission
- Passcode: 3pm9LXASXF
- Time Zone: JST
- Seats are available on a first-come, first-served basis.
- When the seats are fully booked, we may stop accepting applications.
- Simultaneous interpretation will not be available.

Description

The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.

The TrustML YSS is a video series that features young scientists giving talks on their research and discoveries related to Trustworthy Machine Learning.

Timetable for the TrustML YSS online seminars from Jan. to Feb. 2023.

For more information, please see the following site:
TrustML YSS

This network is funded by RIKEN-AIP's subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.


【The 57th Seminar】


Date and Time: February 22nd, 9:00 am - 10:00 am (JST)

Venue: Zoom webinar

Language: English

Speaker: Andrew Ilyas (MIT)
Title: Datamodels: Predicting Predictions from Training Data
Short Abstract:
Machine learning models tend to rely on an abundance of training data. Yet, understanding the underlying structure of this data, and models' exact dependence on it, remains a challenge. In this talk, we present a framework for directly modeling predictions as functions of training data. Given a dataset and a learning algorithm, this framework pinpoints, at varying levels of granularity, the relationships between pairs of train and test points through the lens of the corresponding model class. Even in its most basic version, the framework enables many applications, including discovering data subpopulations, quantifying model brittleness via counterfactuals, and comparing learning algorithms. Based on joint work with Sung Min Park, Logan Engstrom, Harshay Shah, Guillaume Leclerc, and Aleksander Madry.
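The core idea of the talk, modeling a prediction as a function of which training points were used, can be illustrated with a toy sketch. This is not the authors' implementation: the synthetic data, the nearest-centroid "learner", and all names below are hypothetical, chosen only to keep the example self-contained. The recipe it follows is: sample random training subsets, record the model's output on a fixed test point for each subset, then fit a linear surrogate from subset-inclusion indicators to that output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 20 one-dimensional points with noisy binary labels.
n = 20
X = rng.normal(size=n)
y = np.where(X + 0.3 * rng.normal(size=n) > 0, 1.0, -1.0)

x_test = 0.5  # a fixed test input whose prediction we want to model


def train_and_predict(mask):
    """'Train' a nearest-centroid classifier on the masked-in points
    and return its signed margin on x_test."""
    pos = X[(mask == 1) & (y == 1)]
    neg = X[(mask == 1) & (y == -1)]
    if len(pos) == 0 or len(neg) == 0:
        return 0.0
    # Margin: distance to the negative centroid minus distance to the
    # positive centroid (positive margin = predicted class +1).
    return abs(x_test - neg.mean()) - abs(x_test - pos.mean())


# Sample random ~50% training subsets and record the resulting margins.
m = 500
masks = (rng.random((m, n)) < 0.5).astype(float)
margins = np.array([train_and_predict(mk) for mk in masks])

# Fit the linear surrogate: margin ≈ w · mask + b, via least squares.
A = np.hstack([masks, np.ones((m, 1))])
sol, *_ = np.linalg.lstsq(A, margins, rcond=None)
weights, bias = sol[:-1], sol[-1]

# Under this surrogate, training points with the largest |weight| are
# the most influential for the prediction on x_test.
top = np.argsort(-np.abs(weights))[:3]
print("most influential training indices:", top)
```

The learned weight vector plays the role of the datamodel: it summarizes how including or excluding each training point moves the prediction on this one test input, which is what makes counterfactual questions ("what if these points were removed?") cheap to approximate.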

Bio:
Andrew Ilyas is a fourth-year PhD student at MIT, advised by Aleksander Madry and Constantinos Daskalakis. His research focuses on robust and reliable machine learning, with an emphasis on the ways in which (often unintended) correlations present in training data can manifest at test time. He is supported by an Open Philanthropy Project AI Fellowship.


All participants are required to agree to the AIP Seminar Series Code of Conduct.
Please see the URL below:
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP expects adherence to this code throughout the event and asks for the cooperation of all participants in ensuring a safe environment for everybody.


About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)