Registration is closed
The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.
The TrustML YSS is a video series featuring young scientists giving talks on their research and discoveries related to Trustworthy Machine Learning.
For more information, please see the following site.
This network is funded by RIKEN-AIP's subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.
【The 31st Seminar】
Date and Time: September 6th, 9:00 am - 11:00 am (JST)
Venue: Zoom webinar
Speaker: Sheng Liu (New York University)
Title: Understanding Probability Estimation and Noisy Label Learning: From the Early Learning Perspective
Recently, over-parameterized deep networks or large models, with more network parameters than training samples, have dominated performance in modern machine learning. However, it is well known that over-parameterized networks tend to overfit and fail to generalize when trained on finite data. In probability estimation, the network is trained on observed outcomes of an event to estimate the probabilities of that event; this can lead the network to memorize the observed outcomes completely, causing the estimated probabilities to collapse to 0 or 1. Similarly, when learning with noisy labels, the network memorizes the wrong labels, resulting in suboptimal decision rules. Yet before overfitting, networks can learn useful information, a phenomenon known as early learning. Estimating probabilities reliably and being robust to noisy labels during training are of crucial importance for providing trustworthy predictions in many real-world applications with inherent uncertainty and poor label quality. In this talk, we will discuss the early learning phenomenon in probability estimation and noisy label learning, and how it can be utilized to prevent overfitting.
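The early-learning-then-memorization dynamic described in the abstract can be illustrated with a toy model (a hypothetical sketch for intuition, not code from the talk): an over-parameterized logistic model with a shared bias plus one free parameter per training sample first fits the shared signal (the true event probability), then slowly memorizes each individual outcome, collapsing its estimates toward 0 or 1.

```python
import numpy as np

# Hypothetical toy illustration of "early learning": the shared bias b learns
# the population rate quickly (its gradient aggregates all samples), while the
# per-sample logits w_i memorize individual outcomes slowly.

rng = np.random.default_rng(0)
n, p_true = 200, 0.7
y = (rng.random(n) < p_true).astype(float)   # observed binary outcomes

# Over-parameterized model: one shared bias b plus one free logit w_i per
# sample, so the model can eventually memorize every outcome exactly.
b, w, lr = 0.0, np.zeros(n), 0.5

def step(b, w):
    p = 1.0 / (1.0 + np.exp(-(b + w)))       # per-sample probability estimate
    err = p - y                              # gradient of mean cross-entropy
    return b - lr * err.mean(), w - lr * err / n, p

snapshots = {}
for t in range(1, 100001):
    b, w, p = step(b, w)
    if t in (300, 100000):
        snapshots[t] = p.copy()

early, late = snapshots[300], snapshots[100000]
print("early mean estimate:", early.mean())             # near p_true = 0.7
print("late  mean |p - y| :", np.abs(late - y).mean())  # near 0: memorized
```

Stopping after the early phase (step 300 here) yields estimates near the true probability 0.7; training to convergence drives every estimate to its observed 0/1 label, which is exactly the collapse the abstract warns about.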
Speaker: Sharon Y. Li (University of Wisconsin-Madison)
Title: Challenges and Opportunities in Out-of-distribution Detection
The real world is open and full of unknowns, presenting significant challenges for machine learning (ML) systems that must reliably handle diverse and sometimes anomalous inputs. Out-of-distribution (OOD) uncertainty arises when a machine learning model sees a test-time input that differs from its training data, and thus should not be predicted by the model. As ML is deployed in more safety-critical domains, the ability to handle out-of-distribution data is central to building open-world learning systems. In this talk, I will discuss the challenges, research progress, and future opportunities in detecting OOD samples for safe and reliable predictions in an open world.
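A common baseline for the OOD detection problem the abstract describes (a sketch for intuition, not necessarily the speaker's method) is to score each input by the model's maximum softmax probability and flag low-confidence inputs as out-of-distribution, with the threshold calibrated on in-distribution data. The synthetic logits below stand in for a real classifier's outputs:

```python
import numpy as np

# Maximum-softmax-probability (MSP) baseline for OOD detection: flag an input
# as OOD when the classifier's top softmax probability falls below a
# threshold chosen to retain 95% of in-distribution inputs.

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Confidence score: higher means the input looks more in-distribution."""
    return softmax(logits).max(axis=-1)

def calibrate_threshold(id_logits, tpr=0.95):
    """Threshold keeping a `tpr` fraction of in-distribution inputs."""
    return np.quantile(msp_score(id_logits), 1.0 - tpr)

# Synthetic stand-in logits: in-distribution inputs produce one confident
# class; OOD inputs produce flat, low-confidence logits.
rng = np.random.default_rng(1)
id_logits = rng.normal(0, 1, (1000, 10))
id_logits[:, 0] += 6.0                       # confident in-distribution class
ood_logits = rng.normal(0, 1, (1000, 10))    # no dominant class

tau = calibrate_threshold(id_logits)
ood_flagged = msp_score(ood_logits) < tau    # True where input looks OOD
print("fraction of OOD inputs detected:", ood_flagged.mean())
```

The design choice here is deliberately simple: the detector reuses the classifier's own confidence, requiring no extra model, which is why confidence-based scores are a standard starting point before more sophisticated OOD detection methods.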
All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.
RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.
Public events of RIKEN Center for Advanced Intelligence Project (AIP)