Registration has closed.
The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.
The TrustML YSS is a video series featuring young scientists giving talks on their research and discoveries related to Trustworthy Machine Learning.
Timetable for the TrustML YSS online seminars from Sep. to Oct. 2022.
For more information, please see the following site.
TrustML YSS
This network is funded by RIKEN-AIP's subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.
【The 36th Seminar】
Date and Time: October 26th, 3:00 pm - 5:00 pm (JST)
Venue: Zoom webinar
Language: English
Speaker: Zhuowei Wang (Commonwealth Scientific and Industrial Research Organization, Australia)
Title: A Framework of Noisy-Label Learning by Semi-Supervised Learning
Short Abstract
The success of deep learning depends on correctly annotating all training samples, yet such annotations are difficult and expensive to obtain. The incorrect or incomplete information introduced by wrongly annotated labels may have catastrophic effects in real-world applications. This talk therefore introduces a framework that uses semi-supervised learning to tackle two kinds of weakly supervised learning, noisy-label learning (NLL) and positive-unlabeled learning (PUL), improving model robustness when the dataset contains incorrect labels. Moreover, in real-world scenarios, most data are not collected and stored in a centralized way. Instead, data are distributed across various institutions and protected by privacy restrictions. Federated learning (FL) has been proposed to leverage such isolated data without violating privacy. However, data labels at different institutions are not annotated according to the same criterion, so they inevitably contain different noise across silos. The framework can also be applied to tackle the NLL problem in the FL setting.
Speaker: Nikola Konstantinov (ETH AI Center)
Title: Statistical Aspects of Trustworthy Machine Learning
Short Abstract
Modern machine learning methods often require large amounts of labeled data for training. It has therefore become standard practice to collect data from external sources, e.g., via crowdsourcing and web crawling. Unfortunately, the quality of these sources is not always guaranteed, and this may result in noise, biases, and even systematic manipulations entering the training data. In this talk, I will present some results on the statistical limits of learning in the presence of training data corruption. In particular, I will speak about the hardness of achieving fairness when a subset of the data is prone to adversarial manipulations. I will also discuss several results on the sample complexity of learning from multiple unreliable data sources.
All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en
RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.