Registration is closed
The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.
The TrustML YSS is a video series that features young scientists presenting their work and discoveries related to Trustworthy Machine Learning.
Timetable for the TrustML YSS online seminars from Nov. to Dec. 2022.
For more information, please see the following site.
TrustML YSS
This network is funded by RIKEN-AIP's subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.
【The 43rd Seminar】
Date and Time: Dec. 7th, 4:00 pm - 6:00 pm (JST)
Venue: Zoom webinar
Language: English
Speaker: Shiqi Yang (Autonomous University of Barcelona)
Title: Model Adaptation under Domain and Category Shift
Abstract
In recent years, a large number of works has emerged in the domain adaptation community, aiming to address the domain shift between training and test data. However, plenty of challenging problems remain open for further investigation. For example, 1) requiring source data during adaptation is impossible in some privacy-sensitive applications (e.g., surveillance or medical applications), and 2) unseen categories may exist in the test data in real-world scenarios. In this talk, I will first introduce a method to address domain adaptation without source data from the perspective of unsupervised clustering, which also allows us to relate several domain adaptation methods through the lens of discriminability and diversity. Then, we propose to deploy an attention-based regularization to avoid forgetting on the source domain after model adaptation. Finally, I will present an elegantly simple method to address domain and category shift simultaneously during model adaptation.
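The clustering view mentioned in the abstract can be made concrete with a short sketch. Below is a minimal, illustrative PyTorch example of source-free adaptation via neighborhood clustering: a source-pretrained model is tuned on unlabeled target data so that each sample's prediction agrees with its nearest neighbors in feature space, with a diversity term to prevent collapse onto a single class. The function name adapt_step and the assumption that the model returns a (features, logits) pair are hypothetical conventions for this sketch; this is not the speaker's exact method.

```python
import torch
import torch.nn.functional as F

def adapt_step(model, x_target, k=4, eps=1e-8):
    """One adaptation objective on a batch of unlabeled target images."""
    feats, logits = model(x_target)          # model is assumed to return (features, logits)
    probs = F.softmax(logits, dim=1)         # (B, C) class probabilities

    # Find each sample's k nearest neighbors in feature space (no gradients needed).
    with torch.no_grad():
        f = F.normalize(feats, dim=1)
        sim = f @ f.t()                      # pairwise cosine similarity within the batch
        sim.fill_diagonal_(-1.0)             # exclude self-matches
        nn_idx = sim.topk(k, dim=1).indices  # (B, k)

    # Consistency term: pull each prediction toward its neighbors' predictions,
    # treating the neighbor predictions as fixed targets.
    nn_probs = probs[nn_idx].detach()        # (B, k, C)
    consistency = -(nn_probs * (probs.unsqueeze(1) + eps).log()).sum(dim=2).mean()

    # Diversity term: discourage collapsing all predictions onto one class.
    mean_probs = probs.mean(dim=0)
    diversity = (mean_probs * (mean_probs + eps).log()).sum()

    return consistency + diversity           # minimize with any standard optimizer
```

Minimizing this loss clusters target samples around locally consistent predictions while the diversity term keeps the class marginal spread out, which is the discriminability/diversity trade-off the abstract refers to.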
Speaker: Oğuzhan Fatih Kar (EPFL)
Title: 3D Common Corruptions and Data Augmentation
Abstract
Computer vision models deployed in the real world will encounter naturally occurring distribution shifts from their training data. These shifts range from lower-level distortions, such as motion blur and illumination changes, to semantic ones, like object occlusion. Each of them represents a possible failure mode of a model and has been frequently shown to result in profoundly unreliable predictions. Thus, understanding model failures against these shifts and developing better robustness mechanisms are critical before deploying these models in the real world. Our work presents a set of image transformations that can be used as corruptions to evaluate the robustness of models, as well as data augmentation mechanisms for training neural networks. The primary distinction of the proposed transformations is that, unlike existing approaches such as Common Corruptions, the geometry and semantics of the scene are incorporated in the transformations, thus leading to corruptions that are more likely to occur in the real world. In this talk, I will discuss several properties of these transformations: they are 'efficient' (can be computed on the fly), 'extendable' (can be applied to most image datasets), expose vulnerabilities of existing models, and can effectively make models more robust when employed as '3D data augmentation' mechanisms.
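As a rough illustration of what a geometry-aware corruption can look like, here is a minimal NumPy/SciPy sketch of a depth-dependent defocus blur: pixels farther from an assumed focal plane are blurred more strongly. The helper depth_defocus and the aligned per-pixel depth map it takes are assumptions for this sketch; it is a toy approximation in the spirit of the talk, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_defocus(image, depth, focus_depth, max_sigma=3.0, n_levels=4):
    """Blend blur levels of an (H, W, 3) image by distance from the focal plane.

    `depth` is an (H, W) per-pixel depth map aligned with `image` (an assumption).
    """
    # Normalized distance from the focal plane, in [0, 1].
    dist = np.abs(depth - focus_depth)
    dist = dist / (dist.max() + 1e-8)

    # Level 0 is the sharp image; higher levels are progressively blurrier.
    sigmas = np.linspace(max_sigma / (n_levels - 1), max_sigma, n_levels - 1)
    blurred = [image.astype(np.float64)]
    blurred += [gaussian_filter(image.astype(np.float64), sigma=(s, s, 0))
                for s in sigmas]

    # Assign each pixel the blur level matching its distance from focus.
    level = np.clip(np.round(dist * (n_levels - 1)).astype(int), 0, n_levels - 1)
    out = np.zeros_like(blurred[0])
    for i, b in enumerate(blurred):
        out = np.where((level == i)[..., None], b, out)  # broadcast over channels
    return out
```

For instance, setting focus_depth to depth.min() keeps the nearest surface sharp while defocusing the background, a corruption that a purely 2D pipeline with a single global blur cannot express.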
All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en
RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.