Doorkeeper

Talks by Prof. Tongliang Liu and Prof. Bo Han

Thu, 13 Aug 2020 13:30 - 15:30 JST
Online (link visible to participants)
Registration is closed

Free admission

Description

1st Talk

Speaker: Prof. Tongliang Liu (University of Sydney, Australia)

Title:
Estimating the Transition Matrix for Label-Noise Learning

Abstract:
Label noise is ubiquitous in the era of big data. Deep learning algorithms can easily overfit the noise and thus cannot generalize well unless the noise is properly modelled. The label noise transition matrix, which gives the probabilities that clean labels flip into noisy labels, plays a central role in building statistically consistent classifiers. In this talk, we will discuss how to estimate the transition matrix. Specifically, an anchor-point assumption is introduced to build an unbiased estimator. However, this assumption may not hold in practice: when there are no anchor points, the transition matrix is poorly estimated, and previously designed consistent classifiers may degenerate significantly. We then discuss how to remedy this problem. Finally, we will envision potential directions for estimating the transition matrix, e.g., in the instance-dependent setting.
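The anchor-point idea in the abstract can be sketched in a few lines: an anchor point for clean class i is an instance that belongs to class i with probability one, so its noisy class posterior equals row i of the transition matrix T. Below is a minimal numpy sketch, assuming we already have (estimated) noisy posteriors, e.g. from a classifier trained on the noisy data; the function and variable names are illustrative, not from the talk.

```python
import numpy as np

def estimate_transition_matrix(noisy_posteriors):
    """Anchor-point estimator: for each clean class i, treat the example
    with the largest estimated posterior P(noisy Y = i | x) as an anchor
    point of class i, and read off its full noisy posterior as row i of
    the transition matrix T, where T[i, j] = P(noisy j | clean i)."""
    n_classes = noisy_posteriors.shape[1]
    T = np.zeros((n_classes, n_classes))
    for i in range(n_classes):
        anchor = np.argmax(noisy_posteriors[:, i])  # assumed anchor point
        T[i] = noisy_posteriors[anchor]
    return T

# Toy check: build noisy posteriors from known clean posteriors and a known T.
true_T = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
clean = np.array([[1.0, 0.0],   # anchor point of class 0
                  [0.5, 0.5],
                  [0.0, 1.0]])  # anchor point of class 1
noisy = clean @ true_T          # P(noisy | x) = sum_i P(clean i | x) * T[i]
T_hat = estimate_transition_matrix(noisy)
```

If no true anchor point exists in the sample, the argmax picks a mixed example and the estimate degrades, which is exactly the failure mode the abstract goes on to address.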

=============================================
2nd Talk

Speaker: Prof. Bo Han (Hong Kong Baptist University, Hong Kong)

Title:
Trustworthy Representation Learning: A Synergistic Tale of Labels, Examples and Beyond

Abstract:
Trustworthy representation learning (TRL) is an emerging and critical topic in modern machine learning, since most real-world data are imperfect or corrupted, e.g., data from online transactions, healthcare, cyber-security, and robotics. Intuitively, a trustworthy learning system should behave more like a human, able to learn useful knowledge even from imperfect data. In this talk, I will therefore introduce TRL from three human-inspired views: reliability, robustness, and imitation. Specifically, reliability considers uncertain cases, namely deep learning with noisy labels; robustness addresses adversarial conditions, namely training with adversarial examples; and imitation focuses on non-expert scenarios, namely imitation learning with diverse demonstrations.
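As one concrete instance of the "training with adversarial examples" theme, the fast gradient sign method (FGSM) perturbs an input in the direction that increases the loss; adversarial training then fits the model on such perturbed inputs. Here is a minimal numpy sketch on a simple logistic model (an illustrative example only, not the speaker's method):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    """Logistic loss of a linear model w.x for a label y in {-1, +1}."""
    return np.log1p(np.exp(-y * np.dot(w, x)))

def fgsm(w, x, y, eps):
    """Fast gradient sign method: one signed-gradient step on the input x
    to increase the loss, within an L-infinity budget of eps."""
    grad_x = -y * sigmoid(-y * np.dot(w, x)) * w  # d(loss) / d(x)
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])   # fixed model weights (toy values)
x = np.array([0.5, 0.3])    # clean input
y = 1.0                     # its label
x_adv = fgsm(w, x, y, eps=0.1)
# Adversarial training would now update w using logistic_loss(w, x_adv, y).
```

The perturbed input stays within eps of the original in every coordinate while making the loss strictly larger, which is what makes such examples useful for robust training.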

About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)
