1st Talk (45mins):
Speaker:
Liyuan Xu (University College London)
https://scholar.google.co.jp/citations?user=-DLyhSoAAAAJ&hl=ja
Title:
Uncoupled Regression from Pairwise Comparison Data
Abstract:
The speaker talks about the uncoupled regression problem, which aims to learn a model from unlabeled data and a set of target values when the correspondence between them is unknown. While existing methods for uncoupled regression put strong assumptions on the target function, our novel framework does not rely on such assumptions. The key idea is to utilize pairwise comparison data, which consists of pairs of unlabeled data points for which we know which one has the larger target value. The talk is mainly based on the paper "Uncoupled Regression from Pairwise Comparison Data" [Xu et al., NeurIPS2019], but additional content will be covered, including the connection to pairwise learning-to-rank methods and modeling of the cumulative distribution function.
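To make the setting concrete, here is a minimal numpy sketch of the general idea, not the estimator from the paper: a linear scorer is trained with a RankNet-style pairwise logistic loss on comparison data, and its ranks are then calibrated against the uncoupled target set by empirical-quantile matching. The 1-D cubic target function and all names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1-D inputs with a monotone target function f*(x) = x**3.
n = 500
X = rng.normal(size=(n, 1))
y_true = X[:, 0] ** 3                 # never observed paired with X by the learner
targets = rng.permutation(y_true)     # uncoupled set of target values

# Pairwise comparison data: for each sampled pair we only observe
# which element has the larger target value.
m = 2000
i = rng.integers(0, n, m)
j = rng.integers(0, n, m)
hi = np.where(y_true[i] > y_true[j], i, j)   # index with the larger target
lo = np.where(y_true[i] > y_true[j], j, i)

# Train a linear scorer s(x) = w @ x with the pairwise logistic loss
# sum log(1 + exp(-(s_hi - s_lo))) by plain gradient descent.
w = np.zeros(1)
lr = 0.1
for _ in range(200):
    d = (X[hi] - X[lo]) @ w                               # score margins
    grad = -(X[hi] - X[lo]).T @ (1.0 / (1.0 + np.exp(d))) / m
    w -= lr * grad

# Calibration: the k-th smallest score receives the k-th smallest target
# value (empirical-quantile matching against the uncoupled target set).
ranks = np.argsort(np.argsort(X @ w))
y_hat = np.sort(targets)[ranks]
```

With noise-free comparisons and a monotone target, the recovered ranking matches the true one, so `y_hat` tracks `y_true` closely.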
==============================================
2nd Talk (45mins):
Speaker:
Nan Lu (The University of Tokyo)
https://scholar.google.co.jp/citations?user=KQUQlG4AAAAJ&hl=en
Title:
Supervised classification from unlabeled datasets
Abstract:
In machine learning, fully supervised classification from big data has been very successful. However, collecting massive labeled data can be expensive and time-consuming due to the laborious manual annotation, which is a critical bottleneck in real-world applications. This has led to the development of research on learning from cheap and abundant unlabeled data over the past decades.
In this talk, I will introduce our recent works [Lu+, ICLR2019] [Lu+, AISTATS2020] on supervised classification from unlabeled datasets. We propose a risk-rewrite approach to obtain an equivalent expression of the classification risk, which can be estimated from two unlabeled datasets with different class priors. Further, we improve it with a consistent risk correction. Our proposed methods are based on empirical risk minimization and work just like supervised classification methods, compatible with any model and optimizer.
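A minimal numpy sketch of the risk-rewrite idea: given two unlabeled samples drawn from mixtures with known class priors theta and theta', the supervised classification risk can be expressed as a weighted combination of losses over the two samples. The coefficients below follow the unbiased rewrite of [Lu+, ICLR2019]; the Gaussian class-conditionals, the test prior pi, and all names are assumptions made only to keep the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Gaussian class-conditionals N(+2, 1) and N(-2, 1).
def sample_unlabeled(n, prior):
    pos = rng.random(n) < prior
    return np.where(pos, rng.normal(2.0, 1.0, n), rng.normal(-2.0, 1.0, n))

theta, theta_p, pi = 0.8, 0.3, 0.5   # priors of the two unlabeled sets; test prior
n = 20000
xu  = sample_unlabeled(n, theta)     # unlabeled set 1
xup = sample_unlabeled(n, theta_p)   # unlabeled set 2

def logistic(z):                     # logistic loss at margin z
    return np.log1p(np.exp(-z))

def uu_risk(w, b):
    """Estimate of the supervised risk of x -> w*x + b from two unlabeled sets."""
    s, sp = w * xu + b, w * xup + b
    a = 1.0 / (theta - theta_p)
    # Risk-rewrite coefficients: invert the two mixtures to recover the
    # positive- and negative-class expectations.
    r  = np.mean(a * (pi * (1 - theta_p) * logistic(s) - (1 - pi) * theta_p * logistic(-s)))
    r += np.mean(a * ((1 - pi) * theta * logistic(-sp) - pi * (1 - theta) * logistic(sp)))
    return r

# Sanity check against a supervised estimate on an independent labeled sample.
y_lab = np.where(rng.random(n) < pi, 1.0, -1.0)
x_lab = np.where(y_lab > 0, rng.normal(2.0, 1.0, n), rng.normal(-2.0, 1.0, n))
sup_risk = np.mean(logistic(y_lab * x_lab))   # supervised risk of w=1, b=0
uu_estimate = uu_risk(1.0, 0.0)
```

Because the rewrite is unbiased, the two estimates agree up to sampling error, which is why the approach plugs into any model and optimizer in place of the supervised risk.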