
Talks by Hanjun Dai (Google Brain) and Feng Liu (Australian Artificial Intelligence Institute, UTS)

Tue, 09 Mar 2021 10:00 - 12:00
Online Link visible to participants
Free admission
Registration closes 09 Mar 12:00

Description

Talk1: Improved Generative Modeling of Structured Data
Speaker: Dr. Hanjun Dai (Google Brain)

Abstract:
Generative modeling remains challenging for discrete structured data such as program trees and
molecule graphs. The most commonly used model for such data is the autoregressive one, thanks
to its tractability. However, its sequential nature and the parameter sharing in deep models can
cause scalability and expressiveness issues.
In this talk, we will share our recent work on addressing these two issues in autoregressive
modeling. In the first part [1], we introduce a scalable autoregressive model for generating
graph structures, which reduces the number of training synchronizations from O(n) to O(log n)
and the inference cost from O(n^2) to O(n log n). In the second part [2], we propose a
local-search model with latent variables that extends the autoregressive model to learning
energy-based models for discrete structured data. We demonstrate the effectiveness of these
works in real-world applications, including data augmentation, program synthesis, and software
testing.
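For readers unfamiliar with the baseline being improved upon: a vanilla autoregressive graph generator makes one binary decision per potential edge, conditioned on everything generated so far, which is where the O(n^2) inference cost mentioned above comes from. The sketch below is a minimal illustration of that idea only, not the model from [1]; `edge_prob_fn` and `toy_prob` are hypothetical stand-ins for a learned conditional distribution.

```python
import random

def sample_graph_autoregressive(n, edge_prob_fn, rng):
    """Generate an undirected graph on n nodes one potential edge at a time.

    edge_prob_fn(i, j, edges) returns the probability of adding edge (i, j)
    given the edges generated so far -- the autoregressive conditioning.
    Note the O(n^2) loop over all potential edges.
    """
    edges = []
    for j in range(1, n):          # add nodes one by one
        for i in range(j):         # decide each potential edge to earlier nodes
            if rng.random() < edge_prob_fn(i, j, edges):
                edges.append((i, j))
    return edges

# Toy conditional: the more edges node i already has, the more likely it
# attracts new ones (a preferential-attachment flavour).
def toy_prob(i, j, edges):
    deg_i = sum(1 for e in edges if i in e)
    return min(1.0, 0.3 + 0.1 * deg_i)

rng = random.Random(0)
g = sample_graph_autoregressive(8, toy_prob, rng)
print(len(g), "edges")
```

Because every edge decision depends on the prefix, naive sampling cannot be parallelized; the contribution of [1] is to restructure this dependency so that sparse graphs can be generated far more cheaply.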

References:
[1] Scalable Deep Generative Modeling for Sparse Graphs, Dai et al., ICML 2020
[2] Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration, Dai et al.,
NeurIPS 2020

Bio:
Hanjun Dai is a research scientist at Google Research, Brain Team. He obtained his PhD
from the Georgia Institute of Technology, advised by Prof. Le Song. His research focuses on deep
learning with structured data, combinatorial optimization, and generative modeling, with
applications in chemistry, bioinformatics, programming, and natural languages. During his PhD,
he also completed research internships at Amazon AI, OpenAI, and DeepMind.
He has published over 30 papers in top-tier conferences and journals, and his work has been
recognized with the AISTATS 2016 Best Student Paper Award, the Best Paper Award at the RecSys
2016 Workshop on Deep Learning for Recommender Systems, and the Best Paper Award at the
NIPS 2017 Workshop on Machine Learning for Molecules and Materials.

Talk2: Towards Trustworthy Transfer Learning: Learning from the Wild
Speaker: Dr. Feng Liu (Australian Artificial Intelligence Institute, UTS)

Abstract:
Transfer learning aims to leverage knowledge from domains with abundant labels (i.e., source
domains) to help train a good classifier or clustering model for domains with insufficient or no
labels (i.e., target domains). Although recent research has shown that knowledge can be
transferred from a source domain to a target domain, most works require unrealistic assumptions
to ensure their efficacy. In other words, existing transfer learning methods still face several
unsolved and challenging problems in the real world.
In this talk, I will first present three orthogonal directions of trustworthy transfer learning:
1) the necessity of transfer learning, 2) transfer learning under the imperfection of
source domains, and 3) transfer learning under the imperfection of target domains. Then, I will
introduce recent advances in these three directions. Finally, I will present promising future work
toward trustworthy transfer learning.
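As a concrete example of why the "necessity of transfer learning" question matters: the classical importance-weighting baseline for covariate shift reweights each source sample by p_target(x)/p_source(x); when source and target coincide, every weight equals 1 and no correction is needed. The sketch below uses known Gaussian densities purely for illustration (an assumption; in practice the density ratio must be estimated, and this is a textbook baseline, not a method from the talk).

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_weights(xs, src, tgt):
    """Covariate-shift correction w(x) = p_target(x) / p_source(x)."""
    return [gaussian_pdf(x, *tgt) / gaussian_pdf(x, *src) for x in xs]

# Source samples drawn from N(0, 1); the target distribution is N(1, 1).
rng = random.Random(0)
xs = [rng.gauss(0.0, 1.0) for _ in range(1000)]
ws = importance_weights(xs, src=(0.0, 1.0), tgt=(1.0, 1.0))

# The self-normalized weighted mean of source samples approximates the
# target mean (1.0), even though no target sample was used.
wmean = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
print(round(wmean, 2))
```

If the source and target parameters were identical, `ws` would be all ones and the weighted mean would reduce to the plain source mean, i.e., transfer would be unnecessary.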

Bio:
Dr. Feng Liu is a machine learning researcher whose interests lie in transfer learning and
hypothesis testing. His long-term goal is to develop intelligent systems that can automatically
learn knowledge from massive related but different domains.
Currently, he is a postdoctoral researcher at the Australian Artificial Intelligence Institute (AAII),
University of Technology Sydney (UTS), Australia, and the recipient of an Australian Laureate
postdoctoral fellowship. He received his PhD in computer science from UTS-AAII in 2020,
advised by Dist. Prof. Jie Lu and Prof. Guangquan Zhang.
He was a research intern with the AI Residency Program at the RIKEN Center for Advanced
Intelligence Project (RIKEN-AIP), working on trustworthy domain adaptation with Prof. Masashi
Sugiyama, Dr. Gang Niu, and Dr. Bo Han. He has also visited the Gatsby Computational
Neuroscience Unit at UCL, working on hypothesis testing with Prof. Arthur Gretton, Dr. Danica J.
Sutherland, and Wenkai Xu.
He has served as a program committee (PC) member for NeurIPS, ICML, ICLR, AISTATS, and ACML,
and as a reviewer for academic journals such as IEEE-TPAMI, IEEE-TNNLS, IEEE-TFS, and AMM.
He has received the AAII Best Student Paper Award (2020), the UTS-FEIT HDR Research
Excellence Award (2019), the Best Student Paper Award at FUZZ-IEEE (2019), and the UTS Research
Publication Award (2018).

About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)
