
[The 65th TrustML Young Scientist Seminar]

Thu, 30 Mar 2023 11:00 - 12:00 JST
Online Link visible to participants

Free admission
-Passcode: 3HUJ6BgcB1
-Time Zone: JST
-The seats are available on a first-come, first-served basis.
-When the seats are fully booked, we may stop accepting applications.
-Simultaneous interpretation will not be available.

Description

The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.

The TrustML YSS is a video series featuring young scientists giving talks on their research and discoveries related to Trustworthy Machine Learning.

Timetable for the TrustML YSS online seminars from March to April 2023.

For more information, please see the following site:
TrustML YSS

This network is funded by RIKEN-AIP's subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.


【The 65th Seminar】


Date and Time: March 30th, 11:00 am - 12:00 pm (JST)

Speaker: Chong Liu (UC Santa Barbara)
Title: Global Optimization with Parametric Function Approximation

Short Abstract
To build tough materials, scientists need to sequentially select candidate configurations ahead of time and then conduct expensive experiments to calculate their formation energies. To tune the hyperparameters of deep learning models, engineers need to carefully choose them before training. In both cases, the performance of unselected configurations cannot be observed, and the experimental cost can be huge. These two challenges hinder new material design and hyperparameter tuning, and call for action. Existing work usually models this kind of problem as black-box optimization and relies on Gaussian processes or other non-parametric families, which suffer from the curse of dimensionality. In this talk, I will present my research on solving black-box optimization with parametric functions, where the parametric functions can be deep neural networks. Under a realizability assumption and a few other mild geometric conditions, the new GO-UCB algorithm achieves sublinear cumulative regret. At the core of GO-UCB is a carefully designed uncertainty set over parameters, based on gradients, that allows optimistic exploration. Synthetic and real-world experiments illustrate that GO-UCB works better than existing approaches in high dimensions, even when the model is misspecified. I'll also discuss some future directions at the end. Reference: https://arxiv.org/pdf/2211.09100.pdf
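For readers who want a feel for the optimistic-exploration idea sketched in the abstract, the snippet below is a minimal, illustrative sketch, not the speaker's implementation of GO-UCB. It fits a simple parametric model to the observations gathered so far, builds a gradient-based uncertainty ellipsoid around the fitted parameters, and queries the candidate with the highest optimistic value. The feature map, the exploration weight `beta`, and the toy objective are hypothetical choices made for this example; see the referenced paper for the actual algorithm and its regret analysis.

```python
# Illustrative sketch of gradient-based optimistic exploration (GO-UCB-style),
# using a linear-in-parameters model for simplicity. All names and constants
# here are hypothetical choices for this example.
import numpy as np

rng = np.random.default_rng(0)

def features(x):
    """Hypothetical feature map; the model is f(x; theta) = phi(x) @ theta,
    so grad_theta f(x; theta) = phi(x)."""
    return np.array([1.0, x, x**2, np.sin(3 * x)])

# Unknown ground-truth function, queried through noisy "expensive" evaluations.
theta_star = np.array([0.2, -1.0, 0.5, 1.5])
def evaluate(x):
    return features(x) @ theta_star + 0.1 * rng.standard_normal()

candidates = np.linspace(-3, 3, 200)   # search space (1-D grid for simplicity)
beta, lam = 2.0, 1.0                   # exploration weight and ridge regularizer
X, Y = [], []                          # observed points and values

# Warm start with a couple of random evaluations.
for x0 in rng.choice(candidates, size=2, replace=False):
    X.append(x0); Y.append(evaluate(x0))

for t in range(30):
    Phi = np.stack([features(x) for x in X])             # gradients at observed points
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])         # information (design) matrix
    theta_hat = np.linalg.solve(A, Phi.T @ np.array(Y))  # regularized least squares

    # Optimistic acquisition: predicted value plus a gradient-based uncertainty bonus.
    grads = np.stack([features(x) for x in candidates])
    mean = grads @ theta_hat
    bonus = np.sqrt(np.einsum("ij,jk,ik->i", grads, np.linalg.inv(A), grads))
    x_next = candidates[np.argmax(mean + beta * bonus)]

    X.append(x_next); Y.append(evaluate(x_next))

best = X[int(np.argmax(Y))]
print(f"best observed x = {best:.3f}, value = {max(Y):.3f}")
```

In this linear-in-parameters toy the bonus reduces to the familiar UCB quadratic form; the talk's setting replaces the feature map with the gradient of a general parametric model such as a deep network.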

Bio:
Chong Liu is a Ph.D. candidate in Computer Science at the University of California, Santa Barbara. His research interests include machine learning and AI for science, with emphasis on global optimization, bandits, active learning, and experimental design. He is an editorial board reviewer of JMLR and serves on program committees of several conferences, including AAAI, AISTATS, ICML, KDD, and NeurIPS. Part of his research has been deployed at Amazon.


All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.


About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)
