
[The 62nd TrustML Young Scientist Seminar]

2023-03-27 (Mon) 09:00 - 11:00 JST
Online. The link will be shown only to participants.

Registration has closed.


Admission: Free
- Passcode: 3HUJ6BgcB1
- Time zone: JST
- Seats are available on a first-come, first-served basis.
- When the seats are fully booked, we may stop accepting applications.
- Simultaneous interpretation will not be available.

Details

The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.

The TrustML YSS is a video series featuring young scientists giving talks on their research and discoveries related to Trustworthy Machine Learning.

Timetable for the TrustML YSS online seminars from March to April 2023.

For more information, please see the following site.
TrustML YSS

This network is funded by RIKEN-AIP's subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.


【The 62nd Seminar】


Date and Time: March 27th, 9:00 am - 11:00 am (JST)

9:00 am - 10:00 am (JST)
Speaker 1: Han Zhao (University of Illinois Urbana-Champaign)
Title 1: Provable Domain Generalization via Invariant-Feature Subspace Recovery

10:00 am - 11:00 am (JST)
Speaker 2: Sanghyun Hong (Oregon State University)
Title 2: Great Haste Makes Great Waste: Exploiting and Attacking Efficient Deep Learning

Venue: Zoom webinar

Language: English

Speaker 1: Han Zhao (University of Illinois Urbana-Champaign)
Title 1: Provable Domain Generalization via Invariant-Feature Subspace Recovery
Short Abstract 1
Domain generalization asks for models trained over a set of training environments to perform well in unseen test environments. Recently, a series of algorithms such as Invariant Risk Minimization (IRM) has been proposed for domain generalization. However, it has been shown (Rosenfeld et al., 2021) that IRM and its extensions cannot generalize to unseen environments with fewer than d_s+1 training environments, where d_s is the dimension of the spurious-feature subspace. In this talk, I will present our recent work that achieves domain generalization via Invariant-feature Subspace Recovery (ISR). Our first algorithm, ISR-Mean, identifies the subspace spanned by invariant features from the first-order moments of the class-conditional distributions and achieves provable domain generalization with d_s+1 training environments. Our second algorithm, ISR-Cov, further reduces the required number of training environments to O(1) by using the information in the second-order moments. Notably, unlike IRM, our algorithms bypass non-convexity issues and enjoy global convergence guarantees. I will then discuss extensions of our algorithms to the general multi-class classification and regression settings. Empirically, we show that ISRs obtain superior performance compared with IRM on synthetic benchmarks. In addition, on three real-world image and text datasets, we show that both ISRs can be used as simple yet effective post-processing methods to improve the worst-case accuracy of (pre-)trained models against spurious correlations and group shifts. Our code is publicly available at https://github.com/Haoxiang-Wang/ISR.
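To make the first-moment idea above concrete, here is a minimal Python sketch of an ISR-Mean-style procedure (an illustration only, not the authors' implementation; see the repository linked above): it estimates the per-environment mean of the features for a fixed class, runs an SVD over those means, treats the top d_s directions of variation as the spurious subspace, and projects the data onto the remaining directions. All names and the simple projection step are assumptions made for this sketch.

import numpy as np

def isr_mean_projection(features, labels, envs, target_class, d_s):
    """Sketch of an ISR-Mean-style invariant-subspace estimate.

    Idea: for a fixed class, the feature mean shifts across training
    environments only along spurious directions, so the principal directions
    of the per-environment means approximate the spurious subspace, and the
    remaining directions approximate the invariant-feature subspace.
    """
    # Per-environment mean of the features belonging to the chosen class.
    env_means = np.stack([
        features[(envs == e) & (labels == target_class)].mean(axis=0)
        for e in np.unique(envs)
    ])                                            # shape: (num_envs, d)

    # SVD of the centered means: the top d_s right-singular vectors span the
    # estimated spurious subspace; the remaining ones span the invariant one.
    centered = env_means - env_means.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=True)
    invariant_basis = vt[d_s:]                    # shape: (d - d_s, d)

    # Project the data onto the estimated invariant subspace; a standard
    # classifier can then be trained on these projected features, matching
    # the post-processing use described in the abstract.
    return features @ invariant_basis.T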

Bio 1:
Han Zhao is a tenure-track assistant professor in the Department of Computer Science, also affiliated with the Department of Electrical and Computer Engineering, at the University of Illinois Urbana-Champaign. Prior to joining UIUC, he was a machine learning researcher at the D. E. Shaw Group. He received his Ph.D. degree in Computer Science from the Machine Learning Department at Carnegie Mellon University. He works in the field of machine learning and artificial intelligence, with a focus on trustworthy machine learning, including domain generalization, algorithmic fairness, adversarial robustness and multi-task and meta-learning.

Speaker 2: Sanghyun Hong (Oregon State University)
Title 2: Great Haste Makes Great Waste: Exploiting and Attacking Efficient Deep Learning
Short Abstract 2
Recent increases in the computational demands of deep neural networks have sparked interest in efficient deep learning mechanisms, such as neural network quantization or input-adaptive multi-exit inference. These mechanisms provide significant computational savings while preserving a model's accuracy, making it practical to run commercial-scale models in resource-constrained settings. However, most work focuses on "hastiness", i.e., how quickly and efficiently correct predictions are obtained, and overlooks the security vulnerabilities that can "waste" this practicality. In this talk, I will revisit efficient deep learning from a security perspective and introduce emerging research on exploiting and attacking these mechanisms to achieve malicious objectives. First, I will show how an adversary can exploit neural network quantization to induce malicious behaviors: an adversary can manipulate a pre-trained model so that it behaves maliciously once quantized. Next, I will show how input-adaptive mechanisms, such as multi-exit models, fail to deliver their promised computational efficiency in adversarial settings. By adding human-imperceptible input perturbations, an attacker can completely offset the computational savings provided by these input-adaptive models. Finally, I will conclude my talk by encouraging the audience to examine efficient deep learning practices with an adversarial lens and by discussing future research directions for building defense mechanisms. I believe this is the best moment to heed Benjamin Franklin's advice: "Take time for all things."
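As a rough illustration of the slowdown idea described above (offsetting the savings of input-adaptive models), the following is a hypothetical PyTorch-style sketch, not the speaker's implementation: a PGD-like perturbation is optimized so that every early exit's prediction becomes close to uniform, so a confidence-threshold exit rule never fires and each input traverses the full network. The model interface (model(x) returning a list of per-exit logits) and all hyperparameters are assumptions made for this sketch.

import torch
import torch.nn.functional as F

def slowdown_perturbation(model, x, eps=8/255, step_size=1/255, steps=40):
    """Sketch of a slowdown-style attack on a multi-exit model.

    Assumes model(x) returns a list of logits tensors, one per exit.  The
    perturbation is optimized so every exit's output is close to the uniform
    distribution; a confidence-threshold early-exit rule then never triggers
    and the input always runs through the full, most expensive network.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        losses = []
        for logits in model(x + delta):
            log_p = F.log_softmax(logits, dim=-1)
            uniform = torch.full_like(log_p, 1.0 / logits.shape[-1])
            # KL(uniform || p_exit) is minimized when the exit is maximally uncertain.
            losses.append(F.kl_div(log_p, uniform, reduction="batchmean"))
        loss = torch.stack(losses).sum()
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()   # descend: push exits toward uncertainty
            delta.clamp_(-eps, eps)                  # keep the perturbation imperceptible
        delta.grad.zero_()
    return (x + delta).detach()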

Bio 2:
Sanghyun Hong is an Assistant Professor of Computer Science at Oregon State University. He works on building trustworthy and socially responsible AI systems for the future. He is the recipient of the Samsung Global Research (GRO) Award 2022 and was selected as a DARPA Riser 2022. He was also an invited speaker at USENIX Enigma 2021, where he talked about practical hardware attacks on deep learning. He earned his Ph.D. at the University of Maryland, College Park, under the guidance of Prof. Tudor Dumitras. He was also a recipient of the Ann G. Wylie Dissertation Fellowship. He received his B.S. at Seoul National University.


All participants are required to agree to the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP expects adherence to this code throughout the event and asks for the cooperation of all participants in ensuring a safe environment for everybody.


About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)
