
Imperfect Information Learning Team Seminar (Talk by Hong Liu, National Institute of Informatics).

2023-12-01 (Fri) 16:00 - 17:00 JST
Online. The link will be shown only to participants.

Registration has closed.


Admission free

Details

This is an online seminar. Registration is required.
【Imperfect Information Learning Team】
【Date】December 1, 2023 (Fri) 16:00-17:00 (JST)
【Speaker】Hong Liu, National Institute of Informatics, Digital Content and Media Sciences Research Division

Title: Understanding Adversarial Training via Model Calibration

Abstract:
Deep models have shown remarkable success in computer vision tasks, but they are vulnerable to small, imperceptible perturbations of test instances. In this talk, I will give a brief overview of our recent work on understanding adversarial training. First, I will evaluate the defense performance of several model calibration methods on various robust models. Second, I will discuss some intriguing findings about adversarial training and its connection to robust overfitting. Next, I will present our work on designing a simple yet effective regularization technique. Finally, I will conclude by sharing some insights into adversarial training.
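For readers unfamiliar with the two building blocks the abstract refers to, the sketch below is a minimal, textbook-style illustration in PyTorch of (a) one adversarial-training step using a PGD attack and (b) the expected calibration error (ECE) metric. It is an added generic example under assumed settings (L-infinity threat model, eps = 8/255), not the speaker's method, code, or results.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft L-infinity PGD adversarial examples within an eps-ball around x."""
    # Random start inside the eps-ball, clipped to the valid pixel range [0, 1].
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()     # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)         # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv

def adversarial_training_step(model, optimizer, x, y):
    """One adversarial-training update: fit the model on PGD examples instead of clean ones."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin predictions by confidence and average |accuracy - confidence| per bin."""
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    edges = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.float().mean() * (correct[in_bin].mean() - conf[in_bin].mean()).abs()
    return ece.item()
```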

Bio: Hong Liu is currently a researcher at the National Institute of Informatics. Before that, he received his Ph.D. in computer science from Xiamen University. His research interests include ML safety/reliability and large-scale visual search. He was awarded the Japan Society for the Promotion of Science (JSPS) International Fellowship and the Outstanding Doctoral Dissertation Awards of both the China Society of Image and Graphics (CSIG) and Fujian Province, and was selected among the Top-100 Chinese New Stars in Artificial Intelligence by Baidu Scholar.

About the community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)
