This is an online seminar. Registration is required.
【Imperfect Information Learning Team】
【Date】December 1 (Fri), 2023, 16:00-17:00 (JST)
【Speaker】Hong Liu, National Institute of Informatics, Digital Content and Media Sciences Research Division
Title: Understanding Adversarial Training via Model Calibration
Abstract:
Deep models have shown remarkable success in computer vision tasks, but they appear to be vulnerable to small, imperceptible perturbations of test inputs. In this talk, I will provide a brief overview of our recent work on understanding adversarial training. First, I will evaluate the defense performance of several model calibration methods on various robust models. Second, I will discuss some intriguing findings about adversarial training and its connection to robust overfitting. Next, I will present our work on designing a simple yet effective regularization technique. Finally, I will conclude the talk by sharing some insights into adversarial training.
Bio: Hong Liu is currently a researcher at the National Institute of Informatics. He received his Ph.D. in computer science from Xiamen University. His research interests include ML safety/reliability and large-scale visual search. He was awarded the Japan Society for the Promotion of Science (JSPS) International Fellowship, the Outstanding Doctoral Dissertation Awards of both the China Society of Image and Graphics (CSIG) and Fujian Province, and a place in the Top-100 Chinese New Stars in Artificial Intelligence by Baidu Scholar.