
[The 38th TrustML Young Scientist Seminar]

Mon, 31 Oct 2022 11:00 - 12:00 JST
Online Link visible to participants

Registration is closed


Free admission
- Passcode: sBQ5r635NF
- Time Zone: JST
- Seats are available on a first-come, first-served basis.
- When the seats are fully booked, we may stop accepting applications.
- Simultaneous interpretation will not be available.

Description

The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.

The TrustML YSS is a video series that features young scientists giving talks on their discoveries related to Trustworthy Machine Learning.

The timetable for the TrustML YSS online seminars from September to October 2022 is available on the site below.

For more information, please see the following site.
TrustML YSS

This network is funded by RIKEN-AIP's subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.


【The 38th Seminar】


Date and Time: Oct. 31st, 11:00 am - 12:00 pm (JST)

Venue: Zoom webinar

Language: English

Speaker: Gaurang Sriramanan (University of Maryland)
Title: Toward Efficient Evaluation and Training of Adversarially Robust Neural Networks
Short Abstract:
While current Machine Learning models achieve excellent performance on standard data, they are overwhelmingly susceptible to imperceptible perturbations of their inputs, known as adversarial attacks. Efficient and effective attacks are crucial for the reliable evaluation of defenses, and also for developing robust models. In this talk, I will present some of our research that addresses both of these directions. We first propose the Guided Adversarial Margin Attack, wherein we introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training. In the latter part of the talk, I shall present our work on Nuclear Norm regularization, which uses the joint statistics of adversarial samples across a minibatch to enhance optimization. We further demonstrate how Nuclear Norm based training can be extended to achieve robustness under a union of threat models simultaneously, while utilizing only single-step adversaries during training. Using these techniques, we demonstrate robustness equivalent or superior to that of multi-step adversarial defenses, at a significantly lower computational cost.
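
For readers unfamiliar with the area, the following is a minimal, generic sketch of a single-step adversarial attack (the Fast Gradient Sign Method) in PyTorch, included only to illustrate what "imperceptible perturbations" means in practice. It is not the Guided Adversarial Margin Attack or the Nuclear Norm method presented in the talk; the model, the epsilon budget, and the assumption that inputs lie in [0, 1] are all illustrative choices.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    # Generic single-step attack for illustration only; not the speaker's method.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each input element in the direction that increases the loss,
    # within an L-infinity budget of epsilon, then clip back to the valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

Multi-step attacks such as PGD repeat an update of this kind with a smaller step size, which is what makes multi-step adversarial training computationally expensive and motivates the efficient single-step alternatives discussed in the abstract.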


All participants are required to agree to the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.


About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)