Talk by Dr. Shinichi Nakajima (TU Berlin)

Wed, 03 Apr 2019 15:00 - 16:00 JST

RIKEN AIP Open Space

Nihonbashi 1-chome Mitsui Building, 15th floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan

Free admission

Description

Title:
Robustifying Models Against Adversarial Attacks by Langevin Dynamics

Abstract:
Adversarial attacks on deep learning models have compromised their performance considerably. As remedies, many defense methods have been proposed, which, however, have been broken by newer attack strategies. In the midst of this ensuing arms race, the problem of robustness against adversarial attacks remains unsolved even on the toy MNIST dataset. This paper proposes a novel, simple yet effective defense strategy, in which adversarial samples are relaxed onto the underlying manifold of the (unknown) target class distribution. Specifically, given an off-manifold adversarial sample, our algorithm drives it towards high-density regions of the data-generating distribution of the target class by the Metropolis-adjusted Langevin algorithm (MALA), with the perceptual boundary taken into account. Although the motivation is similar to that of projection methods, e.g., Defense-GAN, our method, called MALA for defense (MALADE), is equipped with significant obfuscation: the projection is distributed broadly, so no whitebox attack can accurately align the input such that MALADE moves it to a targeted untrained spot where the model predicts a wrong label. In our experiments, MALADE exhibited state-of-the-art performance against various elaborate attack strategies.
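For context, the following is a minimal, generic sketch of a single Metropolis-adjusted Langevin algorithm (MALA) step of the kind the abstract refers to. It assumes a log-density log_p and its gradient grad_log_p for the target-class data distribution are available (in practice these would have to be estimated, e.g. with a generative model); the perceptual-boundary term of MALADE is omitted, and this is not the authors' implementation.

import numpy as np

def mala_step(x, log_p, grad_log_p, eps, rng):
    """One MALA update: Langevin proposal plus Metropolis-Hastings correction."""
    noise = rng.standard_normal(x.shape)
    # Langevin proposal: drift toward higher density plus Gaussian exploration noise.
    x_prop = x + eps * grad_log_p(x) + np.sqrt(2.0 * eps) * noise

    # Log density of the Gaussian proposal q(x_to | x_from), up to an additive constant.
    def log_q(x_to, x_from):
        mu = x_from + eps * grad_log_p(x_from)
        return -np.sum((x_to - mu) ** 2) / (4.0 * eps)

    # Metropolis-Hastings acceptance keeps the chain targeting the density p(x).
    log_alpha = (log_p(x_prop) + log_q(x, x_prop)) - (log_p(x) + log_q(x_prop, x))
    if np.log(rng.uniform()) < log_alpha:
        return x_prop
    return x

# Purely illustrative usage on a toy 2-D standard normal target:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    log_p = lambda x: -0.5 * np.sum(x ** 2)   # standard normal, up to a constant
    grad_log_p = lambda x: -x
    x = np.array([5.0, -5.0])                 # "off-manifold" starting point
    for _ in range(200):
        x = mala_step(x, log_p, grad_log_p, eps=0.1, rng=rng)
    print(x)  # should have drifted toward the high-density region near the origin

The key property exploited for defense is that repeated steps of this kind pull an input toward high-density regions of the data distribution, while the injected noise makes the mapping stochastic rather than a fixed projection.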

About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)
