
[AIP Seminar] Talk by Prof. Adi Shamir (Weizmann Institute of Science) on "A New Theory of Adversarial Examples in Machine Learning"

Wed, 17 Jan 2024 10:30 - 12:00 JST

Free admission
- Passcode: 43DyMdFs0e
- Time Zone: JST
- Seats are available on a first-come, first-served basis.
- When the seats are fully booked, we may stop accepting applications.
- Simultaneous interpretation will not be available.

Description

Date and Time:
January 17, 2024: 10:30 am - 12:00 pm (JST)
Venue: Online and Open Space at the RIKEN AIP Nihonbashi office*
*The Open Space is available to AIP researchers only.

TITLE: A New Theory of Adversarial Examples in Machine Learning

SPEAKER: Prof. Adi Shamir, Weizmann Institute of Science

ABSTRACT:
The extreme fragility of deep neural networks when presented with tiny perturbations in their inputs was independently discovered by several research groups in 2013. Due to their mysterious properties and major security implications, these adversarial examples have been studied extensively over the last nine years, but in spite of enormous effort they have remained a baffling phenomenon with no clear explanation. In particular, it is not clear why a tiny distance away from almost any cat image there are images which are recognized with a very high level of confidence as cars, planes, frogs, horses, or any other desired class; why the adversarial modification which turns a cat into a car does not look like a car at all; and why a network which was adversarially trained with randomly permuted labels (so that it never saw any image which looks like a cat being called a cat) still recognizes most cat images as cats.
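The tiny perturbations mentioned above can be produced with standard gradient-based methods. As a minimal, generic illustration (not the method discussed in the talk), the sketch below uses the Fast Gradient Sign Method in PyTorch; it assumes a pretrained classifier `model`, an input batch `x`, and integer class labels `y` are already available, and all names and the step size are illustrative.

import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, x, y, epsilon=0.01):
    """Perturb x by epsilon in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true labels
    loss.backward()
    # A small, often imperceptible change that can flip the model's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.detach()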

The goal of this talk is to introduce a new theory of adversarial examples, which we call the Dimpled Manifold Model. It explains in a simple and intuitive way why adversarial examples exist and why they have all the bizarre properties mentioned above. To support this theory experimentally, we recently ran a large number of experiments, and we show with a variety of graphs and movies how decision boundaries actually evolve during the training process of deep neural networks in order to fit the given training examples. In particular, we demonstrate how they take advantage of additional "useless" dimensions which cannot possibly help them in classifying the inputs.
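As a toy probe of that last idea (not the experiments described in the talk), one can embed a simple 2-D classification problem in 3-D by appending a coordinate that carries no class information, train a small network, and then measure how strongly its output depends on that coordinate. All dataset sizes and hyperparameters below are illustrative.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Two Gaussian blobs separated along the first coordinate; the appended third
# coordinate is identically zero and carries no class information.
n = 500
x2d = torch.cat([torch.randn(n, 2) + torch.tensor([2.0, 0.0]),
                 torch.randn(n, 2) - torch.tensor([2.0, 0.0])])
y = torch.cat([torch.ones(n), torch.zeros(n)])
x = torch.cat([x2d, torch.zeros(2 * n, 1)], dim=1)  # "useless" 3rd dimension

model = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x).squeeze(1), y)
    loss.backward()
    opt.step()

# Gradient of the logit w.r.t. the input at one training point: its third
# component measures how much the trained model's output changes along the
# coordinate that could not possibly help with classification.
probe = x[0:1].clone().requires_grad_(True)
model(probe).sum().backward()
print("input gradient:", probe.grad.squeeze().tolist())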


All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.


About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)
