[The 61st TrustML Young Scientist Seminar]

Fri, 24 Mar 2023 10:00 - 11:00 JST
Online (link visible to participants)
Registration is closed

Free admission
- Passcode: 3pm9LXASXF
- Time Zone: JST
- Seats are available on a first-come, first-served basis.
- When the seats are fully booked, we may stop accepting applications.
- Simultaneous interpretation will not be available.

Description

The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.

The TrustML YSS is a video series featuring young scientists giving talks on their research and discoveries related to Trustworthy Machine Learning.

Timetable for the TrustML YSS online seminars from March to April 2023.

For more information, please see the following site.
TrustML YSS

This network is funded by RIKEN-AIP's subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.


【The 61st Seminar】


Date and Time: March 24th, 10:00 am - 11:00 am (JST)

Speaker: Dan Hendrycks (UC Berkeley)
Title: ML Safety

Short Abstract
Machine learning (ML) systems are rapidly increasing in size, are acquiring new capabilities, and are increasingly deployed in high-stakes settings. As with other powerful technologies, safety for ML should be a leading research priority. In response to emerging safety challenges in ML, such as those introduced by recent large-scale models, we provide a new roadmap for ML Safety and refine the technical problems that the field needs to address. We present four problems ready for research, namely withstanding hazards ("Robustness"), identifying hazards ("Monitoring"), reducing inherent model hazards ("Alignment"), and reducing systemic hazards ("Systemic Safety"). Throughout, we clarify each problem's motivation and provide concrete research directions.

Bio:
Dan Hendrycks (UC Berkeley)

I recently received my PhD from UC Berkeley, where I was advised by Dawn Song and Jacob Steinhardt, and I am now the director of the Center for AI Safety. My research interest is ML Safety. I received my BS from UChicago in 2018. My research is supported by the NSF GRFP and the Open Philanthropy AI Fellowship. I contributed the GELU activation function (the most widely used activation in state-of-the-art models, including BERT, GPT, and Vision Transformers), the out-of-distribution detection baseline, and distribution shift benchmarks.
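Since the bio highlights the GELU activation function, a minimal Python sketch may be helpful for readers unfamiliar with it; this illustration is an addition to this write-up, not part of the seminar materials. The exact GELU, as defined in Hendrycks and Gimpel's original paper, is GELU(x) = x * Phi(x), where Phi is the standard normal CDF; the tanh form is the approximation commonly found in BERT/GPT-style implementations.

    import math

    def gelu(x: float) -> float:
        # Exact GELU: x * Phi(x), where Phi is the standard normal CDF.
        return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

    def gelu_tanh(x: float) -> float:
        # Tanh approximation widely used in Transformer implementations.
        return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x**3)))

    # The two forms agree closely, e.g. at x = 1.0:
    print(gelu(1.0))       # ~0.8413
    print(gelu_tanh(1.0))  # ~0.8412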


All participants are required to agree to the AIP Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP expects adherence to this code throughout the event and asks all participants to cooperate in ensuring a safe environment for everybody.


About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)
