[The 64th TrustML Young Scientist Seminar]

Wed, 29 Mar 2023 18:00 - 19:00 JST
Online Link visible to participants

Registration is closed


Free admission
- Time Zone: JST
- The seats are available on a first-come, first-served basis.
- When the seats are fully booked, we may stop accepting applications.
- Simultaneous interpretation will not be available.


The TrustML Young Scientist Seminars (TrustML YSS) started on January 28, 2022.

The TrustML YSS is a video series featuring young scientists presenting their talks and discoveries related to Trustworthy Machine Learning.

Timetable for the TrustML YSS online seminars from March to April 2023.

For more information, please see the following site.

This network is funded by RIKEN-AIP's subsidy and JST, ACT-X Grant Number JPMJAX21AF, Japan.

【The 64th Seminar】

Date and Time: March 29th, 6:00 pm - 7:00 pm (JST)

Speaker: Xuanli He (University College London)
Title: Imitation Attacks and Defenses

Short Abstract
Due to breakthroughs in deep learning, commercial APIs have gained wide adoption. However, these APIs suffer from a severe security concern: malicious users can bypass paid subscriptions via an imitation attack. This talk will first introduce the imitation attack. Then I will show that the vulnerability to imitation attacks has been underestimated. Beyond the violation of intellectual property (IP), imitation models can ease adversarial attacks on black-box APIs and incur privacy leakage. Finally, I will present two novel watermarking methods for protecting the IP of text generation APIs under imitation attacks, an area that has been underdeveloped in the literature.
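The imitation attack mentioned in the abstract can be illustrated with a minimal toy sketch: an attacker queries a black-box API, records the input-output pairs, and fits an "imitation model" on them. Everything below is an illustrative assumption, not material from the talk: the victim is a stand-in rule-based classifier rather than a real commercial API, and the imitation model is a simple word-vote memorizer standing in for a trained neural model.

```python
# Toy sketch of an imitation (model extraction) attack.
# Hypothetical names throughout; a real attack would query a paid service
# and train a neural model on the transferred labels.

def victim_api(text: str) -> str:
    # Stand-in black-box victim: a trivial sentiment rule.
    return "positive" if "good" in text else "negative"

def run_imitation_attack(queries):
    # Step 1: query the victim and record input-output pairs.
    distillation_set = [(q, victim_api(q)) for q in queries]

    # Step 2: "train" an imitation model on the pairs. Word-level label
    # voting stands in for fitting a real model on the stolen labels.
    word_votes = {}
    for text, label in distillation_set:
        for word in text.split():
            word_votes.setdefault(word, []).append(label)

    def imitation_model(text: str) -> str:
        votes = [v for w in text.split() for v in word_votes.get(w, [])]
        return max(set(votes), key=votes.count) if votes else "negative"

    return imitation_model

queries = ["good movie", "bad movie", "good acting", "bad plot"]
model = run_imitation_attack(queries)
print(model("good plot"))  # mimics the victim's answer: "positive"
```

The point of the sketch is that the attacker never sees the victim's parameters, only its answers, yet the imitation model reproduces its behavior on unseen inputs; this is the IP-violation scenario that the watermarking defenses in the talk target.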

Xuanli He is a Research Fellow at University College London. He received his Ph.D. from Monash University (Australia). His recent research lies at the intersection of deep learning and natural language processing, with an emphasis on the robustness and security of NLP models, including privacy leakage and protection, backdoor attacks and defenses, and imitation attacks and defenses. He has published more than 20 papers in top-tier machine learning and natural language processing conferences (e.g., NeurIPS, AAAI, ACL, EMNLP).

All participants are required to agree with the AIP Seminar Series Code of Conduct.
Please see the URL below.

RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.

About this community

Public events of RIKEN Center for Advanced Intelligence Project (AIP)
