Doorkeeper

[The 81st TrustML Young Scientist Seminar] Talk by Sayak Ray Chowdhury (Microsoft Research, India) "Provably Robust DPO: Aligning Language Models with Noisy Feedback"

Mon, 10 Jun 2024 15:00 - 16:00 JST

Free admission

Description

Date and Time:
June 10, 2024: 3:00 pm - 4:00 pm (JST)
Venue: Online only

Title:
Provably Robust DPO: Aligning Language Models with Noisy Feedback

Speaker:
Sayak Ray Chowdhury (Microsoft Research, India)

Abstract:
Learning from preference-based feedback has recently gained traction as a promising approach to align language models with human interests. These aligned models demonstrate impressive capabilities across various tasks. However, noisy preference data can negatively impact alignment. Practitioners have recently proposed heuristics to mitigate the effect, but theoretical underpinnings of these methods have remained elusive. In this work, we aim to bridge this gap by introducing a general framework for policy optimization in the presence of random preference flips. We propose rDPO, a robust version of the popular direct preference optimization method, show that it is provably tolerant to noise, and characterize its sub-optimality gap as a function of noise rate, dimension of the policy parameter, and sample size. Experiments on two real datasets show that rDPO is robust to noise in preferences compared to vanilla DPO and heuristics proposed by practitioners.

This is a joint work with Anush Kini and Nagarajan Natarajan.
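The debiasing idea behind robustness to random preference flips can be sketched in a few lines. Purely for illustration, assume each observed preference label is flipped independently with a known rate ε < 1/2; one standard correction reweights the loss on the observed pair and on its flipped counterpart so that the expectation over flips recovers the clean loss. The function names and the β value below are hypothetical, and this is a sketch of the general technique, not necessarily the exact rDPO objective from the talk:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dpo_loss(delta, beta=0.1):
    """Vanilla DPO loss for one example.

    delta is the implicit reward margin:
    (log pi(y_w|x) - log ref(y_w|x)) - (log pi(y_l|x) - log ref(y_l|x)).
    """
    return -math.log(sigmoid(beta * delta))

def robust_dpo_loss(delta, eps, beta=0.1):
    """Flip-debiased DPO loss (illustrative sketch).

    Weights the observed preference by (1 - eps) and its flip by -eps,
    normalized by (1 - 2*eps), so that averaging over random flips
    with rate eps recovers the clean DPO loss in expectation.
    """
    assert 0.0 <= eps < 0.5, "debiasing requires noise rate below 1/2"
    return ((1 - eps) * dpo_loss(delta, beta)
            - eps * dpo_loss(-delta, beta)) / (1 - 2 * eps)

# Sanity check: with no noise the corrected loss reduces to vanilla DPO,
# and averaging over flips at rate eps recovers the clean loss exactly.
d, eps = 1.5, 0.2
clean = dpo_loss(d)
debiased_mean = (1 - eps) * robust_dpo_loss(d, eps) + eps * robust_dpo_loss(-d, eps)
```

The normalization by (1 − 2ε) is what makes the estimator unbiased rather than merely attenuated; as ε approaches 1/2 the correction blows up, reflecting that preferences carry no signal at that noise level.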

Bio:
Sayak Ray Chowdhury is a postdoctoral researcher at Microsoft Research, India. Prior to that, he was a postdoctoral fellow at Boston University, USA. He obtained his PhD from the Dept of ECE, Indian Institute of Science, where he was a recipient of the Google PhD Fellowship. His research interests include reinforcement learning, Bayesian optimization, multi-armed bandits, and differential privacy. Recently, he has been working towards mathematical and empirical understandings of language models.

About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)