Nihonbashi 1-chome Mitsui Building, 15th floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
Talk by Prof. Hsuan-Tien Lin (National Taiwan University, Taiwan / Appier)
Active Learning by Bandit Learning
Active learning is an important technique that helps reduce labeling effort in machine learning applications. Currently, most active learning strategies are constructed from some human-designed philosophy; that is, they reflect what human beings assume to be "good labeling questions." However, given that a single human-designed philosophy is unlikely to work in all scenarios, choosing and blending those strategies across scenarios is an important but challenging practical task. This talk tackles the task by letting the machine adaptively "learn" from the performance of a set of given strategies. We will present two examples, both based on connecting active learning with the well-known multi-armed bandit problem. The first example leverages bandit learning to identify the best strategy on the fly during the active-learning process; the second example transfers the active-learning experience from one dataset to another. Extensive empirical studies of the resulting algorithms confirm that they perform better than strategies based on any single human-designed philosophy.
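To illustrate the core idea of the first example (not the talk's actual algorithm), the strategy-selection step can be sketched with EXP3, a standard adversarial multi-armed-bandit algorithm: each human-designed active-learning strategy is treated as an arm, and pulling an arm yields a reward reflecting how much the label it queries improves the model. The strategies, reward values, and parameters below are hypothetical stand-ins for that feedback signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-round reward probability of each strategy, standing in
# for "how useful was the label this strategy chose to query".
strategy_means = [0.2, 0.5, 0.35]  # e.g. random, uncertainty, margin sampling
K = len(strategy_means)


def exp3(T=2000, gamma=0.1):
    """Select among strategies with EXP3 (exponential-weight bandit)."""
    w = np.ones(K)                  # one weight per strategy (arm)
    pulls = np.zeros(K, dtype=int)  # how often each strategy was chosen
    for _ in range(T):
        # Mix the weight-proportional distribution with uniform exploration.
        p = (1 - gamma) * w / w.sum() + gamma / K
        k = rng.choice(K, p=p)
        # Observe a (simulated) reward in {0, 1} for the chosen strategy.
        r = float(rng.random() < strategy_means[k])
        # Importance-weighted exponential update for the pulled arm only.
        w[k] *= np.exp(gamma * r / (p[k] * K))
        w /= w.max()                # renormalize to avoid overflow
        pulls[k] += 1
    return pulls


pulls = exp3()
print(pulls)  # the bandit concentrates pulls on the best strategy over time
```

Over enough rounds, EXP3 shifts most of its queries toward whichever strategy is earning the highest reward, which is the "identify the best strategy on the fly" behavior the abstract describes; in a real active-learning loop the reward would be derived from model improvement rather than a fixed Bernoulli mean.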
Public events of RIKEN Center for Advanced Intelligence Project (AIP)