Meeting Room 3 at RIKEN Center for Advanced Intelligence Project (AIP)
Nihonbashi 1-chome Mitsui Building, 15th floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
Topic: Improving Low-Shot Visual Recognition by Bridging the Visual-Semantic Gap
Abstract: In this talk, Yao-Hung will discuss learning joint visual and semantic embeddings to improve low-shot visual object recognition. First, Yao-Hung will introduce a learning architecture that combines unsupervised representation learning models (i.e., auto-encoders) with a cross-domain alignment criterion (i.e., the Maximum Mean Discrepancy loss). This architecture yields more robust joint embeddings of visual and semantic features. Second, Yao-Hung will introduce a learning system that maximizes the statistical dependency between the semantic relationships among visual objects and the output embeddings of an arbitrary deep regression model. If time permits, Yao-Hung will also talk about his recent work on recovering order in non-sequenced data.
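For readers unfamiliar with the first ingredient mentioned in the abstract, below is a minimal sketch of aligning two auto-encoders (one per domain) with a Gaussian-kernel Maximum Mean Discrepancy loss. The network sizes, feature dimensions, kernel bandwidth, and loss weighting are illustrative assumptions, not the architecture presented in the talk.

import torch
import torch.nn as nn

def mmd_loss(x, y, sigma=1.0):
    # Biased estimator of squared Maximum Mean Discrepancy between the
    # empirical distributions of x and y, using a Gaussian kernel.
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

class AutoEncoder(nn.Module):
    # One auto-encoder per domain (visual or semantic features).
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

# Hypothetical feature dimensions: 2048-d visual features (e.g., CNN
# activations) and 300-d semantic features (e.g., word embeddings).
vis_ae, sem_ae = AutoEncoder(2048), AutoEncoder(300)
params = list(vis_ae.parameters()) + list(sem_ae.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
recon = nn.MSELoss()

vis, sem = torch.randn(32, 2048), torch.randn(32, 300)  # stand-in batch
z_v, rec_v = vis_ae(vis)
z_s, rec_s = sem_ae(sem)
# Reconstruction keeps each latent code faithful to its own domain;
# the MMD term pulls the two latent distributions toward each other.
loss = recon(rec_v, vis) + recon(rec_s, sem) + 0.1 * mmd_loss(z_v, z_s)
opt.zero_grad()
loss.backward()
opt.step()

Note that the MMD term only matches the two latent distributions as a whole; in a setting with paired visual and semantic examples, one could additionally tie matching pairs together with a supervised term.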
Short Bio: Yao-Hung Hubert Tsai is a second-year Ph.D. student in the Machine Learning Department at Carnegie Mellon University, working with Ruslan Salakhutdinov. His research interests lie in deep learning and its applications to transfer learning.