Meeting Room 3, RIKEN Center for Advanced Intelligence Project (AIP)
Nihonbashi 1-chome Mitsui Building 15F, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027
Topic: Improve Low-Shot Visual Recognition by Bridging Visual-Semantic Gap
Abstract: In this talk, Yao-Hung will discuss learning visual and semantic embeddings to improve low-shot visual object recognition. First, he will introduce a learning architecture that combines unsupervised representation learning models (i.e., auto-encoders) with cross-domain learning criteria (i.e., the Maximum Mean Discrepancy loss); this architecture yields more robust joint embeddings of visual and semantic features. Second, he will introduce another learning system that maximizes the dependency between the semantic relationships among visual objects and the output embedding of an arbitrary deep regression model. If time permits, he will also talk about his recent work on recovering order in non-sequenced data.
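For readers unfamiliar with the Maximum Mean Discrepancy (MMD) criterion mentioned in the abstract, the following is a minimal sketch of a standard kernel MMD estimate between two sets of embeddings. It is illustrative only: the RBF kernel, the bandwidth `sigma`, and the function names are assumptions for this sketch, not details taken from the talk.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise squared Euclidean distances, then a Gaussian (RBF) kernel.
    d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy between
    # samples x and y; small values suggest the two sets of features
    # (e.g., visual vs. semantic embeddings) are closely aligned.
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())
```

In a cross-domain setup such as the one described, a loss term like `mmd2(visual_embeddings, semantic_embeddings)` would be minimized alongside the auto-encoder reconstruction losses to pull the two feature distributions together.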
Short Bio: Yao-Hung Hubert Tsai is a second-year Ph.D. student in the Machine Learning Department at Carnegie Mellon University, working with Ruslan Salakhutdinov. His research interests lie in deep learning in general and its applications to transfer learning.