[23rd AIP Open Seminar] Talks by Nonconvex Learning Theory Team

Wed, 28 Apr 2021 15:00 - 17:00
Online Link visible to participants
Free admission
Registration closes 28 Apr 16:00
- Seats are available on a first-come, first-served basis.
- When seats are fully booked, we may stop accepting applications.
- Simultaneous interpretation will not be available.


Nonconvex Learning Theory Team at RIKEN AIP

Speaker 1: Takafumi Kanamori (25 min)
Title: Overview and Recent Developments in the Research Activities of the Non-Convex Learning Theory Team
Abstract: The main target of our team is to develop learning algorithms that deal with complex data and to establish theoretical foundations of statistical learning. In this talk, we will introduce recent developments, including statistical inference of probability distributions on complex domains, a kernel-based feature extraction method for complex data, and the fundamental theory of transfer learning.

Speaker 2: Kosaku Takanashi (25 min)
Title: Nonlinear Ensemble Methods for Time Series Data
Abstract: We propose a class of ensemble methods that nonlinearly synthesize multiple sources of information, such as predictive distributions, in a sequential, time-series context. To understand their finite-sample properties, we develop a theoretical strategy based on stochastic processes, in which the ensembled processes are expressed as stochastic differential equations and evaluated using Itô's lemma. We determine the conditions and the mechanism under which this class of nonlinear synthesis outperforms linear ensemble methods. Further, we identify a specific form of nonlinear synthesis that produces exact minimax predictive distributions for Kullback-Leibler risk and, under certain conditions, quadratic risk. A finite-sample simulation study is presented to illustrate our results. This is joint work with Kenichiro McAlinn from Temple University.
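For readers unfamiliar with the evaluation tool mentioned in the abstract: for a process $dX_t = \mu\,dt + \sigma\,dW_t$ and a twice-differentiable function $f$, Itô's lemma gives

$$df(X_t) = \Big(\mu\, f'(X_t) + \tfrac{1}{2}\sigma^2 f''(X_t)\Big)\,dt + \sigma f'(X_t)\,dW_t,$$

the standard identity for evaluating nonlinear transformations of a diffusion; the specific synthesis functions studied in the talk are not reproduced here.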

Speaker 3: Yuichiro Wada (25 min)
Title: Spectral Embedded Deep Clustering
Abstract: We propose a clustering method based on a deep neural network. Given an unlabeled dataset and the number of clusters, our method groups the dataset into the given number of clusters in the original space. We use a conditional discrete probability distribution defined by a deep neural network as a statistical model. Our strategy is first to estimate the cluster labels of unlabeled data points selected from a high-density region; second, using the estimated labels and the remaining unlabeled data points, to train the model by semi-supervised learning; and lastly, using the trained model, to estimate the cluster labels of the remaining unlabeled data points. We conduct numerical experiments on five commonly used datasets to confirm the effectiveness of the proposed method. This talk is based on the paper Entropy 2019, 21(8), 795, with Shugo Miyamoto, Takumi Nakagawa, Leo Andeol, Wataru Kumagai, and Takafumi Kanamori.
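The three-step strategy described in the abstract can be sketched on toy data. The following is a minimal numpy sketch, not the method from the paper: a k-nearest-neighbour distance stands in for the density estimate, and k-means plus a nearest-centroid rule stand in for the deep-network model; all parameters (neighbour count, fraction kept) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian blobs (100 points each).
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
               rng.normal(4.0, 0.5, (100, 2))])

# Step 1: score density by the distance to the 10th nearest neighbour
# (smaller = denser) and keep the densest half of the points.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
kth = np.sort(D, axis=1)[:, 10]
dense = np.argsort(kth)[:len(X) // 2]

# Estimate labels of the high-density points with a few Lloyd (k-means)
# steps, initialised with two mutually distant dense points.
i0 = dense[0]
i1 = dense[np.argmax(D[i0, dense])]
C = X[[i0, i1]]
for _ in range(20):
    lbl = np.argmin(np.linalg.norm(X[dense, None] - C[None], axis=-1), axis=1)
    C = np.array([X[dense][lbl == j].mean(axis=0) for j in range(2)])

# Steps 2-3: a nearest-centroid rule stands in for semi-supervised
# training of the network and labelling the remaining points.
labels = np.argmin(np.linalg.norm(X[:, None] - C[None], axis=-1), axis=1)
print(np.bincount(labels))
```

On data this well separated, each blob ends up in its own cluster; the paper replaces the centroid rule with a trained conditional distribution so that clusters need not be centroid-shaped.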

Speaker 4: Hironori Fujisawa (25 min)
Title: Transfer learning via L1 regularization
Abstract: Machine learning algorithms typically require abundant data under a stationary environment. However, environments are nonstationary in many real-world applications. A critical issue is how to effectively adapt models in an ever-changing environment. We propose a method for transferring knowledge from a source domain to a target domain via L1 regularization in high dimensions. We incorporate L1 regularization of the differences between the source parameters and the target parameters, in addition to an ordinary L1 regularization. Hence, our method yields sparsity both in the estimates themselves and in the changes from the source. The proposed method has a tight estimation error bound under a stationary environment, and the estimate remains unchanged from the source estimate under small residuals. Moreover, the estimate is consistent with the underlying function even when the source estimate is mistaken due to nonstationarity. Empirical results demonstrate that the proposed method effectively balances stability and plasticity. This is joint work with Masaaki Takada (Toshiba Corporation). This talk is based on a paper accepted at NeurIPS 2020.
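The doubled L1 penalty described in the abstract can be illustrated in one dimension. The sketch below (function name and parameter values are illustrative, not from the paper) minimises a squared loss plus both penalties exactly, and shows the two regimes: a large penalty on the change keeps the estimate at the source value, while a large ordinary L1 penalty shrinks it to zero.

```python
import numpy as np

def transfer_prox(z, src, lam1, lam2):
    """Minimise 0.5*(b - z)**2 + lam1*|b| + lam2*|b - src| over b.

    lam1 encourages b = 0 (sparsity of the estimate itself);
    lam2 encourages b = src (sparsity of the change from the source).
    """
    def obj(b):
        return 0.5 * (b - z) ** 2 + lam1 * abs(b) + lam2 * abs(b - src)
    # The objective is convex and piecewise quadratic with kinks at 0 and
    # src, so the minimiser is a kink or a stationary point of one piece:
    # b = z - lam1*sign(b) - lam2*sign(b - src) for some sign pattern.
    cands = [0.0, src]
    for s1 in (-1.0, 1.0):          # sign of b
        for s2 in (-1.0, 1.0):      # sign of b - src
            cands.append(z - lam1 * s1 - lam2 * s2)
    return min(cands, key=obj)

# Large lam2: the estimate sticks to the source parameter (stability).
print(transfer_prox(z=1.2, src=1.0, lam1=0.1, lam2=5.0))   # -> 1.0
# Large lam1: the estimate is shrunk to zero (sparsity).
print(transfer_prox(z=0.3, src=1.0, lam1=5.0, lam2=0.1))   # -> 0.0
```

In the high-dimensional method of the talk this scalar operation is the building block applied coordinate-wise; the theory covers the error bound and consistency claims stated above.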

All participants are required to agree with the AIP Open Seminar Series Code of Conduct.
Please see the URL below.

RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.

About this community

Public events of RIKEN Center for Advanced Intelligence Project (AIP)