Title: Tensor Networks for Fast Stable Algorithms for Canonical Polyadic Decomposition and Compression of Deep Learning
In this talk we discuss converting one tensor network format into another, in particular into the Canonical Polyadic Decomposition (CPD), with fast, robust, and stable algorithms for very high-order tensors. We present a novel method for the CPD of higher-order tensors, particularly suited for data tensors whose tensor rank exceeds the tensor dimensions, a case prohibitive for existing algorithms. Our simple approach is to first approximate a data tensor by a set of interconnected core tensors of order at most 3 (for example, a Tensor Train). We demonstrate that the factor matrices of the CPD of the original higher-order tensor can be efficiently estimated from the compressed Tensor Network model.
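To make the first step of this pipeline concrete, here is a minimal NumPy sketch of the TT-SVD algorithm, which compresses a higher-order tensor into a train of 3rd-order cores via sequential truncated SVDs. This illustrates only the compression stage, not the speaker's full CPD-estimation method; the function names and the small rank-3 demo tensor are our own choices for illustration.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way tensor into a Tensor Train of 3rd-order cores
    via sequential truncated SVDs (the TT-SVD algorithm)."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    r_prev = 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                      # truncate to TT rank r
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))     # last core
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]                                     # shape (1, n0, r1)
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# Demo: a 4th-order tensor of CP rank 3 is captured exactly
# (up to floating-point error) by a TT with ranks <= 3.
rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((5, 3)) for _ in range(4))
T = np.einsum('ir,jr,kr,lr->ijkl', A, B, C, D)
cores = tt_svd(T, max_rank=3)
err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
```

Each core is a 3rd-order array, so the storage cost grows linearly with the tensor order; the CPD factors would then be estimated from these small cores instead of the full tensor.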
In general, tensor decompositions (TDs) and their generalizations, tensor networks (TNs), are promising and emerging tools in machine learning (ML) and big-data mining, since many kinds of data can be naturally represented as higher-order tensors, and efficient representation in tensor network formats allows us to reduce the number of parameters and extract the desired features. We will also present a brief overview of tensor decomposition and tensor network architectures
and the associated learning algorithms. We graphically illustrate the Tensor Train and other related tensor network models for higher-order tensors. The Tensor Train and Hierarchical Tucker (HT) models will be naturally extended to MERA (Multiscale Entanglement Renormalization Ansatz) models, PEPS/PEPO, and other 2D/3D tensor networks with improved expressive power for deep neural networks (DNNs).
Bio: Andrzej Cichocki received the M.Sc. (with honors), Ph.D., and Dr.Sc. (Habilitation) degrees, all in electrical engineering, from the Warsaw University of Technology (Poland). He spent several years at the University of Erlangen (Germany) as an Alexander von Humboldt Research Fellow and Guest Professor. He was a Senior Team Leader and Head of the Laboratory for Advanced Brain Signal Processing at the RIKEN Brain Science Institute (Japan), and he is now a Professor at the Skolkovo Institute of Science and Technology - SKOLTECH (Russia). He is the author of more than 500 technical journal papers and 5 monographs in English (two of them translated into Chinese). He served as Associate Editor of IEEE Trans. on Signal Processing, IEEE Trans. on Neural Networks and Learning Systems, IEEE Trans. on Cybernetics, and the Journal of Neuroscience Methods, and he was the founding Editor-in-Chief of the journal Computational Intelligence and Neuroscience. Currently, his research focuses on multiway blind source separation, tensor decompositions, tensor networks for big data analytics, and brain-computer interfaces. His publications currently report over 36,000 citations according to Google Scholar, with an h-index of 82. He has been a Fellow of the IEEE since 2013.
Public events of RIKEN Center for Advanced Intelligence Project (AIP)