
Mathematical Seminar

Fri, 31 May 2019 13:00 - 16:30 JST

Free admission

Description

We are planning a Mathematical Seminar on Friday 5/31, 13:00-14:50 in Meeting Rooms 3 & 4 of the Nihonbashi Office and 15:00-16:30 in the open space of the Nihonbashi Office. These talks will be in English.

1.
Speaker: Takanori Maehara
AIP Discrete Optimization Unit
Time: 13:00-14:30

Title: Submodular Maximization over Lattices

Abstract: Maximizing submodular set functions is one of the fundamental discrete optimization problems and has many applications in economics, machine learning, and data mining. In general, the problem is NP-hard; however, for many constraints, it admits polynomial-time constant-factor approximation algorithms.
In this study, we consider "lattices" (a special class of posets) as more general domains. Optimizing submodular functions on lattices has many applications. For example, the set of stable matchings forms a distributive lattice, and the submodular maximization problem generalizes the egalitarian matching problem. Also, the set of vector spaces forms a modular lattice, and the submodular maximization problem generalizes principal component analysis. Hence, establishing a theory of submodular maximization on lattices is an important problem in both theory and applications.
Recently, the speaker and collaborators showed that the fundamental submodular maximization algorithms, the greedy algorithm and the continuous greedy algorithm, can be generalized to modular lattices (AAAI'19, HJ'19) and distributive lattices (under preparation), respectively.
In this talk, I will give an overview of the set submodular maximization problem and explain how the theory generalizes to lattices. I will also present some open problems in this area. This is joint work with So Nakashima (The University of Tokyo).
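
For readers less familiar with the set-function setting the talk starts from, below is a minimal sketch of the classic greedy algorithm for monotone submodular maximization under a cardinality constraint. The coverage objective and the toy instance are illustrative choices, not taken from the talk.

```python
# Greedy algorithm for monotone submodular maximization under a cardinality
# constraint; for this class it achieves the classic (1 - 1/e) guarantee.
def coverage(sets, chosen):
    """f(S) = number of ground elements covered (monotone submodular)."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def greedy_max(sets, k):
    """Repeatedly add the set with the largest marginal gain."""
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1
        for i in range(len(sets)):
            if i in chosen:
                continue
            gain = coverage(sets, chosen + [i]) - coverage(sets, chosen)
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
    return chosen

# Toy instance: 4 candidate sets over ground elements 1..6.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max(sets, 2))  # picks [0, 2], covering all six elements
```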

2.
Speaker: Chao Li
AIP Tensor Learning Unit
Time: 15:00-16:30
Title: From Reshuffled Tensor Decomposition to Structure Learning of Higher-order Tensor Networks

Abstract: In this talk, I'd like to share our two most recent works, submitted to NeurIPS this year.

In the first half, we present a novel tensor decomposition called reshuffled-TD, which generalizes the classic latent convex tensor decomposition proposed by Tomioka in 2010. Compared to the previous model, we generalize the model by introducing the notion of "reshuffling" in place of the conventional tensor folding and unfolding operations. More importantly, the reshuffling operation makes it easier for the model to achieve exact recovery of the latent components, which is important in various real-world applications. In our work, we not only develop a simple algorithm for this model but also rigorously prove a sufficient condition for the exact-recovery property via a type of incoherence measure. In the numerical experiments, we apply the new model to a classic computer vision task called image steganography, and the results demonstrate the effectiveness of our method in practice.
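The abstract does not spell out the reshuffling operator, so the sketch below only contrasts a conventional mode-0 unfolding with a generic fixed permutation of a tensor's entries, as one plausible reading of a "reshuffling"-type rearrangement; the permutation and the target shapes are assumptions made purely for illustration.

```python
import numpy as np

X = np.arange(24).reshape(2, 3, 4)           # a small 3rd-order tensor

# Conventional mode-0 unfolding: mode 0 indexes rows, the rest is flattened.
unfold0 = X.reshape(2, -1)                   # shape (2, 12)

# A "reshuffling"-style map (illustrative assumption): a fixed permutation
# of the entries, followed by a reshape into a matrix of a chosen size.
rng = np.random.default_rng(0)
perm = rng.permutation(X.size)
reshuffled = X.ravel()[perm].reshape(4, 6)   # shape (4, 6)

print(unfold0.shape, reshuffled.shape)
```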

In the second half of the talk, we consider the question of whether it is possible to learn the optimal graphical structure of a tensor network (TN) decomposition from a given higher-order tensor. We propose a meta-algorithm that attempts to solve this problem in practice. In this work, we formulate the structure learning of TNs as a combinatorial optimization problem over a Hamming space and employ the well-developed genetic algorithm (GA) to search for solutions. On one hand, the advantage of GA is that it can automatically explore all possible "solutions" even though a theory of their existence and uniqueness is lacking. On the other hand, its prohibitive computational requirements can fortunately be handled thanks to the recent development of general-purpose computing on graphics processing units (GPUs). In the numerical experiments, we test our algorithm on both synthetic and real-world data, and the results demonstrate that our method gives a more efficient representation of a tensor than conventional tensor decompositions.
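To make the search procedure concrete, here is a minimal, self-contained sketch of a genetic algorithm over bit strings (points of a Hamming space), in the spirit of the meta-algorithm described above. The bit-string encoding of a TN graph and the stand-in fitness function are assumptions; in the actual method, fitness would come from fitting the decoded TN structure to the target tensor.

```python
import random

N_BITS = 10          # assumption: one bit per candidate edge of the TN graph
POP, GENS = 20, 30   # population size and number of generations

def fitness(bits):
    # Stand-in objective for illustration: distance to a fixed target string.
    # In the real method, decode `bits` into a TN structure, fit it to the
    # tensor, and score compression ratio / reconstruction error instead.
    target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    return sum(b == t for b, t in zip(bits, target))

def mutate(bits, p=0.1):
    # Flip each bit independently with probability p.
    return [1 - b if random.random() < p else b for b in bits]

def crossover(a, b):
    # One-point crossover between two parent bit strings.
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                  # truncation selection
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print(best, fitness(best))
```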

About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)