Mathematical Seminar

Fri, 26 Apr 2019 13:00 - 16:30 JST

RIKEN, Center for Advanced Intelligence Project

Nihonbashi 1-chome Mitsui Building, 15th floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo

Free admission

Description

1.
Speaker: Tomoyuki Shirai
Professor, Institute of Mathematics for Industry, Kyushu University
Visiting Scientist, Topological Data Analysis Team, RIKEN AIP

Time: 13:00-14:30
Title: Probabilistic aspects of persistent homology

Abstract: Persistent homology appeared around 2000 as an algebraic method for measuring topological features of objects or point cloud data. Recently, it has received much attention in the context of Topological Data Analysis (TDA). Roughly speaking, persistent homology describes the birth and death of topological features (connected components, holes, voids, and so on) along an increasing sequence of topological objects. Connected components of random objects have long been studied in probability theory, especially in percolation theory and random geometric graph theory, and persistent homology sheds new light on such topics in several ways.

In this talk, we discuss some topics on persistent homology for random objects such as point cloud data and random simplicial complexes, in particular limit theorems for functionals of persistent homology.
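
As a concrete illustration of the objects in this talk, here is a minimal sketch that computes the persistence diagram of a random point cloud via a Vietoris-Rips filtration. It assumes the GUDHI Python library is installed; the noisy-circle point cloud, edge-length bound, and dimension cap are arbitrary choices for demonstration, not taken from the talk.

import numpy as np
import gudhi

# Sample a random point cloud: 100 points near the unit circle in R^2.
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, size=100)
points = np.column_stack([np.cos(angles), np.sin(angles)])
points += rng.normal(scale=0.1, size=points.shape)

# Build the Vietoris-Rips filtration up to edge length 2.0, keeping
# simplices of dimension <= 2 so that H0 and H1 classes are computed.
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)

# Each entry is (dimension, (birth, death)): the scales at which a
# connected component (dim 0) or hole (dim 1) appears and disappears.
for dim, (birth, death) in simplex_tree.persistence():
    print(f"H{dim}: born {birth:.3f}, dies {death:.3f}")

For a cloud sampled near a circle, one H1 interval typically persists much longer than the others, reflecting the hole in the underlying space; the short-lived intervals are the "topological noise" whose statistics are the subject of the limit theorems mentioned above.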

2.
Speaker: Masaaki Imaizumi
Assistant Professor, The Institute of Statistical Mathematics
Time: 15:00-16:30
Title: Generalization Analysis for Mechanism of Deep Neural Networks via Nonparametric Statistics

Abstract: We theoretically investigate a mechanism by which deep neural networks (DNNs) perform better than other models, from the viewpoint of statistics. While DNNs have empirically shown higher performance than other standard methods, understanding their mechanism is still a challenging problem. From the viewpoint of nonparametric statistics, it is known that many standard methods attain the optimal rate of generalization error for smooth functions in the large-sample asymptotics, so it has not been straightforward to find theoretical advantages of DNNs. Our study fills this gap by extending the class of true functions generating the observations. We mainly consider the following two properties: non-smoothness and intrinsic low-dimensionality. We derive the generalization error of estimators by DNNs with a ReLU activation and show that the convergence rates of the generalization error of DNNs are almost optimal for estimating such functions. In addition, our theoretical results provide guidelines for selecting an appropriate number of layers and edges of DNNs. We provide numerical experiments to support the theoretical results.
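
As a toy sketch of this setting (a ReLU network estimating a non-smooth regression function), the following PyTorch snippet fits a small fully connected network to noisy samples of a step function. The architecture, target function, and training hyperparameters are illustrative assumptions, not the construction analyzed in the talk.

import torch
from torch import nn

torch.manual_seed(0)

# Noisy observations of a piecewise-constant (hence non-smooth) function.
n = 512
x = torch.rand(n, 1) * 2.0 - 1.0   # inputs in [-1, 1]
f_true = torch.sign(x)             # step function: -1 for x < 0, +1 for x > 0
y = f_true + 0.1 * torch.randn(n, 1)

# A small ReLU network; the number of layers and edges is the kind of
# quantity the theoretical guidelines above are concerned with.
model = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Minimize the empirical squared loss; generalization error would be
# measured on fresh samples from the same distribution.
for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final training MSE: {loss.item():.4f}")

The piecewise-linear shape of ReLU networks lets them approximate the jump sharply, which is one intuition for why they can outperform methods whose rates are tied to smoothness assumptions.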
