
[27th AIP Open Seminar] Talks by Causal Inference Team

2021-06-02 (Wed) 15:00 - 17:00 JST
Online. The meeting link will be shown only to participants.

Registration has closed.


Participation is free of charge.
- Time zone: JST
- Seats are available on a first-come, first-served basis.
- When the seats are fully booked, we may stop accepting applications.
- Simultaneous interpretation will not be available.

Details

2021-06-02(Wed)15:00 - 17:00 JST
(14:00-16:00 in Beijing)
(9:00-11:00 in Helsinki)
(7:00-9:00 in London)

Causal Inference Team (https://www.riken.jp/en/research/labs/aip/generic_tech/cause_infer/) at RIKEN AIP

Speaker 1: (10 min) Shohei SHIMIZU:
Title: Overview of the Causal Inference Team
Abstract:
The Causal Inference Team consists of methodologists and scientists and aims to develop data-driven statistical methods for learning causal structures from theoretical and philosophical perspectives. In this talk, I will briefly give an overview of the team.

Speaker 2: (30 min) Takashi Nicholas MAEDA:
Title: Causal discovery in the presence of unobserved variables
Abstract: Causal discovery methods aim to infer causal relations between observed variables. Most existing methods assume the absence of unobserved variables; in practice, however, this assumption is rarely met. In this talk, I will introduce recent studies on causal discovery in the presence of unobserved variables.
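To make the setting concrete, here is a minimal, self-contained sketch (not any of the speakers' methods) of the idea behind LiNGAM-style causal discovery between observed variables: when the noise is non-Gaussian and there is no unobserved confounder, the causal direction is the regression whose residual is independent of the regressor. The variable names, toy data, and dependence score below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Ground truth: x causes y, with non-Gaussian (uniform) noise and no hidden confounder.
x = rng.uniform(-1.0, 1.0, n)
y = 1.5 * x + rng.uniform(-1.0, 1.0, n)


def direction_score(cause, effect):
    """Regress `effect` on `cause` (OLS) and return a crude dependence score
    between the regressor and the residual (a third-order correlation).
    Near zero when the assumed direction is correct."""
    b = np.cov(cause, effect)[0, 1] / np.var(cause, ddof=1)
    resid = effect - b * cause
    z = (cause - cause.mean()) / cause.std()
    r = (resid - resid.mean()) / resid.std()
    return abs(np.mean(z**3 * r))


print("score assuming x -> y:", direction_score(x, y))  # close to zero
print("score assuming y -> x:", direction_score(y, x))  # clearly larger
# A hidden common cause of x and y would invalidate this comparison,
# which is exactly the setting addressed in this talk.
```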

Speaker 3: (30 min) Yan ZENG:
Title: Causal discovery with multi-domain LiNGAM for latent factors
Abstract:
Discovering causal structures among latent factors from observed data is a particularly challenging problem, in which many empirical researchers are interested. Despite some success, existing methods focus only on single-domain observed data, while in many scenarios data may originate from distinct domains, e.g., in neuroinformatics. In this talk, we propose Multi-Domain Linear Non-Gaussian Acyclic Models for LAtent Factors (abbreviated as the MD-LiNA model) to identify the underlying causal structure between latent factors of interest, tackling not only single-domain observed data but also multiple-domain data, and we provide its identification results. In particular, we first locate the latent factors and estimate the factor loadings matrix for each domain separately. Then, to estimate the structure among the latent factors of interest, we derive a score function based on the characterization of independence relations between external influences and of the dependence relations between multiple-domain latent factors and the latent factors of interest, enforcing acyclicity, sparsity, and elastic net constraints. The resulting optimization produces asymptotically correct results. It also performs well with small sample sizes or highly correlated variables, and it simultaneously estimates the causal directions and effects between latent factors. Experimental results on both synthetic and real-world data demonstrate the efficacy of our approach. This work will appear in IJCAI 2021.
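As a rough illustration of the two-stage pipeline sketched in the abstract, the toy code below estimates latent factor scores separately in two simulated domains and then indicates where the score-based structure estimation among latent factors would take place. It uses ordinary factor analysis and invented toy data purely for illustration; it is not the MD-LiNA estimator.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

def simulate_domain(n):
    # Toy domain: two latent factors with causal structure f1 -> f2,
    # observed through six domain-specific loadings plus noise.
    f1 = rng.uniform(-1.0, 1.0, n)
    f2 = 1.2 * f1 + rng.uniform(-1.0, 1.0, n)
    F = np.column_stack([f1, f2])
    L = rng.uniform(0.5, 1.5, size=(6, 2))       # loadings differ per domain
    return F @ L.T + 0.1 * rng.normal(size=(n, 6))

domains = [simulate_domain(2000), simulate_domain(2000)]

# Stage 1: locate latent factors and estimate factor loadings per domain.
factor_scores = [FactorAnalysis(n_components=2).fit_transform(X) for X in domains]
pooled = np.vstack(factor_scores)                # shared factors across domains

# Stage 2 (indicated only): estimate the structure among the latent factors
# by optimizing a LiNGAM-type score with acyclicity, sparsity, and
# elastic-net constraints, as described in the abstract.
print(pooled.shape)                              # (4000, 2) estimated factor scores
```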

Speaker 4: (30 min) Jun OTSUKA:
Title: Causal modeling from a philosophical perspective
Abstract:
Causal modeling is distinguished from other probabilistic/machine learning methods in that it aims to model changes in probability states induced by hypothetical interventions, rather than the states themselves. Modeling such counterfactual transitions calls for stronger assumptions than those of conventional statistical models. In this talk, I characterize this difference as one concerning underlying ontology: that is, statisticians and causal modelers see the same target phenomenon as different kinds of entities. In particular, causal modeling adopts a stronger ontological assumption (that is, it assumes more “things”) than statistical modeling, which allows for stronger (counterfactual) reasoning but also presents a harder epistemological challenge in estimating the causal structure. After the philosophical discussion, I will introduce, if time permits, our recent work on the nature of such causal kinds, in particular the formal identity criteria for determining when two causal models represent the same “thing.”


All participants are required to agree to the AIP Open Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.


About the community

RIKEN AIP Public


Public events of RIKEN Center for Advanced Intelligence Project (AIP)
