Machine Intelligence for Medical Engineering Team (Talk by Zhenqiang LI).

Fri, 18 Feb 2022 16:00 - 17:00 JST
Online Link visible to participants
Registration is closed

Free admission

Description

This is an online seminar. Registration is required.
【Machine Intelligence for Medical Engineering Team】
【Date】2022/Feb/18 (Fri) 16:00-17:00 (JST)

【Speaker】Zhenqiang LI

Title:
Interpretable Neural Networks for Human Action Understanding
Abstract:
Human action understanding is one of the primary tasks for building intelligent human-machine interaction systems on interactive robots, augmented reality devices, and similar platforms. Recent decades have witnessed the success of neural networks at human action understanding. However, the black-box nature of most neural networks leaves these methods lacking interpretability: users can hardly tell what information in a video the network captures, or why it makes a specific decision or prediction. In this presentation, I will introduce my past work toward resolving this challenge, i.e., how to build neural networks whose predictions and decisions are more comprehensible to humans. The work falls into two groups: (1) post-hoc explanation of a trained action understanding model's predictions via input attribution methods; (2) the design of action understanding networks whose predictions are inherently more understandable to humans.
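For readers unfamiliar with the first group, input attribution assigns each input feature a score measuring its contribution to a model's output. The sketch below illustrates the general idea with gradient-times-input attribution on a toy linear classifier; the model, data, and feature names are purely illustrative assumptions, not the methods presented in the talk.

```python
import numpy as np

# Toy "model": a linear action classifier over flattened video features.
# For a linear model f(x) = W @ x, the gradient of class score c with
# respect to the input is exactly W[c], so gradient-times-input
# attribution can be computed in closed form.
rng = np.random.default_rng(0)
n_features, n_classes = 8, 3
W = rng.normal(size=(n_classes, n_features))  # hypothetical weights

x = rng.normal(size=n_features)  # hypothetical input feature vector
scores = W @ x
pred = int(np.argmax(scores))

grad = W[pred]             # d score_pred / d x for the linear model
attribution = grad * x     # gradient-times-input attribution

# Features with large |attribution| contributed most to the prediction.
top = np.argsort(-np.abs(attribution))[:3]
print("predicted class:", pred)
print("top contributing features:", top.tolist())
```

For this linear model the attributions sum exactly to the predicted class score, a completeness property that many practical attribution methods (e.g. Integrated Gradients) also aim to satisfy for nonlinear networks.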

About this community

RIKEN AIP Public

Public events of RIKEN Center for Advanced Intelligence Project (AIP)
