[EVENT] Munich AI Lecture Series - René Vidal

Prof. René Vidal (Johns Hopkins University) will be giving the next Munich AI Lecture on 8 March.

The Munich AI Lecture Series, a joint initiative of MDSI, CAS, ELLIS Munich, and MCML, invites top-level AI researchers to give presentations and participate in Q&A sessions to showcase insights and ideas in the field. The next speaker in the series is René Vidal, Herschel Seder Professor of Biomedical Engineering and Director of the Mathematical Institute for Data Science (MINDS) at Johns Hopkins University. His current research focuses on the foundations of deep learning and its applications in computer vision and biomedical data science. Prof. Vidal will be speaking on “Explainable AI via Semantic Information Pursuit.” You can find the abstract and a short bio below.

The lecture will take place virtually on March 8, 2023 at 5 pm CET. Please find more details on our website.

Title: Explainable AI via Semantic Information Pursuit

Abstract: There is significant interest in developing ML algorithms whose final predictions can be explained in terms understandable to a human. Providing such an “explanation” of the reasoning process in domain-specific terms can be crucial for the adoption of ML algorithms in risk-sensitive domains such as healthcare. This has motivated a number of approaches that seek to provide explanations for existing ML algorithms in a post-hoc manner. However, many of these approaches have been widely criticized for a variety of reasons, and no clear methodology exists in the field for developing ML algorithms whose predictions are readily understandable by humans. To address this challenge, we develop a method for constructing high-performance ML algorithms that are “explainable by design.” Namely, our method makes its prediction by asking a sequence of domain- and task-specific yes/no queries about the data (akin to the game “20 questions”), each having a clear interpretation to the end user. We then minimize the expected number of queries needed for accurate prediction on any given input. This makes the prediction process human-interpretable by construction, as the questions that form the basis for the prediction are specified by the user as interpretable concepts about the data. Experiments on vision and NLP tasks demonstrate the efficacy of our approach and its superiority over post-hoc explanations. Joint work with Aditya Chattopadhyay, Stewart Slocum, Benjamin Haeffele, and Donald Geman.
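The query-selection idea in the abstract can be illustrated with a toy “20 questions” sketch: maintain a posterior over classes and greedily ask the interpretable query that most reduces expected entropy. Note this is only a minimal illustration of the general idea; the class names, queries, and answer table below are invented for the example and are not from the talk or the underlying paper.

```python
import math

# Toy knowledge base: each class answers a set of interpretable
# yes/no queries. All names here are illustrative assumptions.
ANSWERS = {
    "dog":    {"has_fur": 1, "can_fly": 0, "lives_in_water": 0},
    "eagle":  {"has_fur": 0, "can_fly": 1, "lives_in_water": 0},
    "salmon": {"has_fur": 0, "can_fly": 0, "lives_in_water": 1},
}
QUERIES = ["has_fur", "can_fly", "lives_in_water"]

def entropy(posterior):
    # Shannon entropy (bits) of a class posterior.
    return -sum(p * math.log2(p) for p in posterior.values() if p > 0)

def expected_entropy(posterior, query):
    # Expected posterior entropy after observing the answer to `query`.
    total = 0.0
    for ans in (0, 1):
        mass = sum(p for c, p in posterior.items() if ANSWERS[c][query] == ans)
        if mass == 0:
            continue
        cond = {c: p / mass for c, p in posterior.items()
                if ANSWERS[c][query] == ans}
        total += mass * entropy(cond)
    return total

def pursue(true_class):
    # Greedily ask the query with the largest expected entropy reduction,
    # stopping once the posterior is certain (the "explainable by design"
    # prediction is then the sequence of asked queries and answers).
    posterior = {c: 1 / len(ANSWERS) for c in ANSWERS}
    asked, remaining = [], list(QUERIES)
    while entropy(posterior) > 0 and remaining:
        q = min(remaining, key=lambda q: expected_entropy(posterior, q))
        remaining.remove(q)
        ans = ANSWERS[true_class][q]
        asked.append((q, ans))
        mass = sum(p for c, p in posterior.items() if ANSWERS[c][q] == ans)
        posterior = {c: (p / mass if ANSWERS[c][q] == ans else 0.0)
                     for c, p in posterior.items()}
    prediction = max(posterior, key=posterior.get)
    return prediction, asked

print(pursue("eagle"))  # prediction plus the queries that justify it
```

Because every query is an interpretable concept, the returned list of (query, answer) pairs is itself the explanation of the prediction; the real method learns to answer such queries from raw data rather than looking them up in a table.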

Bio: Dr. René Vidal is the Herschel Seder Professor of Biomedical Engineering and the Director of the Mathematical Institute for Data Science (MINDS), the NSF-Simons Collaboration on the Mathematical Foundations of Deep Learning, and the NSF TRIPODS Institute on the Foundations of Graph and Deep Learning at Johns Hopkins University. He is also an Amazon Scholar, Chief Scientist at NORCE, and Associate Editor-in-Chief of TPAMI. His current research focuses on the foundations of deep learning and its applications in computer vision and biomedical data science. He is an AIMBE Fellow, IEEE Fellow, IAPR Fellow, and Sloan Fellow, and has received numerous awards for his work, including the IEEE Edward J. McCluskey Technical Achievement Award, D’Alembert Faculty Award, J.K. Aggarwal Prize, ONR Young Investigator Award, and NSF CAREER Award, as well as best paper awards in machine learning, computer vision, controls, and medical robotics.