AISB 2021 Symposium
Overcoming Opacity in Machine Learning

April 8th, 2021


Computing systems are opaque when their behavior cannot be explained or understood. This is the case when it is difficult to know how or why inputs are transformed into corresponding outputs, and when it is not clear which environmental features and regularities are being tracked. The widespread use of machine learning has led to a proliferation of opaque computing systems, giving rise to the so-called Black Box Problem in AI. Because this problem has significant practical, theoretical, and ethical consequences, research efforts in Explainable AI aim to solve the Black Box Problem through post hoc analysis, or to evade the Black Box Problem through the use of interpretable systems. Nevertheless, questions remain about whether or not the Black Box Problem can actually be solved or evaded, and if so, what it would take to do so.

This symposium brings together researchers from Artificial Intelligence, Cognitive Science, and Philosophy to investigate the nature, causes, and consequences of opacity in different scientific, technical, and social domains, as well as to explore and evaluate recent efforts to overcome opacity in Explainable AI. Among other things, the contributed papers report on novel methods and technical solutions in Explainable AI, present philosophical work on the Black Box Problem and its solution, and/or explore the social, legal, and ethical ramifications of opacity.


Schedule & Proceedings

The symposium will take place on April 8th, 2021 over Zoom. All times are British Summer Time (BST).

13:00 Carlos Zednik (Eindhoven) & Hannes Boelsen (Magdeburg): Introduction to the Symposium: Overcoming Opacity in Machine Learning.
13:25 Kathleen Creel (Stanford): Function and User-Satisfaction in Explainable AI.
14:00 Julie Schweer, Paul Grünke & Rafaela Hillerbrand (Karlsruhe): Beyond Opacity — Epistemic Risks in Machine Learning.
14:25 Lok Chan (Duke): Explainable AI as Epistemic Representation.
15:00 David Watson (UCL): No Explanation Without Inference.
15:25 Eunjin Lee, Harrison Taylor, Liam Hiley & Richard Tomsett (Cardiff / IBM): Technical Barriers to the Adoption of Post-Hoc Explanation Methods for Black Box AI Models.
16:00 Andrés Páez (Los Andes): Robot Mindreading and the Problem of Trust.
16:25 Vincent Müller (Eindhoven): Deep Opacity Undermines Data Protection and Explainable Artificial Intelligence.

Extended abstracts of all contributions are available for download in the proceedings volume (1.5MB PDF).

Consult the overall AISB convention schedule for detailed information about keynote lectures and other symposia taking place from April 7th to 9th, 2021.


Venue & Registration

The symposium is one of several to be held at the annual convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB). Because of the Covid-19 pandemic, the convention has been postponed from its original dates in 2020 to April 7th-9th, 2021.

The convention and symposium will take place entirely online over Zoom, and attendance requires AISB membership. See the details about EU/UK membership and international membership. After becoming a member, you will receive the information needed to log in to the symposium meeting room. Unfortunately, no non-member registration is available; see here for additional details.


Symposium Organizers

Organizing Committee:

Carlos Zednik, Eindhoven University of Technology
Hannes Boelsen, Otto-von-Guericke University Magdeburg

Program Committee:

Colin Allen, University of Pittsburgh
Cameron Buckner, University of Houston
Nir Fresco, Ben-Gurion University of the Negev
Frank Jäkel, Technical University Darmstadt
Holger Lyre, Otto-von-Guericke University Magdeburg
Saskia Nagel, RWTH Aachen University
Constantin Rothkopf, Technical University Darmstadt