Explainable AI via Argumentation: Theory & Practice

Abstract

Explanations play a central role in AI, either in providing some form of transparency to black-box machine learning systems or, more generally, in supporting the results of an AI system so as to help users understand, accept and trust its operation. The course will present how Argumentation can serve as a basis for Explainable AI (XAI) and how this can be applied to Decision Making and Machine Learning in AI applications. It will present the role and basic quality requirements of explanations of AI systems and how these can be met in argumentation-based systems. It will cover the necessary theory of argumentation, a software methodology for argumentation-based explainable systems and the use of practical argumentation tools for realizing such systems. Students will gain hands-on experience with these tools and with the development of a realistic XAI decision-making system.

Motivation

Providing explanations for the results of AI systems is today a major requirement for any AI system. Systems need to perform well not only in the accuracy of their output but also in the interpretability and understandability of their results. Explanations of conclusions drawn from a learned theory, or of decisions taken by an AI system, facilitate their usability by other systems (artificial or human) and contribute significantly towards building a high level of trust in information systems. They have a role to play both at the level of developing a system and at the level of its acceptance in its application environment. Argumentation is a form of reasoning that is naturally linked to informative explanations. Arguments supporting a claim or conclusion provide the attributive part of an explanation, while further arguments that defend the supporting arguments against their counter-arguments provide the contrastive elements of the explanation. Thus, decision-making systems based on argumentation can be developed to be highly explainable, in a way that allows their results to be verified and contested by the users. In the context of machine learning, it is appropriate to consider a learned theory, or a post-hoc explanation model of a black-box sub-symbolic learner, as a flexible theory of arguments rather than a strict, definite theory of logical rules. This facilitates both the process of learning and the process of revising a theory in the light of new information that was not available at the initial time of learning.
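
To make the attributive and contrastive elements concrete, the following is a minimal, illustrative Python sketch (not part of the course material) of a Dung-style abstract argumentation framework: it computes the grounded extension and then reads an explanation for a claim off the result, with the claim's own argument as the attributive part and the arguments that defeat its counter-arguments as the contrastive part. The arguments, attacks and helper functions are invented for illustration.

    # Illustrative sketch: a tiny Dung-style argumentation framework whose
    # grounded extension yields an explanation for an accepted claim.
    # The arguments and attacks below are invented for illustration.
    arguments = {"buy", "too_expensive", "discount_applies"}
    attacks = {
        ("too_expensive", "buy"),               # counter-argument to buying
        ("discount_applies", "too_expensive"),  # defence of buying
    }

    def attackers(a):
        return {x for (x, y) in attacks if y == a}

    def acceptable(a, s):
        # a is acceptable w.r.t. s if every attacker of a is itself attacked by s
        return all(attackers(b) & s for b in attackers(a))

    def grounded_extension():
        # least fixed point of the characteristic function, starting from the empty set
        s = set()
        while True:
            s_next = {a for a in arguments if acceptable(a, s)}
            if s_next == s:
                return s
            s = s_next

    def explain(claim):
        ext = grounded_extension()
        if claim not in ext:
            return f"'{claim}' is not acceptable under grounded semantics"
        defenders = {d for b in attackers(claim) for d in attackers(b) if d in ext}
        return (f"accept '{claim}' (attributive: the argument for '{claim}' holds; "
                f"contrastive: its counter-arguments are defeated by {defenders})")

    print(explain("buy"))

Running the sketch accepts 'buy' and reports 'discount_applies' as the defender that defeats the counter-argument 'too_expensive', which is exactly the supporting-plus-defending structure of explanations described above.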

Learning Outcomes

In this course, you will learn to

  1. Enumerate and explain different elements of explanation about system decisions, i.e. attributive, contrastive and actionable elements
  2. Use Structured Argumentation for Knowledge Representation for Decision Making problems
  3. Use a methodological approach to model real-life Decision-Making problems with the Gorgias framework (a small illustrative sketch of this style of modelling appears after this list)
  4. Model a real-life decision problem using an online no-code/low-code platform, the rAIson platform of the Argument Theory start-up
  5. Use APIs to embed automated explainable decision making in your own applications (running on computers, smartphones or the web)
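
As a flavour of what outcomes 2 and 3 involve, the sketch below renders in plain Python the style of knowledge representation used in preference-based structured argumentation: object-level rules propose conflicting options, priority rules say which rule wins in a given context, and higher-level priorities resolve clashes between the priority rules themselves. The scenario, rule labels and helper code are invented for illustration, and the sketch deliberately does not use the actual (Prolog-based) Gorgias syntax.

    # Illustrative sketch (not Gorgias syntax): rules with context-dependent
    # priorities deciding between two mutually exclusive options, with the
    # winning rule reported as the explanation.

    # Object-level rules: (label, option, conditions under which it applies)
    rules = [
        ("r1", "sell", {"good_offer"}),
        ("r2", "keep", {"sentimental_value"}),
    ]
    conflicts = {("sell", "keep"), ("keep", "sell")}   # the options attack each other

    # Priority rules: (label, preferred rule, weaker rule, extra context conditions)
    priorities = [
        ("p1", "r2", "r1", {"family_heirloom"}),       # normally keep heirlooms
        ("p2", "r1", "r2", {"urgent_need_of_cash"}),   # unless cash is urgently needed
        ("p3", "p2", "p1", set()),                     # the exception overrides the default
    ]

    def applicable(conditions, facts):
        return conditions <= facts

    def stronger(ra, rb, facts):
        # Is rule ra preferred over rule rb in the current context?
        pro = [p for p in priorities if p[1] == ra and p[2] == rb and applicable(p[3], facts)]
        con = [p for p in priorities if p[1] == rb and p[2] == ra and applicable(p[3], facts)]
        if pro and not con:
            return True
        # A clash between priority rules is resolved one level up, by the
        # priorities that hold among the priority rules themselves.
        return any(stronger(p[0], q[0], facts) for p in pro for q in con)

    def decide(facts):
        live = [r for r in rules if applicable(r[2], facts)]
        for label, option, conditions in live:
            rivals = [s for s in live if (option, s[1]) in conflicts]
            if all(stronger(label, s[0], facts) for s in rivals):
                return option, (f"{option}, because {sorted(conditions)} hold and rule {label} "
                                f"is preferred over {[s[0] for s in rivals]} in this context")
        return None, "no admissible decision"

    print(decide({"good_offer", "sentimental_value",
                  "family_heirloom", "urgent_need_of_cash"}))

With the facts above the sketch returns 'sell', explaining that the urgent need for cash makes the selling rule stronger than the default of keeping a family heirloom; dropping 'urgent_need_of_cash' from the facts flips the decision to 'keep'.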

Requirements

This course requires a general background in Computing and AI, e.g.:

  1. Basic understanding of first-order logic (predicates, unification)
  2. Basic understanding of API use on the internet (client-server architecture); a minimal client-side sketch follows this list
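
The following minimal client-side sketch shows the kind of client-server exchange assumed here: a program posts the facts of a case to a remote decision service over HTTP and receives back a decision together with its explanation. The endpoint URL, payload fields and response format are hypothetical placeholders, not the actual API of any particular platform.

    # Minimal client-side sketch of calling a decision service over HTTP.
    # The URL, payload fields and response format are hypothetical placeholders.
    import requests

    SERVICE_URL = "https://example.com/api/decide"   # hypothetical endpoint

    def ask_for_decision(facts):
        # Send the facts of a case; expect a JSON decision plus explanation back.
        response = requests.post(SERVICE_URL, json={"facts": facts}, timeout=10)
        response.raise_for_status()
        result = response.json()
        return result["decision"], result["explanation"]

    if __name__ == "__main__":
        decision, explanation = ask_for_decision(["good_offer", "urgent_need_of_cash"])
        print(f"Decision: {decision}")
        print(f"Why: {explanation}")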

Students should bring their laptops or tablets for the hands-on sessions and for work on small projects.

Short CVs of Tutors

Antonis Kakas

Antonis C. Kakas is a Professor at the Department of Computer Science of the University of Cyprus. He obtained his Ph.D. in Theoretical Physics from Imperial College London in 1984. His interest in Computing and AI started in 1989 in the group of Professor Kowalski. Since then, his research has concentrated on computational logic in AI, with particular interest in argumentation, abduction and induction and their application to machine learning and cognitive systems. Currently, he is working on the development of a new framework of Cognitive Programming as an environment for developing Human-centric AI systems that can be naturally used by developers and human users at large. He was the National Contact Point for Cyprus in the flagship EU project on AI, AI4EU. He has recently co-founded a start-up company in Paris, called Argument Theory, which offers solutions to real-life decision problems based on AI Argumentation Technology.

Nikos Spanoudakis

Nikolaos I. Spanoudakis is an Assistant Professor at the Hellenic Mediterranean University. He has a PhD in Informatics from Paris Descartes University. His main research interests are in Information Systems, Engineering Multi-agent Systems, Intelligent Systems, and Applications of Argumentation. He is a senior member of both the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM), and a member of the European Association for Multi-Agent Systems (EURAMAS, currently serving on its board), the Hellenic Artificial Intelligence Society (EETN) and the Technical Chamber of Greece (TEE-TCG). Nikolaos is a co-founder of the Argument Theory start-up company in Paris, France. Argument Theory uniquely offers a no-code/low-code platform for knowledge elicitation and argumentation-based automated decision making as a service.

Course material and reading

Below you will find an initial list of useful papers. Course lecture notes and project ideas will be posted here in due course.

  1. Vassiliades A., Bassiliades N., Patkos T. (2021): Argumentation and explainable artificial intelligence: a survey. The Knowledge Engineering Review 36: e5.

  2. Čyras K., Rago A., Albini E., Baroni P., Toni F. (2021): Argumentative XAI: A Survey. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), Survey Track, 4392-4399.

  3. Spanoudakis N.I., Gligoris G., Koumi A., Kakas A.C. (2023): Explainable argumentation as a service. J. Web Semant. 76: 100772

  4. Kakas A.C., Moraitis P., Spanoudakis N.I. (2019): GORGIAS: Applying argumentation. Argument Comput. 10(1): 55-81.

  5. Prentzas N., Pattichis C.N., Kakas A.C. (2023): Explainable Machine Learning via Argumentation. In Proceedings of Explainable Artificial Intelligence - First World Conference (XAI), Springer series: Communications in Computer and Information Science, Volume 1903, 371-398.

  6. Dietz E., Kakas A., Michael L. (2022): Argumentation: A calculus for human-centric AI. Frontiers in Artificial Intelligence, 5, 955579.