Each year, the NMR workshop brings together researchers from the field of nonmonotonic reasoning to present and discuss their work. This series of online talks on nonmonotonic reasoning aims to continue and extend that exchange during the eleven months between workshops.
The series consists of regular online talks followed by a Q&A/discussion phase. The aim of the series is to offer a platform for the presentation of new and developing ideas in NMR. We encourage presentations of preliminary and ongoing work as well as of mature results.
The scope of the series is nonmonotonic reasoning and all its subtopics, e.g., belief revision, uncertain reasoning, argumentation, and answer set programming.
The talks are given online via Zoom and are announced on the Planet KR mailing list. While there is no strict time limit, we aim for up to one hour for each talk (including questions and discussion).
In this talk, I will discuss the role of conditionals in Explainable AI, focusing on two classes of conditionals that align well with my research interests: probabilistic conditionals and counterfactual conditionals. The literature is currently driven mostly by ideas from machine learning and may benefit from insights about nonmonotonic and quantitative reasoning, both to improve the analytical guarantees of explanations and to design better-informed algorithms.
The role of probabilistic conditionals in Explainable AI emerged from rule-based explanations. While rule-based explanations have a long tradition in machine learning, an interesting direction has recently evolved that applies propositional-logical reasoning technology to infer provably correct rules from machine learning models. While deterministic rules are interesting, they are unlikely to explain much of what a machine learning model has learned, because rules usually come with exceptions. This strand of work has already been extended to probabilistic rules, but it mostly makes use of classical logical workarounds or purely numerical ideas. I will give an introduction to the area and discuss how some ideas from the literature on reasoning about (probabilistic) conditionals can enrich the current landscape.
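To make the connection concrete, here is a minimal sketch (not from the talk) of reading a rule-based explanation as a probabilistic conditional (decision | rule)[p], where p is estimated as the fraction of rule-satisfying inputs on which the model makes the decision. The model, the rule, and the sampling scheme below are hypothetical placeholders.

```python
import random

def rule(x):
    """Hypothetical candidate rule: fires when both features are positive."""
    return x[0] > 0 and x[1] > 0

def model(x):
    """Hypothetical black-box classifier to be explained."""
    return 1 if x[0] + 0.5 * x[1] > 0.2 else 0

def conditional_probability(n_samples=100_000):
    """Estimate p in the probabilistic conditional (decision=1 | rule)[p]."""
    fires, agrees = 0, 0
    for _ in range(n_samples):
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        if rule(x):
            fires += 1
            agrees += model(x) == 1
    return agrees / fires if fires else float("nan")

print(f"(decision=1 | rule)[{conditional_probability():.3f}]")
```

A deterministic rule would demand p = 1; the probabilistic reading keeps rules that hold with exceptions, which is exactly where reasoning about probabilistic conditionals becomes relevant.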
Another interesting direction in Explainable AI is counterfactual explanations. Here, explanations take the form of counterfactual conditionals such as “If it had not been for X, then the decision would not have been Y”. In a loan application scenario, for example, such a counterfactual explanation may explain the reasons for a denied application to the applicant: “If your debt-to-income ratio had been lower, then your application would not have been rejected.” An obvious problem here is that there is typically no unique explanation, which causes multiple problems. I will again introduce the area and discuss some ideas on how methods for reasoning about conditionals may help to balance completeness against the number of counterfactual explanations, and to improve the robustness of counterfactual explainers.
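As a minimal sketch (assuming a linear scoring model and L2 distance, both chosen here for illustration), one common reading of a counterfactual explanation is the closest input x' to the applicant's data x on which the decision flips. For a linear score w·x + b, that point lies just across the hyperplane w·x + b = 0. All feature names and weights below are hypothetical.

```python
import numpy as np

w = np.array([3.0, -2.0])   # weights: (income, debt_to_income_ratio)
b = -1.0
x = np.array([0.4, 0.9])    # applicant's features; score < 0 means rejected

def counterfactual(x, w, b, margin=1e-3):
    """Project x onto the decision boundary and step slightly across it."""
    score = w @ x + b
    return x - (score + np.sign(score) * margin) * w / (w @ w)

x_cf = counterfactual(x, w, b)
print("score before:", w @ x + b)    # negative: rejected
print("score after: ", w @ x_cf + b) # slightly positive: accepted
print("suggested change per feature:", x_cf - x)
```

In this toy instance the suggested change lowers the debt-to-income ratio, mirroring the conditional in the example above. Note that any point across the boundary would do, which is precisely the non-uniqueness problem the talk addresses.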
This talk presents a broad view of inductive reasoning by embedding it in theories of epistemic states, conditionals, and belief revision. More precisely, we consider nonmonotonic inductive reasoning as a specific case of belief revision on epistemic states that include conditionals as a basic means for representing beliefs. We present a general framework for inductive reasoning from conditional belief bases that induces a variety of different revision scenarios quite naturally and, in particular, allows for taking background beliefs into account. As a proof of concept, we instantiate this framework with ranking-theoretic reasoning based on so-called c-revisions. We illustrate the constructive usefulness of our approach, as well as its integrating power.
This talk is based on the paper "Inductive Reasoning, Conditionals, and Belief Dynamics" by Gabriele Kern-Isberner and Wolfgang Spohn, Journal of Applied Logics, Special Issue on Foundations, Applications, and Theory of Inductive Logic (guest editors: Martin Adamcík and Matthias Thimm), Volume 11, Number 1, January 2024, pp. 89–127. https://www.collegepublications.co.uk/ifcolog/?00063
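To give a flavour of the ranking-theoretic instantiation, here is a toy sketch of reasoning from a conditional belief base with an ordinal conditional function. The impact values eta_i are hand-picked for this example rather than derived by the paper's general c-revision machinery: a world is ranked by summing an impact for every conditional (B_i|A_i) it falsifies, and a conditional is accepted if its verifying worlds rank strictly below its falsifying worlds.

```python
from itertools import product

ATOMS = ("p", "b", "f")  # penguin, bird, flies

# Belief base: birds fly, penguins are birds, penguins do not fly.
# Each conditional (B|A) is a pair of functions (antecedent, consequent).
BASE = [
    (lambda w: w["b"], lambda w: w["f"]),        # (f | b)
    (lambda w: w["p"], lambda w: w["b"]),        # (b | p)
    (lambda w: w["p"], lambda w: not w["f"]),    # (~f | p)
]
ETAS = [1, 2, 2]  # impacts chosen by hand so all three conditionals hold

WORLDS = [dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=3)]

def kappa(w):
    """Rank of a world: sum of impacts of the conditionals it falsifies."""
    return sum(eta for (a, c), eta in zip(BASE, ETAS) if a(w) and not c(w))

def rank(formula):
    """Rank of a formula: minimum rank over its models."""
    return min((kappa(w) for w in WORLDS if formula(w)), default=float("inf"))

def accepts(antecedent, consequent):
    """(consequent | antecedent) holds iff verifying worlds rank lower."""
    return (rank(lambda w: antecedent(w) and consequent(w))
            < rank(lambda w: antecedent(w) and not consequent(w)))

# Nonmonotonic behaviour: birds fly, yet penguins, which are birds, do not.
print(accepts(lambda w: w["b"], lambda w: w["f"]))                 # True
print(accepts(lambda w: w["p"], lambda w: not w["f"]))             # True
print(accepts(lambda w: w["p"] and w["b"], lambda w: not w["f"]))  # True
```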
This seminar series is currently organized by Jonas Haldimann and Giovanni Casini. For questions, or if you are interested in giving a talk, please contact us at jonas@haldimann.de or giovanni.casini@isti.cnr.it.