The Nordic Seminar on eXplainable AI (XAI)
The seminar aims to share updates on ongoing XAI research in Norway and to build a community for XAI in Norway. We have put together an exciting program covering the social, legal, and societal aspects of XAI.

Program

The seminar is organised over two days (March 29 - March 30) as a joint effort between the Department of Information Science and Media Studies, UiB, and the Department of Computer Science, NTNU. This is an in-person event, and we do not plan to record or stream the presentations.

Day 1 (Location: Room 454 ITV)

On the first day, we will have presentations on ongoing XAI research projects in Norway. In three sessions, speakers will present their ongoing or upcoming work.

Session 1
08:30 - 09:00 Registration and Coffee
09:00 - 09:30 Welcome and Introduction
Kerstin Bach and Bjørnar Tessem
09:30 - 10:00 Bjørn Aslak Juliussen (University of Tromsø, Norway): Legal Requirements for Explainable AI under the GDPR and the Proposed AIA
As artificial intelligence (AI) becomes increasingly sophisticated, ensuring that it can be explained and understood is crucial for building trust and preventing unintended consequences. In the European Union, the General Data Protection Regulation (GDPR) and the proposed Artificial Intelligence Act (AIA) contain legal requirements relevant to explaining automated decision-making and high-risk AI. This presentation will explore the legal obligations for explainability of automated decision-making under the GDPR and for transparency under the AIA, and discuss how these legal requirements connect to current XAI methods.
10:00 - 10:30 Guohui Xiao (University of Bergen, Norway): Provenance Explanation in Knowledge Graphs
The provenance of a query refers to the origin of the data, information, or knowledge used to derive a query result within a system. Provenance has been used to represent an explanation of the result of a query over a data source or a collection of data sources. In this talk, we discuss the use of provenance to explain Knowledge Graphs in the context of data integration and analysis.
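To make the idea concrete, here is a toy, hypothetical sketch of why-provenance for a simple join query, where each answer is annotated with the set of source-tuple identifiers that derived it. The names and data are purely illustrative and not from the talk; real knowledge-graph systems track provenance through complete query plans and mappings.

```python
# Toy why-provenance for a join query: each answer carries the set of
# source-tuple ids that derived it. Purely illustrative.
def join_with_provenance(R, S):
    """R: list of ((a, b), tuple_id); S: list of ((b, c), tuple_id)."""
    out = []
    for (a, b), rid in R:
        for (b2, c), sid in S:
            if b == b2:
                # The explanation of answer (a, b, c) is the id set {rid, sid}.
                out.append(((a, b, c), {rid, sid}))
    return out

# Example: which source facts explain each answer of R(x, y) JOIN S(y, z)?
R = [(("alice", "oslo"), "r1"), (("bob", "bergen"), "r2")]
S = [(("oslo", "norway"), "s1"), (("bergen", "norway"), "s2")]
for answer, why in join_with_provenance(R, S):
    print(answer, "because of", sorted(why))
```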
10:30 - 11:00 Coffee Break
11:00 - 11:30 Pekka Parviainen (University of Bergen): Machine Teaching for Explainable AI
Machine teaching is an emerging field that has recently attracted attention in AI. Briefly, machine teaching can be viewed as the inverse problem of machine learning, where the goal is for the teacher to find the smallest (optimal) training set that, via a learning algorithm, produces a target concept. We study how machine teaching can be utilised to produce example-based explanations, with a focus on the simplicity of examples.
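As a rough illustration of the idea (and emphatically not the speaker's algorithm), the sketch below greedily selects a small teaching set for a 1-nearest-neighbour learner. `X` and `y` are assumed to be numpy arrays encoding a labelled dataset that represents the target concept.

```python
# Illustrative greedy teaching-set selection (a heuristic): pick examples
# one by one so that a 1-NN learner trained on them best reproduces the
# target labels y on the full dataset X.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def greedy_teaching_set(X, y, budget=10):
    chosen = []
    for _ in range(budget):
        best_j, best_acc = None, -1.0
        for j in range(len(X)):
            if j in chosen:
                continue
            idx = chosen + [j]
            learner = KNeighborsClassifier(n_neighbors=1).fit(X[idx], y[idx])
            acc = (learner.predict(X) == y).mean()
            if acc > best_acc:
                best_j, best_acc = j, acc
        if best_j is None:
            break
        chosen.append(best_j)
        if best_acc == 1.0:  # teaching set already reproduces the concept
            break
    return chosen
```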
11:30 - 12:00 Martin Jullum (Norwegian Computing Center): MCCE: A ridiculously simple approach to counterfactual explanations
Counterfactual explanations are explanations of individual model predictions that provide one or more examples of (minimal) changes to the model features that lead to a different decision than the original feature set. A fair number of methods exist for generating counterfactual explanations. Some formulate the problem as an optimization problem and apply different types of optimization routines (stochastic gradient descent, integer programming, genetic algorithms). Others are modeling-based: they model the underlying data distribution (often with variational autoencoders) and then base the feature changes on that model. We propose a ridiculously simple approach for generating valid and realistic counterfactual explanations, called MCCE (Monte Carlo sampling of Counterfactual Explanations). This is a modeling-based approach which first models the joint distribution of the features and the decision by iteratively fitting conditional distributions using decision trees. We then simply sample a large number of observations from this model and remove the samples that do not obey certain criteria. The novel decision modeling improves the efficiency of MCCE significantly. In the talk, I will introduce the concept of counterfactual explanations and our simple MCCE approach to generating them, and showcase its superior performance compared to a range of state-of-the-art alternatives in terms of both computation speed and the quality of the generated counterfactual explanations.
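The following is a minimal sketch of the recipe as described in the abstract: fit the conditional distribution of each feature given the preceding ones with a decision tree, sample many observations from the chained model, keep only samples the classifier assigns the desired decision, and return the one closest to the original instance. The helper names, the all-numeric-feature assumption, and the L1 distance (a stand-in for the distance criteria used in the actual method) are ours, not from the paper.

```python
# Minimal sketch of an MCCE-style counterfactual generator. Assumes
# all-numeric features, a fitted classifier `clf`, and training data `X`;
# the real method uses more refined modeling and post-processing.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_conditional_trees(X, max_depth=4):
    """Model x_j | x_1..x_{j-1} with one decision tree per feature j >= 2."""
    trees, leaf_pools = [], []
    for j in range(1, X.shape[1]):
        t = DecisionTreeRegressor(max_depth=max_depth).fit(X[:, :j], X[:, j])
        leaves = t.apply(X[:, :j])
        # Pool of observed x_j values per leaf, to sample from later.
        pools = {leaf: X[leaves == leaf, j] for leaf in np.unique(leaves)}
        trees.append(t)
        leaf_pools.append(pools)
    return trees, leaf_pools

def mcce_counterfactual(x, clf, X, trees, leaf_pools, target=1, k=1000, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.empty((k, X.shape[1]))
    samples[:, 0] = rng.choice(X[:, 0], size=k)   # marginal of first feature
    for j in range(1, X.shape[1]):                # then chain the conditionals
        leaves = trees[j - 1].apply(samples[:, :j])
        for i in range(k):
            samples[i, j] = rng.choice(leaf_pools[j - 1][leaves[i]])
    # Keep only samples that obey the validity criterion (desired decision)...
    valid = samples[clf.predict(samples) == target]
    if len(valid) == 0:
        return None
    # ...and return the one closest to x (L1 distance as a simple stand-in).
    return valid[np.abs(valid - x).sum(axis=1).argmin()]
```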
12:00 - 13:00 Lunch Break
Session 2
13:00 - 13:20 Betül Bayrak (NTNU): PertCF: A Perturbation-Based Counterfactual Generation Approach
Post-hoc explanation systems offer valuable insights to help understand the predictions made by black-box models. Counterfactual explanations, which are instance-based post-hoc explanation methods, aim to demonstrate how a model’s prediction can be changed with minimal effort by presenting a hypothetical example. Feature attribution techniques such as SHAP (SHapley Additive exPlanations) are also effective for providing insights into black-box models. In this paper, we propose PertCF, a perturbation-based counterfactual generation method that benefits from the feature attributions generated by SHAP. Our method combines the strengths of perturbation-based counterfactual generation and feature attributions to generate high-quality, stable, and interpretable counterfactuals.
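In the same spirit (and emphatically not the PertCF algorithm itself), the sketch below perturbs the most influential features of an instance towards the target class until the prediction flips. A simple occlusion score stands in for the SHAP attributions used in the paper; `clf` (a fitted classifier with predict/predict_proba), `x` (a 1-D query instance), and `X` (training data) are assumed inputs.

```python
# Sketch of attribution-guided counterfactual search. NOTE: the
# occlusion score below is a stand-in for SHAP values.
import numpy as np

def occlusion_scores(clf, x, X):
    """How much does replacing feature j by its training mean change the
    predicted probability of the currently predicted class?"""
    base = clf.predict_proba(x[None])[0].max()
    scores = np.empty(len(x))
    for j in range(len(x)):
        x_occ = x.copy()
        x_occ[j] = X[:, j].mean()
        scores[j] = base - clf.predict_proba(x_occ[None])[0].max()
    return scores

def perturb_to_counterfactual(clf, x, X, target, steps=20):
    """Move the most influential features towards the target-class centroid
    until the prediction flips (assumes X contains target-class examples)."""
    centroid = X[clf.predict(X) == target].mean(axis=0)
    order = np.argsort(-occlusion_scores(clf, x, X))  # most influential first
    cf = x.astype(float).copy()
    for j in order:
        for t in np.linspace(0.0, 1.0, steps)[1:]:
            cf[j] = (1 - t) * x[j] + t * centroid[j]
            if clf.predict(cf[None])[0] == target:
                return cf
    return None
```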
13:20 - 13:40 Helge Langseth (NTNU): Why causal reasoning is an important part of XAI
In this short talk, we motivate why causal reasoning is an important part of XAI and give a very preliminary overview of the state of the art in causal reasoning.
13:40 - 14:00 Pooja Mohanty (NTNU): Explainable AI from users' perspective: A field study
The opacity and complexity of AI models have created trust issues for industry adoption. Through an ethnographic field study, we explore the role of users' interpretation and understanding of AI systems, and how this translates into their work practices.
14:00 - 14:30 Reza Arghandeh (Western Norway University of Applied Sciences): The Real Explainable AI by Putting the “Cause” into “Because”
Explainable artificial intelligence (AI) attracts much interest in many domains, from finance to energy to medicine. Technically, the problem of explainability is as old as AI itself. AI's weakness was in dealing with the uncertainties of the real world. By introducing probabilistic learning, AI applications became increasingly successful, but remained opaque. Explainable AI brings transparency and traceability to statistical black-box machine learning methods, particularly deep learning (DL). However, there is a need to go beyond explainable AI and reach causability. Models built with causal inference, or causal AI, are also highly transparent, since they reveal the systematic and causal relationships between input features and target variables discovered in the data.
14:30 - 14:50
14:50 - 15:15 Coffee Break
Session 3
15:15 - 15:45 Vinay Setty (University of Stavanger): XAI for Automated Fact Checking
Automated fact-checking using AI models has shown promising results in combating misinformation, thanks to the availability of several large-scale datasets. However, most models are opaque and do not provide the reasoning behind their predictions. In this talk, I will enumerate the existing XAI approaches for fact-checking and discuss the latest trends in this topic. The talk will also delve into what makes a good explanation in the context of fact-checking and identify potential avenues for future research to address the current limitations.
15:45 - 16:05 Aida Ashrafi (University of Bergen): Explainable AI for sustainable fisheries
In our work, we exploit deep learning models to support the sustainable use of fish resources. The work aims at supporting the surveillance of fishing vessels by exploiting data about the vessels' movements combined with their reports on fishing activities. The data are from Norwegian waters and are provided by the Norwegian Directorate of Fisheries (NDF), which is a potential future user of the models. Since this is applied machine learning (ML) research, we need to explain both the data and the output of the deep learning models to domain experts as well as the ML community. We visualize different features, try to find patterns in fisheries data, and interpret the output of the models with the help of experts. The most important features in our datasets are the speed and the position of the fishing vessels. Showing their tracks on a map and marking the fishing activities captured by the deep learning models is one way of verifying the results of these black-box models. The data include irregular fishing activity reports; fishermen sometimes report activities with substantial delays. With the help of experts, we develop heuristics to filter these reports during the training stage and later test the model on them in the evaluation stage. We then compare the visualizations for regular and irregular activities to check the performance of the models and the possibility of correcting the reports using the models.
16:05 - 16:25 Kimji Pellano (NTNU): Towards Interpretability of AI-based Early Prediction System for Cerebral Palsy (CP) in Infants
Early prediction of Cerebral Palsy (CP) in infants is a critical step towards targeted early intervention and surveillance solutions that give proper post-diagnostic support. Our work in DeepInMotion has shown that training a deep learning model to detect CP in infants from video recordings is a promising automated solution with the potential to be embedded into clinical practice. For this to happen, an important problem that needs to be addressed is the explainability of the AI's predictions, so that clinicians can trust an AI-based diagnostic tool. Moreover, medical devices need to follow strict regulatory requirements. In our work, we extract the locations of key body points of the infant in 2D space as a time-series graph and use it as input to our CP-prediction model. However, there is a lack of available explainable AI (XAI) methods for graph-based data that capture the spatial and temporal information of human subjects. Fortunately, we can utilize existing XAI methods such as class activation mapping (CAM) to highlight the important body points that contributed to the model's final prediction. The hypothesis is that this can also be used to find movement patterns that are indicative of CP in infants, as well as to discover previously unknown movement biomarkers that CP researchers can study.
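For intuition, here is a minimal numpy sketch of plain CAM applied to skeleton-based features: with a network that ends in global average pooling followed by a linear classifier, the saliency at each (time, joint) location is the class-weighted sum of the last convolutional feature maps. The tensor names and shapes are hypothetical and not tied to the DeepInMotion model or any specific library.

```python
# Minimal numpy sketch of class activation mapping (CAM) for skeleton
# data. Assumes `feature_maps` and `class_weights` were extracted from a
# model ending in global average pooling plus a linear classifier.
import numpy as np

def cam(feature_maps, class_weights, target_class):
    """feature_maps: (channels, time, joints) from the last conv layer.
    class_weights: (n_classes, channels) from the final linear layer.
    Returns a (time, joints) saliency map for `target_class`."""
    w = class_weights[target_class]               # (channels,)
    heat = np.tensordot(w, feature_maps, axes=1)  # weighted sum over channels
    heat = np.maximum(heat, 0.0)                  # keep positive evidence only
    return heat / (heat.max() + 1e-9)             # normalize to [0, 1]
```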
Dinner
18:30 Grafen Trondheim
Kongens gate 8, 7011 Trondheim

Day 2 (Location: Room 454 ITV)

Round Table Discussions (Session 4)
09:00 - 11:00 Kerstin Bach and Bjørnar Tessem
The goal of this session is to explore collaboration opportunities, networking, and future XAI events.
Taskcard
11:00 - 12:00 Lunch (end of seminar)

Venue

The seminar will take place on the Gløshaugen campus at NTNU. Please note that there is some construction work around the IT building. We recommend using the main entrance and then walking up the stairs (or taking the elevator) to the fourth floor.


Map by MazeMap

Accommodation

For those joining us from outside of Trondheim, here are some hotel suggestions:

Comfort Hotel Park - Trondheim - Nordic Choice Hotels
The Comfort Hotel Park is located between the city center and NTNU, with a bus stop for the airport bus and city bus (Nidarosdomen) very close by. Both NTNU Gløshaugen (20 min) and the city center (5-10 min) are within walking distance.

Quality Hotel Prinsen - Nordic Choice Hotels
The Quality Hotel Prinsen is located in the city center, with a bus stop for the airport bus and city bus (Prinsens Gate) very close by. NTNU Gløshaugen (30 min) and the city center (5 min) are within walking distance.

Scandic Nidelven
The Scandic Nidelven hotel is a favourite among visitors and is well known for its excellent breakfast buffet. The city center is a 10-15 min walk away, and NTNU Gløshaugen about 45 min on foot. The closest bus stop is Trondheim S.

If you arrive at Trondheim Airport Værnes, there are three options for getting to the city: by train (43 kr using the AtB app; the train runs once an hour), by airport bus (tickets are between 199 kr and 229 kr one way; buses usually run every 20-30 minutes), or by taxi (460 kr one way for a shared, pre-ordered taxi). Taxis are also available outside the airport.

Registration

[March 27, 2023] The registration is now closed. We are looking forward to meeting the attendees during the seminar.

Organizers