What is explainable AI (XAI)?

1. What are the basics of artificial intelligence (AI) and explainable AI (XAI)?

In light of the idea that artificial intelligence (AI) systems may function as black boxes and are therefore not transparent, explainable AI (XAI) has emerged as a subfield focused on developing systems that humans can understand and explain.

To understand XAI’s basics and goal, one must grasp what AI is. While artificial intelligence as a field of science has a long history and embraces an expanding set of technological applications, a globally accepted definition for AI is absent. Europe is at the forefront of developing legal frameworks and ethical guidelines for deploying and developing AI. A ground-breaking regulatory proposal from the European Commission (EC) in 2021 set out the first legally binding definition of AI.

Per this proposal, AI can be defined as a system that generates outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with. Such AI systems are developed in line with one or more techniques and approaches, as discussed below.

First, they may work with machine learning (ML) models, spanning the categories of supervised, unsupervised, reinforcement and deep learning. It is important to note that ML is a core building block of AI, but not all AI systems use advanced ML techniques such as deep learning. ML systems can learn and adapt without following explicit instructions. Indeed, not all ML models work toward a preset external goal; some systems are engineered to “reason” toward abstract objectives and thus function without constant human input.
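
For readers who want a concrete picture, the following minimal Python sketch (using the open-source scikit-learn library on a synthetic toy dataset, both chosen here purely for illustration) contrasts a supervised model trained on labeled examples with an unsupervised one that groups the same data without explicit instructions.

```python
# Minimal sketch (illustrative only): supervised vs. unsupervised learning
# with scikit-learn on a synthetic toy dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy dataset: 200 samples with four numeric features and two classes.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Supervised learning: the model is trained on labeled examples (X, y).
clf = LogisticRegression().fit(X, y)
print("Supervised prediction for the first sample:", clf.predict(X[:1]))

# Unsupervised learning: the model groups the same data without any labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Unsupervised cluster assignments (first 10):", clusters[:10])
```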

Moreover, AI systems may work with or combine logic and knowledge-based approaches such as knowledge representation or inductive (logic) programming. The former refers to encoding information in a way that an AI system can use (for instance, by defining rules and relationships between concepts). The latter refers to ML models that learn rules or hypotheses from a set of examples. 
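
The difference can be made tangible with a deliberately simplified sketch: the hypothetical facts and rules below encode a few relationships between concepts, and a small forward-chaining loop derives new facts from them, loosely in the spirit of a knowledge-based approach.

```python
# Minimal sketch of a knowledge-based approach: facts plus hand-written rules.
# The facts and rules below are hypothetical and purely illustrative.
facts = {"penguin(tweety)"}

# Each rule maps a premise to a conclusion (IF premise THEN conclusion).
rules = [
    ("penguin(tweety)", "bird(tweety)"),
    ("bird(tweety)", "has_feathers(tweety)"),
]

# Forward chaining: repeatedly apply rules until no new facts are derived.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'penguin(tweety)', 'bird(tweety)', 'has_feathers(tweety)'}
```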

An AI system may deploy other methods, such as statistical approaches (techniques deployed to learn patterns or relationships in data) and search and optimization methods, which seek the best solution to a particular problem by searching a large space of possibilities. 
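
As a hedged illustration of a search and optimization method, the toy sketch below randomly samples a large space of candidate solutions and keeps the best one found; the objective function and the search bounds are invented for the example.

```python
# Minimal sketch of a search/optimization method: random search over a toy problem.
import random

def objective(x: float) -> float:
    # Toy objective: minimized at x = 3.
    return (x - 3) ** 2

best_x, best_score = None, float("inf")
random.seed(0)

# Search a large space of candidate solutions and keep the best one found.
for _ in range(10_000):
    candidate = random.uniform(-100, 100)
    score = objective(candidate)
    if score < best_score:
        best_x, best_score = candidate, score

print(f"best x ≈ {best_x:.3f}, objective ≈ {best_score:.6f}")
```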

In addition, AI has also been described as “the ability of a non-natural entity to make choices by an evaluative process,” as defined by Jacob Turner, a lawyer, AI lecturer and author. Taking both Turner’s definition and the EC’s, one can deduce that AI systems can often “learn” and, in doing so, influence their environment. Beyond software, AI may also be embodied in different forms, such as in robotics.

So what are the other basics of AI? Since AI systems are data-driven, software code and data are two crucial components of AI. In this context, it can be argued that progress in AI is taking place alongside two phenomena: “software eating the world” (meaning that societies and the economy as a whole have undergone an immense and ongoing digital transformation) and the “datafication” of the world (meaning that this digital transformation has gone hand in hand with an ever-increasing amount of data being generated and collected).

But why should one care? Crucially, the capturing and processing of data correlate with how an AI’s set of algorithms is designed. Put simply, algorithms are sets of rules that determine, step by step, how a task is performed.

Why is all of this important? AI makes “choices” or generates output based on the data (input) and the algorithms. Moreover, AI may move its decisions away from human input due to its learning nature and the abovementioned techniques and approaches. Those two features contribute to the idea that AI often functions as a black box. 

The term “black box” refers to the challenge of comprehending and controlling the decisions and actions of AI systems and algorithms, potentially making control and governance over these systems difficult. Indeed, it brings about various transparency and accountability issues with different corresponding legal and regulatory implications. 
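
To make the black-box intuition concrete, the sketch below (again assuming scikit-learn purely as an example toolkit) trains a small neural network and counts its learned parameters: the raw numbers are technically visible, yet they say almost nothing about why a particular prediction was made.

```python
# Illustrative sketch of the black-box problem: a small neural network's
# raw weights are technically visible but not humanly meaningful.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                      random_state=0).fit(X, y)

# Count every learned weight and bias in the network.
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print("Prediction for the first sample:", model.predict(X[:1]))
print("Learned parameters:", n_params)  # thousands of numbers, none self-explanatory
```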

Black box problem: the internal behavior of the code is unknown

This is where explainable AI (XAI) comes into play. XAI aims to provide human-understandable explanations of how an AI system arrives at a particular output. It is specifically aimed at providing transparency in the decision-making process of AI systems.

2. Why is explainable AI (XAI) important?

XAI involves designing AI systems that can explain their decision-making process through various techniques. XAI should enable external observers to understand better how the output of an AI system comes about and how reliable it is. This is important because AI may bring about direct and indirect adverse effects that can impact individuals and societies. 

Just as defining what AI comprises can be daunting, so can explaining its results and functioning, especially where deep-learning AI systems come into play. To help non-engineers envision how AI learns and discovers new information, one can picture these systems as relying, at their core, on complex layered circuits shaped similarly to the neural networks in the human brain.

The neural networks that facilitate AI’s decision-making are often called “deep learning” systems. It is debated to what extent decisions reached by deep learning systems are opaque or inscrutable, and to what extent AI and its “thinking” can and should be explainable to ordinary humans.

There is debate among scholars regarding whether deep learning systems are truly black boxes or completely transparent. However, the general consensus is that most decisions should be explainable to some degree. This is significant because the deployment of AI systems by state or commercial entities can negatively affect individuals, making it crucial to ensure that these systems are accountable and transparent.

For instance, the Dutch Systeem Risico Indicatie (SyRI) case is a prominent example illustrating the need for explainable AI in government decision-making. SyRI was an AI-based automated decision-making system, developed by Dutch semi-governmental organizations, that combined personal data and other tools to identify potential fraud through untransparent processes later characterized as black boxes.

The system came under scrutiny for its lack of transparency and accountability, with national courts and international bodies finding that it violated privacy and various other human rights. The SyRI case illustrates how governmental AI applications can affect humans by replicating and amplifying biases and discrimination. SyRI unfairly targeted vulnerable individuals and communities, such as low-income and minority populations.

SyRI aimed to find potential social welfare fraud by labeling certain people as high-risk. As a fraud-detection system, it was deployed only to analyze people in low-income neighborhoods, since such areas were considered “problem” zones. Because the state ran SyRI’s risk analysis only in communities already deemed high-risk, it is no surprise that more high-risk citizens were found there than in neighborhoods not considered “high-risk.”

This label, in turn, encouraged stereotyping and reinforced a negative image of the residents of those neighborhoods (even if they were never named in a risk report or qualified as a “no-hit”), because the underlying data entered comprehensive cross-organizational databases and was recycled across public institutions. The case illustrates that where AI systems produce unwanted adverse outcomes such as biases, these may go unnoticed if transparency and external control are lacking.

Besides states, private companies develop or deploy many AI systems in which transparency and explainability are outweighed by other interests. Although it can be argued that the present-day structures enabling AI would not exist in their current forms without past government funding, a significant and steadily growing proportion of today’s progress in AI is privately funded. In fact, private investment in AI in 2022 was 18 times higher than in 2013.

Commercial AI “producers” are primarily accountable to their shareholders and may thus be heavily focused on generating economic profits, protecting patent rights and preventing regulation. Hence, when commercial AI systems do not function transparently and enormous amounts of data are privately hoarded to train and improve them, it becomes all the more essential to understand how such systems work.

Ultimately, the importance of XAI lies in its ability to provide insights into the decision-making process of AI models, enabling users, producers and monitoring agencies to understand how and why a particular outcome was reached.

This arguably helps to build trust in governmental and private AI systems. It increases accountability and helps ensure that AI models are not biased or discriminatory. It also helps prevent low-quality or illegally obtained data from being recycled across public institutions through comprehensive cross-organizational databases that feed algorithmic fraud-detection systems.

3. How does explainable AI work?

The principles of XAI center on designing AI systems that are transparent, interpretable and able to provide clear justifications for their decisions. In practice, this involves developing AI models that humans can understand, that can be audited and reviewed, and that are free from unintended consequences such as biases and discriminatory practices.
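
One simple auditing step, sketched below with hypothetical column names and data, is to compare a model’s positive-prediction rates across demographic groups; a large gap between groups can signal the kind of disparate impact that such reviews aim to surface.

```python
# Minimal bias-audit sketch: compare positive-prediction rates per group.
# Column names and data are hypothetical, for illustration only.
import pandas as pd

predictions = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1,   0,   1,   0,   0,   1,   0,   1],
})

rates = predictions.groupby("group")["predicted"].mean()
print(rates)                              # positive-prediction rate per group
print("Disparity:", rates.max() - rates.min())
```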

Explainability lies in making transparent the most critical factors and parameters shaping AI decisions. While it can be argued that full explainability is not always possible due to the internal complexity of AI systems, specific parameters and values can be programmed into AI systems. High levels of explainability are achievable, technically valuable and may drive innovation.

The importance of transparency and explainability in AI systems has been recognized worldwide, with efforts to develop XAI underway for several years. As noted, XAI has several benefits: arguably, it makes it possible to discover how and why a system made a decision or acted (in the case of embodied AI) the way it did. Consequently, transparency is essential because it builds trust and understanding for users while simultaneously allowing for scrutiny.

Explainability is a prerequisite for ascertaining other “ethical” AI principles, such as sustainability, justness and fairness. Theoretically, it allows for the monitoring of AI applications and AI development. This is particularly important for some use cases of AI and XAI, including applications in the justice system, (social) media, healthcare, finance, and national security, where AI models are used to make critical decisions that impact people’s lives and societies at large. 

Several ML techniques can serve as examples of XAI. Techniques that increase explainability include decision trees (which provide a clear, visual representation of a model’s decision-making process), rule-based systems (in which algorithmic rules are defined in a human-understandable format, at the cost of less flexibility in rules and interpretation), Bayesian networks (probabilistic models representing causalities and uncertainties), linear models (which show how each input contributes to the output) and, in the case of neural networks, comparable techniques that attribute each input’s contribution to the output.
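
As a hedged illustration of two of these techniques, the sketch below (assuming scikit-learn and its bundled Iris toy dataset) fits a shallow decision tree and a linear model, then prints the tree’s human-readable rules and the linear coefficients that show how each input pushes the output.

```python
# Illustrative sketch: two inherently interpretable models on a toy dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LogisticRegression

data = load_iris()
X, y = data.data, data.target

# Decision tree: its branching rules can be read directly.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# Linear model: each coefficient shows how an input feature pushes the output
# (here, the coefficients for the first class of the Iris problem).
linear = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(data.feature_names, linear.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```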

AI’s black box problem vs. XAI’s transparency

Various approaches to achieving XAI include visualizations, natural language explanations and interactive interfaces. To start with the latter, interactive interfaces allow users to explore how the model’s predictions change as input parameters are adjusted. 
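
A stripped-down, non-graphical version of such a “what-if” exploration might look like the sketch below: one input feature is varied while the others are held fixed, and the resulting change in the model’s prediction is printed. The model, dataset and the feature being varied are placeholders for illustration.

```python
# Minimal "what-if" sketch: vary one input feature and watch the prediction change.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=3, noise=5.0, random_state=0)
model = LinearRegression().fit(X, y)

sample = X[0].copy()
feature_to_vary = 1  # placeholder: the input parameter a user would adjust

for value in np.linspace(-2, 2, 5):
    sample[feature_to_vary] = value
    prediction = model.predict(sample.reshape(1, -1))[0]
    print(f"feature[{feature_to_vary}] = {value:+.1f} -> prediction = {prediction:8.2f}")
```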

Visualizations like heat maps and decision trees can help individuals see the model’s decision-making process. Heat maps use color gradients to visually indicate the importance of certain input features, that is, the information the (explainable) ML model uses to generate its output or decision.
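
One hedged way to build such a heat map is sketched below, using scikit-learn’s permutation importance (which measures how much the model’s score drops when a feature is shuffled) and matplotlib; the dataset and model are stand-ins for illustration.

```python
# Illustrative feature-importance heat map using permutation importance.
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# How much does the model's score drop when each feature is shuffled?
result = permutation_importance(model, data.data, data.target,
                                n_repeats=5, random_state=0)

# Render the importances as a single-row heat map (brighter = more important).
fig, ax = plt.subplots(figsize=(10, 1.5))
ax.imshow(result.importances_mean.reshape(1, -1), cmap="viridis", aspect="auto")
ax.set_xticks(range(len(data.feature_names)))
ax.set_xticklabels(data.feature_names, rotation=90, fontsize=6)
ax.set_yticks([])
plt.tight_layout()
plt.show()
```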

Decision trees show an ML model’s decision-making process as a series of branching choices, much as the name suggests. Finally, natural language explanations can provide textual justifications for the AI model’s predictions, making it easier for non-technical users to understand.
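
A template-based sketch of such a textual justification is shown below: for a single prediction of a linear model, the strongest feature contributions are turned into a plain-English sentence. The wording template and dataset are illustrative choices, not a standard API.

```python
# Minimal natural-language explanation sketch for one linear-model prediction.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

sample = X[0]
contributions = model.coef_[0] * sample            # per-feature contribution to the score
top = np.argsort(np.abs(contributions))[::-1][:3]  # three strongest contributors

label = data.target_names[model.predict(sample.reshape(1, -1))[0]]
reasons = ", ".join(
    f"'{data.feature_names[i]}' pushed the score {'up' if contributions[i] > 0 else 'down'}"
    for i in top
)
print(f"The model predicted '{label}' mainly because {reasons}.")
```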

It is essential to note that, for those focused specifically on the subfield of machine learning, explainable machine learning (XML) concentrates on making ML models more transparent and interpretable, a narrower scope than the broader field of XAI, which encompasses all types of AI systems.

4. What are the limitations of explainable AI?

XAI has several limitations, some of them relating to its implementation. For instance, engineers tend to focus on functional requirements, and even where they do not, algorithms are often developed over time by large teams of engineers. This complexity makes a holistic understanding of the development process, and of the values embedded within AI systems, less attainable.

Moreover, “explainable” is an open term, which brings other crucial notions into play when considering XAI’s implementation. Embedding explainability in, or deducing it from, AI’s code and algorithms may be theoretically preferable but practically problematic, because there is a clash between the prescriptive nature of algorithms and code on the one hand and the flexibility of open-ended terminology on the other.

Indeed, when AI’s interpretability is tested by looking at the most critical parameters and factors shaping a decision, questions arise as to what counts as “transparent” or “interpretable” AI. How high should such thresholds be?

Finally, it is widely recognized that AI development is progressing at an exponential pace. Combining this exponential growth with unsupervised and deep learning systems, AI could, in theory, find ways to become generally intelligent, opening doors to new ideas, innovation and growth.

To illustrate this, one can consider published research on “generative agents” where large language models were combined with computational, interactive agents. This research introduced generative agents in an interactive sandbox environment consisting of a small town of twenty-five agents using natural language. Crucially, the agents produced believable individual and interdependent social behaviors. For example, starting with only a single user-specified notion that one agent wants to throw a party, the agents autonomously spread invitations to the party to one another.

Why is the word “autonomously” important? One might argue that when AI systems exhibit behavior that cannot be adequately traced back to their individual components, one must consider that black swan risks or other adverse effects may emerge that cannot be accurately predicted or explained.

The concept of XAI is of somewhat limited use in these cases, where AI quickly evolves and improves itself. Hence, XAI appears insufficient to mitigate potential risks, and additional preventive measures in the form of guidelines and laws might be required. 

As AI continues to evolve, the importance of XAI will only continue to grow. AI systems may be applied for the good, the bad and the ugly. The extent to which AI shapes humanity’s future depends partly on who deploys it and for which purposes, how it is combined with other technologies, and to which principles and rules it is held.

XAI could prevent or mitigate some of an AI system’s potential adverse effects. Regardless of whether every decision of an AI system can be explained, the very existence of the notion of XAI implies that, ultimately, humans are responsible for decisions and actions stemming from AI. And that makes AI and XAI subject to all sorts of interests.
