For Germany alone, services and products based on artificial intelligence (AI) are expected to generate revenues of 488 billion euros in 2025 - this would correspond to 13 percent of Germany's gross domestic product. In important application sectors, the explainability of decisions made by AI is a prerequisite for acceptance by users, for approval and certification procedures, or for compliance with the transparency obligations of the GDPR. The explainability of AI products is therefore one of the most important factors for market success, at least in the European context.
At the core of AI-based applications - by which we essentially mean machine learning applications here - are the underlying AI models. These can be divided into two classes: white-box and black-box models. White-box models, such as decision trees built on comprehensible input variables, allow a basic understanding of their algorithmic relationships. They are thus self-explanatory with respect to their mechanisms of action and the decisions they make. For black-box models such as neural networks, it is usually no longer possible to understand the inner workings of the model because of their interconnectedness and multi-layered structure. However, at least for explaining individual decisions (local explainability), additional explanatory tools can be applied retrospectively to increase comprehensibility. Depending on the specific requirements, AI developers can draw on established explanation tools, e.g. LIME, SHAP, Integrated Gradients, LRP, DeepLift or GradCAM, all of which, however, require expert knowledge. For mere users of AI, only a few good tools exist so far that provide intuitively understandable explanations of decisions (saliency maps, counterfactual explanations, prototypes or surrogate models).
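To make the distinction concrete, the following minimal sketch shows how one of the tools named above, SHAP, produces a local explanation, i.e. attributes a single model decision to its input features. The dataset and model are illustrative assumptions, not taken from the study.

```python
# Minimal sketch of a local (per-decision) explanation with SHAP.
# Dataset and model are illustrative assumptions.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes, for one individual prediction, the additive
# contribution of each input feature (local explainability).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # explain a single sample

# Large magnitudes mark the features that drove this particular decision;
# together with a base value, the contributions sum to the model output.
print(dict(zip(X.columns, shap_values[0].round(2))))
```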
The participants in the survey conducted as part of this study currently use popular representatives of white-box models (statistical/probabilistic models, decision trees) and black-box models (neural networks) to roughly the same extent. According to the survey, however, greater use of black-box models, especially neural networks, is expected in the future. Explanation strategies are thus already an essential component of many AI applications today, and their importance will continue to grow. How important explainability is considered to be varies greatly between industries: it is rated by far the highest in the healthcare sector, followed by the financial sector, manufacturing, the construction industry and the process industry.
Four use cases were analyzed in more detail through in-depth interviews with recognized experts: the image analysis of histological tissue sections and the text analysis of doctors' letters, both from the healthcare domain, machine condition monitoring in manufacturing, and AI-supported process control in the process industry. Among these, model explanations that make the model-internal mechanisms of action comprehensible (global explainability) are indispensable only in the process control case, where they are a strict approval requirement. In the other use cases, local explainability suffices as a minimum requirement. Global explainability nevertheless plays a key role for the acceptance of AI-supported products in the manufacturing-related use cases considered.
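For contrast with the local explanation above, the following sketch shows one common approximation of global explainability: permutation importance, which summarizes how strongly each feature influences the model as a whole rather than explaining a single decision. Dataset and model are again illustrative assumptions.

```python
# Minimal sketch of a *global* explanation via permutation importance.
# Dataset and model are illustrative assumptions.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Shuffling one feature at a time and measuring the score drop yields a
# model-wide (global) ranking of feature influence.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```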
Furthermore, the use case analyses show that the selection of a suitable explanation strategy depends on the target groups, the data types and the AI model used. The study analyzes the advantages and disadvantages of the established tools along these criteria and offers corresponding decision support. Since white-box models are self-explanatory with respect to both their mechanisms of action and their individual decisions, they should be preferred - whenever possible - for all applications that place high demands on comprehensibility. This holds especially if they perform similarly well, or at least sufficiently well, compared to black-box models.
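This selection rule can be expressed as a simple procedure. The following sketch compares a white-box decision tree with a black-box neural network and prefers the white-box model if its performance is sufficiently close; the dataset, the models and the 2-percentage-point tolerance are illustrative assumptions.

```python
# Minimal sketch: prefer the self-explanatory white-box model unless the
# black-box model performs substantially better. Dataset, models and the
# tolerance are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

white_box = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
black_box = make_pipeline(
    StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)
).fit(X_tr, y_tr)

acc_wb = white_box.score(X_te, y_te)
acc_bb = black_box.score(X_te, y_te)

# Accept the comprehensible model if it is "sufficiently good", here
# defined (arbitrarily) as within 2 percentage points of the black box.
TOLERANCE = 0.02
chosen = white_box if acc_wb >= acc_bb - TOLERANCE else black_box
print(f"white-box: {acc_wb:.3f}, black-box: {acc_bb:.3f} "
      f"-> {type(chosen).__name__}")
```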
It can be assumed that with the increasing use of AI in business, the need for reliable and intuitive explanation strategies will also increase significantly in the future. In order to meet this demand, the following technical and non-technical challenges currently need to be overcome:
· Development and refinement of suitable "hybrid" approaches that combine data-driven and knowledge-driven methods, or white-box and black-box modelling, respectively (a minimal surrogate-model sketch follows this list).
· Consideration of findings from behavioural and cognitive science - such as measuring the quality of an explanation from the user's point of view, automatically adapting explanations to individual users, and explaining holistic AI systems - in order to improve explainable AI systems.
· Definition of application and risk classes from which the basic necessity of an explanation for a given use case can be derived.
· Definition of uniform requirements for the explainability of AI and thus the creation of clear regulatory specifications and approval guidelines corresponding to these application and risk classes.
· Creation of approval and (re)certification frameworks for systems that continue to learn during operational deployment.
· Provision and implementation of comprehensive education and training programs for examiners and inspectors who are to verify the explainability of AI.
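As a minimal illustration of the hybrid white-/black-box combination mentioned in the first point, the following sketch trains a comprehensible decision tree as a global surrogate that imitates a black-box neural network. Dataset, models and the chosen tree depth are illustrative assumptions, not the study's own method.

```python
# Sketch of one simple white-/black-box combination: a global surrogate,
# i.e. a comprehensible decision tree trained to imitate a black-box
# neural network. All choices here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
black_box = make_pipeline(
    StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)
).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not on the
# true labels, so its rules approximate the black box's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the white-box surrogate reproduces the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.3f}")
print(export_text(surrogate))  # human-readable decision rules
```

A surrogate of this kind trades some fidelity for comprehensibility; its usefulness therefore depends on how closely it tracks the black box on the data that matter.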