Figure 16 - uploaded by Giulia Vilone
Summary of the pros and cons associated with each explanation format, namely numeric, rules, textual, visual and mixed explanations.


Source publication
Article
Full-text available
Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review a...

Similar publications

Article
Full-text available
Misophonia is a scarcely known disorder. This systematic review (1) offers a quantitative and qualitative analysis of the literature since 2001, (2) identifies the most relevant aspects but also controversies, (3) identifies the theoretical and methodological approaches, and (4) highlights the outstanding advances until May 2022 as well as aspects...

Citations

... By integrating XAI techniques into smart healthcare automation, explanations can be provided for AI output, enhancing understanding, accountability, and ethical use of AI in healthcare. XAI benefits healthcare professionals, empowers patients, and ensures fairness in healthcare decision-making (Vilone & Longo, 2021a). ...
... AI algorithms can identify patterns and risk factors associated with various diseases by analyzing patient data, including medical records, lab results, genetic information, and lifestyle factors. This enables early detection, more accurate diagnosis, and personalized treatment recommendations (Vilone & Longo, 2021a). AI can also predict disease progression, helping healthcare providers tailor treatment plans and interventions based on individual patient characteristics. ...
... In a case study, Watson for Oncology was used at Manipal Hospitals in India to assist oncologists in treatment planning for breast cancer patients. The AI system provided treatment recommendations that aligned with the expert opinions of oncologists in 90% of cases (Vilone & Longo, 2021a). In addition, it helped reduce the time required for treatment planning and provided additional insights for consideration, enhancing the decision-making process and potentially improving patient outcomes (Başağaoğlu et al., 2022). ...
Chapter
The rapid integration of artificial intelligence (AI) into the healthcare sector has opened up new opportunities for smart healthcare automation, transforming medical diagnosis, treatment, and overall patient care. However, the widespread adoption of AI algorithms in healthcare comes with challenges, particularly regarding transparency and explainability. This chapter explores the concept of explainable AI (XAI) and its crucial role in smart healthcare automation. The authors discuss the significance of XAI, various techniques for achieving explainability, and their potential applications in healthcare. Through case studies and success stories, the authors showcase real-world applications of XAI in radiology and chronic disease management. Lastly, they highlight future directions in XAI research for smart healthcare automation and emphasize the implications for healthcare providers and policymakers. By embracing XAI, the healthcare industry can unlock the full potential of AI while ensuring transparency, fairness, and improved patient outcomes.
... Based on the literature, the concepts of XAI within different application domains are categorised as [18]- [20]: ...
... • Stage of Explainability: The stage of explainability refers to the phase of the AI process at which a model generates the explanation for the decision it provides. According to [19], [20], the stages are as follows: • Ante-hoc methods: Involve generating the explanation for the decision from the very beginning of the training phase [18]. This can also be divided into pre-modelling and during-modelling explainability [13], [21]. ...
... On the other hand, local scope refers to explicitly explaining a single instance of inference to the user. • Input and Output formats: Alongside core concepts, stages, and scopes, input and output formats are significant in the development of XAI methods [19], [20]. The mechanisms of explainable models unquestionably differ when learning different input data types, such as images, numbers, texts, etc. ...
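The scope, stage and output-format dimensions quoted in these excerpts can be pictured as a small data structure. The sketch below is purely illustrative: the enum values follow the taxonomy described above, but the class and field names, and the example classifications at the end, are our own assumptions rather than any cited framework's API.

```python
# Illustrative sketch only: the scope/stage/output-format taxonomy encoded as
# a small Python data structure. Names are hypothetical, not a cited API.
from dataclasses import dataclass
from enum import Enum


class Scope(Enum):
    LOCAL = "local"    # explains a single prediction
    GLOBAL = "global"  # explains the model as a whole


class Stage(Enum):
    ANTE_HOC = "ante-hoc"  # explainability built in from the training phase
    POST_HOC = "post-hoc"  # explanation generated after training


class OutputFormat(Enum):
    NUMERIC = "numeric"
    RULES = "rules"
    TEXTUAL = "textual"
    VISUAL = "visual"
    MIXED = "mixed"


@dataclass
class XAIMethod:
    name: str
    scope: Scope
    stage: Stage
    output_format: OutputFormat
    input_types: tuple[str, ...]  # e.g. ("tabular",), ("image", "text")


# Hypothetical example entries, for illustration only.
lime = XAIMethod("LIME", Scope.LOCAL, Stage.POST_HOC, OutputFormat.NUMERIC,
                 ("tabular", "text", "image"))
rerx = XAIMethod("Re-RX", Scope.GLOBAL, Stage.POST_HOC, OutputFormat.RULES,
                 ("tabular",))
```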
Article
Full-text available
This research focusses on the potential application of artificial intelligence (AI) techniques in the analysis of behavioural addictions, specifically addressing problematic Internet use among adolescents. Using tabular data from a representative sample from Serbian high schools, the authors investigated the feasibility of employing eXplainable AI (XAI) techniques, placing special emphasis on feature selection and feature importance methods. The results indicate a successful application to tabular data, with global interpretations that effectively describe predictive models. These findings align with previous research, which confirms both relevance and accuracy. Interpretations of individual predictions reveal the impact of features, especially in cases of misclassified instances, underscoring the significance of XAI techniques in error analysis and resolution. Although AI’s influence on the medical domain is substantial, the current state of XAI techniques, while useful, is not yet advanced enough for the reliable interpretation of predictions. Nevertheless, XAI techniques play a crucial role in problem identification and the validation of AI models.
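As a rough sketch of the kind of local, per-instance interpretation described in this abstract, the snippet below explains one prediction of a tabular classifier with LIME. The data, model, and feature and class names are invented placeholders, not the study's actual survey variables or pipeline.

```python
# Minimal sketch, assuming a toy random-forest model: a local LIME explanation
# for a single tabular prediction. Feature and class names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # stand-in tabular features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # stand-in binary outcome

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["daily_online_hours", "age", "sleep_loss", "school_grades"],  # invented
    class_names=["low risk", "high risk"],
    mode="classification",
)

# Local explanation: which features pushed this one instance towards "high risk"?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```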
... Also, the significance of trust in human interactions with machine learning systems and the importance of providing explanations for individual predictions to foster that trust are emphasized (Ribeiro et al., 2016). Several surveys have explored different aspects of XAI, commonly classifying methods based on their operational scope (local or global), procedural stage (ante-hoc or post-hoc), and output format (numerical, visual, textual, or hybrid) (Angelov et al., 2021; Tjoa and Guan, 2020; Vilone and Longo, 2021; Minh et al., 2022; Speith, 2022). Previous literature (Yang et al., 2023) explores XAI's applications in fields like medicine and cybersecurity, addressing model limitations, proposing human-centered research directions, introducing a taxonomy for classifying XAI methods, and suggesting promising avenues such as context-aware XAI, interactive explanations, and hybrid models. ...
Preprint
Full-text available
Explainable Artificial Intelligence (XAI) is a crucial domain within research and industry, aiming to develop AI models that provide human-understandable explanations for their decisions. While the challenges in AI, deep learning, and big data have been extensively explored, the specific concerns of XAI developers have received limited attention. To address this gap, we analyzed discussions on Stack Exchange websites to delve into these issues. Through a combination of automated and manual analysis, we identified 6 overarching categories, 10 distinct topics, and 40 sub-topics commonly discussed by developers. Our examination revealed a steady rise in discussions on XAI since late 2015, initially focusing on conceptualization and practical applications, with a notable surge in activity across all topic categories since 2019. Notably, Concepts and Applications, Tools Troubleshooting, and Neural Networks Interpretation emerged as the most popular topics. Troubleshooting challenges were commonly encountered with tools like Shap, Eli5, and Aif360, while Visualization issues were prevalent with Yellowbrick and Shap. Furthermore, our analysis suggests that addressing questions related to XAI poses greater difficulty compared to other machine-learning questions.
... A novel explanation method based on LIME for the explanation of predictions made by a classifier was proposed [9], and the best practices for the usage of these interpretable machine learning models were also discussed [10][11][12][13][14]. ...
Article
More than ever, robust and interpretable toxic comment recognition methods are required to manage the growing frequency of toxic comments on online platforms. The research incorporates techniques from Explainable Artificial Intelligence (XAI) to improve the transparency and comprehensibility of toxic comment classification. Using a comprehensive dataset, we designed a model architecture which includes the latest practices in XAI. Through rigorous experimentation, our study proves the usefulness of such methods as tools that not only increase classification accuracy but also illuminate model decision-making processes. One finding is that by adding LIME and Eli5 to toxic comment classification, model performance improves both in terms of accuracy and the interpretability of decisions. Our results provide valuable insights into the model's strengths and areas for refinement, contributing to the transparency and interpretability of toxic comment classification. This research contributes to the evolving landscape of interpretable machine learning, offering a pathway to more accountable and trustworthy toxic comment moderation systems. Keywords: explainable artificial intelligence, model interpretability, toxic comment classification, LIME, Eli5
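A minimal sketch of how LIME can be attached to a text classifier of this kind is shown below; the toy pipeline, example comments and label names are our assumptions, not the authors' model or data, and Eli5 is omitted for brevity.

```python
# Hedged sketch: explaining a toy toxic-comment classifier with LIME's text
# explainer. The pipeline, comments and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

comments = ["you are brilliant", "you are an idiot",
            "great point, thanks", "shut up, loser"]
labels = [0, 1, 0, 1]  # 0 = non-toxic, 1 = toxic (toy data)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(comments, labels)

explainer = LimeTextExplainer(class_names=["non-toxic", "toxic"])
exp = explainer.explain_instance("shut up, you idiot",
                                 clf.predict_proba,  # takes raw strings
                                 num_features=5)
print(exp.as_list())  # word-level weights towards the "toxic" class
```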
... However, this success has been driven by accepting their complexity and adopting "black box" AI models that lack transparency. On the other hand, eXplainable AI (XAI), which enhances the transparency of AI and facilitates its wider adoption in critical domains, has been attracting increasing attention [1][2][3][4][5][6][7][8][9][10]. ...
Article
Full-text available
Although machine learning models are widely used in critical domains, their complexity and poor interpretability remain problematic. Decision trees (DTs) and rule-based models are known for their interpretability, and numerous studies have investigated techniques for approximating tree ensembles using DTs or rule sets, even though these approximators often overlook interpretability. These methods generate three types of rule sets: DT based, unordered, and decision list based. However, very few metrics exist that can distinguish and compare these rule sets. Therefore, the present study proposes an interpretability metric to allow for comparisons of interpretability between different rule sets and investigates the interpretability of the rules generated by the tree ensemble approximators. We compare these rule sets with the Recursive-Rule eXtraction algorithm (Re-RX) with J48graft to offer insights into the interpretability gap. The results indicate that Re-RX with J48graft can handle categorical and numerical attributes separately, has simple rules, and achieves a high interpretability, even when the number of rules is large. RuleCOSI+, a state-of-the-art method, showed significantly lower results regarding interpretability, but had the smallest number of rules.
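To make the idea of comparing rule sets concrete, the sketch below extracts the rule set of a single scikit-learn decision tree and computes a naive interpretability proxy (number of rules and average conditions per rule). This is only an illustration of the kind of quantity such a metric might build on; it is not the metric proposed in the article, and Re-RX, J48graft and RuleCOSI+ are not involved.

```python
# Hedged sketch: a simple rule-count proxy for interpretability, not the
# article's proposed metric. Each leaf of the tree corresponds to one rule,
# and its depth is the number of conditions in that rule.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

print(export_text(tree))  # human-readable IF-THEN view of the tree


def rule_lengths(t, node=0, depth=0):
    """Return the condition count of every root-to-leaf rule in the tree."""
    left, right = t.tree_.children_left[node], t.tree_.children_right[node]
    if left == -1:  # leaf node -> one complete rule
        return [depth]
    return rule_lengths(t, left, depth + 1) + rule_lengths(t, right, depth + 1)


lengths = rule_lengths(tree)
print(f"{len(lengths)} rules, avg {sum(lengths) / len(lengths):.1f} conditions per rule")
```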
... When designing the hypothetical herd management system, we followed the classification of Vilone et al. [VL21]. The authors classify techniques from the field of explainable AI (XAI), among others, by their scope (local, global) and their output format (numerical, rule-based, textual, visual). ...
Conference Paper
Full-text available
In this paper, we examine different approaches to explaining decision support in herd management systems for their effects on comprehensibility and trust. To this end, we present a hypothetical system for assessing the risk of mastitis, a common infectious disease in dairy cattle. For this system, we design four explanation formats to present risk assessments to farmers. We collect their feedback in a survey to get suggestions for designing systems that are well accepted. In our work, it was not possible to identify one explanation format that is preferable to all others. Rather, a finding was that herd management systems should optimally support multiple explanation formats and allow switching between them depending on the situation.
... As mentioned above, this choice among multiple options can be posed at a high level of design principle or approach, or at a more implementational level, that is, at the level of the individual feature or solution to be integrated into the system, or the technique and method to be implemented. Examples of XAI solutions abound, depending on the context [5,11]. Obviously, the choice of which solutions are better, and hence not a waste of time and money (or, worse, potentially harmful) if implemented in some XAI setting, should be grounded in the strongest evidence available; according to the hierarchy of evidence (see Table 1), this comes from empirically grounded studies and well-designed user studies. ...
... However, this success has been driven by accepting their complexity and adopting "black box" AI models that lack transparency. On the other hand, eXplainable AI (XAI), which enhances the transparency of AI and facilitates its wider adoption in critical domains, has been attracting increasing attention [1][2][3][4][5][6][7][8][9][10]. ...
Preprint
Full-text available
Machine learning models are increasingly being used in critical domains, but their complexity, lack of transparency, and poor interpretability remain problematic. Decision trees (DTs) and rule-based approaches are well-known examples of interpretable models, and numerous studies have investigated techniques for approximating tree ensembles using DTs or rule sets; however, tree ensemble approximators do not consider interpretability. These methods are known to generate three main types of rule sets: DT-based, unordered, and decision list-based. However, no known metric has been devised to distinguish and compare these rule sets. Therefore, the present study proposes an interpretability metric to allow comparisons of interpretability between different rule sets, such as decision list- and DT-based rule sets, and investigates the interpretability of the rules generated by the tree ensemble approximators. To provide new insights into the reasons why decision list-based and inspired classifiers do not work well for categorical datasets consisting of mainly nominal attributes, we compare objective metrics and rule sets generated by the tree ensemble approximators and the Recursive-Rule eXtraction algorithm (Re-RX) with J48graft. The results indicated that Re-RX with J48graft can handle categorical and numerical attributes separately, has simple rules, and achieves high interpretability, even when the number of rules is large.
... To subsequently optimize quality, it is crucial to get insights into the trained model (Cramer et al. 2021). Model-agnostic methods make it possible to detect to what extent the model prediction depends on the different input variables, as well as to compare different types of models (Vilone and Longo 2021). A systematic investigation of these methods with regard to their applicability in the context of PQ has not yet been conducted (Goldman et al. 2021). ...
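One widely used model-agnostic way to check how strongly a prediction depends on each input variable is permutation importance; the sketch below illustrates it with scikit-learn on synthetic stand-in data. The feature names and model are assumptions for illustration only, not part of the Predictive Quality application described below.

```python
# Hedged sketch: a model-agnostic dependence check via permutation importance.
# The data are synthetic stand-ins; feature names are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))  # stand-ins for e.g. temperature, pressure, speed
y = 3 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.1, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test performance.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(["temperature", "pressure", "speed"],
                           result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```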
Chapter
Full-text available
In short-term production management of the Internet of Production (IoP), the vision of a Production Control Center is pursued, in which interlinked decision-support applications contribute to increasing decision-making quality and speed. The applications developed focus in particular on use cases near the shop floor, with an emphasis on the key topics of production planning and control, production system configuration, and quality control loops. Within the Predictive Quality application, predictive models are used to derive insights from production data and subsequently improve the process- and product-related quality as well as enable automated Root Cause Analysis. The Parameter Prediction application uses invertible neural networks to predict process parameters that can be used to produce components with desired quality properties. The Production Scheduling application investigates the feasibility of applying reinforcement learning to common scheduling tasks in production and compares the performance of trained reinforcement learning agents to traditional methods. In the two applications Deviation Detection and Process Analyzer, the potentials of process mining in the context of production management are investigated. While the Deviation Detection application is designed to identify and mitigate performance and compliance deviations in production systems, the Process Analyzer concept enables the semi-automated detection of weaknesses in business and production processes utilizing event logs. With regard to the overall vision of the IoP, the developed applications contribute significantly to the intended interdisciplinarity of production and information technology. For example, application-specific digital shadows are drafted based on the ongoing research work, and the applications are prototypically embedded in the IoP.
... The minimum experience replay size allowed is not known, but explainability can help find it [14,15,16]. Custom explainers exist [17,18] to understand simulation events, but not Experience Replay. ...
Conference Paper
Full-text available
Explainable Reinforcement Learning (xRL) faces challenges in debugging and interpreting Deep Reinforcement Learning (DRL) models. A lack of understanding for internal components like Experience Replay, which samples and stores data from the environment, risks burdening resources. This paper presents an xRL-based Deep Q-Learning (DQL) system using SHAP (SHapley Additive exPlanations) to explain input feature contributions. Data is sampled from Experience Replay, creating SHAP Heatmaps to understand how it influences the neural network Q-value approximator's actions. The xRL-based system aids in determining the smallest Experience Replay size for 23 simulations of varying complexities. It contributes an xRL optimization method, alongside traditional approaches, for tuning the Experience Replay size hyperparameter. This visual and creative approach achieves over 40% reduction in Experience Replay size for 18 of the 23 tested simulations, smaller than the commonly used sizes of 1 million transitions or 90% of total environment transitions.
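A hedged sketch of the general pattern described here (sample stored states from Experience Replay, then attribute the Q-network's outputs to the input features with SHAP) is given below. The toy network, buffer contents and sample sizes are our own assumptions, not the paper's DQL system, and the model-agnostic KernelExplainer stands in for whichever SHAP explainer the authors used.

```python
# Illustrative sketch, not the paper's implementation: attribute a toy
# Q-network's outputs to state features using states drawn from a replay
# buffer. Network size, buffer contents and sample counts are placeholders.
import numpy as np
import shap
import torch
import torch.nn as nn

state_dim, n_actions = 4, 2
q_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))

# Stand-in for states stored in Experience Replay.
replay_states = np.random.default_rng(0).normal(size=(10_000, state_dim)).astype(np.float32)


def q_values(states: np.ndarray) -> np.ndarray:
    """Wrap the Q-network as a plain NumPy function for SHAP."""
    with torch.no_grad():
        return q_net(torch.as_tensor(states)).numpy()


# Background set and instances to explain, both sampled from the buffer.
rng = np.random.default_rng(1)
background = replay_states[rng.choice(len(replay_states), 50, replace=False)]
to_explain = replay_states[rng.choice(len(replay_states), 10, replace=False)]

explainer = shap.KernelExplainer(q_values, background)
shap_values = explainer.shap_values(to_explain)  # one attribution matrix per action
print(np.array(shap_values).shape)               # attributions that could feed a heatmap
```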