
Explainability

Category: Social and Cultural Issues

Explainability is closely related to transparency. "Explainability is particularly important for systems that might 'cause harm,' have 'a significant effect on individuals,' or impact 'a person's life, quality of life, or reputation.' ... if an AI system has a 'substantial impact on an individual's life' and cannot provide a 'full and satisfactory explanation' for its decisions, then the system should not be deployed" (Fjeld et al., 2020, p. 43).

In the case of analytics, explainability seems to be inherently difficult. Zeide (2019) writes, "Unpacking what is occurring within AI systems is very difficult because they are dealing with so many variables at such a complex level. The whole point is to have computers do things that are not possible for human cognition. So trying to break that down ends up creating very crude explanations of what is happening and why."
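One illustration of why such explanations end up "crude": post-hoc techniques typically probe a model from the outside rather than unpack its internals. The sketch below shows permutation importance, a common model-agnostic approach; the toy model, data, and function names are illustrative assumptions, not taken from any system discussed here.

```python
# Minimal sketch of permutation importance: measure how much accuracy
# drops when each feature's column is shuffled. A large drop suggests
# the model relies on that feature. All names here are illustrative.
import random

def accuracy(model, X, y):
    # Fraction of examples the model labels correctly.
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features, seed=0):
    # For each feature, shuffle its column and record the accuracy drop.
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(n_features):
        col = [x[j] for x in X]
        rng.shuffle(col)
        X_perm = [list(x) for x in X]
        for i, v in enumerate(col):
            X_perm[i][j] = v
        importances.append(base - accuracy(model, X_perm, y))
    return importances

# Toy "model": predicts 1 when feature 0 exceeds 0.5; feature 1 is noise.
model = lambda x: int(x[0] > 0.5)
X = [[i / 10, (i * 7) % 10 / 10] for i in range(10)]
y = [int(x[0] > 0.5) for x in X]

scores = permutation_importance(model, X, y, n_features=2)
# Shuffling the ignored feature 1 leaves accuracy unchanged (score 0.0),
# while feature 0 typically shows a positive drop.
```

Note what such a score does and does not say: it reports which inputs the model is sensitive to, not *why* the model maps them to a given decision, which is precisely the gap Zeide describes.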

This is unsatisfactory. That is why, for example, the EU added a "right to explanation" to the GDPR. "The Article 29 Working Party [28] state that human intervention implies that the human-in-the-loop should refer to someone with the appropriate authority and capability to change the decision," write Hamon, Junklewitz and Sanchez (2020, p. 8). "It is clear how the requirement of explainability is relevant for the envisaged safeguards. Human supervision can only be effective if the person reviewing the process can be in a position to assess the algorithmic processing carried out."

But GDPR implementation should be viewed as a regulatory experiment (Eckersley et al., 2017): "That might mean adopting clearer and stronger incentives for explainability, if the GDPR rules appear to be bearing fruit in terms of high-quality explanatory technologies, or it might mean moving in the direction of different types of rules for explainability, if that technical research program appears unsuccessful."

But we're not sure whether we'll be able to provide explanations at all. As Eckersley et al. (2017) say, "Providing good explanations of what machine learning systems are doing is an open research question; in cases where those systems are complex neural networks, we don't yet know what the trade-offs between accurate prediction and accurate explanation of predictions will look like."

Examples and Articles

Introducing AI Explainability 360
IBM announces "AI Explainability 360, a comprehensive open source toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models. We invite you to use it and contribute to it to help advance the theory and practice of responsible and trustworthy AI."


