Opacity and Transparency

Category: Social and Cultural Issues

Analytics becomes ethically problematic in society when it is not transparent: if a decision-making system is opaque, there is no way to evaluate whether it is making the right decisions. That is why we want transparency in analytics. "The principle of 'transparency' is the assertion that AI systems should be designed and implemented in such a way that oversight of their operations are possible" (Fjeld et al., 2020:42).

Proponents argue, in particular, that people should be aware when analytics is employed in a decision-making capacity. For example, "the Electronic Frontier Foundation told us of the need for transparency regarding the use of AI for dynamic or variable pricing systems, which allow businesses to vary their prices in real time" (Eckersley et al., 2017).

This is expressed as the 'principle of notification'. As Fjeld et al. (2020:45) write, "The definition of the principle of 'notification when an AI system makes a decision about an individual' is facially fairly clear: where an AI has been employed, the person to whom it was subject should know." The same passage notes that the Select Committee on Artificial Intelligence's AI in the UK report stresses the importance of this principle in allowing individuals to "experience the advantages of AI, as well as to opt out of using such products should they have concerns."

Additionally, transparency applies to the model or algorithm used in analytics. "Transparency of models: it relates to the documentation of the AI processing chain, including the technical principles of the model, and the description of the data used for the conception of the model. This also encompasses elements that provide a good understanding of the model, and related to the interpretability and explainability of models" (Hamon, Junklewitz & Sanchez, 2020:2).
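
To make the idea of model documentation concrete, here is a minimal sketch, in Python, of a machine-readable "model card" covering the kinds of information Hamon, Junklewitz & Sanchez describe. The field names and example values are illustrative assumptions, not a standard schema.

# A minimal, illustrative "model card": structured documentation of a model's
# purpose, technical principles, and training data, published alongside the
# model so that its processing chain can be inspected. Field names are
# assumptions, not a standard schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str                      # identifier of the deployed model
    purpose: str                   # what decisions the model informs
    technical_principles: str      # model family and training objective
    training_data: str             # description of the data used to build it
    known_limitations: list[str] = field(default_factory=list)
    oversight_contact: str = ""    # who is accountable for the model

card = ModelCard(
    name="loan-risk-scorer-v2",
    purpose="Estimate default risk to support, not replace, human credit decisions",
    technical_principles="Gradient-boosted decision trees trained to minimise log-loss",
    training_data="Anonymised 2015-2020 loan applications; see accompanying data sheet",
    known_limitations=[
        "Not validated for applicants under 21",
        "Explanations are global feature importances, not per-decision reasons",
    ],
    oversight_contact="analytics-oversight@example.org",
)

print(json.dumps(asdict(card), indent=2))  # publish the card with the model

Publishing such a card with each deployed model is one way to operationalise the documentation and oversight that the transparency principle calls for.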

That is why the Montreal Declaration describes the use of open source software and open data sets as a "socially equitable objective" (University of Montreal, 2018). Additionally, the ICCPR Human Rights Committee states that "every individual should have the right to ascertain in an intelligible form, whether, and if so, what personal data is stored in automatic data files, and for what purposes" (UC Berkeley, 2019).

Examples and Articles

Digital Citizenship Toolkit
Have you ever wondered if your phone is listening to you? Do you ever look to the Internet for the answer to a question, and hours later, find that you are more confused than before? Have you argued with a friend or relative about a meme? Have you been tempted to share your own thoughts and feelings online, but resisted for fear of trolls? This book delves into these issues and more.

Interaction Design for Explainable AI: Workshop Proceedings
"Decisions will need to be justified due to ethical concerns as well as trust, but achieving this has become difficult due to the `black-box' nature many AI models have adopted" Direct Link


