Accountability and Explicability
The principles of accountability and explicability arise differently in computing and AI codes than they do in other ethical codes. In the case of academic and medical research, accountability is typically delegated to a process undertaken by a research ethics board (REB). Similarly, the Information and Privacy Commissioner of Ontario asserts that compliance with privacy rules and restrictions should be subject to independent scrutiny and that "the state must remain transparent and accountable for its use of intrusive powers through subsequent, timely, and independent scrutiny of their use" (Cavoukian, 2013).
In other disciplines, a range of additional processes describe practices such as predictability, auditing, and review (Raden, 2019: 9). As the U.S. Department of Health, Education, and Welfare argued, data should be used only for the purposes for which it was collected. The information, however used, should be accurate; there must be a way for individuals to correct or amend a record of identifiable information about themselves; and organizations must assure the reliability of the data and prevent its misuse. These, write the authors, "define minimum standards of fair information practice" (Ware et al., 1973: xxi).
In digital technology, accountability also raises unique challenges. The AI4People code, for example, adds a fifth principle to the four described by Beauchamp & Childress (1992): "explicability, understood as incorporating both intelligibility and accountability," where we should be able to obtain "a factual, direct, and clear explanation of the decision-making process" (Floridi et al., 2018). As Fjeld (2020) summarizes, "mechanisms must be in place to ensure AI systems are accountable, and remedies must be in place to fix problems when they're not." Also, "AI systems should be designed and implemented to allow oversight."
Finally, says Fjeld, "important decisions should remain under human review." Or as Robbins (2019) says, "'Meaningful human control' is now being used to describe an ideal that all AI should achieve if it is going to operate in morally sensitive contexts." As Robbins argues, "we must ensure that the decisions are not based on inappropriate considerations. If a predictive policing algorithm labels people as criminals and uses their skin color as an important consideration then we should not be using that algorithm."
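The kind of check Robbins describes can be pictured as a simple pre-deployment audit that flags a model whose input features include protected attributes. This is only an illustrative sketch, not anything proposed in the sources; the attribute list and function names are assumptions for the example:

```python
# Illustrative only: a minimal audit that refuses to approve a model
# whose input features include protected attributes, in the spirit of
# Robbins's point about "inappropriate considerations".

# Hypothetical list of attributes that should never drive a decision.
PROTECTED_ATTRIBUTES = {"race", "skin_color", "religion", "gender"}

def audit_features(features):
    """Return any protected attributes found among a model's input features."""
    return sorted({f.lower() for f in features} & PROTECTED_ATTRIBUTES)

def approve_for_deployment(features):
    """Approve a model only if no protected attribute is used as input.

    Returns (approved, violations) so a human reviewer can see why
    a model was rejected -- supporting the 'explicability' principle.
    """
    violations = audit_features(features)
    return (len(violations) == 0, violations)

# Example: a predictive model that uses skin color is rejected,
# and the audit reports the inappropriate consideration.
ok, why = approve_for_deployment(["prior_arrests", "skin_color", "age"])
```

Of course, a real accountability mechanism is organizational, not just technical: the point of returning the list of violations is that an auditor or oversight body can review and act on it, keeping the decision "under human review."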