Lack of Appeal
Category: When Analytics Works
There is widespread aversion to being subject to decisions made by machines without any possibility of appeal. This was the first question raised in a recent French debate on ethics in AI: "Will the prestige and trust placed in machines, often assumed to be 'neutral' and fail-proof, tempt us to hand over to machines the burden of responsibility, judgment and decision-making?" (Demiaux and Abdallah, 2017)
What is required, according to many, is an ability to appeal: "the possibility that an individual who is the subject of a decision made by an AI could challenge that decision" (Fjeld et al., 2020:32). The Access Now report calls for "a human in the loop in important automated decision-making systems, which adds a layer of accountability" (Access Now, 2018). There is additionally a need for a principle of "remedy for automated decision" that is "fundamentally a recognition that as AI technology is deployed in increasingly critical contexts, its decisions will have real consequences, and that remedies should be available just as they are for the consequences of human actions" (Fjeld et al., 2020:33).
Examples and Articles
Society-in-the-Loop: Programming the Algorithmic Social Contract
"I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems."