AI4People


Nov 03, 2021



Summary

This report starts from the premise that "An ethical framework for AI must be designed to maximise these opportunities and minimise the related risks" (Floridi et al., 2018:7). The opportunities and risks are grounded in what the authors consider to be the "four fundamental points in the understanding of human dignity and flourishing: who we can become (autonomous self-realisation); what we can do (human agency); what we can achieve (individual and societal capabilities); and how we can interact with each other and the world (societal cohesion)" (ibid.:9). The recommendations are further informed by a study of the commonalities and differences among previous statements of AI ethics (ibid.:15).
As a model, the authors look to bioethics, arguing that "Bioethics is the one that most closely resembles digital ethics in dealing ecologically with new forms of agents, patients, and environments" (Floridi, 2013:62-63). The AI4People principles are derived explicitly from the four principles first articulated by Beauchamp and Childress in 1979 (see Beauchamp & Childress, 2012; The Ethics Center, 2017): beneficence, non-maleficence, autonomy, and justice. To these the authors add a fifth principle, "explicability, understood as incorporating both intelligibility and accountability" (Floridi et al., 2018:16).
In the precise statement of these principles, however, the surveyed documents differ significantly. For example, it is unclear whether 'the common good' is included in the principle of beneficence, and it is not clear what the "upper limits on future AI capabilities" should be. Additionally, should AI actively promote social justice, or merely be developed in a manner consistent with the principles of social justice?
A diagram mapping these principles, "Principled AI," is available at https://dash.harvard.edu/bitstream/handle/1/42160420/HLS%20White%20Paper%20Final_v3.pdf?sequence=1&isAllowed=y (Fjeld et al., 2020).
Floridi et al. (2018) base their comparison on six sets of principles:
1. The Asilomar AI Principles, developed under the auspices of the Future of Life Institute in collaboration with attendees of the high-level Asilomar conference of January 2017 (hereafter "Asilomar"; Asilomar AI Principles 2017);
2. The Montreal Declaration for Responsible AI, developed under the auspices of the University of Montreal, following the Forum on the Socially Responsible Development of AI of November 2017 (hereafter "Montreal"; Montreal Declaration 2017);
3. The General Principles offered in the second version of Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, a crowd-sourced global treatise that received contributions from 250 global thought leaders to develop principles and recommendations for the ethical development and design of autonomous and intelligent systems, published in December 2017 (hereafter "IEEE"; IEEE 2017);
4. The Ethical Principles offered in the Statement on Artificial Intelligence, Robotics and 'Autonomous' Systems, published by the European Commission's European Group on Ethics in Science and New Technologies in March 2018 (hereafter "EGE"; EGE 2018);
5. The "five overarching principles for an AI code" offered in the UK House of Lords Artificial Intelligence Committee's report, AI in the UK: Ready, Willing and Able?, published in April 2018 (hereafter "AIUK"; House of Lords 2018);
6. The Tenets of the Partnership on AI, a multistakeholder organisation consisting of academics, researchers, civil society organisations, companies building and utilising AI technology, and other groups (hereafter "the Partnership"; Partnership on AI 2018).
