Unedited audio transcription by Google Recorder
Hello and welcome to another video in the course Ethics, Analytics, and the Duty of Care. We're still in module four, which is the study of ethical codes related to analytics and AI, but also related to other professions and other disciplinary groups. And of course, the objective of studying these ethical codes is to look at what they say and how they arrive at their ethical conclusions.
In this video, we're looking specifically at what the bases are for the values and principles listed in these codes of ethics. That is to say, what we're after here is an understanding of what grounds these codes of ethics: on what basis their authors assert that this code of ethics, rather than some other code of ethics, is the code of ethics to follow. I should point out that in many cases the codes of ethics don't offer any grounds at all, but where they do, they offer one or more of the types of bases that we'll be looking at in this video.
So we're going to run through, as in the format of many of the previous videos, a number of the different types. It's interesting: as we look at these bases, we might read an explanation, for example, something like "an individual's professional obligations are derived from the profession and its code, tradition, society's expectations, contracts, laws, and rules of ordinary morality."
But when we look at this more closely, we find that this explanation or description raises as many questions as it answers. So we're going to run through these one at a time and see what those questions are. Let's begin with the principle of universality. What we mean here is that the authors justify the code by asserting that the principles embodied in it are universal principles.
That is to say, they are held by everyone. And arguably, if a principle is believed by everyone, then it should be believed here, in this particular code of ethics. For example, the Universal Declaration of Ethical Principles for Psychologists asserts that its principles are, quote, "based on shared human values", and later asserts that "respect for the dignity of persons is the most fundamental and universally found ethical principle across geographical and cultural boundaries, and across professional disciplines."
So this is a pretty clear example of a case where universality is being asserted as the foundation for an underlying set of principles. The Asilomar Convention also states, for example, "virtually all modern societies have strong traditions for protecting individuals in their interactions with large organizations. Norms of individual consent, privacy, and autonomy, for example, must be more vigilantly protected as the environments in which their holders reside are transformed by technology."
So again, we see a case where universality is offered as justification for a moral or ethical principle. Now, as I suggested in the previous video, the assertion that there is such a consensus is, I think, a bit misleading.
When we zero in more specifically, and look in more detail at what is meant by, say, accountability, we find that the consensus breaks down. While a large number of people may say accountability is a universal principle, what accountability actually means is something that varies from place to place and from discipline to discipline.
And this isn't just my finding. Other researchers, Maxwell and Schwimmer, for example, find that analysis "did not reveal an overlapping consensus" on teachers' ethical obligations. Campbell writes that despite extensive research on the ethical dimensions of teaching, scholars in the field do not appear to be any closer to agreement on, quote, "the moral essence of teacher professionalism."
And similarly, it has been argued that the teaching profession has failed to unite around any agreed set of fundamental values which it might serve. Then Newland and Kendall report that "the model used for the codes varies greatly from country to country." So I think that although universality may be appealed to as justification for these codes.
It doesn't succeed. Another justification that we see referenced a lot is an appeal to fundamental rights, or, as we might say, an appeal to natural rights, or perhaps natural law. The diagram here is John Finnis's theory of natural law in moral reasoning. As you can see from the diagram, we begin with a description of reality in some way, for example, the basic goods or the requirements of practical reason.
And from that we derive normative statements, that is to say, statements that instruct us in the principles of ethics, or, as the diagram says, "morally valid laws". This is an approach that a number of groups have taken. For example, the High-Level Expert Group on Artificial Intelligence in Europe cites four ethical principles, quote, "rooted in fundamental rights, which must be respected in order to ensure that AI systems are developed, deployed and used in a trustworthy manner."
The Toronto Declaration also argues for, or focuses on, the obligation to prevent machine learning systems from discriminating and, in some cases, violating existing human rights law. Access Now specifically adopts a human rights framework: "the use of international human rights law and its well-developed standards and institutions to examine artificial intelligence systems can contribute to the conversations already happening, and provide a universal vocabulary and forums established to address power differentials."
We see there is an overlap here between universality and natural rights, and that makes sense, because if we think that rights are natural or fundamental, it stands to reason that they would also be universal.
But there is a bit of a distinction here in the way this is argued; sometimes natural rights can exist as a result of human activity, for example, the previous conversations that were already happening. Nonetheless, it's not clear what these fundamental rights are, and different efforts to list and describe these fundamental rights describe them differently.
We have documents such as the United States Bill of Rights, the Canadian Charter of Rights and Freedoms, and the United Nations Universal Declaration of Human Rights, for example, which are all very different from each other. Is there, for example, a natural right to bear arms? Is the right to an education, as found in the UN Universal Declaration of Human Rights, a natural right? It seems that further argument would be required; these natural rights don't just reveal themselves to us.
Ethical arguments in these codes of ethics often argue from a grounding in fact, and there are two ways in which this can come up.
One is that there is a fact, which might be a law of nature or a description of a state of affairs, from which an ethical principle is derived. Alternatively, sometimes ethical principles are simply asserted to be facts. Either way, the determination of fact is used as a fundamental argument for the ethical principle in question.
Now, there are some objections that can be made to this sort of argument as well. One is what is sometimes known as the is-ought problem, which has its origins in David Hume. Very roughly stated, it says something like: you cannot derive an "ought" from an "is". That is to say, the state of affairs in the world, however it may be, does not tell us in and of itself what is right and what is wrong. Now, Hume doesn't say exactly that; he says that facts about the world need to be considered in context. They need to be observed, explained, and supported with reference to goals or requirements.
So there's a lot of argument around that. Nonetheless, it's not clear that you can point to a state of affairs in the world, for example, what is natural for a human, and derive a moral principle out of that. Another consideration is that, as a matter of observation, facts do not really lead people to moral values.
To quote a study (you can see the diagram): while facts are raised a lot of the time, "personal experiences bridge moral divides far better than facts". And that's an experience we see not just in questions of ethics, but in questions of the relation of reason and rationality to individual decision-making generally. There are many cases in which facts do not convince people, do not sway their opinions.
And this may be true not only in ethics, but in politics, in personal conduct, in preferences, and more. So it is, ironically, a fact that fact does not inform morality. Very frequently in these ethical codes we see reference to something like balancing risks and benefits. The AI4People declaration makes that explicit: quote, "an ethical framework must be designed to maximize these opportunities"
that is, these opportunities from AI, "and minimize their related risks." There are many cases like this. The Concordat Working Group discusses, in its document on open data, the need to manage access, quote, "in order to maintain confidentiality, protect individuals' privacy, respect consent terms, as well as managing security and other risks."
So here we're balancing between the benefits of openness and all the risks that are involved. The balancing of risks and benefits is a broadly consequentialist approach to ethics, and we'll be talking more about that in the next module. But for here, it's relevant to say that it results in a different calculation for each application.
Each time, you're looking at a specific balancing of risks and benefits, and these risks and benefits show up in different ways and have different values. If we look at the risk and benefit map illustrated on the slide, we can see immediately that there are two important dimensions that must be considered for each.
First of all, the likelihood of the risk or the benefit: it may be very unlikely or it may be very likely, and that's part of the calculation. Then, as well, we need to take into account the severity of the risk and the significance of the benefit. So a risk that is very likely and very severe is hard to trade off against a benefit that is not very likely and not very significant. We need this mapping of what the risks and the benefits actually are, and that means we need to know what is likely to happen if we implement AI and analytics in this way.
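As a sketch of how such a two-dimensional map turns into a calculation, here is a minimal expected-value version of it. The functions and all the numbers are invented for illustration; real risk assessments are far less tidy than this.

```python
# A minimal sketch of the risk/benefit map described above.
# Each item has a likelihood (0..1) and a magnitude (severity of a risk,
# or significance of a benefit); its expected weight is their product.
# All figures here are invented for illustration.

def expected_value(likelihood: float, magnitude: float) -> float:
    """Expected weight of a risk or benefit: likelihood times magnitude."""
    return likelihood * magnitude

def net_balance(risks, benefits):
    """Sum of expected benefits minus sum of expected risks."""
    return (sum(expected_value(l, m) for l, m in benefits)
            - sum(expected_value(l, m) for l, m in risks))

# Hypothetical example: one likely, severe risk against one unlikely,
# modest benefit -- the kind of trade-off the text calls hard to make.
risks = [(0.8, 9.0)]      # (likelihood, severity)
benefits = [(0.2, 3.0)]   # (likelihood, significance)

print(net_balance(risks, benefits))  # negative: the risk dominates
```

The point of the sketch is only that both dimensions matter: shrinking either the likelihood or the severity of the risk can flip the sign of the balance.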
And that's not always able to be determined; as Rumsfeld says, there are unknown unknowns. If we look at the House of Lords Select Committee on AI, which recommends a consequentialist approach, its 2018 document states that there is a need "to be realistic about the public's ability to understand in detail how the technology works", and that it's better to focus on the consequences of AI rather than on the way it works, and to make that the way individuals are able to exercise their rights. But this might be unrealistic: if people don't understand how AI works, it seems hard to see how they can understand what the consequences will be.
It's probable that the understanding of the consequences will be determined as much by marketing as by whatever actual projections of risk and benefit could be obtained. Nonetheless, these factors are important. That's why we began this course with a look at the applications, that is, a detailed drawing out of what the benefits are, and then a look at the risks, a detailed drawing out of what the issues are. Now, we didn't consider what the likelihood of each of these was, because that was far beyond the ability of this course. The standard we used was simply: does the benefit exist? Does the risk exist? Actually performing the calculation
might be humanly impossible, although possibly an artificial intelligence could do it. Finally, perhaps ethics isn't actually a case of balancing competing interests at all. Economics might be; politics might be; but ethics strikes us as something different from that. What we're after is something that works for everybody. We depict a lot of these ethical issues as competing interests, but perhaps what we want to do is find what works for both sides. The Information and Privacy Commissioner of Ontario takes this approach, asserting that, quote, "a positive-sum approach to designing a regulatory framework governing state surveillance can avoid false economies and unnecessary trade-offs, demonstrating that it is indeed possible to have both public safety and personal privacy. We can and must have both effective law enforcement and rigorous privacy protections."
And that sounds more like an approach based in the ethics of the situation than like a calculation and weighing of consequences. Another argument that comes up fairly frequently is that a certain stance on ethics exists as a requirement of the profession.
For example, again we come back to the Universal Declaration of Ethical Principles for Psychologists, which states that competent caring for the well-being of persons and peoples "involves working for their benefit and, above all, doing no harm. It requires the application of knowledge and skills that are appropriate for the nature of the situation as well as the social and cultural context."
So this is basically a derivation of ethical principles, which are depicted as a requirement for what somebody needs to believe, from an ethical perspective, in order to accomplish a certain objective or goal. The objectives or goals might be healing people, supporting them when they're on welfare, attending to their psychological needs, or teaching them. All of these professions have a certain objective or goal, and in order to achieve that goal, certain attitudes and beliefs may be required. And so the statement of ethics is a listing of these attitudes and beliefs that may be required.
We see, for example, arguments like the psychologists' assertion that "integrity is vital to the advancement of scientific knowledge and to the maintenance of public confidence in the discipline of psychology". And we see integrity itself being based on honesty, and on truthful, open, and accurate communication.
So we can back our way up through the requirements of the profession. If we look at the diagram on the slide, we see that what this does is place a code of ethics, and presumably ethics generally, within the context of a wider model of a profession. Here we have a model of an IT profession from the Computer Society, and we see the standards of ethical practice, the mechanisms for self-governance and consensus, and these define professional advancement in turn.
We also have mechanisms for professional development, studying and applying the knowledge, as well as the preparatory education where we acquire the body of knowledge: curriculum, accreditation, degrees, certifications, or licensing. All of these together constitute the profession, and they don't all flow from the code of ethics. Rather, there's a relation between these elements, where the goals, the objectives, and the training flow back into the ethics, and the ethics inform the training.
It's kind of a symbiotic relationship. The principles in this model may be expressed in two ways. First of all, a principle might be derived, that is to say, it's a consequence of an already defined ethical principle. For example, competent caring for the well-being of persons and peoples is one of the requirements of the profession, but working for the benefit of the people you're serving has previously been established. So you see, we have working for their benefit, and from that follows competent caring. And we can trace back similar requirements: we saw the principle of integrity established on the previously established values of honesty, openness, and accuracy.
The second way a principle can be established is conditionally, and we see this expressed in a number of these codes of ethics. What that means is that the ethical principle, in its relation to the profession, is described as a conditional statement, something like this: if you wish to be a member of this profession, then you need to adhere to the following principles.
So, as you can see, it's a conditional statement. And so, for example, if one is engaged in the activity of competent caring for the well-being of people, then this requires working for their benefit. Against such assertions, arguably, several objections may be brought forward. First, you can say that the requirement doesn't actually follow. For example, you might argue that in order to be a competent psychologist you do not need to be honest and open; sometimes deception is required. You could argue, perhaps, that competent caring does not require working for the person's benefit; it might actually require distancing yourself from the idea of the benefit of the person and simply following the appropriate practices and procedures.
You might also say that the antecedent hasn't been established, that it is not actually a property of the profession. For example, we might say that being a psychologist doesn't involve caring at all. In fact, I remember listening to NPR a number of months ago, while I was backpacking last summer, to a discussion of how the best psychologist might be one who is psychopathic, who is actually incapable of caring for the patient and therefore immune to the bias that might be created by caring for them. A criminal psychologist might take this stance, for example.
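The conditional structure just described, and the two lines of objection against it, can be set out schematically. The lettering here is my own shorthand, not anything taken from the codes themselves:

```latex
\begin{aligned}
&\text{Claim:} && P \rightarrow R
  &&\text{($P$: one competently practices the profession; $R$: one adheres to the principle)}\\
&\text{Objection 1:} && \neg(P \rightarrow R)
  &&\text{(deny the inference: competent practice does not in fact require the principle)}\\
&\text{Objection 2:} && \neg P
  &&\text{(deny the antecedent: the stated activity is not actually part of the profession)}
\end{aligned}
```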
Another principle commonly appealed to as a justification for an ethical code is the social good, or the social order. We see this most clearly in journalistic ethics, which states, for example, that the primary function of journalism is to inform the public and serve the truth, because, as the Society of Professional Journalists says, "public enlightenment is the forerunner of justice and the foundation of democracy."
Similarly, we may see additional principles brought forward to the effect that if we perform this profession properly, then society as a whole benefits, or perhaps that society as a whole benefits directly from the practice of these ethical virtues. We might see that, for example, in a teacher code of ethics, where the teacher serves as a model for the student, and therefore they're not teaching ethics in particular, but the way they conduct themselves ethically is directly reflected in the way society conducts itself ethically.
An argument from social good or social order, however, invites relativism. People's judgments are relative; people's support is highly context-driven. People consider acceptability, in order to preserve the social good or the social order, on a case-by-case basis. Drew writes that they are "first thinking about overall policy goals, likely intended outcome, and then weighing up privacy and unintended consequences."
The relativism is clear from statements like this: better that a few innocent people are a bit cross at being stopped than a terrorist incident, because lives are at risk. And often this relativism reflects the society in question's own interests; very often, social order is construed specifically in terms of national interests, and therefore without thinking about, say, a global social order, or even the community's social order, at all. We see policies in countries all around the world, like the one from the Office of Management and Budget in the United States, which seeks, as ethical principles, to support "the US approach on free markets, federalism and good regulatory practices", which, it says, has led to "a robust innovation ecosystem".
So here the social order is being defined in a very specific way, but it's not clear whether it is the social order as defined by Americans, or by the Chinese, or by Brazilians, that provides the ethical basis necessary for a code of ethics.
We also see fairness appealed to frequently, often with no support or justification at all; the ethics of a profession is based in fairness, full stop. The New York Times, for example, in its own code, says that it wants to treat its readers "as fairly and openly as possible", and also that it treats news sources "just as fairly and openly as it treats readers".
Now, we could argue about whether it's successful in this, but what seems indisputable is that it is making an appeal to fairness as a justification for an ethical code. The problem is: what is fairness? On the slide here I've listed four possible ways of describing fairness, and this is not a complete list, I am quite sure.
One way we can think of fairness is as objectivity, freedom from any whiff of bias; arguably, however, fairness might involve advocacy. Fairness to others is also seen as something that is non-arbitrary, citing, as in the original codes of Solon, the idea that the same principle or law or rule is applied to all equally, the idea that nobody is above the law.
Nobody is about the law. Another definition of fairness might be based in rights, where something is fair. If and only if it leaves people free from abuse and infringement of their rights yet, another definition of fairness talks about equitable and non-discriminatory practices. I was going to put in that little diagram, that shows the difference between equal and equitable but it's been so overused instead.
I put in a document from the linked team, fairness toolkit which I just recently saw talking about how to measure fairness and large scale, AI applications. And and here we see that actually thinking about what constitutes fairness in a complex discipline, like analytics in AI is far from straightforward.
What does it mean to be objective? Non-ember, ar, right? Or recordable. In the context of AI in analytics, there are ways of defining data classes, there are ways of defining algorithms, computer models, permutations different principles of regression, customization, etc, that can all have an impact on what we think is fairness.
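To make the point concrete, here is a toy comparison of two common statistical fairness criteria, demographic parity and equal opportunity, on a small invented dataset. The data, the group names, and the functions are all made up for illustration; this is not the LinkedIn toolkit's API.

```python
# A toy illustration (invented data) of why "fairness" needs a definition:
# two common statistical criteria give different pictures of the same
# predictions. Records are (group, actual_label, predicted_label).
data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 0),
]

def positive_rate(group):
    """Demographic parity compares the share of each group predicted positive."""
    rows = [r for r in data if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def true_positive_rate(group):
    """Equal opportunity compares the share of actual positives predicted positive."""
    rows = [r for r in data if r[0] == group and r[1] == 1]
    return sum(r[2] for r in rows) / len(rows)

# Predicted-positive rates differ 0.75 vs 0.25, while true-positive rates
# differ 1.0 vs 0.5: the two criteria locate the unfairness differently,
# and satisfying one does not satisfy the other.
print(positive_rate("A"), positive_rate("B"))
print(true_positive_rate("A"), true_positive_rate("B"))
```

It is known, more generally, that several of these criteria cannot all be satisfied at once except in degenerate cases, which is part of why "be fair" underdetermines a code of ethics.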
So it's not clear that fairness, without further explanation, can serve as the basis for a code of ethics. Epistemology is another principle that is frequently cited, with the advancement of knowledge and learning being considered to be, in and of itself, a moral good. There are two major ways to think of this.
First, we might say that a value becomes a value if it supports knowledge and truth-seeking. A good example of this is honesty: one of the reasons why we want people to be honest is that it makes it possible to learn things, to know things, and to find out the truth.
Another way of thinking about it is to say that an ethical decision, which may or may not appeal to one of these moral principles, is ethical if, and sometimes only if, it is informed by knowledge and evidence. In other words, we use knowledge and evidence as the basis for our moral reasoning, if not as the basis for our moral principles.
Now, it's not clear that this works as a basis for ethical codes either. First of all, we can simply deny that knowledge and learning are moral goods: it's nice that people want to know things and learn things, but these are not in and of themselves ethical values.
We might say with Seneca, for example, that this desire to know more than is sufficient is a sort of intemperance; you can know, or want to know, too much. Or, in slogan form, today we say something like "curiosity killed the cat". Alternatively, we can say that some things are not meant to be known. It would arguably have been better had we not learned how to create atomic weapons; this would have been a piece of knowledge we were better off not knowing.
So more often we see the responses based in epistemology couched in very specific terms: not just knowledge in general, but some specific piece of knowledge. So knowledge related to advanced weapons, or to personal confidentiality, or to a host of other harms, is wrong, while other kinds of knowledge, like scientific principles, or even what the good is, are inherently good. But now we have not a value of epistemology underlying our moral code; we have some further value picking, somehow, between good knowledge and bad knowledge.
Another basis for moral codes that we see fairly frequently is trust, and as a result the elements of trust can themselves be cited as justification for moral principles. Again, we come back to the psychologists, who say "integrity is vital to the maintenance of public confidence in the discipline of psychology". For psychology to work, it requires trust, and for psychology to embody trust, it must adhere to a certain set of ethical principles.
Well, what are those principles? Here we have a trust model that is frequently used, which combines five major features of trust: credibility, respect, pride, camaraderie, and fairness. The argument here, then, would be that all five of these, as components of trust, justify treating trust as a virtue. But of course, these components of trust are also things that result from trust; fairness, for example, arguably requires trust, and so does camaraderie. So it's not the case that one of these things supports the other in a form of inference or moral reasoning, but rather that these things are all woven together into something a bit more amorphous. A lot of the time, it's a direct appeal to the reputation of the discipline that requires trust.
The New York Times asserts that "the reputation of the Times rests on such perceptions, and so do the professional reputations of its staff members". Here, public confidence is being represented as an aspect of trust, and we see that the authors are appealing to the principle of trust to support the assertion that integrity is a moral principle, although integrity might also be a component of trust.
So how does this work? Well, it could be argued that trust is neither good nor bad in itself. Arguably, and I've seen it argued, it would be better for certain professions to work on a trustless system rather than a trust-based system. Why might this be the case? Well, for one thing, trust is very fragile: it can be broken, and even if nobody is attempting to break it, it can be broken as a result of honest error, misperceptions, bad timing, any number of things. The moral superiority trustless systems have is that they are more reliable and more robust.
You might ask: well, how can you have a trustless system? This is the basis for technologies like cryptography and zero-knowledge proofs, and systems like blockchain. These are mechanisms where the relations between entities are completely defined by the technology, such that you don't need to take a leap of faith in order for the interaction to occur. Now, it's not clear that this is going to work in all disciplines.
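A hash commitment is about the smallest illustration of such a trustless mechanism: one party commits to a value without revealing it, and later anyone can check the revealed value against the commitment without having to trust the revealer. This sketch uses only Python's standard library; the sealed-bid example is invented.

```python
# A minimal sketch of a trustless mechanism: a hash commitment.
# Publish the commitment now; reveal the value and nonce later.
# Verification needs no trust, only recomputation.
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce). Publish the commitment, keep the nonce secret."""
    nonce = secrets.token_bytes(16)  # random salt so the value can't be guessed
    return hashlib.sha256(nonce + value).digest(), nonce

def verify(commitment: bytes, nonce: bytes, value: bytes) -> bool:
    """Anyone can check the reveal against the commitment; no leap of faith."""
    return hashlib.sha256(nonce + value).digest() == commitment

c, n = commit(b"my sealed bid: 100")
assert verify(c, n, b"my sealed bid: 100")      # honest reveal checks out
assert not verify(c, n, b"my sealed bid: 999")  # tampering is detected
```

The relation between the parties is fixed entirely by the mathematics of the hash function, which is the sense in which no trust is required.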
It's hard to imagine a trustless approach in psychology, or even a trustless approach in teaching and learning. But it might be the case that a trustless approach is the best approach when it comes to the ethics of artificial intelligence and analytics.
One more justification for an ethical code is the defensibility of a practice. What this means is that the code, the ethical value, or the ethical practice is virtuous if it's the sort of principle that you would be willing to defend, or, even more to the point, that you would be willing to defend if somebody else followed it and you were asked to defend that practice.
We see this a lot in professional associations, where one member needs to come forward to the defense of another. We also see this in academic environments, where we look to faculty associations or even university administrations to come to the defense of their professors and staff for certain actions. There are some actions these professors might undertake, like, for example, murder, which are probably not going to be defended by the university or the faculty association. On the other hand, if they exercise their freedom of speech, for example by acting as an expert witness in a trial,
this action is typically one that would be expected to be defended by the administration or the staff association. And this principle makes one think of Frank Ramsey's subjectivist interpretation of probability, where the probability of an event taking place is established by how much one would be willing to bet on the event taking place.
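Ramsey's idea can be sketched in a line or two of code; the function name and the dollar figures are invented for illustration. If the most you would pay for a ticket that pays out a fixed amount when the event occurs is some price, then the ratio of price to payout is your implied subjective probability:

```python
# Ramsey's betting interpretation, sketched: a subjective probability
# can be read off from betting behaviour. All numbers are invented.

def implied_probability(price: float, payout: float) -> float:
    """Probability implied by the maximum price you'd pay for a ticket
    paying `payout` if the event occurs and nothing otherwise."""
    return price / payout

# Willing to pay at most $80 for a ticket paying $100 -> probability 0.8.
print(implied_probability(80.0, 100.0))  # 0.8
```

The defensibility test works analogously, with reputation rather than money as the stake.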
This is a similar sort of thing: would you be willing to put your organization's reputation on the line in defense of this principle? So this has several aspects. One is related to the cost of such a defense: there might be a large moral or even financial cost to the defense, and that makes it less likely that someone is going to defend it. It might also relate to the work of one's predecessors: defending something that was a hard-won freedom, won by years of the association's work, is probably going to be more likely than defending something that is a relatively recent and less well-established principle. So here we have a type of argument for an ethical code which is almost definitively a relativistic approach.
It is based on the subjective preferences of the members of the profession, given the circumstances. It's also based on what society as a whole thinks of it, because that will have an impact on the cost or the difficulty of making such a defense. We see this, for example, when we're looking at the ethics of federal agencies, government agencies. So, for example, we might see them urged to "consider patient, provider and system burden in the evaluation of AI benefits and costs, and include data accuracy, validity, and reliability". All of these things together are brought forward to offer a statement in terms of the defensibility of a practice or a principle. And that leads us to a final consideration: what do we think we're doing with any of these arguments at all?
At the top of this presentation, I said that the ethical principles in ethical codes sometimes aren't argued for at all, and that's quite true. Sometimes they're taken as self-evident; sometimes they're just simply stated, and there's no statement at all about how true or not true they are. On the other hand, there is this idea of moral reasoning, and the idea of moral reasoning is that we want to have a process that allows us to come to correct moral decisions. So here we have, for example, from the United Kingdom's Statistics Authority, an ethics self-assessment for data management and data ethics, and it raises several questions for us.
One is the distinction between ethical values and ethical principles. With a checklist, how do you consider all of these things in a process? How do you go through a process of inference? There's another distinction here, between conforming to a standard, which is what a checklist would support, as opposed to creating one, which a checklist doesn't support at all.
There is also the distinction between consideration of ethical issues before drafting your code or conducting your practice, and rationalization, after the fact, of what you've been doing all along. And then, finally, in moral reasoning there are questions about the standards of evidence. What counts as a moral reason, and what forms of argument count? Is an inductive argument good enough, or does it have to be deductive? Would the Hegelian method of thesis, antithesis, and synthesis work as well?
It's not clear that there's only one method of moral reasoning, and therefore only one way to reach an output of your moral reasoning process. A really good example of that is counterfactual reasoning. A lot of moral reasoning is based on counterfactuals, because it's based on predicting consequences where something hasn't happened yet.
Counterfactual reasoning is notoriously difficult, and it's often based in the logic of modality: what could be the case, as opposed to what must be the case. And we bring in other modalities, like probability (what is likely to happen) and deontology (what should happen). And the question is: how do we say that something is most likely to happen, or even, how do we establish the truth of a counterfactual at all?
"If a train has no brakes, then it will probably crash." Now, that's a counterfactual statement. It's counterfactual because, in fact, all trains have brakes. Why? Because otherwise they would be dangerous, right? But how do we know that? We could appeal to a natural law or principle, but there are no natural laws or principles about brakeless trains; it's just too specific a case, and we can imagine cases where brakeless trains are not dangerous. If we look at the logic in the movie Snowpiercer, you don't want brakes on that train, because if it stops, everybody dies.
So how do you do this? Well, people like Stalnaker and David K. Lewis have developed a semantics of counterfactuals based on possible worlds. What you do is select the nearest possible world to our own and ask yourself what is true in that world. But that just pushes back the question, because what counts as the nearest possible world to our own? Presumably not the world of Snowpiercer, but maybe it is a world where trains only ever go ten miles an hour, and you jump off and onto them as they pass through the station, and thus they don't need brakes. So moral reasoning, because it involves all of these sorts of considerations, is an area fraught with difficulty.
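The Stalnaker-Lewis idea can be caricatured in a few lines of code: list some possible worlds, pick the nearest one where the antecedent holds, and read the consequent off that world. The worlds, the features, and especially the distance metric are all invented here, and the fact that nothing privileges this metric over another is exactly the objection just raised.

```python
# A caricature of the Stalnaker/Lewis semantics for
# "if the train had no brakes, it would crash".
# Worlds and the similarity metric are invented for illustration.

worlds = [
    # (name, has_brakes, speed_mph, crashes)
    ("actual",       True,  60, False),
    ("no_brakes",    False, 60, True),
    ("slow_shuttle", False, 10, False),  # Snowpiercer-ish: brakeless but safe
]

def distance(w, actual):
    """Toy similarity metric: count differing features. The difficulty is
    that nothing tells us this is the *right* metric."""
    return sum(a != b for a, b in zip(w[1:], actual[1:]))

def counterfactual(antecedent, consequent):
    """Evaluate the consequent in the nearest world where the antecedent holds."""
    actual = worlds[0]
    candidates = [w for w in worlds if antecedent(w)]
    nearest = min(candidates, key=lambda w: distance(w, actual))
    return consequent(nearest)

# Both brakeless worlds happen to be equally "near" under this metric;
# min() breaks the tie arbitrarily by list order, which is precisely the
# kind of arbitrariness the objection in the text points at.
print(counterfactual(lambda w: not w[1], lambda w: w[3]))
```

Reorder the worlds, or reweight the features, and the counterfactual flips truth value, so the semantics only pushes the question back to what "nearest" means.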
And we come back to whether we can just create a checklist, or just rationalize our existing process, or whether, by thinking about it, we can do something more. Maybe we're not seeking a universal consensus, because not everybody is going to be swayed by facts, and not everybody is going to be swayed by argument. But perhaps, the thinking goes, we can sit down, think about it as rational, reasonably well-informed people, and come up with principles of morality that will support moral reasoning generally, that will allow us to draw the sorts of conclusions that we want to draw, which lead us to our codes of ethics and our ethical practices generally.
So that's the segue to the next module. The next module is on moral principles, or moral theories generally: what people have thought about ethics through history. We'll be looking at some of the different major ethical theories. We'll look at meta-ethics, or the considerations that lead us to favor one approach or another in determining ethical theories. And we'll look at some of the discussion around all of these issues. So with that, we'll leave off module four and start getting ready for module five. I'm Stephen Downes. Thanks once again for joining me.