Unedited transcript from audio by Google Recorder
Hi and welcome to another edition of Ethics, Analytics and the Duty of Care. I'm Stephen Downes, and I'm just going to copy the video URL into the activity centre. Again, I can never get this URL until I have actually started the video, something that is very annoying to me and always causes a little delay at the start of these live presentations. But I've done that and I'm saving the page now.
And so anyone who reloads or accesses the page right after this moment will be able to see this presentation starting up on time. So this talk is a part of module four of Ethics, Analytics and the Duty of Care, and in this module we're talking about ethical codes generally.
We've looked at some of the overall properties of the ethical codes. I am not going to do a video on all of the ethical codes; so far I've looked at 73 of them, there will probably be more added over time, and that sort of presentation wouldn't be particularly useful.
So what I am presenting in this module after the original overview which I already gave a couple of days ago, is a look at some of the features that these codes have in common. In this particular video, we'll be looking at some of the ethical issues that underlie these codes.
And in future videos, we'll be looking at some of the values that underlie these codes, the duties or obligations, and we'll also be looking at who these codes are intended to be applied to, and who these codes consider as their clients or as their subjects: the people to whom we have ethical duties or responsibilities.
So, as I say, for this one we're going to be looking at ethical codes and ethical issues. Specifically, I'm going to run through a number of these ethical issues. Again, we're thinking of these from the perspective of teaching and learning, learning analytics, and AI in particular, but of course there are broader implications for all of these.
So let's start our rundown. And I want to distinguish what we're talking about here from what we talked about a few days ago, well, last week, although there is obviously an overlap. Previously, we looked at a range of ethical issues: surveillance, tracking, anonymity; it's a very detailed list.
And one of the activities in this course is to have people try to look at these ethical codes with respect to these issues. So here's a page representing one of the ethical codes, the Association for Computing Machinery code of ethics, and there's a link right here that says 'graph issue'.
Once we're into this (I'll just make this bigger for the purposes of our video), here is the box representing the ethical code in question, and here are boxes representing all of these ethical issues. The activity is to draw a line from one to the other. So if you believe that this ethical code addresses the ethical issue of surveillance, you would draw the line.
Similarly with tracking, similarly with anonymity. If you do not believe this code addresses the issue of, say, facial recognition, then you would skip that line and move on to the next issue where you think this code does address it. Once you've done that, right-click and then click on the export option, and this will be saved.
You'll see here the actual associations that you drew; just click OK, and we'll come back to the task in question. So that's the assignment; a rough sketch of what such an export might look like appears below.
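A minimal sketch of how the exported associations might be represented, assuming a simple JSON structure with hypothetical field names (the graphing tool's actual export format may differ):

```python
import json

# Hypothetical representation of one completed "graph issue" exercise:
# a single ethical code linked to the issues the reader believes it addresses.
export = {
    "code": "ACM Code of Ethics",  # the ethical code being examined
    "issues": ["surveillance", "tracking", "anonymity"],  # issues judged to be addressed
}

# Save the associations, roughly what the tool's export option would produce.
with open("acm_code_issues.json", "w") as f:
    json.dump(export, f, indent=2)

# Reading the file back makes it easy to compare several readers' judgements
# of the same code, or to aggregate judgements across many codes.
with open("acm_code_issues.json") as f:
    print(json.load(f)["issues"])
```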
The sorts of issues that we're looking at today might be thought of as a subset of those issues, or they might be thought of as a superset, in other words, categories of those issues. But they were arrived at through a different process: by looking at the actual codes of ethics and trying to extract by inference what ethical issues were being addressed. So it's not an exact match, and that's going to be the nature of this discipline.
Perhaps we can come to an overall understanding of what ethical issues these ethical codes address, but it may take more work with that graphing application in order to do so. So let's look at the first of these, and you'll notice that this ethical issue doesn't even appear in our long list of ethical issues.
That long list of ethical issues is derived from a fairly comprehensive reading of articles and papers on ethics in analytics generally, and the principle of doing good isn't necessarily explicit there. But many of the ethical codes that we studied here for module four do make reference to the specific good that can be done by the discipline in question.
Now, the disciplines that these codes cover include things like journalism, health care, psychology, business, accounting, etc., not just artificial intelligence and analytics. That was very deliberate, because other analyses of codes of ethics that I've looked at focus specifically on this domain, but different domains look at ethics differently, and I want to raise the question of whether there are questions of ethics in other domains that ought to be raised in this domain as well. In any case, the good that can be done is something that shows up in many of these codes of ethics. The UK data ethics code, for example, expresses an intention to maximize the value of data.
The Sorbonne Declaration points to the benefit to society and economic development that accrues as a result of data research. The Open University asserts that the purpose of collecting data should be to identify ways of effectively supporting students to achieve their declared study goals. So you can see here that there's a clear sense of benefit that is required, but the sense of benefit that is required is not always interpreted in the same way by different people.
Another ethical issue that comes up a lot in statements of academic ethics especially, but also professional ethics, is academic or professional freedom. In some cases it is not merely considered to be a good but actually expresses itself as an obligation on the part of academics or professionals: it is necessary for them to promote the concept of academic or professional freedom, and to refrain from actions or agreements that would infringe on academic or professional freedom. More broadly, this sort of freedom is not limited to academics.
It includes things like doctors and journalists and psychologists. How it's defined varies a little bit, but essentially it boils down to the idea that the professional needs a certain scope of freedom without consequence in order to instantiate the values of that profession. For example, a medical practitioner needs to be able to base their decisions about treatment on medical considerations and not be infringed upon by external, say political, considerations.
In the case of academic freedom, the principle is that the academic should be able to research and express points of view without having to worry about losing their position as a consequence of those views. Now, like any freedom, none of this is absolute. We've seen many cases over the last few years, and indeed probably throughout history, of academics being removed from their positions because of some of the positions that they take.
But overall, if you look at the diagram here, which comes from a research study on academic freedom over the last hundred and twenty years, it has increased quite a bit; it began to increase significantly with the end of the Cold War in 1989. Looking at it globally, academic freedom really declined during the Second World War, and also went into a general decline from the 1970s through to the end of the Cold War.
Today, academic freedom is fairly high around the world. There have been concerns about it recently being infringed upon again, though this isn't universally true around the world; in some places it's being infringed upon more than in others, and that's what the little map there shows. Another fundamental ethical issue being addressed by these codes is the question of conflict of interest. Conflict of interest is the idea that a person would use their position to personally benefit, whether directly through the offer of gifts or through other means. It is expressly prohibited by many, but not all, codes of ethics. Conflict of interest can involve things like the sale of sensitive information, external employment, insider trading, biased supervision, close relationships and nepotism,
the personal use of corporate or company assets, gifts, bribes, commissions, etc. I think an interesting question, and one we should ask, is what counts as a benefit from the perspective of conflict of interest. Other codes, when they address conflict of interest, are less focused on the benefit being received and more on the integrity of the profession.
We see this in professions like journalism where, as one code of ethics states, professional integrity is the cornerstone of a journalist's credibility, and here conflict of interest extends even to the idea of maintaining independence, being above the fray. For example, many journalists make a point of not being a member of any political party, or of any organization promoting a particular point of view,
even sometimes to the point of not voting. Similar restrictions, we'll call them that, don't seem to apply to other professions, but there is this sense in which the professions are expected to maintain a neutrality over and above day-to-day issues, politics, world events and the like.
Scientists, for example, assert that research and development is a global enterprise and not something that is characteristic of one or another nation. On the other hand, nationalism is something that certainly thrives in science as well, so it's not one hundred percent here. Another principle, and one that many people are familiar with, is the question of harm. Many codes explicitly state that professionals covered by the code should do no harm. The origin of this of course goes back to the Hippocratic Oath, although interestingly, many codes of ethics trace the origin of this back to the Nuremberg Code, which arose from the harm created in unethical experiments on humans.
And so the principle of ethics derived from that is that this harm should be avoided. Often in these principles, though, the nature of harm is very loosely defined. The question of whether harm has happened might be applied directly to clients or subjects, but some codes also consider the effect of downstream harm. For example, if you're doing data collection, there is the question of whether the data being collected immediately harms the person in question, but subsequent uses of that data, or subsequent uses of that research over time, might harm other people as well. Harm is not necessarily limited to physical harm; things like discrimination and human rights violations are often cited as sources of harm. Some codes describe what will not be considered as harm, and you can see the need for this in, for example, medical research, where harm might sometimes be caused. We'll be talking a little bit about that in terms of some of the core values underlying ethical codes.
Another aspect of the question of harm as an issue is the consideration of risk versus benefit. There are actions that could harm a person or a group of people; however, these actions might benefit a larger group of people, or even society as a whole. People often talk about balancing the risk versus the benefit, and so the issue that arises is: on what basis do you conduct this balancing? How do you weigh the risk to an individual or to a group of people as opposed to the benefits to the larger society? I think that many people, admittedly not all, would say that the benefit to society never outweighs the harm caused by killing a person; other societies will limit the definition of this to the harm caused by the killing of an innocent person, for example, or the harm caused by the killing of a child. The risk versus benefit sort of question really brings out issues in the discussion of harm and the idea of doing no harm. Quality and standards is something discussed by numerous codes of ethics. Quality and standards are often defined in different ways; the diagram illustrates some of the aspects of quality and standards, and it's kind of ironic because, in a discussion of quality and standards,
if you look closely at the document, it's really not a very good document, or if you look at the diagram, it's really not a very good diagram: there's pixelation around the text, and the resolution isn't that great, so the circle has little bumps. Anyhow, there are aspects of quality and standards ranging from customer focus, evidence-based decision making, continuous improvement, engagement of people, etc. The International Standards Organization provides a number of definitions of quality and standards in different domains, and then of course there are many approaches, like, say, Total Quality Management or Six Sigma, intended to raise that as a value. Quality and standards, then, are defined in different ways by different people. Sometimes quality and standards are defined in terms of competence, and when that's the case, you see the ethical principles talk in terms of stewardship and excellence. In other cases, quality and standards are described in terms of qualifications, and the principle might create a requirement to prevent unauthorized practice of a discipline.
For example, preventing the unauthorized practice of medicine, or preventing teaching by unqualified teachers, etc. Additionally, quality and standards might be described in terms of exemplary behaviours such as research integrity, scientific rigour, recognition of sources, etc. So in any profession there is typically a long discussion about quality and standards. It's certainly an issue that comes up a lot.
It's not clear that it's an issue that has been resolved to anyone's satisfaction, although the standards bodies do attempt to reach a consensus on these sorts of issues. Now, finally, we can ask: what are the limits? A lot of the ethical issues that arise, especially in the field of artificial intelligence and analytics, are built around what the limits of the technology should be, and we see some examples of that. For example, IBM said it would cease work on general facial recognition technology; they did that last year, and we'll see if that holds up. There have been other cases where companies have declined to continue to pursue research in a certain area. OpenAI, when it developed GPT-3, said originally that it was so powerful that it really shouldn't be released to the public.
And then, of course, a few months later they released it to the public. There's the standard stated in the Asilomar principles to create not undirected intelligence, not general intelligence in other words, but beneficial intelligence. So one of the limits is that whatever is being developed should be developed for the good of, well, the good of someone: of society, of the person who has it. That's often left vague. There are also cases where individual researchers, and sometimes companies, will refuse to work on military or intelligence applications. This is often cited as a reason for not working in China, where it has to do with intelligence applications.
But also, we had researchers at Google saying that they did not want Google to participate in a military intelligence program. Then there are limits that are based on things like scientific merit and research needs; the research ethics board that I belong to has a requirement that the researcher be able to show that there is legitimate scientific merit to the work that they're doing. Finally, we ask: is all of this enough? Does the list of issues described in this presentation constitute all of the issues that come up when thinking about ethics, analytics and AI? Does this list, in other words, comprehend all the issues that were raised in the previous chapter? It's not clear that it does, although it's hard to say where it doesn't cover everything.
We look at the individual issues: the good that can be done, academic or professional freedom, conflict of interest, harm, quality and standards, and the limits of the research. And it's hard to say what other issues fall outside that. I mean, we can think of issues like slavery, for example.
That's certainly an ethical issue. Does it fall under any of these categories? Well, arguably it falls under harm; perhaps it falls under conflict of interest, depending on your views about graduate student employment; and perhaps it also falls under the heading of where the limits are. So, you know, again, it's hard to pick out an ethical issue and say whether it falls under this categorization, but this categorization was obtained by a study of these ethical codes.
So it can be stated that if it's not covered by these categories of issues, it's not covered by the ethical codes. But it's also important to note, first of all, that no code, not one of all of those surveyed, was designed to address all of these purposes, all of these issues. Different codes are intended for different things.
Some codes are intended to prevent harm, other codes are intended to promote things like professional freedom, others are intended to promote good, but no code addresses all of them.
Neither was any one of these purposes addressed by all of the codes. So: no code of those surveyed was designed to meet all of these purposes, and none of these individual purposes was specifically addressed by all of the codes. We don't have an 'all or only' situation; we can't point to a code and say this code covered everything, because none of them does, and we can't point to an issue and say this issue was covered by every code, because none of the issues is.
So right off the bat these codes are talking about different things, and that makes it very difficult to find a sense of unanimity when you're actually talking about different things. So that's it for the ethical issues, at least for the purposes of this video. Next, we'll be talking about the core values and priorities that underlie the actual recommendations made by these different codes of ethics.
So, in this video, we talked about why people were creating these codes and what sorts of things they are seeking to address. The values part basically talks about how they go about addressing these, and that'll be the subject of the next video. So we'll keep this short and finish here.
This is Ethics, Analytics and the Duty of Care, and once again, I'm Stephen Downes.