Unedited transcript from Google Recorder
Hi, everyone. It's Stephen Downes, and we're on the final module of Ethics, Analytics and the Duty of Care. I don't think this module will be as heavy as some of the others, but it's still going to be heavy, so we still have a bit of a ways to go: module eight of Ethics, Analytics and the Duty of Care, Ethical Practices in Learning Analytics.
I'm Stephen Downes, and I'm happy to have you with me. Let's talk about, first of all, where we already are. Oh oh, that's not right.
Okay, let's talk about, with the slides in the right order, where we already are in this course. As I say, this is module eight. We've had an introductory module and then six fairly substantial modules, which are listed here on this slide: applications, issues, codes, approaches, care, and decisions. So what have we learned?
You know, it's sort of funny: I think of all the weaknesses and the incompleteness of this course, and the things it could have done to be better, and I look at it, just, you know, as these six items, and it seems so small. But we really have covered a great deal of ground.
Unfortunately, not nearly to the depth I would have liked, but I say that now something like 60 hours of videos in, so maybe we've gone into it in enough depth after all. On the applications of AI, we learned that it's a lot more than just content recommendations, finding the right learning resource, or learning paths, or predictive analytics. Where artificial intelligence is going is a much more interesting and exciting domain, where it will be doing things like assessing resources, assessing people, writing learning resources on the fly, and providing a wide range of other supports. It's hard to overstate how much that has the potential to shake up the education industry, and we haven't really talked about that so much. But nonetheless, it also offers huge opportunities for individuals to be able to learn more effectively, and for governments and companies and institutions to be able to provide learning resources and learning support much more efficiently and effectively.
These advantages, though, are accompanied by a range of ethical issues. Probably the most important part of the second section was the way the issues were divided. The first division was between cases where the AI application doesn't work, where it fails, and cases where it does work. A lot of the objections to AI are based on improperly assembled and improperly applied AI and analytics. These are definitely issues and, you know, should be subject to some sort of framework to make sure that they don't happen. But the real issues in artificial intelligence come up when it's actually working as designed, because different people have different intentions for the use of these technologies.
And when they're applied, like pretty much any other tool, I think they can do a great deal of harm as well as provide a great deal of benefit. We also looked at some cases where ethical issues arise from the perspective of things that are just wrong to try to do with AI: the idea that some uses are inconceivable, that we just shouldn't be doing them. I listed a few there; we're going to come back to that theme in this module. And then finally, and I think most interestingly, we saw ethical issues that arise from the way artificial intelligence and analytics are actually making moral decisions for us. You know, we saw the application of AI where it's deciding for us what is right or wrong, and when it does that, it begins to raise a range of ethical issues. The next section was on ethical codes, and we looked at a large number of ethical codes.
And the conclusion here, despite what is asserted in a number of domains, is that there really is no consensus among people about ethical codes. You know, it's easy to say, well, there are some things that everybody agrees to, like, say, 'analytics should respect privacy,' but when you push on a statement like that, we find that we're not really saying that analytics should respect privacy the same way every time, and that applies across a number of other issues as well. I mean, there's privacy and there's privacy, right? There's personal privacy, there's professional privacy, there's institutional privacy. In some cases, sure, we want it protected; in other cases, we would rather know, especially if the person is breaking the law. And then there are all kinds of gray areas in between.
Ethical codes are also interesting in that they're defined by a profession. They're defined by the scope of what we're trying to do, and it's not clear that people who are not in a profession are going to be bound by anything like what's in an ethical code. That's important when we see that artificial intelligence and analytics are things that are going to be able to be used by everyone in society. Sure, we may have the profession of AI engineer, and they may be bound by a specific ethical code, just the way teachers might be, or nurses might be. Even if we don't agree on what it is, we can imagine them being bound by one. But that kind of restriction does not govern, you know, your teenage son, or Joe the politician, or Fred the marketer, right?
And so ethical codes are a fairly narrow approach for a fairly narrow range of problems; they are not going to address the whole issue of ethics in analytics. We also looked at the many ways in which these ethical codes are justified through ethical reasoning. And in the approaches to ethics section, we looked at four distinct approaches to ethics: virtue ethics, which is like an ethics of character; consequentialism, where we look at, you know, what the harms are that could be caused, or that are caused, or are intended to be caused; the ethics of duty, and the idea that each person should be thought of as an end and not a means, that they have fundamental rights, and that we're required to respect those rights; and then ethics from the perspective of social contract theory. Now, most treatments of ethics stop after the first three, and they don't usually think of social contract theory as an approach to ethics, but in the reading that I did for this course, and especially when I looked at things like actual ethical codes, we saw a lot of language that suggested people are thinking of an ethical framework very much in the way they think of a social contract.
And certainly, when we have codes of ethics for a profession, it's almost by definition a social contract for that profession. So I decided to include social contract theory, because it does underlie a lot of the intuitions that people have. The other thing we found, though, with approaches to ethics, is that none of these approaches is going to be sufficient. Certainly there's no agreement among them, right? It's not the case that everybody thinks virtue ethics is the way to go, or everybody thinks we should be consequentialists. In fact, there are large communities, and large sets of reasons, opposing each one of those four accounts of ethics. And they also have their blind spots: the consequentialist isn't looking so much at the intent; the deontologist isn't looking at the results of a right action; the virtue ethicist really has no advice to give on actual behaviors in actual situations; and the social contract theorist doesn't really have a good answer to dissent and disagreement. So we need something broader as well.
The second thing that is a problem with all four approaches to ethics is that they're all, in a way, universalist approaches, and they're all, in a way, based on reason and rationality. The idea here is that we can think our way through: we think about situations, we think about hypotheticals, maybe we draw on our knowledge of the world, and we think our way through to some kind of ethical theory that maybe we can put into an ethical code, that maybe everybody will agree on (but maybe not), and that will address all of the issues raised in the issues section, so that we can enjoy the benefits of the applications. But this is a really tall order. And for a variety of reasons, it's not clear that there are universal ethical principles. Ethics might actually be subjective. The application of ethics might actually be relative. And even if we can say, in a particular circumstance, that something is right and something is wrong, none of these statements is generalizable; there are too many considerations in an individual circumstance to allow us to craft a general principle out of that case.
And this is part of the answer to the intuition that forms the background for an approach based on the duty of care. It has its origins in feminist theory, but I think it's deeper than that, and it's not simply the idea that we should care for other people or something like that. Rather, it's an approach that defines ethics based on something almost like our ethical sentiment: our sense, our internal sense, of what's right and wrong. At some point during this final module I'm going to have to talk about that, and I'm going to have to draw out a little bit more what we mean by it. It's an idea that was brought up by David Hume, and I read just today that never has David Hume been more popular, and there are pretty good reasons for that. He advanced what could be described as an anti-rationalist, anti-reason-based argument about our knowledge of things, including cause, including necessary connections, and including ethics.
And I think there's a lot to be said for that. The ethics of care prescribes a way not just of practicing and behaving, but a way of actually seeing and understanding what constitutes ethics. It's a realization that we need to regard each instance as a separate, independent instance, not bound by universal laws or principles of ethics, and that we need to be open-minded about the many perspectives that may exist, and especially the perspective of the person who is being cared for. Or, as I'm inclined to put it perhaps more generally: to take into account first, and in many respects most importantly, the person or group of people who are most vulnerable in any given situation where questions of ethics arise. And I think there's a lot to be said about that.
So we took that, and we kept all of it in the back of our minds, as we went through what turned out to be a pretty detailed examination of all the decisions that are made in the process, in the practice, of developing and implementing and evaluating learning analytics or artificial intelligence applications. And, you know, we looked at the algorithms, we looked at the mathematics, and they are there, and they are daunting, not surprisingly. But the concepts are not concepts that can't be understood by people; it's the realization of those concepts that is really hard to understand. There are so many factors involved. Even in a simple perceptron, we have to define what the activation function will be. We have to define what the bias will be. We need in some way to create the input, and we need in some way to look at the output and ask ourselves: what does this mean for us? It raises questions of models and semantics and interpretation. It raises questions of how we label our data, how we evaluate for what we believe is a correct or useful result, how we test these systems and ensure compliance (and what the goals for compliance are), and then how we understand what the impact of our use of analytics and AI is, in the learning environment and in society in general. These are decisions that aren't clearly ethical decisions in any obvious sense, and they aren't really covered under any of the ethical codes or any of the approaches to ethics.
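To make those choices concrete, here is a minimal sketch of a perceptron in Python. This isn't code from the course; the step activation, the weights, the bias, and the example input are all illustrative assumptions. The point is just how many decisions somebody has to make even in the simplest case.

```python
# A minimal perceptron sketch. Every named choice here -- the activation
# function, the weights, the bias, how the input is encoded, how the
# output is read -- is a decision somebody has to make.

def step_activation(total):
    """One possible activation function: fire (1) or don't fire (0)."""
    return 1 if total >= 0 else 0

def perceptron(inputs, weights, bias):
    """Weighted sum of the inputs, plus a bias, through the activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return step_activation(total)

# Illustrative values only: what do these numbers represent, and what
# does a 1 or a 0 in the output *mean* for us?
print(perceptron(inputs=[0.7, 0.2], weights=[0.5, -0.3], bias=0.1))  # -> 1
```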
A lot of the issues that we raised in this section fall outside the scope of a lot of contemporary discussion on ethics in analytics and AI. Nonetheless, they have ethical implications; every single one of these has ethical implications. A pretty good example is tweaking the bias. Now, that's not the same as biased AI, which is a completely separate topic. The bias is a mechanism that determines how sensitive your neural network, your perceptron, is to new input: turn up the bias, it's more sensitive; turn down the bias, it's less sensitive, and it needs much stronger input to make it fire. Turning this bias up and down isn't an inherently ethical act. But the effect, if you imagine a 2D graph of possible outcomes, is that it moves the line. And when it moves the line, it changes how we categorize the things that are being categorized by the AI, and that does have ethical consequences. Those are the sorts of questions that we need to be aware of, and that we need to be open to.
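Using the perceptron sketch above (again, with illustrative numbers, not anything from the course), you can watch the line move: the same borderline input gets categorized differently as the bias goes up or down.

```python
# Continuing the perceptron sketch above: the decision boundary is the
# line w1*x1 + w2*x2 + bias = 0. Changing the bias shifts that line,
# so a borderline input can land on either side of it.
borderline_input = [0.3, 0.4]   # illustrative values
weights = [0.5, -0.5]

for bias in (-0.2, 0.0, 0.2):
    print(bias, perceptron(borderline_input, weights, bias))
# bias=-0.2 -> 0, bias=0.0 -> 0, bias=0.2 -> 1:
# same input, different category, just from tweaking the bias.
```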
And so really, when it comes down to it, we're teaching our artificial intelligence not just by providing the data, but in every aspect of our use of AI. And as AI permeates more and more of our society, every aspect of our society, all the things that we do in one way or another, including that train passing by, become grist for the mill of artificial intelligence. These are eventually implicated in training the AI, and that means they're implicated in whether the AI is operating in an ethical or non-ethical fashion. So that's where we are. So what does it mean? That's what the purpose of this section is. So here's how I'm going to approach this.
I'm going to finish summarizing where we are (in fact, I've just finished that), then I'm going to look at regulation, then ethical practices, then practices and culture, and finally wrap up with an ethics of harmony. And you should think of this really as sort of stepping down a staircase. What I mean by that is, you know, our first inclination when we run into a controversial situation is to say 'there should be a law,' and so we begin to start writing legislation, and then cooler heads prevail. But, you know, that's the starting point, especially with something brand new: let's make sure we're protecting people, let's make sure that we don't wreck our society, things like that, and we write some regulations. But then we find the regulations don't cover everything, and I'll talk about why. And so we begin to map out best practices. In the case of analytics and AI, though, there's best practices and there's best practices.
And so we could talk about some governance practices generally; we'll look at, say, for example, some of the practices implicated with handling data. But those practices can only take us so far, and then, as has been apocryphally said, we hit culture. And if we think of practices as strategy, when we hit culture, well, how does the saying go? Culture eats strategy for lunch, something like that. Culture is what underlies practices, and practices, I guess, are what underlie regulation. So we're descending this staircase, and we're looking now at: what is the culture of AI use? What is an ethical culture? How do we create an ethical culture? What are the elements of it? And then, finally, I'm going to take this to the last step, and it's kind of like individual ethics, and it's kind of like community ethics; it's what I call an ethics of harmony.
And that's where I'll wrap up the section. So, with respect to regulation: we need to look at that in a little bit of detail, and yeah, it's a little bit peripheral to the course, but it's important to consider what the scope of regulation is. I'll sample some regulatory approaches around the world and highlight a few salient features, look at some issues involved in regulation, and then, as I suggested, explore what some of the limits of regulation are, and why we can't just conclude our discussion with, you know, some laws and regulations. Then I'll look at the ethical practices themselves, and there are staircases within staircases here.
I'll talk about good practices generally; about processes, how we work collectively, how we use our tools; and especially about governance and different ways of thinking about governance. I also have to talk about truth, because truth and data are intimately connected, and we can't talk about the one without the other, and we can't talk about good practices without talking about truth. And then finally, I'll wrap up with some management frameworks and some governance frameworks that discuss how we organize ourselves, how we organize our workplaces, and how we extend that organization so that it meshes with the rest of society and the rights and responsibilities that we expect on a broader basis. Which leads naturally to culture, and, as the image there suggests, this is something I'm going to talk about.
We can't assume that we all live in the same culture. We live, for all practical purposes, in individual ethical communities, and these ethical communities interact through various mechanisms; that's the tie to governance and regulations. But within communities there are ethics: there are ways ethics are developed, and there are ways ethics are transmitted or taught. Culture and communities overlap, and the way culture interacts with ethics has a great deal to do with the design of a community, with decision-making within a community, with democracy and power. And I'll wrap up this section by talking about individual agency, keeping in mind that, you know, individual agency isn't the be-all and end-all. We've learned already not to assume that everybody lives in the same sort of culture that we live in, or at least that I live in (I can't say 'we' because I don't know who you are or where you're living). The culture I live in values individual agency and freedoms, and balances that off, or integrates it, with social needs a bit differently than other cultures do.
But even within that framework, I think we can talk about agency, and we can talk about independence, individual agency, and social agency at the same time. And that brings us to what I'll call an ethics of harmony, based on the concept of a pedagogy of harmony. And I wave my arms a bit here because, you know, these aren't going to be clearly defined concepts, nor do I want them to be, but there are, you know, some topics of interest here. For example, we could talk about Ivan Illich, ambiguity, and small things; the ethics of openness, which matters to me; the ethics of connectedness and diversity; the role critical pedagogy plays in all this; and the systems and structures that create our society and give us the environment, if you will, in which we are going to be ethical. And then some other things that matter to me: respect, kindness, empathy. And then wrapping up with a pedagogy of harmony. This isn't the end of the module,
so I'm not going to do a nice wrap-up right here. We've still got a fair amount of digging to do and a fair amount of thinking to do, but I'm hoping that you can see, at this point, the natural endpoint of this investigation. We started off very boring and very analytical, looking at all the applications of analytics and AI, all the issues in analytics and AI, and a huge number of ethical codes, and then all of the ethical theories (I didn't list them all, but, I mean, there are hundreds), and all of the steps, and even a fairly detailed look at the ethics of care. There are hundreds, thousands, tens of thousands of moving parts in our discussion of ethics.
And I regret the many things that I've gotten wrong, and I'm sure I've gotten things wrong; but even more, I regret the many, many more things that I've left out of this discussion. So there's not just thousands of things, there's tens of thousands of things, maybe hundreds of thousands of things, which says why our approach of trying to come up with an ethical theory, or ethical codes, or rules, or language, is ultimately going to fail. And, you know, harmony, to me, is the word that describes the sensation, the feeling, that we have when we feel that all is right in the world. And ultimately the ethics of analytics and AI is going to come down to something like that. So, that's the start of this module.
They're testing trains now, too. That sounds like one of the new ones; I can't really tell. No, it's a freight train. I'm trying to be able to tell the individual trains from their sounds. But that's what I mean, right? You know? I mean, maybe. So, I hope you enjoy this final module. I hope you've enjoyed the course so far. I'm Stephen Downes, and let's get to it.