Unedited audio transcription from Google Recorder
Hi everyone, I'm Stephen Downes. Welcome back to Ethics, Analytics and the Duty of Care. We're still working on Module 7, The Decisions We Make, where we've been going through the analytics and AI workflow. We're now looking at evaluation and impact. And when I talk about evaluation and impact, what I'm talking about here is not the testing that we do in order to make sure that the AI or analytics application is operating correctly.

Rather, what I'm interested in is the overall performance of the AI and analytics in the wider context. So here we're asking something like: did the use of artificial intelligence or analytics or machine learning produce satisfactory results? Did it do what we wanted it to do? And as you can imagine from the vagueness of the question, a lot is going to depend on what we wanted it to do.

There are different objectives and different reasons why these technologies are deployed, whether in a school or a university or a workplace. There's a lot of literature out there, as usual, on the subject of evaluation, and especially things like program evaluation; I've gone through program evaluations myself.

So I'm not going to try to give a lesson in program evaluation. Rather, what I want to do in this video is touch on a few subjects that will once again give us some indication of what decisions need to be made in the deployment of AI and analytics, with an eye to informing us about the ethics of such deployments.
So, as I said, evaluations take place in a much wider context. What that means, and I think this is probably the most important part, is that factors that have nothing to do with the design and development of the artificial intelligence will come into play. I'd like to bring forward some of my own examples of this.

For example, when I was working in the 1980s on a thing called remote job entry for Texas Instruments, I was based in Calgary, the computer was in Austin, Texas, and I used it to play chess with someone from Perth, Australia. Now, this was not an intended use of the system. And in fact, I was removed from RJE, and eventually left the company, just for playing chess on a global computer system. But nonetheless, it was part of what was made possible by the system, and the actual operation of the system needed to be understood in the context of an operator sitting there at two in the morning, bored out of his tree, looking for something to do, and deciding to play chess.

I also sometimes talk about the internet itself, and how we've developed literally trillions of dollars worth of technology and we use it to exchange cat pictures. Now, you might think that exchanging cat pictures is morally good or morally bad. But what's important here is that the exchange of cat pictures never really entered into the minds of the people who were developing the internet in the first place. They began to exchange silly stuff almost from day one, but it wasn't built into the design parameters, and had anybody been considering the morality of the internet, nobody would have talked about cat pictures.

So the idea of doing a wider evaluation, in a wider context, is to try to think of what the cat pictures of artificial intelligence and analytics are. And again, this is going to be set by context. It's going to be set by legislation. It's going to be set by policy, such as the European Union's policy on encouraging data sharing, or Canada's open government policy, or the adoption of Creative Commons and open source software.
All of these are going to set the context for the sort of evaluation that we have in mind. So what does that mean in practice? Well, it means, if nothing else, analyzing activities. What I mean here are the activities that people undertake using artificial intelligence and analytics. Now, I'm borrowing here, from a different context, an analytical framework for student activities, but it works perfectly well, I think, for the purpose of analyzing our use of these new technologies.

So we ask ourselves about things like goals: are people able to set their own goals? Actions: do they design their own activities? Strategies: can they determine the strategies for their use of the system? Reflection: are there mechanisms for reflection; can you think back about what you've done? And, related to that, are the actions replicable each time you use one of these systems?

Content: do people select their own content? I was doing a CNIE presentation earlier today where I talked about the Leo system in Feedly, and what I said was important about Leo was that I could select my own RSS feeds that it would select resources from, and I could select my own topics and even my own examples that I would use to train the artificial intelligence. So being able to select your own content plays a major role in how you're going to use your artificial intelligence, whether or not you can select the content. I use, I guess, the analytics of YouTube when it recommends things to me, and I use the analytics of TikTok when I look at the For You feed, but I'm not doing any of the content selection there. I'm not really picking what sort of categories I want to look at, or what sort of sources I want to look at, except that I can say "I don't want this" or "I don't want that"; but saying what you do want is very much easier than saying what you don't want.

And then monitoring: can somebody monitor their own progress with the system? Can they see whether the analytics are helping them or not helping them? All of this sets the overall context of use of an AI or analytics system, and this context of use can apply for an institution, it can apply for an instructor or learning designer, and it can apply for a learner or a student.
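To make that framework concrete, here's a minimal sketch in Python of how those questions could be recorded as a checklist and rolled up into a rough score. The class, field names, and scoring rule are all illustrative assumptions of mine, not something from the course:

```python
# A minimal sketch of the activity-analysis framework as a checklist.
# All names and the scoring rule are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class ActivityAssessment:
    goals: bool          # can people set their own goals?
    actions: bool        # do they design their own activities?
    strategies: bool     # can they determine their own strategies?
    reflection: bool     # are there mechanisms for reflection?
    replicability: bool  # are actions replicable across uses?
    content: bool        # do people select their own content?
    monitoring: bool     # can people monitor their own progress?

def autonomy_score(a: ActivityAssessment) -> float:
    """Fraction of dimensions on which the user retains control."""
    values = list(vars(a).values())
    return sum(values) / len(values)

# Example: a recommender feed where the user picks neither sources nor
# categories (like the For You feed described above) scores low.
for_you_feed = ActivityAssessment(
    goals=False, actions=False, strategies=False, reflection=False,
    replicability=True, content=False, monitoring=False,
)
print(f"autonomy: {autonomy_score(for_you_feed):.2f}")  # autonomy: 0.14
```

The same checklist could be filled in at the institution, designer, or learner level, which is the point of the framework: the assessment differs depending on whose use of the system you're looking at.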
Wider frameworks also mean wider than just the obvious goals of education. One of the projects I'm involved in is looking at how the adoption of learning technology can support the United Nations Sustainable Development Goals, and you might be thinking, well, that doesn't really make a whole lot of sense. I mean, education technology supports SDG 4, which is education. But maybe there are economic impacts, maybe there are impacts on the environment, maybe there are impacts on human development; in fact, there probably are, right? And so, when we're looking at impacts, we're not just looking at specific educational outcomes; we covered that under testing. The impact is a wider impact, and so it may well, and I would argue should, include wider frameworks such as the Sustainable Development Goals.

Now, we should pause here for a second and take stock of the fact that the selection of these frameworks, the consideration of these wider objectives, and whether or not we're able to track and improve and enhance our use of the technology within these contexts, very much form a part of the ethics of the application. If we use an artificial intelligence system that is just the same as everything else, except that it improves our performance on climate change, that speaks to the ethics of it. And even if there are maybe some harmful side effects, perhaps these are outweighed by the benefits overall to human development. Now, I'm not presuming that a balancing out here is the right ethical approach. But what I'm trying to do is raise the idea that considerations outside the narrow scope of the use of the application will come into play when we're evaluating the ethics of it.
So, what do we mean by evaluation? What are we looking for? Probably the best way to think of it is impact. Now, there are other ways to think of it too; I frequently talk about looking at what the benefit of something is. But I'm using the broader term "impact" here because, on the one hand, we might talk about the good of an analytics engine, but we can also talk about the bad of such an engine, and even the ugly of such an engine, as outlined in the article I'm quoting here.

So what is the good? Well, we might say it's better grades, we might say it's organizational efficiency, and it might be profit for shareholders, or, as danah boyd asks, maybe there's even something beyond that. The bad? Well, look at the Awful AI wiki for examples. I picked one example out of the Awful AI wiki: something called Faception. What it does is use AI to provide a personality analysis based on your face. I can't imagine that being good. To me, it seems like a retreat into the field of phrenology, which was the study of character by examining the shape of your head, a now discredited science. But it was taken quite seriously as a real science for a very long time. So again, what you think of as valid science is going to impact what you think of as a valid use of artificial intelligence, and this, I think, would be an invalid use.
A lot of the time we see impact assessed as risk. Quoting from the reading here: some documents use the terminology of potential harm, others call for the identification of risks. The emphasis, particularly among the latter category of documents, is on prevention, and impact assessments are an accountability mechanism, because a sufficiently dire assessment, where the risks are too high or impossible to mitigate, should prevent an AI technology from being deployed or even developed. That's from Fjeld, whom we've cited quite a bit throughout this study.

Now, that sounds great in theory, but there are a couple of things. First of all, I think that assessing impact as risk is kind of one-dimensional, and we do need to look at the broader range of impacts, not just the risk. Although, that said, there may be risks that are too high or impossible to mitigate; but that hasn't stopped us in the past. We've developed technologies like, say, the handgun, biological weapons, nuclear bombs; all of these would seem, to me at least, to have risks that far outweigh any advantages they could ever confer. And yet we still developed them, we still deployed them. So, when we're looking at what the risks are, we need to take into account not just an idealized perception of risk and mitigation and tolerance for risk; we need to keep in mind what the actual people in the actual field are likely to accept as risk.

I mean, we've just gone through a two-year coronavirus pandemic, or maybe I should say we're in year two of a coronavirus pandemic as I speak (I don't know when it will end, or if it will end), and it seems evident that people are willing to risk catching this disease and dying rather than do simple things like wear a face mask or get vaccinated. So risk tolerance, I don't think, is going to be a good ground on which to assess the ethics of artificial intelligence. Or, to put the same point a different way, I don't think that we can presume to know better than the people who are actually out there what risk is and what risk people are willing to tolerate. Evidence suggests some people will tolerate extreme risk, other people very little risk; there is no sweet spot of risk acceptance or avoidance.

When we're looking at the impact of analytics, this is another analytical task, right? And we need to be looking at what the data trail for that impact is.
Again, very often, when we're looking at the impact of the use of AI or analytics in learning, we tend to keep our eye fairly closely focused on actual learning data, which usually means test scores. It might mean something a little bit broader in some contexts, but there's a large range of data, coming from a variety of organizations, that can all be informative about the effect of learning analytics.

Now, the diagram here is for health analytics, but look at the various people who produce and manage data: the patients, the clinicians, the researchers, the data management processes, the data repositories, all of the infrastructure; and all the organizations: hospitals, universities, government, insurance companies (because it's the US), and research institutes. These all combine to provide an overall assessment of an analytics or AI initiative, and the same is going to be true in the case of learning analytics. If we only ask one of these organizations (the university), and if we only inquire of one data provider (which would be the LMS, I guess), then we're getting a very partial picture, and we're probably missing out on some key variables or key factors. We do need to be looking more broadly at the more widespread impact.

A simple example from Dringus. He writes: "absence is a vague and undefined term in online courses. Absence in the online course does not necessarily equate to inactivity," and so on. During our CNIE talk today, somebody talked about wanting to know how much time people spent looking at certain resources, and it was pointed out that time wasn't the significant variable here. Some people will look at a resource very quickly and absorb it without any problem at all. Someone else might just sit there and stare at it; I've seen people like this, who just sit there and stare at the screen, and it just doesn't enter their head. They're just staring at it, and nothing good is happening. So you need this wider range of data in order to assess the impact of analytics.
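As a concrete illustration of why a single provider's data misleads, here's a small hypothetical sketch in Python; the students, columns, and values are all invented for illustration:

```python
import pandas as pd

# Hypothetical data from three different providers: the LMS, the grade
# book, and the library. None of these values are real.
lms     = pd.DataFrame({"student": ["a", "b"], "minutes_on_resource": [2, 45]})
grades  = pd.DataFrame({"student": ["a", "b"], "assignment_score": [92, 55]})
library = pd.DataFrame({"student": ["a", "b"], "items_borrowed": [7, 0]})

# Joining sources gives a fuller picture than any one provider alone.
merged = lms.merge(grades, on="student").merge(library, on="student")
print(merged)
# Student "a" spent two minutes on the resource and scored 92; student "b"
# stared at it for 45 minutes and scored 55. Time alone would mislead us.
```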
The other thing I think you have to take into account is use. What I mean by that is not use in the sense of how the technology is intended to be used but, as I suggested earlier, how the technology is actually used. Surveillance is a really good example of this. Pretty much every document I've read on the ethics of learning analytics makes a point of saying that surveillance is bad. Nonetheless, pretty much anywhere you travel in the world there is surveillance, in some cases a lot of surveillance, and the reasons for that are manifold.

Look at what Schneier says: we are shown different ads on the internet and receive different offers for credit cards; smart billboards show different advertisements based on who we are; and in the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites. All of this is based on surveillance, and it's clearly used. Is it the case that all of these people are ethically bad? Or does it point to the fact that the wider impact of analytics, and the processes and mechanisms used to develop and deploy it, produce a benefit that causes people to want to use it? I think that's clearly the case. I think that, from certain perspectives, the impact of AI is higher sales, better revenue, more efficient production, and so on. And so this sort of thing also needs to be taken into account by our evaluation process.

How are our ethical decisions actually made? We've talked through the course quite a bit about some of these things. Drones with machine guns, right?
We've talked about that. Well, here's an article from New Scientist, and I'll pop it up here: Turkey is getting military drones armed with machine guns. And there's one in action. Okay, I'm not going to show more of that; I've shown you five of the 22 seconds. But did you enjoy the strong, striking military music? Maybe I should play you a little bit. There.

Now, the new drone is called Songar. It's made by an Ankara-based electronics firm. You get the idea: that's a decision that has actually been made. There are people in Turkey who feel that that's a good thing. I may disagree, but on the other hand, I don't live in Turkey.

Similarly, we see actual analytics in cars, Mercedes-Benz, that quite naturally will protect the occupants first, and that's a sales feature for those cars, right? You can imagine the salesperson in the Mercedes-Benz store saying: other cars might protect, you know, pedestrians, but the first and sole purpose of the Mercedes-Benz is to protect you, the driver. I can see that being a strong selling point. Intuitively, it feels ethically wrong to me, but I'm not the only one with intuitions in this matter. And I think this is an important point, because it's easy for us to presume, when we look at some of the uses of these technologies, that we already know what the ethics are. But it's important to look at the actual use, implementation, and impact of AI and analytics in order to understand what the ethics actually are in the field, on the part of the people using it. We don't get away with just saying, oh well, they're all unethical. That's a nice position to take, but it's not one that can be sustained,
at least not through any sort of argument that I can find; and as we've seen in this course, we've been looking pretty hard for them.

What the impact is often depends on what our narrative is about the impact, as odd as that sounds. David Karpf gives us two narratives about fake news which nicely illustrate this. On the one hand, he says, there's a story of digital wizards capable of producing near-omniscient insights into public behavior. On the other hand, we could offer a more mundane but possibly more accurate story of messy workflows, incomplete data sets, and endless trial and error. Now, that's obviously exaggerated a bit to make a point, but there is a point here about how the story we tell about the impact of fake information informs what we think of it. But more, he says, it also informs the actions that we take with regard to it. In other words, it becomes a little bit self-fulfilling; it becomes a little bit of a spiral, whether downward or upward depending on your point of view. He writes: if the public is so easily duped, as it is in the digital wizard story, then our political elites need not be concerned with satisfying their public obligations; if real power lies with the propagandists, then the traditional institutional checks on corruption can be ignored without consequence.

If you think about that, the way we tell the story of AI directly impacts the way we govern AI. If we say that AI confers so much power that any effort to stop it would be pointless, then we're not going to make an effort to stop it. On the other hand, if it's just messy workflows, incomplete data sets, and trial and error, that's something that we can manage, right? That's stuff that we can talk about and work on, on a day-to-day basis, with an eye to making it better; and "better" in the sense that everybody can use it and make it better, rather than in the sense that the people who use it and wield this infinite power actually have good ethics and the best intentions at heart. Two totally different stories about AI, and it all begins with the story we tell about how AI is produced and deployed.

Obviously, incentives have an impact on impacts, and there are two ways of looking at this.
On the one hand, we can think of incentives as explanations for why people develop AI and analytics, or why they invest in it, and that is a way of thinking about it; although I think in the second case it's more that we're rationalizing after the fact, coming up with theories, looking at the sorts of things that maybe people wanted to gain from it. I'm not being as clear as I would like here; in the actual text I'll probably draw this out a bit more clearly.

But if you look at the actual incentives, I've got a list of them here. Tools create less dependence on LMS analytics, so you can get a more independent picture of how learners are doing. Or, as Taylor says, there's an economic pressure to automate education, to make it more efficient, reduce input costs, maximize revenue. Or, as Chris Dede says, it might be based on the need to develop students' 21st-century skills, and analytics offers us insights into ways of teaching these skills that can't be captured by traditional evaluative mechanisms in education. And then there's a strong political interest in how data can inform and improve learning; here I'm thinking of people like David Wiley and others who are focused on improving the quality of learning.

These all provide motivation for people to develop artificial intelligence and analytics technologies, but they also provide the metrics, if you will, for evaluating the impact of these technologies. And these are, in effect, two readings of the same thing: the incentive that you had to deploy the technology becomes the metric you use to evaluate the technology. So different people with different incentives are going to be looking at the impact of the technology differently, and will evaluate whether it is harmful or helpful in different ways.
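One way to picture this, purely as a hedged sketch with invented names and numbers, is that each incentive carries its own evaluation function, so the same deployment can score as a success under one metric and a failure under another:

```python
# Each stakeholder incentive implies a different evaluation metric.
# All names and figures below are hypothetical.
evaluation_by_incentive = {
    "efficiency": lambda d: d["cost_per_learner_before"] - d["cost_per_learner_after"],
    "21st_century_skills": lambda d: d["skills_assessment_delta"],
    "learning_quality": lambda d: d["mean_grade_after"] - d["mean_grade_before"],
}

deployment = {
    "cost_per_learner_before": 120.0,
    "cost_per_learner_after": 90.0,
    "skills_assessment_delta": -0.05,
    "mean_grade_before": 71.0,
    "mean_grade_after": 74.0,
}

for incentive, evaluate in evaluation_by_incentive.items():
    print(f"{incentive}: {evaluate(deployment):+g}")
# efficiency: +30 (a success), 21st_century_skills: -0.05 (a failure),
# learning_quality: +3 (a modest win). One system, three verdicts.
```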
Impact also has a role to play in governance. What I find interesting here is the idea that the use of AI and analytics creates digital policy instruments, which are "enabling techniques of governing education to be operationalized in new ways," according to Williamson here. And it does provide (maybe in actuality, or maybe as an illusion; it depends on the particular system in question) a better way to put your hands on the levers that manage resources, manage people, manage workflow, and so on, in an educational institution, or in any institution.

And again, the evaluation of the impact of the AI is going to be almost blended with the evaluation of the impact of the governance that results. People often talk about evidence-based government, evidence-based policy making; the evaluation of those policies is at the same time the evaluation of that evidence, and of the process used to obtain, analyze, and present that intelligence. And it may be the case that the credit or the blame is misplaced. It might be misplaced in the sense that there was nothing the policymakers did wrong; the AI simply failed them. Or it might be the case that there was nothing wrong with the AI and the analytics; it's just that the policy priorities of the people running the institution were so skewed that they interpreted the data however they wanted, and made the mistakes that they did. There's no easy way to answer which of these it is, and of course, which of these it is may well depend on your point of view.
Another use of analytics, and I found this very interesting, came from a religious publication. It starts from the perspective that implicit ideology is an unavoidable feature of pedagogy. And of course, a lot of people would agree with that. I think I've already said in this course that there's no such thing as an objective pedagogy; there's no such thing as an objective technology. At the same time, a lot of people are very sensitive to the ideologies that may be present in a learning environment, in a learning technology, or in the curriculum itself. So analytics may be used to evaluate learning technology and pedagogy for ideology, and how you assess the impact of that very much depends on your point of view.

If you are, for example, someone from the right wing, and you run analytics on the technology and the content of, say, university courses, and this analytics determines that all of this has a left-wing bias, then you may feel justified in saying, well, look, the academy leans left. On the other hand, it may appear to lean left only from that person's point of view. From my point of view, living in Canada, with a different political environment than that in the United States, say, I might look at the very same analysis and say, well, that all leans pretty far right. Someone who's concerned, for whatever reason, that the ideology not lean the wrong way (left, right, center, who knows) may or may not be satisfied with the impact of the analytics. And someone who does hold that particular ideology, and sees the analytics used in order to move education away from that ideology, might say that this is an unwarranted use of the technology "in order to prevent me from holding and expressing the particular ideology that I have."
Obviously this is a big swamp, and it gets even more interesting when we think about how these technologies can be used to inform and even create an ideology. The diagram on the right gives us an example of this, where we have two human teammates, each with their own personal ethical ideology, and two AI teammates, each with their own ethical ideology. And as the, well, I guess four teammates work and interact together, that produces a new human/AI team ethical ideology. Now, I don't think that ethical ideologies are like matrices that you can just combine with matrix multiplication, but maybe they are, who knows; that's something that we need to study.
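Just to show what that naive "matrix" picture would look like, here's a toy sketch with invented dimensions, weights, and values; it's an illustration of the idea the speaker is skeptical of, not a model from the course:

```python
import numpy as np

# Represent each teammate's ethical ideology as weights over hypothetical
# dimensions: [harm avoidance, fairness, autonomy, transparency].
human_1 = np.array([0.9, 0.6, 0.8, 0.4])
human_2 = np.array([0.5, 0.9, 0.3, 0.7])
ai_1 = np.array([0.7, 0.7, 0.2, 0.9])
ai_2 = np.array([0.6, 0.5, 0.5, 0.8])
team = np.vstack([human_1, human_2, ai_1, ai_2])  # shape (4, 4)

# The naive combination: a weighted average of the members' vectors,
# with each member's influence on the team summing to 1.
influence = np.array([0.3, 0.3, 0.2, 0.2])
team_ideology = influence @ team
print(team_ideology)  # approximately [0.68 0.69 0.47 0.67]
```

The limitation is visible in the math: a weighted average can only land inside the range of the members' existing positions, so it cannot produce the genuinely new team ideology being described here; that would require interaction effects the linear picture leaves out.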
But it is the case that we could certainly see this interaction resulting in some kind of ethical ideology that we haven't seen before. And we begin to ask (and it has been asked by people in the course) what's the long-term impact of this? If we create ethical AIs, are these AIs eventually going to change what we perceive, as a society as a whole, to be ethical? I think there's certainly a good argument for that. I think there's certainly an argument that technology does change ideology.

Look at the history of walking on the street, for example. It used to be commonplace for people to just cross the street or walk along the street whenever they wanted to, but then the auto industry invented something that came to be called jaywalking, and over time it became ethically wrong to walk in the street, especially when a car wanted to use that street. So this is a case where technology impacted ideology, and so it follows that AI technology will impact ideology, or at least it's predictable. I mean, it doesn't follow necessarily that it will, but it's pretty safe to say that that's an impact. And how we assess that impact has a lot to do with what that new ideology is, and how well it meshes with, or clashes with, previous ideologies.
How do we impact? Sorry, how do we evaluate these impacts? Well, as I said, there's a lot of research and practice on impact evaluation. The main thing I would say here, and this is following Dringus again, is to observe that, like everything else in this course, as we've seen, evaluation can be ineffective and even harmful if naively done by rule rather than by thought. And he points to five ways in which we can do learning analytics by thought. I would extend that to say that here are five ways we can think of evaluating impact by thought (that wasn't his original point, but I'm borrowing it for this purpose): supporting creative ways to reflect on the dynamics of the online AI and analytics experience; possibly leading to definable codes or specific descriptions from indicators; possibly providing a meaningful wellness index of the general health of the AI- or analytics-informed environment; possibly leading to good self-reflection; and possibly leading to a traceable, cohesive, and coherent community of practice.
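The "wellness index" item lends itself to a worked sketch. Here's one hedged illustration in Python; the indicator names, values, and weights are invented, not Dringus's:

```python
# Hypothetical indicators of the health of an AI/analytics-informed
# environment, each observed as a fraction between 0 and 1.
indicators = {
    "learners_setting_own_goals": 0.62,
    "self_reflection_events": 0.41,
    "content_self_selected": 0.30,
    "community_participation": 0.55,
}

# Invented weights expressing how much each indicator matters; they sum to 1.
weights = {
    "learners_setting_own_goals": 0.3,
    "self_reflection_events": 0.3,
    "content_self_selected": 0.2,
    "community_participation": 0.2,
}

wellness = sum(indicators[k] * weights[k] for k in indicators)
print(f"wellness index: {wellness:.2f}")  # wellness index: 0.48
```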
So, wrapping up: probably the main thing that needs to be said on this subject is the need for us to be thinking about it now, and the need for us to be preparing our students to think about what the impact of artificial intelligence is on our lives. As Dogtrax says here, it's AI's potential with the urgency of now: the present, as opposed to some futuristic "rise of the machines" science fiction; thinking about what AI is now, where it's going, how it'll change our lives. And I'll close on a poem from Dogtrax:

"In the game between human or bot, where every word is parsed for curated truth, what leaves the poet's hand might reassemble elsewhere on demand, by algorithm or design, seen by the sleuth as bot or not."

Thank you all. I'm Stephen Downes. In the next video we'll talk about AI explainability.