Deontic Analytics


Unedited Google transcription from audio.

Hi, I'm Stephen Downes. Welcome to this episode of Ethics, Analytics and the Duty of Care, module two. This particular video is on the topic of deontic analytics. I'm just going to put the URL into the activity centre, in case anyone is watching live. If anyone knows how to get the URL from the YouTube live streaming application before I actually go live with my stream, please tell me, because otherwise I have to begin every one of my videos by going through this little exercise and then, afterwards, trim the video so that it starts properly.

I've also got my audio going, so we're just about ready to get going here, and I'm going to officially start. So, I'm Stephen Downes. This is the module subsection called deontic analytics, and that's a bit of a mouthful. It's the last of the types of analytics applications in learning and development, and it's probably the most contentious of the sets of applications that we're looking at when we look at AI in education.

Let me just get my head together here. So, deontic analytics in a sense deals with right and wrong. It's basically the idea that the computer, the artificial intelligence, is telling us what's right and wrong. But of course the subject of right and wrong is more subtle than that, and deontic analytics are more subtle than that, and they bring in areas from various domains.

For example, we have enterprise transformation and human impact. This would be in some sort of production or office environment, and the sorts of recommendations that come out of the system are recommendations about education, regulation, adaptation, social policy. You can see how these all go beyond mere prescriptive analytics, and they even go beyond generative analytics: they're not telling us what is, they're not telling us what can be, but rather they're telling us what should be, given everything the system knows.

Here's what we should do, here's what would be the right thing to do: that's why I call it deontic analytics. It comes from the idea of deontic logic, which is the logic of ought and should. So these analytics look at things like sentiments, needs, desires and other such factors, the range of human emotions,

the range of economic, environmental and other circumstances, to tell us what we really ought to be doing, or saying, or making policy about, and so on. You'll see we've got a number of examples of this, and we'll go through them just as we have in all of the other videos of this series.

So one place where we see deontic analytics already being applied is in the definition of community standards. Now, this is a tough one, because we think of community standards as something that is defined by the community, something that we detect rather than define. But when you have a system doing the detection of community standards (for example, a content moderation algorithm that in some way measures what a community deems acceptable and then moderates for that), you're very likely to set up a feedback loop where the community standards become self-fulfilling prophecies. In other words, what people think should be the standard gets interpreted, perhaps slightly differently, by the AI, and what the AI interprets the standard as being now becomes the new standard.

Any deviation by the AI from what is actually the standard becomes the new standard, and it's going to be pretty much impossible for the AI not to deviate. Because if you ask members of the community what the community standard is, first of all you may get many different answers, but more significantly, you're going to get imprecise answers, answers using only the vocabulary that's available to the members of the community. The AI isn't under any such restrictions. So the AI's understanding, if you will, of a community standard is going to be much more nuanced, taking into account many more factors than a human would.

And so it is going to change the standard, for better or for worse, and it may take into account things that actual members of the community probably wouldn't bring up. In the wider world, for example, climate change, global warming, is a significant factor affecting our lives, and the algorithm may find

that this is something that matters elsewhere in these people's lives. It might not be something that's discussed particularly in the community, but the influence of concerns about climate change may come to define what the new standard is. So, for example, climate change denialism becomes not part of the community standard,

even though there was never any rule or even statement about it; it would just be considered, you know, not right to be engaging in climate change denialism in this community.
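To make this feedback loop concrete, here is a minimal simulation sketch (not from the talk; the bias term and all numbers are invented for illustration). A moderation system slightly misreads the community norm, moderates to its misreading, and the surviving posts then define the norm the community writes to in the next round:

    import random

    random.seed(42)

    def community_posts(n, norm):
        # The community writes posts centred on the standard it currently perceives.
        return [random.gauss(norm, 1.0) for _ in range(n)]

    BIAS = 0.05   # the AI's small, systematic misreading of the standard
    norm = 0.0    # the community's actual starting standard

    for generation in range(10):
        posts = community_posts(1000, norm)
        inferred = sum(posts) / len(posts) + BIAS                  # AI's model of the norm
        surviving = [p for p in posts if abs(p - inferred) < 1.0]  # moderation step
        norm = sum(surviving) / len(surviving)                     # survivors set the new norm
        print(f"generation {generation}: standard has drifted to {norm:.2f}")

Each round, the standard drifts a little further toward the AI's interpretation, even though no one ever decided that it should.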

The AI can, through actions like that, actually influence human behavior. This is especially the case the more an AI learns about an individual person: it can learn about their habits, their behaviors, their wants, needs and desires, as exhibited in what they look at and what they write about, and then take on social roles that influence their behavior.

The roles might be a role model, or an advisor (as in, you know, the robot teachers that we talked about earlier), a partner (say, an artificial mentor), or a delegate or agent for the person. And we can see how acting in any of these four roles would influence the behavior of the person. Minimally, the person would have to respond to what the AI is doing on their behalf, or in its effort to help them, advise them, or teach them in some way.

But as well, especially with something like the role model, a person will possibly begin to mirror or imitate what the AI is doing. If the AI is able to present itself as seeming sufficiently human, then it's possible that, you know, humans, who copy each other, might in this case start copying the AI.

There have been cases where human behavior has been influenced by artificial intelligence. We'll talk a little bit about that in some of the next slides.

When looking at these wider environmental and community contexts, an artificial intelligence can learn to identify what is bad and wrong. And by that I don't mean, you know, spot crimes or things like that, but look for patterns of behavior that in themselves are not wrong but are suggestive that a person might be exhibiting bad intentions.

We see this already in airport security systems and similar kinds of security systems, where behavior that is perfectly innocent for most of us, when taken in context and assembled with other behaviors, triggers an alarm, and, you know, your gate agent pulls you aside for some extra screening, that sort of thing.

Now, we will talk in the future about how this can be misapplied and misdesigned, resulting in discrimination and other such problems. Nonetheless, this kind of activity is certainly within the realm of possibility for artificial intelligence. A similar sort of logic exists in an AI-powered lie detector. Once again, the AI doesn't know you're lying.

It's not spotting the lie. What it's doing is assembling a range of information, a range of data about you: everything from how you look, what sort of emotion you seem to be projecting, whether you appear nervous, whether your heart is racing, whether your temperature is elevated. All of these symptoms, none of which constitutes a lie, may suggest to the AI that you're lying, and may as a result cause the AI to conclude that you're lying.
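In both the screening case and the lie detector case, the logic amounts to scoring a combination of individually innocent signals. Here is a minimal sketch of that kind of scoring; the signals, weights, and threshold are all hypothetical:

    # Each signal is innocent on its own; the weighted combination can still
    # cross an alarm threshold. All values here are invented for illustration.
    WEIGHTS = {
        "one_way_ticket": 0.9,
        "paid_cash": 1.1,
        "no_checked_luggage": 0.7,
        "elevated_heart_rate": 1.3,
        "nervous_expression": 1.2,
    }
    ALARM_THRESHOLD = 3.0

    def suspicion_score(observations):
        # Sum the weights of whichever signals were actually observed.
        return sum(w for name, w in WEIGHTS.items() if observations.get(name))

    traveller = {"one_way_ticket": True, "paid_cash": True,
                 "elevated_heart_rate": True, "nervous_expression": True}
    score = suspicion_score(traveller)
    print(score, "flagged" if score > ALARM_THRESHOLD else "cleared")

No single line of this says "lie" or "threat"; the conclusion emerges only from the combination, which is exactly what makes it powerful and contestable.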

Now, how good is this technology? Horrible. It's terrible. But that's now, right? What about when it gets good? Well, yeah, polygraphs never did get good, although it's interesting, because we still hear them referred to: you know, "take a lie detector test if you want to prove your innocence."

You know, we may see people say, well, subject yourself to questioning by the AI if you think you're innocent, and we'll accept what the AI says. Certainly within the realm of possibility. Conversely, an artificial intelligence might spot what we want in society and begin to amplify it. Max Tegmark has an interesting perspective

on this, saying: "everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before." And it's interesting, because a presumption is being made here, namely that intelligence is good. But it would not be surprising to see an artificial intelligence

reach this conclusion on its own. By looking at environmental factors, looking at behaviors, looking at the results of behaviors, it would probably conclude, much as Max Tegmark has, that intelligent behavior produces good results, produces everything we love about civilization. And such an AI would then, through tutoring, or advising, or even just putting its artificial thumb on the scale for various evaluation metrics, begin to promote products of intelligence.

Now, how do we know this would happen? Because we already see it in reverse, right? We see that artificial intelligence has the capacity to amplify the bad when the metric in question is to increase engagement and interaction on a platform. It turns out that the way to increase engagement and interaction on a platform is to get people riled up and shouting at each other, and the way to do that is to amplify extreme statements or controversial opinions.

So that's what it does; that's what it produces. So we can see how an AI could amplify the good in the same way that it is currently amplifying the bad. And these two, identifying the bad and amplifying the good, are actually a pair of applications that go together.
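To see the amplification mechanism in miniature, consider this toy feed ranker (the posts and the engagement model are invented for illustration). If predicted engagement grows with how extreme a post is, then ranking purely by engagement puts the most extreme content first:

    posts = [
        {"text": "Mild take", "extremity": 0.2},
        {"text": "Spicy take", "extremity": 0.6},
        {"text": "Outrage bait", "extremity": 0.95},
    ]

    def predicted_engagement(post):
        # The assumption baked into this sketch: outrage drives clicks and replies.
        return 0.1 + 0.9 * post["extremity"]

    # Rank purely by predicted engagement: the most extreme post tops the feed.
    for post in sorted(posts, key=predicted_engagement, reverse=True):
        print(post["text"], round(predicted_engagement(post), 2))

Nothing in the ranker mentions outrage; the amplification falls out of the metric it was told to optimize.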

You can identify and amplify the bad; you can identify and amplify the good. The problem comes when you identify the bad, interpret it as good, and amplify that; that's when you run into problems. And that's why Max Tegmark says this is all good only "as long as we manage to keep the technology beneficial." We've also seen artificial intelligence implicated in the project of defining

what's fair. Now, fairness is a property that we would like our artificial intelligence to have; we see that in many, many documents. But what is fair? What counts as fair? There are a variety of factors that can go into a definition of fairness, arguably thousands, even hundreds of thousands, of factors, and that's beyond the capacity of a person.

And typically, what we do is define fairness by some sort of rule or principle. But rules and principles always have edge cases. They always have exceptions, and they always have people trying to gerrymander the system to create their own particular kind of fairness, which is unfair for everyone else. An artificial intelligence cuts through that.

At least, that's the theory. Here we have an example of an AI that's defining fairness in US elections. One of the features of US elections is that the electoral districts are, as they say, gerrymandered: altered by committees in order to increase the probability of one party or the other, or more usually just the incumbent, being elected or re-elected. Now, it's easily possible to draw these districts more fairly. But what counts as fair? That's what the artificial intelligence determines here.

It determines what fair district boundaries would be in order to, well, there's the question, right? In order to, shall we say, best represent voter intentions, or best represent the population's demographics, the balance between rural and urban, the balance between different ethnic groups. All of these things are factored in by the AI and should be weighed according to a wide range of needs and interests: hard for us to do, hard for an AI to do, but arguably AIs already do it better than humans.
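One concrete measure that has been used for district fairness is the efficiency gap, which compares how many votes each party "wastes": every vote for a losing candidate, plus winning votes beyond the majority needed. Here is a minimal sketch with invented vote counts:

    # (party_a_votes, party_b_votes) per district; numbers invented for illustration.
    districts = [(55, 45), (60, 40), (48, 52), (30, 70)]

    def efficiency_gap(districts):
        wasted_a = wasted_b = total = 0
        for a, b in districts:
            total += a + b
            needed = (a + b) // 2 + 1         # votes needed to carry the district
            if a > b:
                wasted_a += a - needed        # surplus winning votes
                wasted_b += b                 # every losing vote is wasted
            else:
                wasted_b += b - needed
                wasted_a += a
        return (wasted_a - wasted_b) / total  # signed: positive disadvantages party A

    print(f"efficiency gap: {efficiency_gap(districts):+.1%}")

A map drawn to be fair on this metric keeps the gap near zero; a gerrymandered one pushes it far to one side. Whether this single number captures "fair" is exactly the kind of judgment at issue here.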

We can see this sort of approach used in other areas. I recently saw, for example, a system that is redesigning the tax system along similar lines, using an artificial intelligence to determine what sort of tax system would be fair for people. The AI could indeed eventually play a role in changing the law itself.

There are two major ways that can happen. First of all, the very existence of AI can cause the creation of new law. A simple example is copyright law for things that are created by analytics engines, such as deepfakes. Who owns the copyright? Does the machine own the copyright? Does the person who built the machine own the copyright?

So if you're using, say, a Microsoft analytics engine, does Microsoft own the copyright? Is it the person who wrote the software, or the person who flipped the on switch to actually make the new content? There needs to be a decision made, a decision that did not need to be made until artificial

intelligence came along. More significantly, though, the way AI performs could actually inform the content of that law. To get a sense of how this works, we go back to Lawrence Lessig, who back in the year 2000 was writing things like "code is law." The idea here is that the capacities, the demands, the dictates, the actual implementation of computer software are much more detailed than any law could be.

And so what happens is that the way the program is written becomes the de facto law in a particular environment. So if you write a system that prevents you from being able to upload PDFs, then the de facto law is that you can't upload PDFs, and it doesn't matter whether it's legal or illegal; that sort of question goes by the wayside. What matters is whether you can or can't do it.
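In code, the point is almost trivially direct. A minimal, hypothetical upload validator like this one is the de facto law of its platform, whatever the written policy says:

    ALLOWED_EXTENSIONS = {".png", ".jpg", ".txt"}   # note: no ".pdf"

    def accept_upload(filename):
        # The de facto rule: only files with an allowed extension get in.
        return any(filename.lower().endswith(ext) for ext in ALLOWED_EXTENSIONS)

    print(accept_upload("notes.txt"))    # True
    print(accept_upload("thesis.pdf"))   # False: PDFs are simply impossible here

No one debates or appeals the rule; it is enforced perfectly, silently, on every upload.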

So now we have two ways in which AI could change law. First, AI can give us new capacities we didn't have before, such that the law does not prevent them and they go beyond the intention and scope of the law.

A simple example was the scraping and use of faces on the internet for the purposes of facial analytics and emotion detection. There was no law against collecting all these faces and analyzing them all, and so this became, quote unquote, "legal" under the idea of code as law. On the other hand, there are things that you can't do. For example,

you can't really hide your identity by not sharing all of your information in one place, because artificial intelligence makes it possible for a company to gather information about you from many different sources. Something like that is called an identity graph, and with it they can create a representation or a portrait of you, a user model,

we might say. So you no longer have the right to not have that information be publicly known, because AI makes it possible for that information to be publicly known. And it's one of those things: once it's possible, it's really hard to put it back in the box and make it illegal.
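Here is a minimal sketch of how an identity graph joins records. The sources and identifiers are invented, and a production system would also need to merge profiles when a new record links two previously separate ones:

    from collections import defaultdict

    # Thin records from separate sources, none revealing much on its own.
    sources = [
        {"email": "a@example.com", "site": "shop", "bought": "running shoes"},
        {"email": "a@example.com", "phone": "555-0100", "site": "forum"},
        {"phone": "555-0100", "site": "fitness-app", "resting_hr": 62},
    ]

    profiles = defaultdict(dict)
    links = {}  # identifier value -> profile key

    for record in sources:
        # Join on any identifier we've seen before (email or phone in this sketch).
        ids = [record[k] for k in ("email", "phone") if k in record]
        key = next((links[i] for i in ids if i in links), ids[0])
        for i in ids:
            links[i] = key
        profiles[key].update(record)

    for key, profile in profiles.items():
        print(key, profile)   # three thin records, one revealing portrait

Each source on its own discloses little; the join across shared identifiers is what produces the portrait.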

So these are just some of the ways that analytics can change law. If we look at all of the different applications, everything from OpenText, to Legal Robot, to LexPredict (which would predict what a judge's verdict would be), to Casetext, and so on, we can see that artificial intelligence is going to have a significant impact on law. Finally, easing distress.

This is sort of an application of the AI as tutor or AI as coach, but here the application goes beyond just helping you do whatever you want to do; it's actually interpreting what state of mind you should be in, and it works towards promoting that state of mind. (I don't know if you can hear the train going by, but it's a big one.) Such systems can interact with students and can monitor sentiments and emotions that may be interfering with their learning or socialization, almost acting, as I say here, as a Fitbit for the mind. We know that this can happen because it has happened: there have been privacy group complaints, papers published, and so on, about Facebook experiments that used the social network, the way data is presented, to manipulate users' emotions.
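As a sketch of what such monitoring might look like at its crudest, here is a lexicon-based distress score over a student's recent messages; the word list, threshold, and messages are all invented for illustration:

    DISTRESS_WORDS = {"overwhelmed", "hopeless", "anxious", "exhausted", "alone"}

    def distress_score(messages):
        # Fraction of messages containing at least one distress word.
        hits = sum(any(w in msg.lower().split() for w in DISTRESS_WORDS)
                   for msg in messages)
        return hits / len(messages) if messages else 0.0

    messages = ["I feel so overwhelmed by this course",
                "the assignment was fine",
                "honestly just exhausted and anxious"]

    if distress_score(messages) > 0.5:
        print("flag: suggest a check-in with the student")

A real system would use trained sentiment models rather than a word list, but the ethical questions are the same either way.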

So if emotions can be manipulated, then this kind of system can be used to ease distress. Of course, the flip side is that this sort of system can also be used to increase distress, to agitate people, to foment conflict and disputes, and the like. I've put "easing distress" in the title of this slide because that's the application

I would like to focus on; I know, I'm biased. So that wraps up the list of deontic applications. You may think of more deontic applications, and if so, go to the All Applications page in module two, where you have the opportunity to add your own, and to add links to examples of discussion or software which instantiate these applications.

This is the last bit of content that I'm producing for module two. Overall, I hope you've seen that there are many different types of application of artificial intelligence. It's not all content recommendations, it's not all learning paths, it's not all predicting whether a student will pass or fail.

There's a wide range of possibilities and affordances for learning analytics and AI in educational technology and in learning and development. And here's what makes this whole subject so pressing: we're not going to not use this stuff. There are just too many things we can do with it; it won't make sense to anybody to just turn it off.

So it creates a pressing need for an understanding of what the ethics of the application of AI are. At the same time, when we're talking about the ethics of artificial intelligence, we can't be talking about the ethics of a narrow range of applications. We can't be using, if you will, stereotypes in our thinking about AI applications. We have to begin with a sense of the broad scope of the field before us, the capacity of AI to do everything from describing what things are out there to telling us the way things should be, and everything in between. And we need to decide not just what we want

and what we don't want to follow from this, but how we're even going to decide what we want from what we don't want. So the next session, the next module, as you know, will be on the issues in learning analytics, and we're going to take an approach similar to the one we took in this module.

Instead of going through lists of applications, I'm going to go through lists of issues. And the purpose is the same: it's not to study these in any depth. We're not going to be able to study them in depth; there just isn't time, we don't have the capacity, and there are other people doing that.

The purpose is to get the broad scope of the issues that have been raised in the field. And at the end of that module we'll look back and see what sort of associations we can draw between what we know about what exists and what we know about what sorts of issues there are. But all of that comes in future videos.

For now, that's it for this video. I'm Stephen Downes. Thanks for joining me.
