The Machine Is Us


Unedited Transcript by Google Recorder from audio

 

Hiya everyone, I'm Stephen Downes. Welcome back to Ethics, Analytics and the Duty of Care. In this talk, I'm going to talk about the question of whether analytics and AI can be ethical, or have ethical properties at all, and if so, where they come from.

So I don't want to spend a whole lot of time on this, because I think it will cover ground that many people are familiar with, but I do think it needs to be addressed and, if you will, dispensed with. So, let's start with what is, I think, a fairly typical assertion of this view.

This is Audrey Watters, writing a couple of years ago. She writes that a pedagogy controlled by algorithms can never be a pedagogy of care, integrity or trust. And I understand why she would say that. After all, there are certain limitations to what machines can do, and these limitations are fairly familiar to us: they have difficulty driving on highways,

they have difficulty interpreting text, as the transcripts from these talks have shown, and they have difficulty demonstrating care, integrity or trust. But never? How do we know this? What would make it true? What if everything, including even the things people do, is an algorithm? And what do we mean in this context by a pedagogy of care, integrity or trust?

Now, we've covered that last question quite a bit, I think, especially in the previous module. But how do we get to that point from talking about analytics and AI? How do we go from the machine to care, integrity, trust and so on? And, of course, can we? Well, here's my argument; it's a pretty simple argument.

My argument is this: if we can have hate, we can have hate speech. We can have hate, obviously, and we have certainly had many instances of hate speech over the years; in my own country, people like Jim Keegstra and others have felt the force of the law because they engaged in hate speech.

And if we can have hate speech, we can have hate literature. Again, I think that is pretty obvious. There is hate literature; it is defined in law, it is identified, and there are laws against it, at least in Canada, and probably to some degree even south of the border. We certainly know that it exists. And literature is a technology. Now, it's not digital technology, I agree.

Albeit a lot of literature is expressed using digital technology, literature existed long before digital technology. Go back and you have the Gutenberg press, you have web offset printing, you have distribution networks, bookbinding, all the rest of it. That's technology.

It's very simple technology, and nobody pretends that it's intelligent technology. But nonetheless, it can produce hate. Now, we don't actually say that the technology is at fault here; generally we think of the users or producers as the source of the hate. Nonetheless, we have technology

that is hate. And the question I would ask here is: why wouldn't it be the same with care, integrity or trust? If you can have hate technology, you can have care technology. I know it sounds like I'm doing a bit of a sidestep and a dance around Audrey Watters's point, but I think it's an important sidestep and dance.

And I think the evidence suggests that we need to take it. There is this desire, especially in education, to depict technology as neutral. Ian McKnight, for example, writes about this widespread desire to depict technology as pedagogically neutral. You've heard this many times before: it's not about the technology, it's the pedagogy that matters; the technology just enables us. But it's not really clear to me,

anyway, that technology is neutral. In one particularly striking slide presentation a number of years ago, I showed slides of the printing press, the handgun and nuclear weapons and asked: really? Is technology neutral? Really? I think that is an important point, but even more to the point,

technology isn't a simple conduit of intentions. There is what we call opinionated software. To give a good example from our own field: Moodle. Moodle was designed as a learning management system, but it was designed explicitly, and it's in the docs, to promote, or at least to enable, a constructivist approach to learning.

It's opinionated. And similarly for any technology: it's not simply taking you from A to B. It defines how you work, it creates opportunities through new affordances, and it imposes limits. All technologies are to some degree opinionated, and the opinion of the technology, whatever it is, reflects the intent and purposes of the authors, designers and deliverers of that technology.

To give you a little example of how this can transpire, think about the concept of user-hostile technology. This is quoted from a little article called 'When Technology Hates Us' by Paul McFedries: go into any of the little cafés on Paris's Left Bank and sooner or later you will hear someone say 'les choses sont contre nous',

'the things are against us'. The thing, in many cases, is the technology. I've often said 'my computer hates me', not about this computer, but about my computer, and it's not literally because my computer has a sentiment, similar to one I might feel, and actively hates me.

But nonetheless, it can produce all the behaviors that I would interpret, were I thinking of it as a human, as hate. And at the very least, that's not simply a property of the computer; it is actually a reflection of the intents, desires, purposes and feelings of the people who designed it.

If I have a computer that hates me, or at the very least is indifferent to me, it's because the designers hated me or were indifferent to me, or whatever. And that's how we get this user-hostile technology: they haven't done things like user testing, they haven't done things like follow standard design principles, et cetera.

The little article here, from the 1960s I think, talks about this. We see it especially in some of the more recent examples of the failures of AI. Take, for example, Tay, the racist AI created accidentally by Microsoft. What happened is they created this artificial-intelligence-powered chatbot, they put it out there into the world to be trained by people, and people immediately trained it to be racist.

To quote one writer: there is a saying in computer science, garbage in, garbage out. When we feed machines data that reflects our prejudices, they mimic them, from antisemitic chatbots to racially biased software. And it's not just racism; even benign impulses can be fed into computers. We also have Zo, the 'politically correct' AI. Here, when a user sends a piece of flagged content, at any time, sandwiched between any amount of other information, the censorship wins out.

Mentioning these triggers forces the user down the exact same thread every time, which dead-ends. There are some examples in the article. For instance, a person says 'I was bullied today', and the computer responds sympathetically and helpfully. The person says 'I was bullied because I'm Muslim', and the computer says, 'I have absolutely no interest in chatting about religion.'

Basically, any time it spots one of these words that might be contentious, it switches off the conversation and shuts it down: not interested. That seems to me to be a pretty good example of an indifferent or uncaring computer.
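To make that mechanism concrete, here is a minimal sketch, in Python, of the kind of keyword-trigger shutdown described above. This is not Zo's actual code; the trigger list, the canned replies and the helper function are invented purely for illustration.

# Minimal sketch of a keyword-trigger shutdown, as described above.
# The trigger words and canned replies are hypothetical, for illustration only.

TRIGGER_WORDS = {"muslim", "jewish", "religion", "politics"}

def sympathetic_reply(user_message: str) -> str:
    # Placeholder for the ordinary, helpful chat behaviour.
    return "I'm sorry to hear that. Do you want to talk about it?"

def respond(user_message: str) -> str:
    """Deflect if any trigger word appears, regardless of the surrounding context."""
    words = {w.strip(".,!?").lower() for w in user_message.split()}
    if words & TRIGGER_WORDS:
        # The flagged term wins out no matter what else was said:
        # the conversation dead-ends down the same thread every time.
        return "I have absolutely no interest in chatting about that."
    return sympathetic_reply(user_message)

print(respond("I was bullied today"))               # helpful response
print(respond("I was bullied because I'm Muslim"))  # conversation shut down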

We've seen this over the last two decades: the use of big data and the use of algorithms leading to decisions that harm the poor, reinforce racism and amplify inequality. I covered a whole bunch of that in the module on issues in artificial intelligence. And, as argued in the book Weapons of Math Destruction by Cathy O'Neil, these tools share three key features: they are opaque, unregulated and difficult to contest. It is arguable that a tool designed this way, opaque, unregulated and difficult to contest, is a tool designed for hate, right?

You've built hate into your software, and nobody would contest, I think, the idea that what's coming out the other end actually is hate. It's not this thing that could never be hate because a computer can't feel hate; it is hate, and that's why we treat it as such. And it's not just a simple case of bad stuff in, bad stuff out.

The technology is not a neutral conduit; it actually works with what comes in. Here we have an illustration of a feedback loop, where technology amplifies and sometimes magnifies hate and bias. And again, technology that works that way can be thought of as technology that is hateful, can it not?

Because it's actually producing more hate than it has been fed. Now, it depends on being fed hate, and it depends on being designed the way it is, with the feedback loop built in, with the algorithm creating an algorithmic bias, et cetera. But essentially, what's coming out the other end is hate.
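To see how a system can end up producing more hate than it was fed, here is a minimal sketch of such a feedback loop. The engagement rates and the starting share of hateful content are invented numbers, assumed only for illustration; the point is simply that recommending in proportion to past engagement compounds a small initial skew round after round.

# Minimal sketch of a feedback loop in an engagement-driven recommender.
# All numbers are assumptions for illustration, not measurements.

share_hateful = 0.10                       # assume 10% of the initial pool is hateful
ENGAGE_HATE, ENGAGE_OTHER = 0.12, 0.08     # assumed engagement rates per item shown

for round_num in range(1, 6):
    # Recommend in proportion to past engagement, so whatever engaged
    # slightly more gets shown more in the next round.
    weight_hate = share_hateful * ENGAGE_HATE
    weight_other = (1 - share_hateful) * ENGAGE_OTHER
    share_hateful = weight_hate / (weight_hate + weight_other)
    print(f"round {round_num}: hateful share of recommendations = {share_hateful:.2f}")

# The share climbs every round: the loop amplifies the bias it was fed.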

And not just bias. We talk about bias all the time when we talk about analytics and AI. Powles and Nissenbaum argue that academia and business have become almost completely focused on the question of bias in AI algorithms and AI data, where the objective is simply to tweak the data and the algorithms to produce fairness.

They say this discussion of bias in AI has swept across the disciplines. It's not simply that a few computer engineers think of this as a technical problem to be solved; that same belief also extends to people working in ethics, law and media. But that's not the case.

Obviously, the problem of bias isn't simply a technological problem; it's a social problem, not a computational one. The computational approach can magnify it, but it could also diminish it. Moreover, equalizing representation merely co-opts designers into a larger task of perfecting vast instruments of surveillance and classification.

So even the act of treating bias and prejudice in AI might produce other bad effects, and focusing strictly on solving for bias distracts us from bigger, more pressing questions. So we've got a number of balls in the air here. Now, let's be clear about them.

We've got the problem of hate, bias and so on existing in the population that feeds data into the AI. We've got the problem of the people who design and program the AI, perhaps not caring, perhaps intentionally building in mechanisms that produce even more magnified outcomes. And we've got the problem that focusing on some specific problems with AI, and treating them as technical problems,

causes us to overlook some of the wider problems with AI. Sacha Baron Cohen talks about the deliberate use, not just of AI, but of technology in general, to produce propaganda. He writes that all this hate and violence is being facilitated by a handful of internet companies that amount to the greatest propaganda machine in history.

He says the algorithms these platforms depend on deliberately amplify the type of content that keeps users engaged: stories that appeal to our baser instincts and that trigger outrage and fear. So, the title of this presentation is 'The Machine Is Us'. Think about it. We've got the data coming in, we've got the design of the software, we've got the business model behind the software, all working to produce hate and violence.

So, yes, you can have technology that hates, but that technology that hates is in important ways indistinguishable from the people that hate or the society that hates; in other words, us. I alluded to deeper issues as well, and we touched on some of them a little bit with Sacha Baron Cohen talking about the incentives. These deeper issues,

discussed by Doug Belshaw, include the financial incentives that create these online publishers in the first place; the centralized nature of the web, which, despite its original design to be decentralized, was commercialized and brought into basically a central management structure; and the design of the technologies themselves, especially social network technologies, to lock this in.

So even if the technology is harming us in some way, we can't get out of it, we can't get away from it. And we should be thinking that all of these things that produce bad results, and that produce, quite literally, hate, can produce the opposite. Why would we not think so?

But it's the way we need to be thinking about it that makes this clear. When we consider the mechanisms of hate, it's not simply that you have this analytics or AI system standing there on its own and somehow it produces hate. It doesn't have feelings of hate, it has no prior disposition to produce hate, and arguably, all by itself,

it might not be able to produce it; maybe it could, but it really wouldn't be what we recognize as hate. But when we look at all of this together as a wider system, where we have all the people doing the designing, providing the data, creating the business models, managing the technologies, all of that together very definitely and demonstrably produces hate.

And so, when we talk about whether technology can produce good things like integrity, care and trust, we shouldn't be asking whether an AI or analytics system, all by itself, can produce the feelings necessary to produce these good kinds of ethical results. No: once again, it is this wider system of data input, designers, business models and all the rest of it that can produce care.

Now, we know we can produce hate, and it's not clear that we can't produce care, although arguably we don't yet know how. But the question here isn't going to be whether we can have algorithms that provide care, integrity or trust. The simple evidence of algorithms that produce hate, distrust and more shows that we can produce these kinds of results with technology.

The question is: what would a technology that produces care, integrity, trust, or whatever it is that we think of as ethical, look like, and how would we approach designing such a technology? That's the motivation for this module. It was important to me to pull out all the different processes that go into analytics and AI, to examine the sorts of decisions that we make as we design these systems

and as we use these systems, to see where the care, the trust, whatever, goes into it, and even to see what we would think counts as an ethical approach to these design and delivery decisions. We know pretty much how to use these systems to produce hate, whether through an absence of care or through deliberate manipulation of the technology, but we haven't spent nearly as much time on all of the mechanisms that produce ethical machines.

So the purpose of this module as a whole is to be clear that, by an analysis of the actual mechanisms of producing analytics and AI, the actual workflows, the actual decisions that we take, we can see what the ethical import of our contribution is, and how that can produce the result

that, presumably, we would like to see. So that's the beginning of this module, and now that we've done the preliminaries, we'll go into actually looking at the different stages of the AI and analytics workflow. That's it for now, that's it for this short video, and I'll see you in the next one.

Thanks a lot. I'm Stephen Downes.
