When It Is Fundamentally Dubious


This presentation looks at cases where the use of AI is fundamentally dubious. This includes cases where the consequences of misuse are very high, where there is the potential for feedback effects, cases where classification is used to infer agency, and cases where we don't know what the consequences may be.

 

Unedited Google Recorder transcription from audio.

First five minutes clipped...

 

There's a principle of justice here that suggests that people need to have actually committed an act in order to be held responsible for it. Now, it's not one hundred percent true; there are criminal sanctions for things like conspiracy and the like. Nonetheless, suggesting that somebody is going to be liable for punishment simply because of who they are or where they live is inherently problematic.

It's all the more problematic because of the possibility of feedback loops existing within the system. You begin predicting criminality by a certain group of people, and that results, not surprisingly, in focusing more policing resources on those people. But the very fact that you're focusing more police resources on those people means that you are more likely to catch them doing something wrong.

And voilà, you have increased criminality. The people themselves haven't done anything different from the people who live elsewhere, but they're being put under greater scrutiny. It's like putting speed cameras in one part of the city and not in the other: you're going to discover that all the speeding takes place in the one part of the city. Then, when you take this data and put it into your analytics, you're obviously creating a false impression, and you're putting people at risk of being unfairly targeted and unfairly charged when really they're no different from everyone else. So that's what we mean by fundamentally dubious.
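To make that feedback effect concrete, here is a minimal sketch in Python. The district names, offence rate, initial patrol split, and the five-percent reallocation rule are all hypothetical assumptions for illustration, not anything from the presentation; the point is simply that recorded crime tracks where the patrols are, and a system that reallocates patrols based on recorded crime widens the gap even though the underlying behaviour in both districts is identical.

```python
# Minimal sketch of a predictive-policing feedback loop (all numbers hypothetical).
# Both districts have the SAME underlying offence rate. Recorded offences depend on
# how much patrol attention each district gets, and the "predictive" step then shifts
# more attention toward whichever district recorded more offences last year.

true_rate = 0.05                                    # identical behaviour in both districts
population = 10_000
patrol = {"district_a": 0.55, "district_b": 0.45}   # small initial bias in attention

for year in range(1, 6):
    # Recorded offences are limited by where the police are actually looking.
    recorded = {d: round(population * true_rate * share) for d, share in patrol.items()}

    # Naive allocation rule: move 5% of patrol capacity toward the district that
    # recorded more crime, capped at 90% of total capacity.
    hot = max(recorded, key=recorded.get)
    cold = min(recorded, key=recorded.get)
    shift = min(0.05, 0.90 - patrol[hot])
    patrol[hot] += shift
    patrol[cold] -= shift

    next_split = {d: round(s, 2) for d, s in patrol.items()}
    print(f"year {year}: recorded offences = {recorded}, next year's patrol split = {next_split}")
```

Running this, the watched district's recorded crime climbs year over year while the other district's falls, even though the true offence rates never change; that divergence is exactly the false impression described above.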

Similarly, racial profiling. It is arguable, and indeed I would argue, that there is no ethical application of analytics that identifies specific races for special treatment. Now, the argument could be made to the contrary: that in order to achieve equity, it's necessary to identify systematically disadvantaged groups in order to provide the support and the protection that they need.

So this argument isn't going to be straightforwardly wrong, in the sense that it's not obvious that all cases of racial profiling are going to result in fundamentally unethical or dubious practices. Nonetheless, if the purpose of the racial profiling is to do anything other than benefit the people being profiled, then I think it really is a case of the fundamentally dubious application of AI.

And again, it's similar to the predictive policing issue, where your predictions about a certain racial profile might create a feedback effect: you apply more scrutiny to people based on what they look like, and this greater scrutiny results in more frequent observations of the behavior that you're attempting to target. Taking this same approach, but now applying big data and increasingly powerful analytics to it, results in something called identity graphs. The idea of an identity graph is that you use multiple sources of information in order to construct profiles of specific individuals.

For example, on the slide we see an illustration of such a profile. The person is Mary Smith. On the left-hand side, we have Mary Smith at home: her full name including middle initial, her age, date of birth, home address, family email addresses, cell phone, whether she's registered to vote (and presumably how she's registered in systems that require registration), and her interests.

And many more things could be brought together: Facebook accounts, Facebook post information, Twitter accounts, shopping habits, credit card purchases, etc. You also have Mary Smith at work: the company she works for, what her gender is, her business identification number, the business name, the address, when it's open, what its website is, the business's social media, and so on; sales volume, and possibly even things like her salary, her specialization, her business interests, her business contacts, etc.

All of this information is assembled to create a profile that is then fed into an artificial intelligence system or an analytics system in order to, perhaps, sell her things, perhaps predict when she's looking for a change of career, perhaps sell her a house or determine whether she's looking for certain services, to identify how she will vote, to target information and propaganda to her, etc.

Again, this creates a case where we're assigning agency to a person who is not necessarily exercising that agency, based on commonalities with other people. An identity graph is useful for analytics only if it is combined with other identity graphs in order to generate these predictions. A secondary fact that comes up here is that the information about Mary Smith isn't just about Mary Smith.

It includes her family. It includes her friends. And so by collecting data on Mary Smith, you're actually casting a fairly wide net and therefore drawing conclusions about people who may not have given their consent for you to use their data. And of course, Mary Smith herself may not have given her consent for you to use all of this data.
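As a concrete illustration of the mechanism being described, here is a minimal sketch in Python of how an identity graph gets assembled: records from different sources are merged into one profile whenever they share an identifier, and anyone named as a contact is swept into the graph without ever having supplied data. All of the names, fields, and the matching rule are hypothetical; real identity-resolution systems are far more elaborate.

```python
# Toy identity-graph assembly (all data and matching rules are hypothetical).
# Records from separate sources are merged into one profile whenever they share
# an identifier, and contacts named in those records become part of the graph
# even though they never supplied any data themselves.

from collections import defaultdict

records = [
    {"source": "retail", "email": "mary@example.com", "name": "Mary Smith",
     "home_address": "12 Elm St", "purchases": ["running shoes"]},
    {"source": "b2b", "phone": "555-0101", "email": "mary@example.com",
     "employer": "Acme Corp", "job_title": "Analyst"},
    {"source": "social", "phone": "555-0101", "handle": "@marysmith",
     "contacts": ["Tom Smith", "Priya Patel"]},
]

def build_identity_graph(records):
    """Union records that share an email or phone into a single profile."""
    profile = defaultdict(set)    # merged attributes for the matched person
    third_parties = set()         # people swept in without providing data
    identifiers = set()

    for rec in records:
        rec_ids = {rec.get("email"), rec.get("phone")} - {None}
        # Link rule: merge if this record shares any identifier seen so far
        # (the first record seeds the profile).
        if not identifiers or identifiers & rec_ids:
            identifiers |= rec_ids
            for key, value in rec.items():
                if key == "contacts":
                    third_parties.update(value)
                elif key not in ("email", "phone"):
                    values = value if isinstance(value, list) else [value]
                    profile[key].update(values)
    return dict(profile), third_parties

profile, third_parties = build_identity_graph(records)
print("merged profile:", profile)
print("people included without consent:", third_parties)
```

Even in this toy version, the home and work records resolve to one person via a shared email and phone number, and Mary's contacts end up in the output despite never having interacted with any of the data sources.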

So, with this sort of practice, it's hard to say that it's fundamentally dubious, because it's so widely used by marketers and political organizations and the like. Yet at the same time, when it's presented this way, it does seem to be fundamentally dubious, and AI and analytics based on this practice seem to be doubly so.

The discussion of autonomous weapons on robots is something that has already occurred in our ethics course, and it is arguable, and I would argue, that the arming of autonomous robots is fundamentally dubious. Nonetheless, just as in the case of identity graphs, it has already begun to happen.

We do have reports of autonomous drones actually being used in armed conflict, specifically in the Libyan civil war. A second example is also pictured: we have armed robot 'dogs' being used as security guards, and one person in the course commented on how the use of the word 'dog' makes this potentially lethal weapon seem not so scary after all, because, you know, we all like dogs.

So, as we'll see in the next section, this sort of use of AI raises all sorts of questions. If you're shot by an autonomous dog, who is responsible for shooting you? Whom do you sue? Who has the authority to use an autonomous dog to shoot you?

How does that authority come into place? There are all kinds of questions here that haven't been answered by society, and yet governments and private agencies are already beginning the process of arming autonomous robots. Fundamentally dubious. Finally, there's a general class of applications of analytics that can be covered under the heading of cases where we don't know what the consequences will be.

For example, there's a report of a suggestion that colleges should put smart speakers in student dormitories. Now, a smart speaker doesn't just speak; it also listens to what's happening in the room, so that it's able to respond to commands and suggestions, and presumably also to pick up information that will be used by advertisers in order to market to the people who use smart speakers.

And the question is: we don't know what will happen when we put these into student dormitories. Or, as the BioMed Central article says, we simply have no idea what long-term effects having conversations recorded and kept by Amazon might have on their futures. So there are different factors influencing the consequences; there are anticipated consequences, but also, significantly, unanticipated consequences.

Some of these will be beneficial and will be used on a post hoc basis in order to justify the use of the AI in this case, but some of them will not be beneficial, and we don't know how many of each there will be. Also, when we don't know what the consequences will be, we're not prepared to mitigate the potential of those consequences; we're not prepared to comprehend the impacts, not just on the person in question, but on the overall social system.

Imagine, for example, that the conversations of students in the dormitory of an elite university are accidentally leaked. Well, we can have no doubt that some of these conversations are politically incorrect, to use the currently invoked euphemism.

The students will say things in private that would probably render them unemployable in the future. Maybe not all of them; I wouldn't think I was among those, though of course I would say that. But some of them would, and they might not know, they probably would not know, that they're being recorded.

There's a fundamentally dubious application of AI at work here. It's arguable, and I would argue, that this simply shouldn't be done, not just because it's inherently wrong, but because we don't know what the outcome of this use of analytics and AI will be. Even if there are no bad consequences, even if it turns out after the fact to have been fine, the argument here is that, before the fact, we did not know it would be fine, and we created this unnecessary risk.

So that's the end of this short presentation. Again, it's probably possible to add to the list of fundamentally dubious applications of analytics and AI, but I think I've covered some of the major ones, and you get the sense here of the sorts of things that come into play:

when there's a high risk of bad consequences, when accountability and mitigation aren't clear, and when the actual use of the AI creates effects that are magnified beyond what they would be otherwise. All of these create cases where AI and analytics are fundamentally dubious. The next presentation in this series will look at the final set of issues, looking at social considerations of AI, and we'll have that to you shortly.

So for now, I'm Stephen Downes, this is the course Ethics, Analytics and the Duty of Care, and we'll see you again.
