Bad Actors


Unedited Google Recorder transcript from audio.

Hello and welcome to another edition of Ethics, Analytics, and the Duty of Care. I'm Stephen Downes. I'm happy to welcome you to this session on bad actors, part of module three of the course, ethical issues in analytics.

In this section on bad actors, we're looking at people who use analytics and artificial intelligence for, well, as the name suggests, bad purposes. These may be people who use it for illegal purposes or immoral purposes, and that's where we see the intersection with ethics and analytics. It might seem a bit unusual to include bad actors in a discussion of ethical issues in analytics because, of course, ethics are addressed to people who are not bad actors.

They're addressed to people who are trying to do good, or at least avoid doing bad, in their use of analytics and AI. However, as we'll see, many of the actions undertaken by bad actors using these technologies have implications for ethical individuals and wider society. As a result, the actions of bad actors create issues that need to be addressed under the heading of ethical issues.

I also want to note that the definition of a bad actor can vary from culture to culture and from country to country; different cultures may think that different kinds of actions are bad. Sometimes an action that is bad for one country is good for another country, and of course in that case each of these two countries will view the bad-or-not-bad status of that person differently.

So we need to be careful when we talk about bad actors and understand that the word 'bad' represents an interpretation of their actions, and certainly not a statement about their ontological status as people. We'll be focusing on bad actors in AI and analytics specifically; we're not looking at bad actors generally.

In fact, we'll be looking even more specifically at bad actors in AI and analytics in the area of learning and learning technology. Now, again, we'll see that there are cases where there can be bad actors and bad actions that might not seem directly related to education and educational technology, but pretty much anything bad that you can do with AI can also be done in the context of learning and learning technology.

So what I'm going to do, just as in many of the previous videos for this course, is go through a number of examples of the things that bad actors can do using this technology and talk about some of the ethical implications, specifically with respect to learning and teaching.

The first instance of bad acting is misrepresentation. What we have here is an unfortunately common case in learning technology, where the promoter or the proprietor of some system misrepresents the capabilities of that system, typically by saying that it can do more than it does, although there's a subclass of instances where the system does something that they don't tell people it does, like, say, collect data for advertisers.

The more usual case, pretending that the system is able to do something that it cannot, happens especially in the field of learning analytics, where, say, a vendor might claim that the system is capable of accurately predicting whether someone will drop out of a course, or that the system is capable of effectively recommending learning resources for a person. This may or may not be the case.

Very often these claims are presented without suitable evidence or, in another subcase, may be presented with fabricated or insufficient evidence. There's also a subcase of misrepresentation where the proprietors of these AI and analytics systems use them in competitions in order to attempt to validate their claims, and they cheat at those competitions.

There have been a number of cases where the proprietor has, for example, embedded data into the hardware or into the algorithm where it can't be detected, and then used that data in order to create more accurate than usual predictions, or perhaps the models have been pre-trained in some way that can't be detected, again with the result of producing more accurate results than expected.

In fact, on the course web page there's an instance where Baidu was found to be cheating in some of its prediction tasks in these competitions and was banned from the competitions.

Another widespread use of AI is to promote conspiracy theories, and we think of this as an instance of bad acting because of the damage that conspiracy theories can cause in society in general and in educational institutions in particular. Now, we need to be careful about how we define this.

So I am drawing from an article published in Nature here: a conspiracy theorist is a person or group who promotes an alternative narrative alleging a coordinated campaign of disinformation, usually on the part of recognized authorities or institutions. In other words, what they're doing is attempting to get us to believe that the system is trying to fool us, is trying to pull one over on us.

That the system, say, is rigged. Now, one thing to note about conspiracy theorists: they might not be wrong; there might actually be a conspiracy. So we need to be careful when we say that a conspiracy theorist is necessarily a bad actor. Certainly, from the perspective of these recognized authorities or institutions, a conspiracy theorist will be thought of as a bad actor.

But, you know, it depends on the context whether they are a bad actor. Nonetheless, we know that conspiracy theorists can replicate or imitate analytical methods and their dissemination, making it look like they're using analytics and AI in order to reach a conclusion, but perhaps not actually using it, perhaps improperly using it, or perhaps appropriating somebody else's analytics and misrepresenting it in order to promote their conspiracy.

So these are all ways in which analytics can be used by conspiracy theorists to promote the idea that the authorities are lying to you.

Stalking is a prevalent concern in the world generally and in the online world in particular, and has been the subject of fairly detailed analysis, although the use of AI and analytics to assist in stalking has not been nearly as well covered; at least, that was my result when I went looking for it.

In any case, what happens in stalking is that an offender, for whatever motive, uses online or social media or other technology, including analytics or AI, to interact in some inappropriate way with a victim: first by following the victim and finding out about the victim, and then, secondly, by promoting or creating unwanted interaction or discourse with that victim. The victim may be identified by personality, by their attitude, by their socialization, and it creates for them, you know, a barrier to their internet and social media use.

So cyberstalking itself takes place in a social media or technological environment, and this environment can include a learning environment: cyberstalking does happen in LMSs, and it does happen in social networks that are used to support learning. For the victim, there may be psychological, physiological, or social costs.

Certainly their ability to freely use technology is impaired; they may incur financial costs; they may have to take legal recourse in order to block the offender. Meanwhile, the offender is creating fear through this continual behavior, often adapting to whatever measure the victim takes in order to prevent the stalking, and this creates the need for moderators or mediators.

First of all, they need a mechanism for detecting and tracking cyberstalking, and then, secondly, some capacity to intervene; the sorts of people who can intervene are perhaps civic stakeholders, legal authorities, or online platforms. This is a serious problem, and it's made worse by AI and analytics because they give advanced investigative power to everybody at low cost.

And of course some of these people will use it for this purpose.

Another example of an unethical use of AI and analytics is collusion. Usually when we think of collusion, we're thinking of price fixing, and there is certainly evidence that the use of AI and analytics can result in price fixing. Indeed, the paper referenced here shows that algorithms consistently learn to charge supra-competitive prices without even communicating with each other. By supra-competitive prices, what we mean is prices that are higher than would otherwise be the case in a normal competitive environment.

Basically, what's happening is that if a competitor lowers a price, their algorithm learns that they're punished for lowering that price through a reduction in profits, or perhaps through the reaction of their competitor, who also lowers the price, so they're not getting any greater share of the marketplace but are receiving less money.

So the algorithm learns: don't lower the price. Indeed, through the interactions (well, not interactions with the other algorithms directly, but through the environment in which these prices are lowered and raised), it learns to raise its prices higher, because that will provoke the reactions on the part of the other AIs that are most beneficial to itself.

So they're not actually talking to each other, and they're not colluding in the traditional sense, but they are learning from each other that they can both benefit if prices are higher.
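
To make that dynamic concrete, here is a minimal, hypothetical sketch of the kind of simulation researchers use to study algorithmic pricing: two independent Q-learning agents repeatedly set prices, each observing only its own profit and the rival's last price. The price grid, demand function, and learning parameters below are illustrative assumptions of mine, not values taken from the paper referenced above, and the outcome of any single run will vary.

import random

PRICES = [1.0, 1.5, 2.0]            # toy price grid (illustrative only)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

def demand(my_price, rival_price):
    # Toy demand: the cheaper seller captures most of the market.
    if my_price < rival_price:
        return 1.0
    if my_price > rival_price:
        return 0.2
    return 0.6

def profit(my_price, rival_price):
    return my_price * demand(my_price, rival_price)

# Each agent's Q-table: state = rival's previous price, action = own price.
q = [{s: {a: 0.0 for a in PRICES} for s in PRICES} for _ in range(2)]
last = [random.choice(PRICES), random.choice(PRICES)]

for _ in range(50000):
    chosen = []
    for i in (0, 1):
        state = last[1 - i]
        if random.random() < EPS:
            chosen.append(random.choice(PRICES))                   # explore
        else:
            chosen.append(max(q[i][state], key=q[i][state].get))   # exploit
    for i in (0, 1):
        state, action = last[1 - i], chosen[i]
        reward = profit(chosen[i], chosen[1 - i])
        next_state = chosen[1 - i]
        best_next = max(q[i][next_state].values())
        q[i][state][action] += ALPHA * (reward + GAMMA * best_next - q[i][state][action])
    last = chosen

print("Prices the agents settle on:", last)

Notice that the agents never exchange a message; the only "communication" is through observed prices and profits, yet in runs like this they can settle on prices above the one-shot competitive outcome. The point is only to illustrate the setup, not to reproduce any published result.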

Now, collusion isn't limited to price fixing. There can be collusion over contract negotiations, over requests for proposals and other mechanisms for purchasing, over political influence and policy development, and a range of other cases. Generally, when collusion happens, the AIs are learning to create greater benefit for the owner of the AI at the expense of either their clients specifically or wider society generally.

Another use of AI or analytics that can be considered an instance of bad acting is AI-enabled cheating.

Now, it's interesting: when I went looking for information on this, I found mostly information on using AI and analytics to prevent cheating, tons of resources on that, and we've looked at that application earlier in the course. I found very few cases where AI and analytics are actually used to promote cheating, but that said, I did find some examples, and I found two specific types.

In one case, there's a system that used an AI to match students to an academic ghostwriter; the ghostwriter would write their assignments for them, and they, of course, would pay the ghostwriter. This is really hard for an institution to track down because, unlike with other kinds of (as they say) contract cheating, there isn't an example or an instance of the writing out there on the internet to compare with what the student has handed in.

So a system like Turnitin is simply not going to work. Additionally, if the same ghostwriter is used throughout the course, systems won't be able to detect any change in the way a person presents their written work. So this can make it very difficult to track down these instances of cheating.

The second case is using an AI to actually write the essay itself. On the slide here I have an advertisement for a company, Research AI, that will, quote, "start generating results and help you improve your essays," unquote, along with a step-by-step guide to how to use an AI to write your essays for you.

Obviously the systems aren't great yet, but that said, they could even currently fool an instructor or a marker who wasn't looking closely at the content. I know that never happens, but if they didn't look closely at the content, it could fool them. And of course the development of AI is going in only one direction at the moment: it's getting better and better.

And so we can easily imagine much more sophisticated products coming out of these systems in the future.

Another type of bad acting with AI and analytics is audio and video impersonation. This is the famous 'deepfakes' kind of model, where the AI is used to generate fake images or fake video. In the case of impersonations, the AI is actually using other data in order to impersonate a person, to make it seem, for example, that a person has said something that they didn't actually say or done something that they didn't actually do.

Of course, there's a wide range of purposes to which this can be put, including authentication (you know, logins, things like that), cheating, misattribution of sources, and much more. Again, it can be hard to detect. There are anti-deepfake AI systems that look at things, for example the way the eyes look, in order to detect the impersonation, but as this technology gets more sophisticated,

it becomes more difficult to identify the fake and distinguish it from the original. And this has a wider impact on the use of video for learning generally, because it undermines the trust that we have in photographs and visual imagery. It makes it harder for us, when we see something on video, to accept it at face value, to accept that when they say somebody said such and such, that somebody actually said such and such. And therefore it undermines our trust in digital media generally.
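
As a concrete illustration of the 'look at the eyes' approach mentioned above, here is a minimal, hypothetical sketch of a blink-rate check built on the dlib facial landmark detector. Early deepfakes often showed unnaturally low blink rates, so an abnormally low rate can be a weak signal; the model file name, thresholds, fallback frame rate, and sample clip name below are assumptions for illustration, and this is nowhere near a production deepfake detector.

# Hypothetical sketch: estimate blink rate from a video using dlib landmarks.
# Assumes the standard 68-point landmark model file has been downloaded locally.
import cv2
import dlib
from scipy.spatial import distance as dist

PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"  # assumed local file
EAR_THRESHOLD = 0.21        # illustrative guess: below this, the eye is treated as closed
RIGHT_EYE = range(36, 42)   # landmark indices for the right eye
LEFT_EYE = range(42, 48)    # landmark indices for the left eye

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def eye_aspect_ratio(p):
    # Ratio of eye height to width; it drops sharply when the eye closes.
    return (dist.euclidean(p[1], p[5]) + dist.euclidean(p[2], p[4])) / (2.0 * dist.euclidean(p[0], p[3]))

def blinks_per_minute(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back to 30 fps if metadata is missing
    blinks, eye_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray, 0):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            ear = (eye_aspect_ratio([pts[i] for i in LEFT_EYE]) +
                   eye_aspect_ratio([pts[i] for i in RIGHT_EYE])) / 2.0
            if ear < EAR_THRESHOLD:
                eye_closed = True
            elif eye_closed:      # eye re-opened: count one blink
                blinks += 1
                eye_closed = False
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# People typically blink roughly 15-20 times per minute on camera; an unusually
# low rate is only a weak hint, not proof, that a clip might be synthetic.
print(blinks_per_minute("suspect_clip.mp4"))

Simple cues like this are exactly what newer generation systems learn to reproduce, which is why detection keeps getting harder, as noted above.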

Now, this next application might not seem like an ethical issue for learning and teaching technology; nonetheless, I'm including it here because it demonstrates how some of these wider-ranging applications can apply in our more narrow circumstances. The category here is driverless vehicles as weapons. Sure, it's not an academic issue, but academic institutions, schools, colleges, universities, very often have a physical presence.

And in the past, this physical presence has been the subject of attacks. I think, you know, most naturally of the case of school shootings in the United States, but that's just the most recent kind of violence. We can think of, for example, the terrorist attack on a school in Russia, where many people were killed.

We can think of authoritarian regimes attacking and shutting down colleges and universities. And so we can picture, at least in our minds, the idea of an autonomous vehicle being used as a weapon at an institute of higher education. It might be quite simple, like, say, somebody using an autonomous car to drive into a crowd, or the autonomous vehicle might be equipped with bombs or weapons or whatever.

This is not hypothetical. We've already seen, not cars, but drones used as autonomous weapons. There's at least one documented case where, in a conflict in Libya, a drone was sent in on a, as they say, 'shoot and forget' mission. Now, there are two major ways that this can happen. The first major way is that the owner of the vehicle uses the vehicle as a weapon.

That's probably not going to be what ethical people and institutions do. The other way is for the vehicle to be hacked, or otherwise misappropriated, and then used as an autonomous weapon. This is something that does impact ethical institutions and people, because the fact that their autonomous vehicle, whatever it is, could be used as a weapon creates an ethical concern around their ownership and use of that vehicle.

Minimally, for example, people might say that they have an ethical responsibility to secure that vehicle and ensure that it can't be hacked and misused. This is the same sort of thinking that applies when people have computer systems: there's an ethical obligation to protect your own computer system from hacking, because your system might be used as the basis for a botnet, where the botnet sends spam messages or denial-of-service attacks to other people.

So there is a concern here, and this ethical concern affects ethical people.

Finally, tailored phishing. Now, a phishing message is a message, usually by email (although there are examples of text messages being used for phishing), that contains a link or attachment or something, and they're trying to get you to click on that link or attachment in order to induce you, perhaps, to give some information or grant access to your computer system.

This information or access will then be used for unethical purposes, like maybe stealing your money or representing you as, maybe, a cosigner to someone's loan, whatever; there's a range of possibilities here. Spear phishing is a type of phishing that is personalized; that is, the attack is sent to a specific individual.

The target is usually named, and the message may contain information about that person as part of its content, and because it's personalized, the person receiving the message is much more likely to believe that it's real and therefore much more likely to respond.

Spear phishing, in other words, is more effective than plain ordinary phishing. Now, what researchers have found is that deep learning models, for example GPT-3 and other AI services, can be used to lower the barrier to spear phishing. Using these tools, people, even without any coding skills, can mount spear phishing attacks on a large number of individuals, greatly increasing the chances that they'll be successful.

So again, this is the sort of instance of a bad actor using an AI where there is an ethical implication for the non-bad actor, because it requires their cooperation in order to work: it requires access to their data, and it requires a mechanism whereby they can be fooled into clicking on these bad links, either because they're not aware or they're not paying attention, whatever.

And this is something that can happen to almost anybody. I think it also creates ethical implications for organizations that run email services and messaging services. For example, I look at my email services: I have one from Google and one from my organization, and I find two very different types of content get through to me.

Google is pretty good at preventing spear phishing attacks; my own organization, rather less so, and I've had to report dozens and dozens of attempted attacks to our centralized computing services. So it does raise a question: how much responsibility do I have as an individual to report these? How much responsibility does my organization have to prevent these?

And if one of these things happens, what are the ethical implications of that?

So that wraps up our list of bad actors. I could probably have come up with more, and one of the references for this module is a reference to the use of AI in digital crime.

It's a fascinating read, and I do recommend that you read it. Bad actors themselves are not necessarily subject to ethical principles, or, more accurately, are not concerned about ethical principles, but the actions of bad actors have ripple effects, and these ripple effects do create ethical issues even for the most ethically minded user of learning and teaching technology.

So we've got two more sets of ethical issues to go, and I'll be getting to those within the next day or so. Then, after that, we've got the section on ethical codes, where the format of our videos is going to change a little bit and we'll be narrowing in more and more on the ethical content of this course.

So that's it for now. Thanks for listening. For those of you who stuck with me through two previous attempts to record this video without sound, I thank you, and I'll see you next time. I'm Stephen Downes.
