Social and Cultural Issues


Unedited audio transcription from Google Recorder.

Hi, welcome to another edition of Ethics, Analytics and the Duty of Care. I'm Stephen Downes. We're in module three, looking at social and cultural issues of analytics and AI, with an eye on learning analytics and the use of AI in education, training and development generally. This is the last of the sessions that we'll be doing on issues in AI.

We'll be looking more at the ethics later on. We've already looked at some of the issues that arise when analytics works and when it doesn't, we've looked at the influence of bad actors, and we've also looked at some of the uses of AI and analytics that are fundamentally dubious.

Today we're looking at a wider category: the social and cultural issues that analytics may raise. This is a class of issues that addresses the social and cultural infrastructure that builds up around analytics. So we're not looking at the direct impact of analytics, or the immediate ethical harm or good that may be caused by analytics, but rather the wider ways in which it changes our society, changes our culture, and changes the way we learn and think and work and interact with each other.

There are quite a few of these issues. There's going to be some overlap with some of the topics that we talked about in previous sessions, but our focus in this case is always going to be on the wider sorts of issues that arise.

To begin with, let's consider issues of opacity and transparency. We can see from the diagram here that there are different degrees of opacity and transparency in different types of AI and analytics. For example, in neural networks, as with fuzzy logic, the inputs are fairly clear, but the operations, and especially the modification or evaluation of the network structure, are less clear, as is the decision or output. In things like machine learning and meta-heuristic AI and analytics, even the input is to some degree opaque: we don't know exactly what data is going in, or perhaps more accurately, how the analytics engine is considering the data that is going in.
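As a rough illustration of what that opacity amounts to in practice, here's a minimal sketch (a toy network with made-up weights, not any particular system): we can print every single parameter, and yet the dump tells us nothing about why a given input produced a given output.

```python
import numpy as np

# A tiny "trained" network: 3 inputs -> 4 hidden units -> 1 output.
# The random weights stand in for values learned during training.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 1)), rng.normal(size=1)

def predict(x):
    """Forward pass: the 'decision' is just arithmetic on the weights."""
    hidden = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))  # sigmoid output

applicant = np.array([0.2, 0.9, 0.4])   # hypothetical input features
print(predict(applicant))               # a score... but why this score?

# Total technical transparency: every parameter is available for inspection.
print(W1, b1, W2, b2)
# Yet none of these numbers is labeled "experience" or "language skill";
# examining every node and connection doesn't yield an explanation.
```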

Now this raises a wider range of issues, because decisions that are based on analytics and AI will have to be justified, due to ethical concerns and due to our ability to trust in the systems that we're running and the institutions that deploy them. But because of the way analytics is structured (and we'll talk about that a lot more in the modules ahead), this becomes a lot more difficult. It's the black box nature of AI; or, more accurately, technically we could examine every single node and every single connection, but the complexity and the lack of labeling of these nodes makes it very difficult, if not impossible, to have a straightforward description, understandable to people, of what's going on inside an AI engine.

So for the wider social use of AI, there needs to be a better understanding of how to make the decisions, and the way AI works, less opaque and more transparent. A related phenomenon, and one that's not talked about a lot in the literature, is the phenomenon of alienation.

We can depict that in several ways. One way of talking about it is the way that AI, and digital technologies generally, impose themselves as a barrier between one person and another. We can think of the very social situations in which that comes up: between, for example, a decision-maker and the person affected by the decision, or between an educator and a student. When AI, and digital technology generally, imposes itself in this way, it creates a distance between the two humans in the process and has the potential effect of alienating one from the other. The person, especially at the output end, the student or the client, doesn't feel connected to the human that is providing the service or making the decision. So the capacity of someone to access jobs, services, and other social, economic, and cultural needs feels more distant and more impersonal, and the person feels less and less a part of society, and more something separate or apart from society.

And this can lead to much more widespread and long-term social issues. Related to both of these is the phenomenon of explainability. The idea here is that if an AI has an impact on someone's life, then we need to be able to provide a full and satisfactory explanation for its decisions.

This is tricky, not simply because of the complexity, and not simply because of the opacity, which I discussed earlier, but also just because of the nature of explainability. An explanation is an answer to a 'why' question. You know: why are there roses in the yard? Why was I found guilty?

Why was my job application rejected? And typically a reason is given in terms of straightforward causes and effects. There are roses in the garden because you planted them. You are guilty because the evidence pointed to your guilt. You were refused the job because you don't speak the language.

You know, we can understand that, but in real life causes are a lot more complex than that. One of the advantages of AI is that it takes into account multitudinous factors that humans don't take into account when they're simply deciding what to attribute as a cause of a certain effect.

Now, that makes artificial intelligence predictions more accurate, sometimes even uncannily accurate, but it makes them hard to explain, because now we don't have access to this simple causal story. And we don't get around this simply by making up a nice simple causal story. Explainability in terms of artificial intelligence is going to have to be done using a variety of methods that, on the one hand, educate the person being explained to about the nature of cause and effect, and on the other hand, take advantage of logic like counterfactuals in order to create a story that does not necessarily depend on strict cause and effect.

For example: why did you not get the job? Well, if you had presented information about your ability to speak the language, then you would have been successful. That's a counterfactual. It's not specifically a cause or effect, but it does give a good enough story to the person; it explains the AI's decision. Even this, though, is hard to do, and it's certainly not the case that everybody involved in AI and analytics does anything of the sort.
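To make the counterfactual idea concrete, here's a minimal sketch (the features and decision rule are invented for illustration; in a real system the model would be the opaque part): perturb one feature at a time and report the change that would have flipped the decision.

```python
# Toy counterfactual explainer: find a single-feature change that flips
# a (hypothetical) hiring model's decision.

def hire(applicant):
    """Stand-in decision rule; a real model would be an opaque network."""
    score = (2.0 * applicant["speaks_language"]
             + 1.0 * applicant["years_experience"] / 10
             + 0.5 * applicant["has_degree"])
    return score >= 2.0

applicant = {"speaks_language": 0, "years_experience": 8, "has_degree": 1}
print(hire(applicant))  # False: rejected

# Try flipping each binary feature and see whether the decision changes.
for feature in ("speaks_language", "has_degree"):
    alt = dict(applicant)
    alt[feature] = 1 - alt[feature]
    if hire(alt) != hire(applicant):
        print(f"If {feature} had been {alt[feature]}, "
              f"the decision would have been different.")

# Output names speaks_language: a counterfactual explanation that makes
# no reference at all to the model's internal "causes".
```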

Another factor related to all of these is accountability. This is going to come up on a number of different occasions; I mean, you can already see the relationship between explainability and opacity. Who is accountable for the actions of an AI? Here we have a person who has been denied the job, and even if they know that they're being denied the job because they don't know the language, who is responsible for that decision? Is it the person who programmed the AI? Is it the person or organization that provided the data on which it based its decision?

Is it the owners of the AI? Or is it the end user, the person who actually pulled the switch, turned the AI on, and applied it to this particular situation? Now again, as with causation, we could say that there are multiple people accountable all down the chain, but our traditional perspective of accountability doesn't really work that way.

Ultimately, socially, we expect there to be one person in charge, and it raises the question of whether this social expectation can persist in a world where there are multiple agents responsible for the actions of a single artificial entity.

One of the interesting impacts of not just artificial intelligence but digital technology generally has been the clustering of people into what have been called filter bubbles. What we see pictured here is a representation of the books read by people on the left and people on the right in American society.

And as you can see, there's barely any overlap between them. Now, this is a function of how these books are recommended to these individuals and how they're described to each other. A lot of that right now doesn't take place through the work of analytics and AI specifically, but it is the result of network processes, especially things like social networks and data networks, and so it's reasonable to assume that if such decisions are automated, the results will be very much the same.

And there's a long-term social risk at play here. As we read in this Spectrum article, eventually people tend to forget that points of view, systems of values, and ways of life other than their own exist; such a situation corrodes the functioning of society and leads to polarization and conflict. Now, there are many factors at work in digital technology, including the motivations and the incentives behind digital technology.

But all of these also inform how we design and apply our artificial intelligence systems. And so it's reasonable to worry about what happens if we are not careful with these motivations and incentives, if we're not careful with how we design the input to these networks and the functioning of these networks within an AI, to prevent damaging social cohesion and creating filter bubbles. Part of this is the result of feedback effects, and we're going to see feedback effects a few times.

We already saw one in the case of an application of AI that is fundamentally dubious, that is, the use of predictive policing, and that's a classic feedback effect. The idea here is that the AI predicts that a certain region of the city will produce more crime. So the police do more policing in that region, and because of this increased scrutiny, the result is that there's more crime, or at least more crime detected by the police.

And that's fed back into the data system, thus reinforcing the conclusion that it drew in the first place. The problem here is that this conclusion may well have been wrong in the first place. So there will have to be, as Ross Dawson writes, careful consideration of the social dynamics of predictive information.
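Here's a toy simulation of that loop (all numbers invented): two districts with identical true crime rates, where a small initial bias in the records directs extra patrols, and the extra detections are fed back in as data.

```python
# Toy predictive-policing feedback loop. Two districts with the SAME
# underlying crime rate; district A (index 0) starts with slightly more
# recorded crime, an accident of history. All numbers are invented.

true_rate = [100, 100]   # actual incidents per period, identical
recorded = [55, 45]      # initial (slightly biased) crime records

for period in range(5):
    total = sum(recorded)
    # Allocate 100 patrol-hours in proportion to recorded crime...
    patrols = [100 * r / total for r in recorded]
    # ...and detection scales with patrol presence (toy assumption).
    detected = [t * p / 100 for t, p in zip(true_rate, patrols)]
    # Detections are fed back into the data system.
    recorded = [r + d for r, d in zip(recorded, detected)]
    print(f"period {period}: recorded = {[round(r) for r in recorded]}")

# Even though the true rates are identical, district A's head start is
# confirmed and re-confirmed every period: extra scrutiny produces extra
# detections, which justify the extra scrutiny. The data never corrects.
```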

In some cases, it's arguable that it just should not be used. In other cases, where the stakes aren't so high, it's not obviously a dubious use of AI, but it could lead you astray. It could lead to decisions about the allocation of resources, the organization of labor, the recommendation of content, etc. being increasingly incorrect. The classic example of the latter phenomenon is the YouTube algorithm, which recommended more and more extreme videos on a particular subject. And we've also seen that happen with the Facebook algorithm; in the case of the Facebook algorithm, the thumb was kind of on the scale to actually increase and promote this feedback effect.

So arguably, we have the impact both of feedback effects and of bad actors in that situation. New types of artificial intelligence also lead to new types of interaction. And in such cases, it's going to be of increasing importance to look at the impact on traditionally disadvantaged groups. These impacts will often come in a shape that we don't expect.

That's why we have to be particularly vigilant. One example that was given was that an automated vehicle parked in such a way that it blocked access for a person in a wheelchair. Now, we don't typically think of that as, you know, a type of exclusion, and yet to the person who's being inconvenienced in this way, it is very much a type of exclusion.

So when we're teaching or training an artificial intelligence to operate, there needs to be a requirement that we somehow include this context, so that we don't get undesirable side effects such as this lack of inclusion. There's another factor with inclusion as well, and that's with the creation of the algorithms and the data sets that are used in artificial intelligence. For one thing, these data sets need to be inclusive.

They can't consist of only one ethnic group or only one nationality. As well, it is preferable that the teams who are building these AI solutions are diverse, so that it actually occurs to them to think about cases where an AI might result in a solution or situation that is not particularly inclusive,

like the one with the person in a wheelchair being blocked by an automated vehicle. It might just not occur to a person who is not disabled. So generally, there's this wider social, political, and economic impact that AI may have, from the perspective of creating more or less inclusion in society.

It's not clear how this issue is to be addressed. It's not clear how you can add an awareness of context to your typical AI, and so it certainly is a longer-term ethical issue to be considered.

Artificial intelligence and analytics also raise numerous issues of consent, and in some senses may even redefine what we mean by consent. Certainly, in society and culture, they've increased awareness of the need for consent. Of course, that's a lot of the thinking behind the European General Data Protection Regulation (GDPR).

But it's something that also applies across the board in ethics as a whole, not even thinking about artificial intelligence. Ethics as a whole, and especially research ethics, definitely talks about consent, conditions of consent, and mechanisms of consent. The concept of consent isn't just simply clicking OK on a box; arguably, that satisfies neither the condition of knowledge nor the condition of permission. There's the concept here of informed consent: a person needs to know what they're agreeing to, and not just what it is that they're agreeing to, but what the potential consequences and the potential risks are. And the permission granted needs to be explicit. Consent is a concept that applies to both the provider and the recipients of services.

And there have been discussions about cases where providers may refuse consent, and questions about whether that is unethical. This happens most frequently in the area of medical procedures, where for one reason or another the application of the procedure violates a person's ethical code, but it may apply in other areas as well.

There are cases where people working on analytics at Google, for example, refused to participate in certain projects. And so we have to ask, first of all: was this refusal of consent ethical? Or was the refusal of consent in some sense unethical? This would be asked if, for example, the refusal of consent could be argued to produce a wider harm to society.

Consent also includes rights over access to data, use of data, erasure of data, even the repurposing of data, and all of that is wrapped up in questions like: How are the harms identified? How are the harms, if they occur, remedied? What meaningful alternatives to consent are provided? If the only way you can use a service is to consent, and if the service is in some way required, then it's arguable that there are no alternatives to consent. More broadly, the use of analytics and artificial intelligence is leading to what many call a surveillance culture.

And there are different ways in which this comes up, different sorts of ethical issues that arise as a consequence. There's a whole discipline now being created called surveillance studies. And what surveillance studies is about is not simply the ethical implications of being surveilled, with some people arguing that they have a right not to be watched, and other people arguing that they have a right, and indeed an obligation, to watch.

But beyond that, we can ask: how does the awareness that we're being watched change society? How does it change our behavior? How does it change the way that we interact with each other? In the period of the pandemic, there were questions raised about how people were behaving once they were being observed by other people directly, face to face, through a Zoom camera. They didn't interact in the same way as they did when they were speaking face to face; they felt more on stage, more sensitive to their physical presence. You'll notice me looking to the side here; that's where I see the video of my image being projected. And indeed, as I do these videos, I'm very conscious of my hair and, you know, whether I'm smiling enough, things like that.

These are things that people might not normally think about, or perhaps they're thinking about them in different ways. It's not clear that these create ethical rights or ethical wrongs, but we don't know that unless we study it. Another aspect of surveillance is that the people doing the surveilling have a much better understanding of you, your environment, and your context than even you do.

And so they have what's called algorithmic certainty. They can tell how you are going to behave, what products you are going to buy, who you're going to vote for. And this has a long-term impact on market economics, democratic processes, and cultural and social values generally, and we have to ask: how do these long-term impacts play out?

What is the ethics of surveillance with respect to these long-term outcomes? Maybe algorithmic certainty is good; maybe we finally get a society that actually responds to what we believe and what we want. But on the other hand, maybe algorithmic certainty is replacing our capacity to change our minds and make new decisions in the light of new information.

Certainly a broad area for study. Related to this are issues of power and control, and these issues come up again and again when we're talking about the use of AI and analytics. AI has the potential to alter social structures of power and control, but it also has the potential to entrench existing structures of power and control,

so that those who are disadvantaged or disenfranchised remain forever disadvantaged and disenfranchised. Surveillance, says Edward Snowden, is not about safety, although it's often argued to be for the purposes of safety. It's about power, and it's about control. What's happening over time is that as we have more and more data, and as we have more and more processing power, the way we take decisions changes. Now, this might be a good thing, and we need to keep in mind both sides of this. In the diagram on this slide we have, first of all, not enough data to take good decisions, and so the people in charge simply made a decision. Over time, you get more data volume and more processing power, and this allows for what they call today evidence-based decision making. Now, there's a lot of discussion we can have about that: about the viability of the evidence, about who decides what the outcomes are, who decides what the benefits are. But on this chart, and probably currently, this is represented as an intermediate state.

Ultimately, once you have sufficient systemic complexity, collective intelligence, however that's defined, replaces top-down control. This is actually a scenario that I'm anticipating, working toward, and trying to understand. It doesn't follow from the fact that this is what happens that this is a good thing. It might be the case that collective intelligence is the worst thing that we could be depending on in order to govern ourselves; it might be that collective intelligence removes individual autonomy and freedom.

You know, again, it's about power and control, it's about algorithmic certainty. Or it might be that collective intelligence allows the many voices who do not today have power some mechanism for projecting their power and creating systems that work toward their benefit and to their long-term gain. There's no easy answer to any of these questions.

We're only beginning to comprehend how algorithmic decision making creates collective intelligence, the conditions under which it would create collective intelligence, and the sorts of structures that we need to put into place in order to make sure that we get ethical collective intelligence. I think this is an important point to make here: it's not one of the long-term ethical issues in general, but the question of who does what is important.

We're being sold right now, and I'd say 'sold' is the right word, a picture of a future with artificial intelligence where it provides the calculations, the computation power, the instant pattern recognition, and we humans provide the creativity and the empathy, and so we're hand in hand, living happily ever after, humans in charge, AI doing what we want. But especially with recent advances in capability, and we looked at a number of those in module two of this course, there's no reason to believe that in the future AI will not be able to outperform humans both on the computational side and on the creative side. And that gives us a very different picture of a future with artificial intelligence and analytics. I don't want to say it gives us one with no role for humans, because I think that's probably inaccurate.

It might give us one with a different kind of hybrid role than the one that we're being sold right now. But I think we need to be aware that the future of AI won't... excuse me... won't be the way it's being depicted in this picture. Here's the problem doing these videos live. I love doing them live; I think I do better when they're live, and they're certainly faster. But I get things like my throat turning into a frog. So, related to all of this is the possibility, and in fact perhaps the likelihood, of an oppressive capitalist economy developing out of all of this.

Audrey Watters looks at this. She writes: scholarship, both the content and the structure, is reduced to data, to a raw material that's used to produce a product sold back to the very institutions where scholars teach and learn. I would argue that it's not just scholarship that's being reduced to data; pretty much all forms of creativity and interactivity are being reduced to forms of data. Zoom, which today announced that it's going to be selling advertising on its free version, also took pains to say that it would not collect data on the contents of Zoom interactions in order to inform that advertising. Now, maybe you believe Zoom, maybe you don't.

But the point here is that your conversation with another person using an interactive video product produces data, and this data can be gathered and commodified in order to produce new products. And it gives the people who produce these new products an advantage far beyond anything that individuals could produce. It's equivalent to the advantage that a manufacturer who owns a clothing factory has over a person who sews shirts by hand; it's at that kind of scale. Now historically, when imbalances at that sort of scale have occurred, there has been a concentration of wealth and power like the one we're now seeing today, and an increasingly oppressive economy. In the past, that has not resulted in good things for the economy.

Because ultimately, the people are either worn down or they revolt, and both are possible. I mean, if we take a Marxist perspective, they'll probably revolt, but, you know, the Marxist perspective isn't always right. And if we look at some of the more oppressed countries around the world today, they're not in revolt; they're just in brutal, repressive conditions where the mass of people lead wretched lives. So you can see the ethical issues that arise when AI and analytics are able to take everything that we produce and turn it into raw material for the production of the things that we currently produce and currently depend on for our own livelihoods.

AI is also increasingly becoming an authority, and one way of talking about that is to talk about our sense of right and wrong. Again, this is that picture that we're sold, right? Where AI will do the calculating and humans will do the deciding. Well, the deciding is based on the sense of right and wrong.

Now, there's this picture, this sense, that analytics and AI cannot reason, cannot understand, and therefore cannot know the weight of their decisions. But we can imagine an AI developing a conscience. We can imagine an AI developing a sense of right and wrong, if for no other reason than that people are trying to teach AI what counts as right and wrong. And once the machine starts making these pronouncements, it's going to be very easy to allow it to keep making these pronouncements. On the one hand, it'll be really hard to argue against the AI, because it has all of that data and all of that knowledge, and you have just your sense of right and wrong.

It's like somebody trying to argue against the entire medical establishment using intuition. I mean, there's really no point; there's really no equivalency between the two points of view. It'll also be convenient to allow AI to make the decisions of right and wrong. We won't need to worry about it.

We just ask the AI and we can act accordingly. It takes a lot of the stress and the pressure out of life. And even for people who are looking for, you know, the gaps in the sense of right and wrong, looking for the loopholes, having an AI declare clearly what's right and what's wrong allows people to walk as close as possible to the edge of what's determined to be wrong without going over. And if you think about it, it's a lot like speeding. We don't independently determine the right speed or the wrong speed to drive on the highway; we're told, and we're told in two ways.

Number one, we're told by the signs that are on the side of the highway, and the second way is we're told by the police, who will pull us over and give us a ticket if we drive too fast. Now, the signs are a guideline; the police are the actual enforcement. And everyone knows, at least in this society, that you can drive faster than the posted speed limit, to a point, and most experienced drivers in a given region know down to the exact kilometer per hour how fast that is. On the 417 out here, it's 120; it might be 125, depends on how you feel. To me, before the speed limit was raised to 110, it was 119: you didn't want to go 20 kilometers over the limit.

And if AI is allowed to determine the rightness and the wrongness of all acts the way the speed limit and the police determine the rightness and the wrongness of the speed that you drive, we will very likely move as far over to the edge as we can, so that we're still right but we're as close to being wrong as possible. And it's arguable that in that sort of environment, or even one where we just blindly follow the instructions of the AI, we actually lose our sense of right and wrong, much in the way a person who uses only a calculator to perform mathematics might lose the sense of proportionality when they do multiplication or division. Similarly, humans might lose the sense of proportionality with respect to right and wrong if we allocate the decision making to an AI. That's a long-term ethical consequence.

It's not one that's discussed a lot in the literature, but it's probably one that's going to have more impact over the years, I think, than many of the issues, like bias, for example, that we talk about today. Ownership: the rise of creative AI (and there is a rise of creative AI; don't think that only humans can create) creates many issues with respect to ownership, and I've listed a few of them here.

Should AI algorithms be patented? Can intellectual property restrictions restrict uses of the data being used to train an AI? Who are the creators of AI-generated art? What if an AI is used to create all possible art? That is not an impossibility. There's one person who created all possible combinations of notes in a certain scale of a certain size and then granted them to the public domain, so that none of these melodies can be copyrighted.
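As a rough sense of the scale involved (parameters invented for illustration, not those of the actual project): even short melodies over a single octave number in the hundreds of millions, well within reach of a brute-force program.

```python
import itertools

# Toy parameters: 8-note melodies, each note from a 12-note chromatic scale.
notes, length = 12, 8
print(notes ** length)   # 429,981,696 possible melodies

# Enumerating them all is purely mechanical; here are the first three.
melodies = itertools.product(range(notes), repeat=length)
for melody in itertools.islice(melodies, 3):
    print(melody)

# At a million melodies per second, the full run takes about seven minutes.
```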

But what if that was done by a company that simply took an AI, created all possible songs, and copyrighted them? Could humans consequently be blocked out of content creation entirely? Can humans even compete with AI-generated content? I know very few people who go to the store and buy handmade shirts.

You know, we all get our shirts made by machines, by people in Hong Kong. My grandfather used to own a tailor shop where all of the shirts were made by hand; even the cloth was made in the region and then sewn into shirts. Those industries no longer exist. Could that happen to all of the creative industries in the future? Does that happen to things like this video, which is being lovingly handcrafted using the best technology I can buy? Does that mean this is replaced by an AI sometime in the future, one which will have a nice musical track in the background and better video, cost less, and be more quickly produced?

And then, over and above all of that, what impact might regulation have on the creative capacity of AI? Right now in Canadian media, we have what are called Canadian content regulations: a certain percentage of the television shows and musical content broadcast in Canada has to be produced in Canada, and the Auto Pact worked in a similar way. Maybe in the future we'll have human content requirements, so that automated radio stations, which already exist, must play a certain amount of human-created content. That's certainly a conceivable regulation. And you know, it's the sort of thing we should be thinking about now, because the people who produce automated content are probably also thinking about the sort of regulatory regime that they would like to work under,

and it's not one that includes protection for humans. Responsibility: you know, if you can get credit for something, you can also take the blame for something. And again the question comes up: who's responsible for a harm caused by an AI? I was involved in some discussions on this subject where one proponent was arguing for the concept of AI autonomy.

The idea was that the responsibility for what the AI did could be detached from any human and actually assigned to the AI itself. Now, that's an inherently problematic concept; at least to me it is. Other people that might be implicated are the developer of the AI, particularly if they're a black-hat developer, as pictured on the slide here, or the owner of the AI, much in the way that the owner of a dog is responsible for the actions of the dog. Another thing is that AI technologies can place further distance between the result of an action and the actor who caused it; it's a remote causation problem. There are questions here about who should be held liable and under what circumstances.

It also allows for the creation, as I commented a bit earlier, of an environment where complex causation is the norm. There's no one person responsible for the act; multiple people and multiple systems are responsible for the act, and it becomes hard to place blame on anyone. This creates large, intractable social and cultural problems. Global warming is an example of this.

There's no one person or no one agency responsible for the economic system that functions, basically, by producing global warming. It's clear that we want it to stop, but it's not clear that there's any person, or even group of people, that we can talk to and have change their behavior in order to make it stop. We're told that we should each undertake personal actions, taking individual responsibility for global warming.

And so we do things like use paper straws and drive electric cars. And yet the engines of our society, the basic makeup of our society, depend on being able to produce greenhouse gases. Look at the supply chain, for example: we've already seen the instability that happens when our supply chain falters, and yet the supply chain is a major contributor to global warming.

So how do you assign responsibility in that case? It's not simply the person that bought a shirt from Hong Kong instead of one that was tailored at home. It's a collective kind of responsibility, and in AI and analytics generally, pretty much all attributions of responsibility are going to be of that sort.

We need to figure out how to handle responsibility in such a case. We also have a condition known as winner takes all in some people. Perhaps, oh yeah. Well, you mean capitalism, but it's not simply capitalism. I've put a number of images on the slide here because I want to identify that there are multiple causes of a winter, take all kind of environment.

So first, the ethical question in broad strokes is: how can the data advantage of some large corporations, and the winner-take-all economies associated with them, be addressed? How can the data be managed and safeguarded to ensure it contributes to the public good and a well-functioning economy? These are good questions.

The problem is, we can't simply answer those questions, because we have multiple mechanisms that produce winner-takes-all phenomena. Here's one summary of some of these. First, the focus on relative performance instead of absolute performance. A good example of this is a sports economy, where you're not trying to reach an absolute pinnacle of performance.

You just need to be better than the next person in order to win, and just being better is enough to create a huge imbalance between your salary and their salary. There's also the concentration of rewards, such that you reward only the winner and allocate very little to the rest of the people, who are losers. Lotteries work that way, right?

The lottery will concentrate the reward on just two or three people who win the large pot, and the vast majority of people win nothing. This kind of thing can happen in an environment that is competitive and overcrowded, where many people are trying to attain the result that the person who eventually wins does.

Think of, for example, music. There are many people who play music and would like to be successful in music, and because there are so many people, it creates much more interest and popularity. And so the people who are successful are able to be very successful. Meanwhile, because there are so many other people, the relative rewards that are allocated to these other people are very small, because there are so many of them. Another source of winner-takes-all phenomena is the mass market. The mass market allows one individual to reach many people in society, indeed all people in society. And so a person who can appeal to the masses is able to... excuse me.

No, I just thought I'd sneeze there, because, you know, I had the frog throat earlier; live video, don't you love it. The mass market allows someone to become very wealthy by extracting a very small amount of resources from very many people. This is how the commodification of AI works. The AI company takes such a small percentage of the value of, say, somebody's conversation on a video conferencing system, such a small percentage of the value, but by reaching a billion people on that video conferencing system, they can create enormous wealth for themselves. This is aggravated by network effects and feedback effects.

The network effect is something along the lines of the following: the value of a network increases at a much greater rate than the size of the network. A network of two people is not worth very much; a network of 10 people is worth quite a bit more; a network of a hundred people is worth much more than 10 times a network of 10 people, and so on.

I would say it's exponential, but I'm not sure the actual mathematics is exactly exponential (the most commonly cited version, Metcalfe's law, is quadratic: the number of possible connections among n people is n(n-1)/2). So the idea here is that whoever has access to that network becomes the winner, and competing networks, even if they're just a little bit smaller, are so far behind in the benefit that they produce that they fall further and further behind.
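A minimal sketch of that arithmetic, using the quadratic (Metcalfe) version of the claim; the exact exponent is debated, but any superlinear growth makes the same point:

```python
# Metcalfe-style network value: proportional to the number of possible
# pairwise connections among n members, n*(n-1)/2.

def value(n):
    return n * (n - 1) // 2

for n in (2, 10, 100):
    print(n, value(n))          # 2 -> 1, 10 -> 45, 100 -> 4950

# A network 10x the size is ~110x more valuable (4950 / 45). And a
# competitor that is only 10% smaller offers ~24% less benefit:
print(value(100) / value(90))   # ~1.24
```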

So ultimately you get just one network. That's why, really, we have just one telephone network. That's why we have just one road network; I mean, can you imagine an alternate road network? It would make no sense. That's why Facebook can become almost completely dominant: because, again, an alternative to Facebook starts so far behind in utility that even if it's close to catching up, it's not nearly as valuable as Facebook is. Companies and organizations take advantage of these to create winner-take-all scenarios. They also put their thumb on the scale a bit by creating lock-in and barriers to exit; that is to say, they make it hard to leave their network or their product. Have you tried getting your data from Google?

Google says it's possible. It is not easy. Have you tried getting your data from Facebook? You can't get your data from Facebook. Have you tried switching from a Microsoft product to an OpenOffice product? Again, there's significant lock-in here, because there's a lot of learning and adaptation required to move from Microsoft to the competition.

Finally, on top of all of this, we have the aforementioned feedback loops that I talked about, where the prediction of success ultimately becomes a self-fulfilling prophecy. All of these lead to winner-take-all phenomena, and winner-take-all phenomena are arguably not good for society. Now, there's going to be the set of people who say, no, it's good that we have billionaires, because they're able to amass the resources that we need for really high-profile projects, like sending William Shatner to space, and to a degree that is true.

On the other hand, the billionaire was able to do this only 20, no, 50 years after all of us as a society were able to do it. So I'm not convinced by that argument personally, but that argument exists. Certainly the winner-takes-all phenomenon produces a lot of losers, and this has ethical consequences,

assuming that you believe that the situation losers find themselves in is ethically problematic. If you believe that having a large mass of people in the country having economic difficulties is ethically wrong, then you may be obligated to say that a winner-takes-all scenario is also ethically wrong. I think there are a lot of arguments that go back and forth here, especially on the economic side of the debate, but I think that technologists and educators also have to become involved in that debate and start to talk about what is the ethical distribution of the rewards from analytics and artificial intelligence.

And I don't think there are any easy answers here. Moving toward the end of this presentation: there have been concerns raised about the environmental impact of AI-based systems. We talked a little bit earlier about how responsibility for environmental impact is very difficult to allocate, and we have a similar case here. We have, for example, the training of an AI model that, according to this study anyway, produces far more CO2 emissions than, say, traveling from New York City to San Francisco.
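The arithmetic behind such comparisons is straightforward, even though every input is contested. A sketch with assumed numbers (illustrative only, not the figures from the study cited):

```python
# Back-of-envelope CO2 for model training: emissions = energy x grid
# intensity. Both inputs below are assumptions for illustration, not
# measurements from any particular study.

energy_kwh = 100_000          # assumed energy to train a large model
grid = {                      # assumed grid intensity, kg CO2 per kWh
    "coal-heavy grid": 0.8,
    "US average": 0.4,
    "Ontario (mostly nuclear/hydro)": 0.03,
}

for region, kg_per_kwh in grid.items():
    print(f"{region}: {energy_kwh * kg_per_kwh / 1000:.0f} tonnes CO2")

# The same training run can differ by an order of magnitude or more
# depending on where it's run, which is the point made about Ontario.
```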

Now, that of course depends on where the AI model is being run. If it's right here in Ontario, the emissions are almost zero, because something like more than 95% of Ontario's electricity is produced from non-CO2-emitting sources. Of course, a large amount of that is produced by nuclear energy, and people may have different ethical objections to that.

Nothing is easy. Nonetheless, the question of the environmental impact of something like AI can be raised: who is responsible for that? One would think the people who benefit from AI are responsible for any damage that it causes. However, this has not been the case for previous systems, such as the oil industry, railways, cars, and so on.

So arguably it would take a significant change in society for this to be the case with AI. And the Forbes article cited here mentions this: there may be environmental benefits to the use of AI generally. Perhaps having automated systems will reduce our impact on the planet. Certainly, having an AI manage temperature control in a house could maximize efficiency in that house.

Although if the person in the house is like me, it'll maximize heat in the house and end up with a worse system. There are no simple answers here again. But with respect to the environment, again, we're weighing the benefits against the costs. We're weighing what we count as an ethics-bearing cost as opposed to simply, say, an economic cost or a personal cost or a convenience cost. The ethics of a particular strategy is always something that overlays that strategy. And even in cases like, you know, the environmental destruction of the planet, there ultimately needs to be an argument with respect to why it's bad that we destroy the environment of the planet, because it's not immediately obvious, from the point of view of the planet, that this is a bad thing.

Finally (for real this time), there's the issue of safety. The impact of AI on safety could be very direct, as for example in the Uber self-driving car case pictured here. For those of you who don't have video: you see a car, and you see a human figure being flung through the air, having just been struck by the car.

But again, with respect to cause, there could be any number of causes related to AI and analytics that result in poor safety. They could range from an inadequate safety culture, both on the part of the designers of the AI and the users of the AI; they could be the result of misdiagnoses and errors; they could be the result of a blind spot in the AI model: it just never predicted that a person could ever pull that switch, for example. AI and analytics could also lead to unsafe patterns of behavior. If we come to always trust in the predictions of the AI and lose our sense of caution, this pattern of behavior might ultimately be harmful.

It's kind of like the person who depends on a calculator for math. This creates a pattern of behavior such that, when they're presented with an obviously wrong mathematical result, they don't have the educational background to understand that this result can't be right, and then they're led into a mistake.

There's the possibility of vulnerability to attacks on the AI; I mentioned this a bit earlier with the risk of hacking and cyber intrusion. And finally, there's the impact of compliance and regulation. Here we have the wider social issue of how to enforce compliance in the AI industry:

what mechanisms do we use to regulate AI? Do we dare let the AI industry regulate itself? If not, who should do it? What should the penalties be? How would they be enforced? And the like. These are all ethical issues, because they speak to what constitutes right use of AI and what constitutes wrong use of AI. Most of what I've read on the ethics of artificial intelligence barely touches on any of these social and cultural issues. It's far more concerned with what happens when AI goes wrong, with things like bias and misrepresentation, stuff like that.

And, in a certain way, talking about blind spots, it's blind to the possibility that the use of analytics and AI could significantly change our culture. From the perspective of learning and development, they could change what we need to learn. They could render what we have learned not useful anymore.

Somebody who takes 10 years to learn how to create high-quality content finds themselves replaced by a $5.95 AI engine. A person who trains to become a photographer is replaced by a Google self-driving vehicle, driving around taking the best pictures and then using an AI to curate them and present them in Flickr albums or whatever.

These issues go well beyond, in my observation, the current discussion of the ethics of artificial intelligence and analytics, and we'll see that as we look, in the units to come, at the ethical codes and the values underlying these ethical codes, first of all, and then later on at the ethical principles and the sorts of decisions that we make when we're applying these systems. But all of that is in the future.

For now, these were the social and cultural issues related to artificial intelligence. I know it was a long presentation. I hope you found it interesting; if not, just listen to the audio on high speed (I guess it's a bit late to say that) or read the transcript. I'm sure there'll be some people who do both.

Thanks all for listening to me and I'll see you next time. I'm Stephen Downes.
