Using AI


Unedited audio transcript from Google Recorder

Hi everyone, welcome to the final video (yes, I said final) of module seven of Ethics, Analytics and the Duty of Care. Module seven looked at the decisions we make in AI and analytics, and this video will look at the final set of decisions: those that we make with and around the act of using artificial intelligence and analytics.

It's worth pointing out, I think, that AI has already started to enter the mainstream. It's here now; it isn't some mysterious thing that might arrive someday in the future. We are working and living with it even now. Perhaps we might not realize that we're using it, but we are, and it is beginning to have a significant impact on our lives.

Let me just throw out a few quick examples of the sort of things that I personally am using AI for. Something called Feedly is an RSS aggregator, and what that means is that it gathers content from websites around the internet. I tell it which websites I want it to harvest, and it goes out and gets the content for me.

What Feedly does is organize the material that has been harvested, based on rules. Well, maybe not rules; based on artificial intelligence that I train. I use examples of articles that I want sorted or categorized as examples to train it on. (Boy, when I do the written transcript of this I'll have to come out with that sentence a bit more coherently.)

It's been a long day. It's been a long week, and a long month, and these things are beginning to take a toll on me. But I hope you see that we're getting close to the end of all of this. So what I've found is that Feedly is a really good recommender.

And the reason why it's really good, to my mind, is because it's using the sources that I want it to use. So it's not picking, you know, some random marketing content from some fly-by-night operation out there somewhere. And it's using my training set in order to train the AI.

I'm picking the kind of stuff that I want. It's not tracking me. It's not surveilling me. You know, if I happen to look at something that's a little bit different, it's not going to alter the rest of my search results for all time. I feel really in control of my AI with Feedly.
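Feedly hasn't published its internals, so take this as a minimal sketch of the general idea rather than its actual implementation: a small recommender trained only on articles I've labeled myself, using scikit-learn. The example articles and labels are hypothetical.

```python
# Minimal sketch (not Feedly's actual code) of a personal article recommender
# trained on examples I've labeled myself: relevant (1) or not (0).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: article headline/text and my own label.
articles = [
    ("New study on learning analytics and student privacy", 1),
    ("Limited time offer: enterprise LMS discounts this week only", 0),
    ("Open source tools for building personal knowledge graphs", 1),
    ("Top ten marketing funnels for course creators", 0),
]
texts, labels = zip(*articles)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a newly harvested item; a higher probability means "show me this first".
new_item = "A critical look at engagement metrics in adaptive learning platforms"
print(model.predict_proba([new_item])[0][1])
```

The point of the sketch is that the training signal comes entirely from my own choices and my own sources, not from tracking or from someone else's engagement data.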

I think it's a pretty good example of the way AI should work in the future, the way analytics should work in the future. Another thing I use, and I've shown this before, is Google Recorder. (It's hard to find white-background photos of it; Google loves the dark mode.) Here it is right here, and I've shown this a bunch of times, like I can't ever get tired of showing Google Recorder creating a pretty good transcription live as I speak.

And my intent for this entire set of videos is to go back and use those transcriptions. I'm going to edit them down, I'm going to make them better, and that'll produce written versions of all of this content. I think it's any writer's dream to be able to write by thinking aloud rather than have to go through the laborious process of using a pen, which I still have, or using a keyboard.

Which, again, I still have. Another example: something called Topaz AI. I won't fire it up here; it probably wouldn't bog down the machine, but I don't want to risk it, because then I'd just have to start the video over, and I don't want to do that. What Topaz does is work either on its own or, the way I use it, integrated with Adobe Lightroom, and I can use it to intelligently fix my photos.

So there's two that I use quite a bit. The first one, which I've been using for a while now, is called DeNoise AI. What happens when I take a picture at too high an ISO, in other words in too low light, is that instead of nice smooth colors I get pixelation, and DeNoise removes the pixelation and replaces it with smooth color.
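Topaz is a commercial neural-network product and I don't know its internals. As a rough stand-in, here's a minimal sketch of classical (non-AI) noise reduction using OpenCV, which does the same basic job of smoothing out high-ISO noise; the file names are hypothetical.

```python
# Minimal sketch of noise reduction on a high-ISO photo using OpenCV's
# non-local means filter. This is a classical method, not Topaz's neural
# network, but it shows the basic operation: replace noisy pixels with a
# smoothed estimate based on similar patches elsewhere in the image.
import cv2

img = cv2.imread("noisy_bird.jpg")  # hypothetical input file
# Arguments: source, destination, luminance strength, color strength,
# template window size, search window size.
denoised = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
cv2.imwrite("denoised_bird.jpg", denoised)
```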

Sharpen AI does exactly what the name suggests. It looks at the various ways I can be producing bad photos, for example by moving my camera or by being out of focus, analyzes that, figures out what kind of fix it needs to apply, then applies the fix. And so that's how I'm able to get some nice sharp pictures of birds instead of the usual fuzzy pictures of birds. One of the questions I ask, and it's an important question, is: have I preserved the integrity of the photo?

When I'm using AI to enhance the photo, to me the question boils down to: have I changed what I actually saw when I looked through the viewfinder to take the picture? If the answer is yes, then I've damaged the photo and I shouldn't be using the AI. But if the answer is no, if I'm removing artifacts that would actually make the picture less faithful to what I saw, then

I don't think that I'm violating any sort of principle of photography. And of course, other people just might not care; they're just after a better picture, and who cares if it's real? But I care. Another thing I would not want to give up is adaptive cruise control in my car.

This is actually what my car looks like. I drive a Honda Clarity, with the weird thing on the rear wheel there, and it's a plug-in hybrid electric vehicle. So while everybody else is really worried about the price of gas these days, I don't even notice the price of gas, because I fill up

so rarely that it can spike and drop and hit valleys and I would never know. What adaptive cruise control does is allow me, like normal cruise control, to set a speed that I want the car to go at. But it keeps an eye on the road in front of me.

And if somebody else is driving along at a slower speed than mine, it will slow down my car to match pace with the other car. So it adapts to whatever's happening in front of me.
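Just to illustrate the basic follow-the-car-ahead logic, here's a minimal sketch of my own; it is certainly not Honda's actual control code, and the speeds, gap threshold, and back-off factor are all made-up values.

```python
# Minimal sketch of adaptive cruise control logic (not Honda's implementation):
# hold the driver's set speed, but slow to match a slower vehicle detected ahead.
def target_speed(set_speed_kmh, lead_speed_kmh=None, gap_m=None, min_gap_m=40):
    """Return the speed the car should aim for on this control cycle."""
    if lead_speed_kmh is None:
        return set_speed_kmh                              # open road: hold set speed
    if gap_m is not None and gap_m < min_gap_m:
        return min(set_speed_kmh, lead_speed_kmh * 0.9)   # too close: back off
    return min(set_speed_kmh, lead_speed_kmh)             # match the slower car

print(target_speed(110))                                  # open road -> 110
print(target_speed(110, lead_speed_kmh=90, gap_m=60))     # car ahead -> 90
print(target_speed(110, lead_speed_kmh=90, gap_m=30))     # closing in -> 81.0
```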

There's also a mechanism that will keep me between the lines. I don't like that one so much, because it sort of wanders a bit; I want it to stay centered between the lines, not just between the lines. But overall, it makes driving a lot easier and a lot safer. So in addition to saving on gas, I'm also saving on insurance costs. How about that? Those are just some simple examples of how I'm using AI. But let's think now about how the use of artificial intelligence affects the ethics of artificial intelligence. Let's begin by dispelling a common myth, and the common myth is that our contribution to AI is simply as data to train the data models.

And that's unfortunate. As this article in TechRepublic says, most analytics treat the data subject as passive, just a set of raw variables. But in fact we, as people who are involved in the process of AI, may have something to say as well, and the question comes up: what happens when we don't speak out?

I've got an image here from Privacy Watch about Cambridge Analytica harvesting 50 million Facebook profiles. That's what happens when we don't speak out, right? So we are already, whether we like it or not, engaged in a conversation about the ethics of AI. We're not just going to be passive data subjects.

We're going to be active contributors to that debate. So who in that debate is speaking for us? There are different ways of approaching this problem, right? You'd think, well, we're speaking for ourselves, but that never happens, or rarely happens. We don't live in a direct democracy where everybody contributes their own individual voice.

Our legislation is not created or approved by referendum. So for the most part, in the political process, people are going to speak for us. They're going to speak for us in all of the different stages that we've covered: the creation, testing, evaluation and deployment of artificial intelligence. So let's look at a little bit of that.

Probably the most common discussion of who speaks for us is the discussion of diversity in AI development teams, and there are different ways that this comes out. Josh Feast wrote a piece in Harvard Business Review a couple of years ago about how to ensure diversity in AI development teams, and he says things like: ensure diversity in the training samples; ensure that the people labeling audio samples, or any samples, come from diverse backgrounds; measure accuracy levels separately for each demographic category (see the sketch below), in order to ensure that the system is accurate

and reliable for each of those demographic categories; collect more training data associated with sensitive groups; and apply modern machine learning techniques. We've talked a bit about that in the past. By itself, ensuring diversity in AI development teams is probably not going to be sufficient, but it may well be necessary.
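Here's what "measure accuracy separately for each demographic category" might look like in practice. This is my own illustration, not Feast's code, and the evaluation records are hypothetical; the point is simply that a good overall score can't hide poor performance on one group.

```python
# Minimal sketch of per-group accuracy: (demographic_group, true_label, predicted_label)
from collections import defaultdict

results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, truth, predicted in results:
    totals[group] += 1
    correct[group] += int(truth == predicted)

for group in totals:
    print(f"{group}: accuracy {correct[group] / totals[group]:.2f} "
          f"({correct[group]}/{totals[group]})")
```

With these made-up numbers the overall accuracy is 62%, but one group sits at 75% and the other at 50%, which is exactly the kind of gap a single aggregate number would conceal.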

We also need to have a diversity of perspectives here. We're not just talking about the usual kinds of demographics, like, you know, race, religion, language, etc. Instead, as John Havens says, you can't have a standard on facial recognition technology and not have in the room data scientists, psychologists, anthropologists and people from around the world.

Now, that might end up being a very big room. But nonetheless, I think the point here is valid, and the point is that diversity in artificial intelligence isn't just a technical problem. It isn't just a mathematical or statistical problem; it goes beyond that, so that the sorts of things that aren't considered at the technical or mathematical level are considered in the development team.

As a simple example, people's faces were being used without their permission in order to power technology that could eventually be used to surveil them; that was reported by NBC News. If we were simply relying on diversity as, you know, a technical or mathematical process, then there's no problem with that.

In fact, that's a good thing, right? You're going to get all these millions of scraped online photos, and you're going to get a really good diverse sample doing that, presumably. But someone in the room should have said something like "maybe you should ask them first," or something like that, raising the question of whether it's appropriate to simply mine people's face data from the nearest handy social network service.

Another way that people speak for us is in defining what counts as success, and this is something I've talked about before, and I'm going to raise it here again. For example, look at the World Bank's education programs. Key questions are: what counts as success for these programs?

And, importantly, success for whom? Quote: "the teaching of life skills, the promotion of data-capturing digital technologies, and the push to evaluate teachers' performance are all then closely linked to the agenda of the World Bank." That's Philip Cursing, writing in his blog. And that agenda consists of things like cost accounting and quantification, competition and markets and incentives, an increased role for the private sector in education, and rolling back the role of the state.

If the World Bank is the organization that is going to speak for us with respect to the ethics of artificial intelligence, then this world view is going to inform what we think of as successful artificial intelligence. The question is: who made the World Bank our representative, right? When organizations like the World Bank weigh in on these sorts of questions, what we need to ask is what voices are being overlooked, or even what voices are being silenced.

What other evaluations could we make of the World Bank's education programs? We could ask, for example: are people actually learning? Are they acquiring skills that help them be more successful in life? Are they satisfied with the way that they are treated as citizens or, you know, as ethical beings? The ethical perspective,

indeed, is something that does not sit well in the boardroom generally, and that again raises questions about who speaks for us. This is just a little story of Google and ethics here. The independent ethics board that it launched in 2019 was forced to shut down less than two weeks later because of controversy about who was appointed to it.

Then later on, after Google recruited a star ethics researcher, Timnit Gebru (that's her picture there), she was fired for criticizing Google's AI ethics, and then they fired another ethics researcher following another internal investigation. If you're firing your ethics people, it's sort of like saying you don't want to hear what your conscience has to tell you, and that's not a good look.

And it raises the question, in my mind: on what authority does Google assert, frankly, anything to do with ethics and AI, when it can't even get along with its own ethics advisors?

Talking about the data analytics team as a whole, there's a variety of roles that are played and a variety of individuals that need to make decisions about what happens. We've seen a whole bunch of this so far, but let's look at the actual people who are making these decisions rather than the workflow process; I'm borrowing this breakdown of roles from another author here.

The analytics team will need individuals to identify the business request, develop the use case, understand how data fits into that use case, create the algorithm and analyze the data, develop reports and dashboards, develop a prototype from models and tools, pilot it, scale it, and then ensure that it's adopted and that there's ongoing maintenance on it.

This could be represented as just three people: the data engineer, the data scientist, and the business stakeholder. But in practice it's going to be a much larger team of people, especially if you're working in a large enterprise. All of these people are making ethical decisions, and they're making ethical decisions

usually not based on the business needs. That's what the first person does, but the rest of them are making decisions based on something else. The question is: what sort of decisions are they making, and what sort of ethics are coming into play? And that's going to vary in every application of ethical decision making to an analytics or AI project.

Let's look at a few of these roles in a little bit more detail. For example, consider the person that we call the data controller. Now, this is a role that's actually sanctioned in law; it's identified, for example, in the European GDPR. And the main purpose of the data controller is to be someone who ensures that the use of personal data is fair and lawful. Well, mostly lawful.

And so they would ensure things like making sure that data subjects are informed, kept up to date, and understand what's happening. They make sure that the data is only used for the purposes that have been identified to the data subject. They make sure that the data itself is up to date, that it's accurate, and that it's secure. And they make sure that any request the data subject has about the handling of the data receives a response.

None of this is technical. None of this has anything to do with the actual creation of the AI or the mathematics behind, say, bias and fair representation in a data sample or an analytical process, but it's an important kind of role. It shows the way that humans set up the environment around which an AI project takes place. Another person

who has obviously been in our minds for pretty much the entire course is the AI researcher. And the question that can be asked is: who is actually doing the research in AI? This article points out that very few research articles on AI and education have been written by actual educators, with the majority of authors coming from computer science, science and engineering backgrounds.

Melissa Bond and Olaf Zawacki-Richter write that this raises the question of how much reflection has occurred about appropriate pedagogical applications of AI. Again, this isn't a question of engineering; it's not a technical kind of question, it's not a mathematical kind of question. It's the sort of question we ask about use, and about what is an appropriate use of AI. We could,

for example, use AI to have people recite and memorize passages from a book. AI would actually be pretty good at that, and could evaluate how well they've read back the passages that they've memorized, but that's not a pedagogically appropriate use, even if it results in better grades. It's still not a pedagogically appropriate use, because the education use case in this instance would be so narrowly focused on one piece of content, without any context surrounding that content, that it would be pedagogically inappropriate, not to mention useless.

So these are the sorts of questions that need to be asked about the application of AI, and the AI researcher needs to be thinking about more than just, well, how do we sequence the data and present it to the user, or how do we match the data to the preferences that we think the user has? Education is more, as I've often said, than a search problem.

And there's often a tendency on the part of AI researchers to reduce it to a search problem. Regulators, again, are not users of AI properly so called, but they're certainly involved in the use of artificial intelligence and analytics. There's a regulatory framework for AI that's been proposed by the government of Canada, via the Canadian privacy commissioner, and it suggests that an appropriate law would allow personal information to be used for new purposes,

authorize those uses within a rights-based framework, create provisions specific to automated decision making, and require businesses to demonstrate accountability, whatever that means. We've gone far enough through this course to know about the weaknesses of a rights-based framework. Specifically, we could say, for example, that there are many instances of what might be considered unethical uses of AI that are not covered by a rights-based framework, because rights are focused on individuals, and they're usually focused on specific aspects of individuals: especially their freedom from discrimination,

their freedom from restraint, unfair punishment, etc., and their freedom to express themselves. But a rights-based framework doesn't address social justice, a rights-based framework doesn't address equality of opportunity, and it doesn't address society-wide concerns like the state of accuracy in the media and the role that fake information plays, or the overall prevalence of a surveillance state where everybody's treated the same.

They all have the same rights, so everything's good, but it's still a surveillance state; things like that. We see this more clearly in other areas like the environment, where you can't really create a rights-based case for protecting the environment, not without really stretching it a lot. And by analogy, there are probably issues impacted by AI and analytics, similar to environmental issues, that are not addressed by a rights-based framework. With the regulators, again,

we need to ask how they come by the knowledge, the capacity, and the right to make these decisions on our behalf. They consider things like copyright, trade secrets, privacy laws and data governance, but we as educators are more concerned about individual agency, personal prosperity, community relationships, things like that, which aren't covered under the kinds of concerns that are typical of regulatory bodies.

Another question that comes up, and I've mentioned this before: what data counts? When we're inputting data into the system, what data are we pulling out of the environment, and what data are we ignoring? And are we focusing too much on one particular type of data? A simple example: the San Francisco Declaration on Research Assessment, DORA.

Lovely name. It's a call for the major players in academia and scholarly publishing not to use journal impact factors as a quote "surrogate measure" unquote of the quality of individual scientists or their work. That comes from a news article in Research Research, and it's a good point. The measure of the quality of an individual scientist's research is something that's determined by a wide variety of factors, and probably an indeterminate number of factors.

Again, it's one of these 60,000-data-point kinds of things, where you might not know exactly how to identify a key or important researcher in the field, but you know one when you've seen one. Focusing on things like journal impact factors, which are in fact out of the control of the researcher, seems to be an inappropriate way of assessing the researcher. But this is what often happens in evaluation and metric-based programs, where the evaluators key in on a few data points and use those data points to train their models or to draw whatever conclusions they're going to conclude.

So the question comes up: how do the rest of us have an impact, have a say, in what data counts and what data doesn't count when we're talking about training AI algorithms and data models? You know, when you think about it, we can get outside of, or go beyond, traditional categories when we're talking about this sort of thing. An example here, and you can see it on the slide, is citizen science. This is a form of science, as Irwin describes it in 1995, developed and enacted by citizens themselves, and an important strain of citizen science is the contextual knowledges,

and you notice how that's plural, that are generated outside of formal scientific institutions. This is a way for people, individual people, ordinary people, to become involved in the scientific process. Now the classic case is, you know, the sort described by Mellon, where people are involved in sending out a bunch of sensors to be placed in homes in Dublin and other European capitals, sensors that will count the number and speed of vehicles, cyclists, and pedestrians. You know, okay, citizens can place sensors.

That doesn't seem like much, but where those sensors are placed, what information they're capturing, and the role of the citizens in deciding that this information is indeed worth capturing, and that we're going to measure, say, not only the flow of cars but the flow of bicycles and pedestrians: that's what's important here.

And so it's this interaction between the scientific community, who wants to do this research, and the citizen community, who is a major part of designing and implementing this research, that actually changes it. And you can see how this becomes a model for artificial intelligence research, where, sure, you have the professionals who are building the AI and the algorithms, setting it up and training the models.

But you have citizens involved in discussing and actually contributing to the sort of information that will be fed to the models, and to the design of the models: figuring out what they will receive as data, how it will be labeled, how it will be organized, how it will be collected.

This leads us to a concept of what might be called citizen inquiry, and it comes, I guess, originally from Sharples, although, you know, again, like these ideas, there's never one unique source for them. But the idea of citizen inquiry, and I quote, is that it "fuses the creative knowledge building of inquiry learning with the mass collaborative participation exemplified by citizen science,

changing the consumer relationship that most people have with research to one of active engagement." And so citizens, quoting again, are engaged "in all aspects of a research project, from defining the research questions to collecting, analyzing and reporting data." And you might say, well, they're totally unqualified to do this. I mean, yeah, maybe they are on their own, working in an inquiry project without any guidance from real-life scientists and researchers.

But if you have these people working together on a common project, then you are bringing these multiple perspectives and multiple points of view into the creation of the inquiry itself. And so you're not just relying on what some bank or some government agency or whatever says is worth measuring or is worth counting.

Similarly, we could look at how the actual materials used, or even created, by an AI or analytics system are created. Here I'm looking at an article by Rebecca Koenig in EdSurge, of all places. There's a lot of human intervention that happens behind the scenes in chatbots. Now, the most basic kind of chatbots are rule-based chatbots, and in a rule-based chatbot there's going to be a lot of scripting,

a lot of scripting, because everything the chatbot says is going to have to be written ahead of time by an individual. Now, this is going to be less the case in a neural network or deep learning-based chatbot, but it's still going to need to be provided with a vocabulary that it can use.

It's still going to need to be guided by the sorts of scenarios that might come up, the type of information that it might need to report on, even the kind of expressions it might use in a conversation with a person. So either way, you're going to need an understanding of narrative convention. You're going to need, for example, to train the chatbot

to take turns speaking, to ask for feedback, etc. The chatbot is going to need to be able to interpret what the nature of the request really is, and that's going to require quite a bit of human intervention. I think it's going to be a while before we have an AI that both manages to conduct a conversation and, on the other hand, has a depth of knowledge on the subject that it is supposed to be helping someone with. We are moving in that direction.

But the only way we're actually going to be moving in that direction is with a lot of training of these chatbots in actual circumstances, with actual human conversational examples. And that's what the humans will bring to the story.
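To make the scripting point concrete, here's a minimal toy sketch of a rule-based chatbot. It's my own illustration, not drawn from the article: every response below was written ahead of time by a person, and the "intelligence" is nothing more than keyword matching.

```python
# Toy sketch of a rule-based chatbot: every response was scripted by a human
# ahead of time; the program only matches keywords in the user's message.
import re

rules = [
    (r"\b(assignment|due date)\b", "Assignments are due Friday at midnight."),
    (r"\b(grade|mark)\b",          "Grades are posted in the course dashboard."),
    (r"\b(hi|hello|hey)\b",        "Hello! How can I help you today?"),
    (r"\b(thanks|thank you)\b",    "You're welcome!"),
]

def reply(message: str) -> str:
    for pattern, scripted_response in rules:
        if re.search(pattern, message.lower()):
            return scripted_response
    return "Sorry, I didn't understand that. Could you rephrase?"  # fallback, also scripted

print(reply("When is the assignment due?"))
print(reply("Can you explain quantum physics?"))  # falls through to the scripted apology
```

The second query shows the limit of the approach: anything a human didn't anticipate and script gets the canned fallback, which is exactly why so much hidden human labour sits behind even simple chatbots.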

Sorry about that, I'm losing my voice. This applies to design in general. Now, this article by Matt Shipman talks specifically about feminist design in hiring algorithms and trying to avoid bias, and yeah, we've covered that. The idea here is that designers need to be in a process that leads them to consider multiple audiences, both in terms of the people who are doing the hiring and also the people who are considered as candidates for hiring, even in things like selecting who the algorithm should consider, who the algorithm should prioritize for further screening or further interviews.

All important questions. But thinking of design generally, these recommendations still apply. Think about the design of a web page. We haven't talked about web page design at all, but there are ways to be more or less inclusive in the design, the creation, of a web page, and also in the testing. I talked about A/B testing earlier on. Who's doing that testing? Who are the people

looking at the two versions of that web page? How are we ensuring, well, maybe we're not, but are we ensuring that we're getting a reasonable selection of people, and that the people doing the A/B testing are reflecting a broader range of objectives than simply solving a task that the designer has set for them?

I know that's how most of these usability studies work, where you sit somebody down in front of the screen, you tell them, "we want you to do such and such a task," and they try to do the task, and, you know, their mouse movements and their time and all of that are measured.
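As a minimal illustration of the mechanics (my own sketch, and not an endorsement of this as sufficient), a basic A/B usability comparison often reduces to comparing task completion rates between the two versions, with made-up numbers below.

```python
# Minimal sketch of the arithmetic behind a simple A/B usability comparison:
# what fraction of testers completed the designer's task on each version?
# Hypothetical numbers; a real study would also ask who the testers are and
# what they themselves wanted to do, not just whether they finished the task.
completed_a, shown_a = 42, 100   # version A: completions / participants
completed_b, shown_b = 55, 100   # version B

rate_a = completed_a / shown_a
rate_b = completed_b / shown_b
print(f"Version A completion rate: {rate_a:.0%}")
print(f"Version B completion rate: {rate_b:.0%}")
print(f"Difference: {rate_b - rate_a:+.0%}")
```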

I think, though, there should also be room for the open-ended sort of assessment, where they don't know what they're doing, because that's my general experience, and we ask ourselves: what are they going to do? Then again, our use of AI and analytics in the design process is what's at issue here.

How are we shaping our own understanding of design, so that it influences and informs how an AI does design? Relationships: this is going to be an important one, because people are talking, and I'll mention it a little bit a few slides down, about working with AI in teams or collaboratively.

You know, we have the human in the loop, we have the AI in the loop; all of that is based around relationships. Michael Wesch talks about the difference between relationships as understood by his friends in Papua New Guinea and relationships as understood by his students in the United States, where the latter tend to emphasize their independence and individuality but the former are connected in much more profound ways, he says.

And understanding how we construct relationships among ourselves is going to inform, in an important way, how we instruct (maybe the wrong word there) an artificial intelligence or analytics engine to work in relationship with ourselves. You know, we sometimes come up with the idea that when we're working with an AI it will be hard-nosed and inflexible: it will reach a conclusion, and once it does, that's it. And that's not how relationships work.

We know that, and for an AI to work in a relationship with us, it's going to have to know that. So there needs to be a way of modeling relationships, actual working, functional relationships, in such a way that an AI can learn how to interact with people as well as be informed on whatever its area of expertise is going to be. Our relationships take the form of social networks.

They might be very small social networks, you know, the atomic family is a very small social network, or they might be very large social networks, like, you know, the community of Taylor Swift fans, or Swifties as they're known. What's important here is that we connect to each other. There are various ways we connect to each other.

There are various ways that we, as a connected group or network of people, learn things, develop things, and make things: what we might call community knowledge or social knowledge. And, you know, I go back and forth on this quite a bit, and I'm going to do it again here: the things that I say about AI and learning analytics can also be said about social networks. And it's worth asking how we learn, as humans, how to work in social networks,

how to be a part of a social network and, even more importantly, how to learn from social networks. With respect to ethics specifically, we need to ask: what is the ethics of our own communities? We could say our own learning community specifically, but, you know, it's wider than that.

I belong to a community of people who share images online called Imgur, or "imager", I never know how you pronounce it; it doesn't matter. And there are some ethical principles that have developed over the years. For example, the no-selfies rule: you only show selfies on Christmas day, no other day. Or: follow the format of the meme. And there are others.

How do those come about? How do we create them? I've mentioned this before in this course, where I had an argument with Jesse Stommel, who believes that the way communities create these rules is that somebody yells really loud, "don't do that," and that's how a rule gets created. Okay, I'm caricaturing his point of view, but not by much, right?

I don't think that's it. I think, you know, there may be a role for stating the rule, if there is a rule, but mostly there's a role for modeling the correct behavior: modeling and demonstrating, as I've said before. How do we show this in our own communities and make it available in a way that artificial intelligences and analytics engines can understand? It's an important part of the way

we use these systems that, when we use them in these environments, the way we use them informs how they learn about these environments. What would make an AI an ethical partner in a collaboration? I mentioned a few moments ago that we have this image of AIs as inflexible and unyielding, but here are the sorts of things it needs to be able to do.

It needs to be able to enter into an agreement (this is all from Gary Klein and others). It needs to be predictable in its actions, at least as predictable as I am. It, along with the rest of us, needs to be able to take direction; we're looking for the AI to do that.

And there's the need to maintain something like a common ground. Now, again, from this point in the course we can see that what they're describing is something like a social contract model of team building. It's not clear to me that the best way to work with an AI in a team environment is through a social contract; that might be yielding too much control over to the machine, perhaps, and it might just not be a good way of organizing a team in the first place.

Most of the teams that I enter into don't have social contracts, and we don't enter into an agreement, a basic compact, or anything like that. A lot of it is just pick-up, you know? We learn how to work with each other on the fly. Even in teams that are organized around rules, like sports teams where there are predefined roles, the way people play those sports and fulfill those roles varies from team to team, and indeed that's what makes one team different from another: the way they do

teamwork together. I went to an Ottawa Senators game recently, and I sat there thinking about what it is that I'm looking for in a team that tells me this team is working well together, and I came up with a list of things. Sharp passes, because it shows that people are getting into position,

they know what to expect, and people are executing with trust that the other person will be there. Another principle: winning the battles along the boards, because those are the hardest parts of the game, winning those battles, and you have to actually out-muscle the other team even if they're stronger than you. Things like that; I came up with a list of them.

But these are the sorts of things that take a combination of interaction between the players, perhaps a defined environment like a hockey arena, and some coaching. Now, I've never seen that in one of those sports games. You know, if you play EA hockey, for example, you have you and then a bunch of game-controlled teammates.

Those teammates are generally pretty bad as teammates; they don't get what you're trying to do with your hockey team, you know, if you're trying for an attacking style or a defensive game or whatever, right? Sometimes you can actually just toggle a switch, but really, they should be able to learn from your example, follow the ebb and the flow of the game.

That's what I'm talking about. You might think, well, what does any of this have to do with ethics? It has everything to do with ethics, because we could, for the sake of argument, represent ethics as how we work together as a team. Now, clearly ethics is a broader domain than that,

and we're not working in a team with the rest of society, really. But that's only a difference in scale or degree, and not a difference in type. And so the sorts of ways we would want an AI to learn about how the humans are playing hockey in the digital hockey game are the sorts of ways that an AI should learn about how a human conducts him or herself in a collaboration or a partnership or in a wider enterprise, where a lot of these ethical values and principles come into play. Inclusion is a good example of that.

Again, this is a case of the AI being something other than the hard-edged, inflexible, it-makes-a-decision-and-that's-the-end-of-it kind of participant. Inclusion matters in evaluating teams, particularly if we want to support principles regarding bias and representation, because simply having diversity isn't going to be enough. People in the team, people who are developing the AI system, people who are using the AI system,

all need to be actually, actively included in the process; otherwise they're just decoration, right? So what does that mean? Well, there's a list here that's kind of provided, and like most such lists it's a bit iffy, and it's also inaccurate, right, because it's an abstraction, so it's going to miss a lot of the fine details. But look at the sorts of things that it's considering: empathy, understanding the user's situation;

co-creating, collaborating with a multidisciplinary team; learning by trial and error, accepting uncertainties; and then testing and validating things, the experiential component. Being inclusive means including people in all of these things, all of these processes, and actually engaging in some give and take in each of these five dimensions, and in all the gaps in between that the five dimensions don't actually cover.

So the sort of AI that we want to work with, one that will be non-biased and will include diverse perspectives, is one that is going to practice inclusion when it is working in a team environment. How is it going to learn this? Again, it is going to have to be the people who are working in team environments who model this inclusive kind of behavior for the AI to learn, because it's not going to learn it as a set of rules; it's not going to learn it as a set of principles. And that really brings us to the decisions that we make as users: as users of AI, as users of digital media.

Generally, we really need to question ourselves here. For example, a number of reports have come out, like this MIT study by Vosoughi and others: we prefer fake news, or so it seems. No? Okay, maybe not, but the studies show that we're more likely to share fake news, that we're more likely to read fake news,

the sensational, the controversial. That's why the algorithms which privilege engagement tend to show us more and more fake news: because that's the breadcrumb that draws us along, right, that's the thing that keeps us engaged. So how are we training AI, I might ask, if in our actual practice we're demonstrating that we prefer fake news? Pretty simple example: we're teaching it to give us fake news.
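A minimal sketch of the dynamic (my own toy illustration, not any platform's actual ranking code): if items are ranked purely by predicted engagement, and our clicks reward the sensational, the sensational floats to the top.

```python
# Toy sketch of engagement-based ranking: each click raises an item's score,
# so whatever we engage with most is what we get shown more of.
# Hypothetical headlines; no real platform works this simply.
feed = {
    "Local council posts budget minutes": 0.0,
    "You won't BELIEVE what this celebrity did": 0.0,
    "Peer-reviewed study finds modest effect": 0.0,
}

def record_click(item, weight=1.0):
    feed[item] += weight        # engagement signal: a click raises the score

# Simulated behaviour: we keep clicking the sensational item.
for _ in range(5):
    record_click("You won't BELIEVE what this celebrity did")
record_click("Peer-reviewed study finds modest effect")

# The "algorithm" then shows items in order of accumulated engagement.
for item, score in sorted(feed.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:4.1f}  {item}")
```

Nothing in the code says "prefer fake news"; the preference is entirely a product of the behaviour we feed it, which is the point.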

That's probably a bad thing. Confirmation bias: you know, we could talk about whether or not confirmation bias is a real thing, whether the filter bubble is a real thing. A filter bubble is the idea that you select for sources that confirm or echo or reflect your point of view, to the exclusion of other points of view, and it's kind of represented in that diagram there.

If indeed that is how we are selecting resources to learn from, or just to read generally, what is the impact of that on how an AI learns? Again, we are teaching the AI to feed us only information it knows we already agree with. And again, arguably, that would be bad. Who makes the decisions?

And here I'm talking not about, you know, the role of the World Bank or the role of legislators, etc., but just about how we, in our day-to-day lives, our work lives or home lives, allow people to make decisions. And what we do tend to do is allow companies to make decisions for us, to allow private companies to make decisions for us.

And the institutions that we set up privilege those who are in authority and disenfranchise, to a degree, those who are not in authority. If you've read my newsletter over the years, you've heard my comments about student newspapers at universities. This is a good example, because in many US institutions especially, these newspapers aren't actually run by students.

They're run by administrators who oversee the newspapers and actually hold tryouts to see who is allowed to be a writer on these papers. Very different from the sort of student newspaper that I worked on in Canada, where there was no administrative control whatsoever, not even by the students' union, and where the newspaper itself was run as a collective.

It was open to anyone who wanted to participate in the creation and the publication of the student newspaper. It's a very different model. And the way we make and set up these decision-making models, these are the ways we are also training artificial intelligence and analytics engines. If we train them to defer to authority and to, say, disenfranchise students, then that is the kind of behavior they will opt to emulate when they are doing their AI kind of thing.

And, you know, I'm talking in these broad strokes as though we could just train an AI to defer to authority, and that's not actually what happens; we need to be careful to make that clear. The act of deferring to authority, to pick one of the various examples I've given, isn't a single thing that we're training AIs to do. Deferring to authority

actually consists of a thousand, ten thousand individual decisions that we as individuals make that create this pattern, a pattern that might overall be characterized or recognized as deferring to authority. And that is the actual material that we're giving to the AI or the analytics engine. It's important to understand that, because we will be able to say, in all fairness, later on,

"but I never taught the AI to defer to authority, or to favor regulatory environments where authorities are privileged." And you haven't. But what you have done, in your own daily life, is exhibit a pattern of deferring to authority, and each instance of that pattern is what is actually given to the AI.

You have taught it to defer to authority. And if you don't like my phrase "defer to authority", insert your own phrase about who makes decisions, or how we confirm information, or what sort of news we're going to follow; insert your own version of that. It's all the same.

Rules about what matters; again, another one of these sorts of examples. Here's the sort of thing that happens. Wikipedia is an encyclopedia created by humans. It's used by many AI programs as input data for AI models. Now, Wikipedia, a number of years ago, more than a decade, I forget exactly how long ago, in order to ensure that it was viewed as credible, required that everything in it,

every assertion of fact in it, be substantiated by a published source, which sounds like a good idea. But there's a severe and documented lack of media coverage of floods or disasters in under-reported regions like, say, Africa, Patagonia, parts of Southeast Asia, wherever. And so what happens is that over time the coverage in Wikipedia begins to be dominated by the same sorts of things that dominate the coverage in traditional media.

Well, traditional media has a history, and it's not a good one, and, you know, we could go on for quite a while talking about that. Marshall McLuhan, I'm sorry, Noam Chomsky, has certainly talked about the sorts of manipulations that happen in traditional media. And, you know, I've looked at some of the references in Wikipedia: the Daily Express is a published source,

the Toronto Sun is a published source, and yet these are what I would argue to be unreliable sources. And yet if I don't have such a source, and I'm, say, writing about my own work, that's considered unreliable information.

So here we have a case where the pre-existing bias to prefer, quote unquote, "published sources" results in a distribution of information that is skewed toward a particular world view: Western-centered, white-centered, male-centered, power-centered, authority-centered. And then that's reflected in the coverage of Wikipedia, which is then reflected in the models that use Wikipedia as a source of input data.

And this informs everything from the language that gets used, to the people that exist, to the kinds of facts that are important enough to be covered: the whole basic question, the one that was addressed in the early days, of what matters. And so in choosing, as a whole, what matters, we are in a very direct way training

our AI and analytics engines of the future. Even the way we build our environment matters. There's a phenomenon known as stigmergy, and what stigmergy is, basically, is the way we use objects to communicate with each other, or, to put it in the words of Verbeek, artifacts mediate human existence by giving concrete shape to their behavior and the social context of their existence.

You know, the typical example is the way ants communicate with each other by leaving scent trails, for example, or by building caves in certain directions. There's an example here of people in the city of Den Bosch in the Netherlands leaving messages to each other in chalk on various things.

But it's more than that. It's the way we build our buildings. It's the way we organize our roads. It's the way we prioritize different kinds of shapes and different kinds of purposes in our architecture. It even boils down to the statues that we have, the monuments that we have, the things that we put plaques on the walls to remember.

All of these things are part of the overall grist for the AI mill, and you might think, well, that's a bit much, right? Well, think about it. These things all, well, maybe not all, but the way it's going, they all end up as photographs; the photographs end up in photo-sharing sites like Flickr; photo-sharing sites like Flickr are used to train artificial intelligence.

Therefore, the shape of the artifacts we have in the world ends up training AI and analytics engines. There's, you know, no way around this; we need these kinds of images to train AI. There's a side discussion we can have here, and I touched on it earlier in the discussion on data, about creating artificial data sources to train AI and analytics engines.

But ultimately, I think what we want is to use real examples: real people, real things, real photos. But what we need to realize when we're saying that is that the decisions we make as humans end up being the decisions we make as artificial intelligences. If, for example, the design of all of our cities reflects the belief that cars will be preeminent, we don't need to train an AI with a rule

that says "cars will be preeminent," but that rule, or some version of it, will be observable in the sorts of decisions it makes, because all the different data points that we've given it add up to something like the belief that cars will be preeminent.

So we're training our AIs, and I've talked about how the AI will reflect the things that we've said, the decisions that we make, how we use it. And the question can be asked: well, does our ethics work that way? And I argue that that's exactly what happens. Of course, in order to make this kind of point, we have to

ask and answer the question: can robots even think like ethical beings?

There's a tendency to kind of treat this like a technical problem: if you get the algorithms right, you'll get the ethics right. Or even to think of it as a data problem: if we get the data sets right, we'll get the ethics right. One of the contributions of Bostrom and Yudkowsky, back in

2011, is the recognition that the ethics of AI isn't a technical problem, and that it's not simply a product of ethical engineering. Rather, the wider question is what constitutes ethical cognition itself, and they say that that should be taken as a subject matter of engineering.

I'm not sure what it is the subject matter of. I do know I'm not going to trust engineers to solve the problem for us. I think we need to think more broadly than that, which is why we get back to the points that people make about these design teams needing to be composed of people from a variety of disciplines. But let's pursue this path. For an AI to even be ethical,

we need to be able to say that it understands, in some way. There's an awful lot of pushback against that idea, and fair enough, but still, let's think about it. The early test for whether something actually was an artificial intelligence, whether it understands, was the Turing test. The Turing test was simply: if you were in a conversation with an AI,

could you tell you were talking to a machine rather than to a human? You might say, well, that's a pretty good test, but it turns out that machines passed that test a long time ago, like in the 60s; even rule-based systems can pass that test. More recently, Terry Winograd,

who has kind of a schema-based approach to intelligence generally, offered what he called the schema challenge. That's where you present an AI with two sentences that change by only one word. For example: "pour the water into the bowl until it's full." What does the word "it" mean here?

Well, we mean the bowl, right? As compared to "pour the water into the bowl until it's empty." What does the word "it" mean there? It means whatever we're pouring from. If the AI can understand that difference, then maybe we can say it understands. Well, AI passed that one too. GPT-3, which is a recent AI model, was correct on nearly 90% of the sentences, of a few hundred sentences, in a benchmark test.

So, I think it was last year, maybe two years ago, they came up with something called WinoGrande, which is a much larger set of 44,000 such sentences. And right now, and by right now I mean probably about a year ago, because these things take time, the best programs were getting about 90% of those correct, as compared to humans, which were getting about 94% correct.
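Just to show the shape of such a benchmark, here's a toy sketch, not the actual WinoGrande data or evaluation code: each item pairs a sentence with candidate referents for the ambiguous pronoun, and a system is scored on how many it resolves correctly. The `resolve_pronoun` function is a hypothetical stand-in for whatever model is being tested.

```python
# Toy sketch of a Winograd-style evaluation using the water/bowl pair above.
schemas = [
    {"sentence": "Pour the water into the bowl until it is full.",
     "candidates": ["the water", "the bowl"], "answer": "the bowl"},
    {"sentence": "Pour the water into the bowl until it is empty.",
     "candidates": ["the water", "the bowl"], "answer": "the water"},
]

def resolve_pronoun(sentence, candidates):
    # Placeholder "model": always guesses the last-mentioned candidate.
    # A real evaluation would call a trained language model here.
    return candidates[-1]

correct = sum(
    resolve_pronoun(item["sentence"], item["candidates"]) == item["answer"]
    for item in schemas)
print(f"Accuracy: {correct}/{len(schemas)}")   # the naive baseline only gets 1/2
```

The naive last-mentioned guess gets exactly one of the two right, which is why a system that scores near 90% on tens of thousands of such pairs is doing something more than pattern matching on word order, or at least that's the question.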

At a certain point you begin to wonder: what does it mean for us to say that the AI understands or doesn't understand?

And maybe we need to redefine what we mean. Melanie Mitchell wrote recently, she says, "the crux of the problem is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding." The sort of argument that comes up here is what's called the Chinese room experiment.

And it was proposed by, I think it was John Searle, speaking off the top of my head here. The idea is that you put a person in a room with a whole bunch of Chinese characters.

The person only speaks English, but they have all of these characters and they have a set of rules or whatever, right? They might have a neural net, something such that when somebody feeds them some Chinese characters through the slot in the door, they look at the characters and they apply the rules.

They feed some characters back out. Arguably it doesn't matter what gets fed in and what gets fed out, but let's assume that what gets fed out is perfect, and always perfect. The person in the room still doesn't understand Chinese. There are various responses to this; for example, you have to consider the whole system of the person plus all the rules and the Chinese characters, etc.,

all as one thing, and that one thing, together, does understand. But there's some intuition there, right? There's an intuition that you need more than just words to understand the world; you need to actually go out and understand the world. There are different ways you can go on this.

Here's the way Melanie Mitchell goes. She writes, and I quote, "if we want machines to similarly master human language, we will need to first endow them with the primordial principles humans are born with, and to assess machines' understanding, we should start by assessing their grasp of these principles, which one might call infant metaphysics."

It reminds me of what David Hume said talking about causality and the connection between cause and effect: that even though we have no way of figuring out how to use the most advanced reason to make this work, it's something that children, infants, even animals can understand. My cats understand cause and effect.

They can also tell time; they can also predict when I'm going to feed them, and complain when I don't. Infants as well exhibit certain kinds of knowledge about the world around them; there are certain points where they experience object continuity, etc. It's not clear to me that they're born with this.

There are philosophers, Noam Chomsky is one, Jerry Fodor is another, who suggest that they have all the linguistic categories and skills that they need inborn in order to understand language. I don't think that's the case, and the test of that would be, you know, if you could give an AI the sorts of experiences that you could give, say, an infant, would the AI learn what the infant is able to learn?

And I think that it probably would. The problem, of course, is there would still be people who say, "but it doesn't really understand." But at a certain point we're beginning to beg the question here: what do we mean by understanding? Right, if we're saying it's not human, yeah, we get that. But maybe at a certain point we have to concede that, for a rough and ready understanding of what we mean by understanding,

as I talked about in the previous video, we have to accept that we believe that the machine understands. Which brings us to the question of whether the AI can be a moral agent. But it's the same question as whether a person can be a moral agent. So there are two aspects of this. First: is the AI a moral agent in the sense that its autonomy relieves the developer of responsibility? I think there's a strong negative response to such a statement.

I know I respond negatively to it. And Kamm says rights have a significance beyond their role in protecting our interests; rights reflect our inviolable status as persons. So this question about whether an AI is a moral agent is the question of whether an AI could be considered a person, and we might be more inclined to move on that question than you might think.

Because we have court cases in recent memory that have decided that corporate entities are persons; there's actually a name for them, corporate persons, and they do have some rights. So it may be the case that an AI has rights. And generally, when we talk about rights, we normally associate them as well with responsibilities.

And if an AI has rights and responsibilities, then it's a moral agent. The second thing is: how are we going to determine this moral status? There are two conditions, at least as outlined by Bostrom. One is sentience, which is the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer.

Which seems like it would be a cruel thing to give to our AIs; we're talking about deliberately making someone able to suffer. But, you know, there is that argument in religious philosophy that it's the capacity to suffer that is needed. You know, people ask, why does God make us suffer? Because that's what we need in order to become moral agents and, indeed, in order to be free.

And then there's the other part of it, sapience, which is a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent; in other words, rationality. I think the AI has rationality handled. Self-awareness is tougher, but again, it's one of these things: if we treat the AI

as, you know, a person, that feeds back into the training of the AI, and the AI eventually begins to regard itself as a person and treat itself as a person in its own decision making. So I don't think this is as hard a philosophical conundrum as it might seem. Whether or not we ever absolve humans of responsibility for AI actions,

and I'm not convinced that we should, and that might be a contradiction in my own position, it's going to make sense to us to treat AIs as moral agents in the way we train them and especially in the way we use them. And we're going to have to think of them as things that can learn to distinguish right

and wrong basically on their own, or basically based on the way they've been trained, you know? There's a bit of a fuzzy area in between. And that, I think, leads us to the most important way that we can think of how we relate to AI, how we use AI:

and that is that we are the teachers of AI. There's no getting around that. It's not like a nuclear reaction, where once we set the initial subatomic particle moving, everything else is beyond our control. It's not like that. Our interaction with artificial intelligence and analytical engines is ongoing and dynamic and doesn't end, and our major role in these interactions

is to train them. If we train them well, they will become reliable, responsible, ethical partners that we can work with. If we don't train them well, then they'll be a problem. And this is something that belongs to all of us, not just the engineers and the developers. Greg Satell writes: "as pervasive as artificial intelligence is set to become in the near future,

the responsibility rests with society as a whole. Put simply, we need to take the standards by which artificial intelligence will operate just as seriously as those that govern how our political systems operate and how our children are educated." That's not encouraging, to be honest, given the somewhat loose and almost slipshod ways in which we've handled both of those things. And given the mistakes, maybe we need to take that more

seriously; but given the mistakes, maybe we need to take political systems and education more seriously as well. But it's the same set of challenges, the same set of responsibilities, the same sorts of outcomes in working with AI as in working with children, and in working with trained AI as in working with other people in society. As AI becomes more complex, and much more than just a simple set of rules or even a trained model,

and something that is responding directly to the models and the actions and the examples that we provide, then AI is something that we need to, for all practical purposes, treat as though it were a person. And, yeah, it comes back to what Michael Wesch was talking about in 2007, right?

The machine is us, and at the same time the machine is using us, and at a certain point it becomes really hard to tell the difference. And it's when it becomes really hard to tell the difference that we've actually grasped the significance of the issue, and the way forward, with respect to understanding ethics, analytics and the duty of care.

That's it for module seven. The next video will be on the final module of the course. I'm Stephen Downes. Thanks for hanging in there with me.

 

------------

Add some of this to chatbot discussion https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-021-00302-w

Maybe consider this paper to add to the overly brief discussion of rationality and autonomy. https://kar.kent.ac.uk/91907/1/Radoilska_Autonomy%20and%20Responsibility.pdf

 
