Unedited audio content, five speakers, from Google Recorder.
Okay, so we're recording on video, and I've got the audio recording running, so we're up and running. And so, yeah, I'll begin, Jim, by apologizing for not putting you to sleep last night. It's a bit unusual to be accused of not putting someone to sleep. There you go. Or I just get blamed.
So this is the beginning of the last week. Let me move this over here, so I'm actually looking at who I want to speak to. There we go. So, as usual, I'm running behind on my videos, so they will probably continue for a bit after the last week.
But Mark suggested that after the end of the last video I produce, we come back together for a wrap-up party: bring champagne, or scotch, whatever you find most appropriate. I'm maybe halfway through recording the videos for the "decisions we make" section.
You saw the ones I've released so far, I guess. I did the one on how AI works, which is a simple introduction to AI (and we're going to revisit that), and I uploaded the one on the educational context. Now, the original version of that one I uploaded was clipped at 14 minutes.
Thank you, YouTube live streaming. But I always make a backup recording when I'm making one of those live-streamed YouTube videos. So happily I had that recording, and I've uploaded it, so we have the full version of the video now; and of course we always had the audio and the transcript for it. And then over the weekend I went data crazy.
That really is one big long presentation on data, but I've broken it into four parts, looking at the different aspects of data. What I still have to do for this module, the part I've been working on most recently (and I was going to say "I should," but no, I will come out with one), is on algorithms. I forget exactly what I call it, "tools and algorithms"; mostly it's about algorithms: learning theories, topologies, stuff like that.
Ah, there's Mark. So this is one of these things where I need to be careful, because, you know, the field of AI is so deep and so complex that it would be really easy to spend the rest of my life talking about it and digging deeper and deeper into it.
And it would take the rest of my life, because I have to learn about it first, and then talk about it, to get a good overall grasp of it. But if I'm going to talk about it on a video, I need to make sure that I've got it solid in my head.
So that's slowing me down a bit, but such is life. Anyhow, I want to talk about that. Then after that there are, I'd say, two major other aspects of the overall workflow that we're talking about: the model itself, and how we interpret the model, which will be the next video after the one coming up, and then testing and application.
And I'll talk a little bit about AI explicability in there as well, or explainability, I'm not sure which word you want to use. Then I'll have a wrap-up for that module, titled something like "The Machine is Using Us." Obviously, I'm borrowing from the Michael Wesch video there, but, you know, just to bring it full circle. Because one of the things that's different,
I think, in the way I approach AI and ethics, is that I think of our overall society, culture, social network, etc. as one big giant AI. Although it's not, I mean, it's not artificial; it's human, it's real. So it doesn't follow any of these set algorithms, but we can detect patterns in society as a whole that we see in the algorithms.
And I find that an interesting way of thinking about how we as a culture, or as a species, think as a whole. I mean, we can also talk about how we think as individual people, and people do that with neural nets. I do that; lots of people do that with neural nets. But there's less talk about how cultures... actually, I shouldn't say that.
There's lots of talk about how cultures think, but it's almost all theory-based, you know: actor-network theory or whatever other theory; I don't even know them all. And frankly I'm not interested in them, because, you know, it's sort of like cognitive psychology for society.
And, you know, I don't even like cognitive psychology for people, much less cognitive psychology for society. So I want to think about it this way: if we think of society as a network, and don't try to force theories on it, what does that tell us about how we learn and what we learn? And that's particularly appropriate for the current investigation.
Because when we're trying to think about ethics, you know, we've gone through all of those different ethical theories, and all those different approaches to ethics, and all the considerations and the values, and then the sorts of things that come up in the ethical issues and the ethical codes, including all the stuff on data that I did over the weekend.
And, you know, we can't describe all that using a theory; that's sort of like cognitive psychology for ethics, right? We keep trying, but I don't think we can. We may be able to make observations about some of the outcomes. But I think that the way society talks about ethics, and the way we learn about ethics, is through this network kind of process that we would talk about in AI. The way I've been saying it in the videos is: instead of talking about 14 variables, the way we like to do with theories, we're talking about 60,000 variables, the way we would in a neural net. In any given ethical decision, any given ethical context, we are, as individuals and as a society, taking into account 60,000 variables.
Now, that's a number I picked out of the air. Well, it's not quite out of the air; it's based on that one example of AI that I offered at the beginning of module 7. Now, if that's actually how ethics is created and applied, and I think it is, then what does that tell us about how we can have ethical AI?
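(A quick illustration of the scale being described here: even a tiny fully connected neural network carries tens of thousands of parameters, versus a theory's handful of variables. This is a sketch; the layer sizes below are hypothetical, chosen only to land in the tens of thousands.)

```python
# Contrast a rule-based theory's variable count with a small
# neural network's parameter count (hypothetical layer sizes).
def mlp_param_count(layer_sizes):
    """Count weights + biases in a fully connected network."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix plus bias vector
    return total

theory_variables = 14                                  # a theory's inputs
net_parameters = mlp_param_count([128, 256, 128, 10])  # a small MLP
print(theory_variables, net_parameters)  # 14 67210
```

Even this toy network, far smaller than anything in production, lands in the same tens-of-thousands range as the "60,000 variables" figure used in the discussion.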
So we need to turn the snake back in on its own tail here. And, you know, this is basically the sentiment I'm going to try to get at in module 8, after all this discussion about the different decisions we make in AI, which will basically, I think, go to show that, yeah, we need to take into account 60,000 parameters.
How do I want to phrase this? It's hard, and I'm still not sure exactly how I want to phrase it, but maybe something like this: ethics is like culture, in the sense that it's got all of these inputs, all of these different variations (I'm saying this badly), but it's something that we do all together, each of us, by doing things, interacting with each other, making decisions on the spot,
etc. It's like a language, or like a culture, or even like a city, or a country, or a community: any of these really complex things that we build up together through a series of actions and artifacts. And so, for the question of how to make AI ethical: well, what is ethical is the sum total of all of that.
And then the question is, how do we apply all of that to AI? And it's hard because, you know, the other unknown we've addressed in this course is what exactly would be ethical AI. I mean, what is ethical AI? We don't even have a statement as to what is ethical AI.
And there isn't going to be one answer; it's going to depend on each particular application of AI. So our circle is something like this: we have AI practitioners who have learned ethics, and they've learned ethics from society. Society, in its various ways of doing this, has taught them ethics; and society has learned ethics through this large network process of interactions.
And basically (I don't want to say invented ethics, but basically invented ethics), and the way it's done that is through these individual actions that people consider to be right or wrong. It is a circular thing, but I think it's always been a circular thing. And, you know, we can come up with various explanations as to why it worked out that way. Some people argue for, for example, evolutionary ethics. I think there are a lot of practical explanations for a lot of ethical principles: explanations we sort of suggest as broad hypotheses, probably wrong in specific applications, but which might form overall explanations, you know?
Take, for example, a religious prohibition against eating shellfish. We could explain that as divine revelation, but that probably wouldn't help us much. The better explanation might be that in the context when this religion was being developed, it was really dangerous to eat shellfish: they were often rotten or spoiled, or they would pick up toxins from the water, etc., especially in environments which are hot.
Shellfish wouldn't last very long after you take them out of the water, and they definitely have to be cooked, you know? So, pretty good reasons, all in all, for not eating shellfish. Similarly with the prohibition against pork. And here I refer to an episode of Tapestry on CBC, just as an aside.
I really don't like that show very much, but CBC is always on, and they were talking about the Muslim prohibition against pork. And again, the explanation came out that at that time and in that place, it was dangerous to eat pork, you know. But those are explanations as to how these things may have arisen and become part of our overall ethic, not what makes them right or wrong, right?
Because that would be a very, very utilitarian, consequentialist view of ethics. And again, you know, that's one way of explaining it, maybe, but certainly not a prescriptive principle. What makes something right or wrong is this overall social determination that an act is right or wrong in a given context, a determination we try to extract generalizations from but usually fail. And that leads us to the problem of AI.
To my mind, the central problem of AI. (This is all an off-the-top-of-my-head summary of module eight.) So, the problem of AI is:
AI is only going to be as ethical as society is. However ethical society is as a whole, that's how ethical AI will be. Because society trains the AI authors, and the AI authors reflect their ethics in the AI. That's what we get out of it. And indeed, we can draw the same line using data, right?
Society constitutes the data that we use to train AI. The AI brings in that data, and that's what we get: the AI learns from the data and becomes as ethical as the input data. And we've seen those examples, like Tay, where the input data was racist and therefore the AI was racist. But think of this on a broad scale, and it makes it hard simply to make an ethical judgment about the output of an AI. If we're just using a simple ethical principle, then we can make judgments about the AI.
But if we're looking at society as a whole, how can you say the AI is behaving unethically when the behavior of the AI is what society thinks is ethical behavior? Not by what they say, but by what they actually do. And so, if I had to summarize in slogan form: if we want AI to be more ethical, then we as a society have to be more ethical.
There's no way around that without... oh, you froze there, Jim. Jim's frozen out... he's just coming back in. Well, there we go, Jim Stauffer, he/him. So I'll just say that my computer was unresponsive.
So I've come back in on the phone, because I didn't want to miss what you were saying; carry on, and I'll come back on my computer in a bit. All right. So, to wrap it up: basically, the thing that I just said was that in order to get more ethical AI, we have to become a more ethical society.
And that's a hard kind of statement, right there, and it's kind of a weird ethic. It reminds me of, was it Pascal's?, the philosophy that we live in the best of all possible worlds. I might say something like: we live in the most ethical of all possible worlds.
That's not strictly true.
But it is true in the sense that whatever is ethical is what we decide is ethical. You know, we can sometimes say, as individuals, this action was wrong, that action was wrong; but ethics is a property of society as a whole.
So, whatever ethics is, is whatever society as a whole does; we are as ethical as we are as a society, and that's what ethics is. Welcome, Doug Smith. And this is your first visit, just as we're in the last module of the course. But nonetheless, perhaps you've been following along.
Certainly nice to see you here; you're on mute, and you have been lurking. Welcome to the discussion. So, okay, I'll stop with my opening remarks there. Again, just a quick summary for Jim and for Doug (and we assume Mark is there, listening): the idea here is that ethics is not composed of a set of principles or rules or anything like that.
It's a complex response, by society or by individuals, that takes into account not 14 or 20 or however many principles (or states of affairs, or variables, I should say), but 60,000, a number out of the air; it's incredibly complex. And we address it in this complex fashion both as individuals and as a society. As individuals, we learn what constitutes ethical behavior from society
as a whole, from all of our interactions with all the other people in society, and also from states of affairs in the world. And we apply that learning to what we believe is ethical behavior in our own lives. And society learns what is ethical from the individual applications of what we each believe is ethical behavior in our own lives,
taken as a totality. So it's one great big circle. Now, we could explain the origins of some of these things with various theories, but the explanation of the origin of an ethical theory isn't what makes a particular application of it right or wrong. What makes it right or wrong is the fact that society as a whole believes that this is ethical, or the fact that you as an individual believe that something is ethical. Which means that, in an important way, we are as ethical as we can be, and society is as ethical as it can be.
And our AI will be as ethical as we and society are. If we want AI to be more ethical, we have to be more ethical, individually and as a society, whatever that amounts to. And that's the hard part, you know: how do you determine what a more ethical society would look like, when ethics simply is this application of what we think is ethical? We're back in the circle again. So I'll stop rambling there. Any thoughts and comments?
I think you're exactly right, Stephen. And part of the fear is that the AI will get ahead of us. By the time we've established a reasonable foundation of ethics for ourselves in society, AI will be at the point where it's now deciding, and it's too late for us to influence the AI.
Sherita, what do you think? I don't have a hard and fast answer. I guess what I'm responding to is: you really are describing AI as more evolutionary. It's not a fixed point. It will change over time, or change with whatever society you're in, and it's something that always needs some kind of discussion.
So who has that discussion, right? As a whole society, it's hard to have a discussion. The five of us here could have a discussion, but how much does that really affect the ethics of AI at Microsoft? Not a lot, you know? So that's what I'm getting from it: it could change.
I mean, if Donald Trump were still in power (and one argues whether he is still in power, since the media spends so much time paying attention to him), would our ethics change?
And I think our ethics, in many ways, are changing. Jim, thoughts? Yeah, I'm kind of with Sherita, a bit pessimistic about my ability to affect society as a whole. So, you know, I don't think I can influence the Amazons or the Googles or the Microsofts to adopt my ethics, right?
I can promote them in my small circle... no, that's the wrong way to put it. Not my ethics; I can promote this idea that you introduced a while back, that we need to think about the questions we can ask about ethics. You know, I can introduce that in my small circle, in my centre. I run part of the centre for teaching and learning in a small community
college that covers the whole Northwest Territories in Canada. But, you know, I have a little bit of influence there. No power, but a little bit of influence, and that's where we can have discussions about ethics. I mean, one of the things we're looking at right now is our college's oppression (not a popular term) of our majority Indigenous students.
So, that's an ethical question that we're applying to education. One of the things we're asking right now is: how are we perpetuating the oppression of our majority student population, being Indigenous? Oppression is not a popular word to use there. That's kind of where our ethics hits the road, right? And I think it's a fine word. I mean, if they still don't have clean water to drink, what else do you call it, right? Or dilapidated houses to live in, etc. I mean, conditions that would be considered unacceptable anywhere else.
I mean, even when the water supply went out in Iqaluit, you know, they moved heaven and earth to make sure that the water was restored there, in Iqaluit, and not in some of the other First Nations communities across the country. You've been listening all along; any comments?
Yeah, I was just about to say too much, so I cut myself off. I don't even know where to start.
I really don't know where to start. But okay, as, I guess, the only United Statesian here, just to develop the follow-on: I'm in California, where we have between 100,000 and a quarter million people living under bridges and overpasses.
So it's not just Native people. And my real concern, as the resident sociologist, I guess, in this group: I'm a little worried about this hand-waving at "society," because there really isn't any such thing. It's a nice concept, but things get done in groups; nothing gets done by a society. And that's my main concern with AI. It started right at the beginning: the ethics were all about professional-society ethics, which is, you know, fine,
but very narrow from the very beginning. And then, who's going to design the AI? That's such a much smaller subset. And then we've looked at the problems with data and data collection and data processing. So I'm just not feeling very hopeful either. But I did find this wonderful issue from the Royal Society that I just came across,
so I'll be looking at it: "Bounded rationality, abstraction, and hierarchical decision-making: an information-theoretic optimality principle." I found that because I started off thinking: okay, so what are we doing here? You know, AI is interesting, it affects all of us, but I don't consciously use it.
I don't use AI that I'm aware of... oh, well, you know, I use Google products, but not really deliberately. And so I brought it down to: we're making decisions. We're talking about making decisions, and we're talking about designing machines to make decisions. And we've, I think, pretty much established that that cannot be transparent.
With, like, 60,000 variables. And there's another equivalent: am I making decisions, when I take a walk, based on my 12 ethical principles? Or do I have such an overwhelming amount of input that it can't be quantified, and yet I'm still able to make decisions? So, you know, there are two ways to look at that.
So AI is either an expansion of data-based decision-making or a contraction. We've looked at it both ways, but my main concern is transparency. So, in a democratic society, when you apply artificial intelligence (and again, it's only applied in certain areas, and who makes those decisions?),
when you're applying artificial intelligence, I'm afraid decision-making might be constricted, even though it appears to be expanded. Okay, let's make this concrete. You're right, it is a bit hand-wavy, especially when I talk about society as a whole, so I want to be kind of precise when I talk about society as a whole.
So let's do that first, and then we'll take it to a concrete example. So there is what we might call a global social network. With the exception of a few hermits (and even they were born and interacted with someone at some point), everybody is connected to everybody in some way; there's no division in society
so great that people are completely cut off from other people, except for the tribe that lives in the Andaman Islands, which is genuinely cut off from all the rest of society, and possibly some groups of people in the Amazon rainforest, and, I think, possibly some in New Guinea.
But other than that, there is this mass of connected people. And we know that, because a disease that starts in Wuhan, China, or anywhere, spreads to all corners of the earth, transmitted from person to person to person. So we know nobody is exempt; we're seeing it even in the high Arctic in Canada.
Although, actually, I've been following some people in Antarctica, and they've managed to keep themselves isolated from it: the South Pole station, zero cases of covid. Good for them. But is that the vapor-trail theory, then? Well, no, I mean, you can block it, right?
If you have not already been infected, and you make sure that you are not in contact with infected people, you will not become infected. But anyhow, we started talking about covid; the point is the connectivity of society. To a large degree (almost 100 percent; not quite, but almost), everybody's connected to everybody.
And there are also the artifacts that we create, the objects we create: cities, towns, roads, bridges, buildings, etc. Those also create connections between us. Those are what we might call stigmergic connections: we're connected by means of objects that we leave behind. Now, that entire network isn't ever doing just one thing at a time.
And that's your point, Mark, right? Society as a whole doesn't do any one thing at a time. It's, you know, seven billion people. It's like when you look at a brain, a human brain: not everything in your brain is doing one thing at a time, but yet you can still say things (and these are broad generalities) like "humanity went to the moon," right?
Because there was a state prior to which no person had been on the moon, and now there's a state where a person has been on the moon. And we can make other broad statements. But really, what we're doing when we say things like that is taking a perspective, a point of view, on this great big network, right?
We don't even see the whole network; we miss most of it. We look at it as individuals: we look at the part of the network that we can see, and maybe we recognize patterns in it, and we say, okay, society did such and such. So, Mark, you're probably looking more at the part
you can see from your perspective in California. Jim is looking at the perspective he can see from northern Canada. Doug, I don't know where you are; where are you located? Rural United States: Pennsylvania, outside of Philadelphia. So you're looking at the perspective you can see from Pennsylvania, etc. As an aside, that's sort of the design of a convolutional neural network, right, where you have this big matrix of all of these possibilities, and you have a system that samples a little bit of it, and then samples a little bit more, and then samples a little bit more.
So each one of us is one of these things that samples a little bit, and then we interact together and forward whatever our considerations are. So there is a sense to be made of saying that society as a whole works together. And there is a sense to be made of saying that subsets of society do things: groups, or organizations, or whatever, features that we're able to detect in society, ways people have organized themselves.
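(The convolutional analogy just described, where each observer samples only a small patch of a big matrix and passes a local summary forward, can be sketched in a few lines. This is an illustrative toy, not anyone's actual model; the grid and window sizes are arbitrary.)

```python
import random

random.seed(0)
N = 8
# The "big matrix" that no single observer ever sees in full.
society = [[random.random() for _ in range(N)] for _ in range(N)]

def patch_mean(grid, row, col, size=3):
    """One observer's view: the average of a small local window."""
    vals = [grid[r][c]
            for r in range(row, row + size)
            for c in range(col, col + size)]
    return sum(vals) / len(vals)

def forward(grid, size=3):
    """Every possible observer summarizes its own patch; only these
    local summaries, never the whole grid, get passed forward."""
    n = len(grid)
    return [[patch_mean(grid, r, c, size)
             for c in range(n - size + 1)]
            for r in range(n - size + 1)]

summaries = forward(society)
print(len(summaries), len(summaries[0]))  # 6 6
```

Each entry in `summaries` is one partial, local view; no step in the computation ever required a global view of the grid, which is the point of the analogy.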
So that's the general picture. Now let's take it to the specific note, our favorite example: robot dogs with machine guns. I know, Mark, you don't like us calling them robot dogs. What is it? Armed quadrupeds with machine guns, right? Armed autonomous quadrupeds with machine guns, exactly.
Thank you. Yes. Now, there are two things we can say about that. Well, three things. I'll say the obvious thing first: they exist; we've seen pictures, right? So one thing we can say is that at least some people in this great big society of ours think that they're good, right?
Because they exist. If they didn't think they were good, they wouldn't have made them. Well, that's a bad generalization; but remember, we can't really generalize at all here, right? So: valuable. You know, maybe not good, but useful or valuable. That might be what counts as good to them, right?
You know, there's no one definition of good. But then, with that, let's think like a neural network, the bad kind. Let's go and ask every single person that exists: do you think this is good? Do you think armed, whatever we call them, robot dogs,
all right, autonomous quadrupeds with weapons, do you think they're good, right? We ask each person in society. There would be a subset that said yes, right? And there would be a subset that said no. And there would be a subset that said "what?", right? So we're asking the global question: are autonomous, machine-gun-equipped, etc....
we need a name for them. Autonomous quadrupeds with machine guns. Armed autonomous quadrupeds. AAQs: "ox." Okay, yeah. That at least sort of rolls off the tongue. You have to say it with a little accent, right? Because, you know, the accent would be appropriate.
So, overall, let's ask: does society accept, or believe, or whatever you want to say, that ox are good? And you might say, well, you can't say that about a society. But if you can say that about a human, you can say that about a society. That's the inference I'm making here, right?
Because the same kind of determination is being made by society as a whole, if we think of it as one great big network, right? And is it as sophisticated as our human brain? It's less sophisticated, but that's only because we don't have enough people yet, and our communication systems are broken in so many ways.
But, you know, if a human can say "ox are good" or "ox are bad," we could say that society says ox are good or ox are bad. So how do we determine that? Well, let's look at society and ask whether society has tolerated the existence of ox. It has.
So society as a whole must think ox are good. Now, there's a part of society that entertains the possibility that ox are bad. There's a part of society that doesn't care. But overall, the majority of society is unaware of ox. Yeah, that's true. Just like the vast majority of your neurons are unaware of what you're hearing right now, right?
But they'll play a role in this whether they want to or not, all right? Because everything is connected with everything. And the role they're playing right now is: they don't care. It's just not important enough to them to make a difference, right? As compared to, say, I don't know, putting on a mask, right?
There, the vast majority of society has an opinion one way or another, because either they put on a mask, which indicates they're fine with it, or they resist putting on a mask, which indicates that they're not fine with it. And society as a whole, for the most part, is saying putting on a mask is good.
And if we were forced into saying, does society think masks are good, or does society think masks are bad, if we had to make a call, we'd probably come down on the side of: society thinks masks are good. Now, of course, it's not that simple, because it's never that simple.
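(The kind of forced call being described, where a large indifferent subset plays a role simply by not caring, can be sketched as a toy poll. The population size and the stance weights here are made up purely for illustration.)

```python
import random

random.seed(42)

# Hypothetical population: each person's stance on some practice
# (say, mask-wearing) is 'good', 'bad', or 'unaware'/indifferent.
stances = random.choices(
    ["good", "bad", "unaware"], weights=[55, 25, 20], k=10_000
)

def societal_call(stances):
    """Force a binary call the way the discussion describes:
    set aside the indifferent, go with the larger opinionated subset."""
    good = stances.count("good")
    bad = stances.count("bad")
    return "good" if good >= bad else "bad"

print(societal_call(stances))  # 'good' with these made-up weights
```

Note that shifting the weights, or counting the indifferent as tacit acceptance rather than setting them aside, changes the call; that sensitivity is exactly the "it's never that simple" caveat.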
But that's what our ethics is made of. And we could sit here, we could probably sit here for the rest of the day, and ask questions. Are nuclear weapons good? Well, they're all over the place and we haven't stopped them yet, so yes, society thinks nuclear weapons are good. Is gerrymandering electoral districts good? Overall, society's fine with it, so it must be good. Now, if you ask me, that's pretty crappy morality that I've been describing so far. But don't trust me, right? The way we're set up, it's one big giant network, and it doesn't do what I say. But that's a good thing, right?
I mean, the fact that we just heard someone say, "What I say or what I believe really probably won't influence what Microsoft does," that's a good thing. No one person should. Because, you know, first of all, how would you pick such a person?
Second, how would you know that what they're saying actually is ethical, right? So we don't want it to be the case, I think, that one person prescribes morality, no matter how much wealth or power he has to do that. Now we're in a more interesting discussion, right?
Yeah. Sorry, go ahead. Well, now we're at the much more interesting question, right? Because let's leave aside the question of robot dogs,
and let's consider the question of how society as a whole got to the point where it thinks robot dogs are good. In other words, to me, the question is: how do we organize that network? So I think it's a matter of how people get to that
point. It's about our perception of power: if you accept the power you have over ox, then you might be in favor of using them to expand your power. And I think that can be applied to a lot of other decisions we make; it's based in our perception of our power,
and whether it has the potential to harm anybody we care about. Yeah, well, not just the potential. I mean, take it in a different sense: if I had an ox, that wouldn't be bad, because I know I wouldn't use it to harm people, right? That's the reasoning, yeah.
But it responds to me. You know, it's autonomous, but you don't just send them out into the world without any objectives or anything, right? So, you know, I wouldn't give it any harmful purposes. Similarly with all the others... an animal, or a toddler, the same.
I would not want to be a toddler in your world. So it's the Siberian-tiger-as-a-pet kind of problem, maybe. But let me introduce something, because as you're talking, I'm thinking of a concrete example of how ethics in a group in society might change something. And I'm going to use ethical investments.
Sure. Which is, you know, ethical or social investment, etc., which, let's say around the year, I don't know, 2000 or 2003, was much more of an uphill battle than it is today. And part of the reason has been the discussions, or the learning, or the education of stakeholders in terms of some of the companies being traded, right,
you know, in the stock market. And what's had to happen is that the stockholders in that company had to really make up their minds, through discussion, learning, etc., that they did not want that particular company to operate in wherever it was, you know, with almost slave labour. And they wanted out. And more and more of this is happening.
So you're getting a gradual shift in a certain context of ethics, and it had to do also with money — and money is power. And what the stakeholders did was use their power, their money, to change things. So I think power is really important. In my notes for the module, I have a section on agency, and I think that's important. And — I'm not sure if I have it in there or not — but there's, you know, the section on how we organize or structure society to limit the influence of individuals. And I don't mean limit the influence of the 99% of individuals, but to limit the influence of, say, a Bill Gates or an Elon Musk or whatever.
Because, I mean, the way I explain it is: in networks, like the human neural network, there are physical limits. A neuron can only have so many connections, right? A single neuron only has so much influence in our brain. There is no neuron in charge, right? We couldn't even identify, you know, the top hundred neurons — that wouldn't make sense. But we could do that with humans, and that's a problem.
So things like financial markets — financial networks — are what they call scale-free. There are no natural limits to their extent. Well, in the case of financial markets, there's no upper limit to the amount of money a single person can have, you know. Especially if they get into a position where they can just make money, they just keep making it until whenever — there's no —
And more to the point, there's no real limit, or practical limit, on the difference between the wealthiest and the least wealthy: the wealthiest person has billions of times more wealth than the least wealthy. And that's the sort of difference that we don't see in natural networks. Now, this is not an appeal to naturalism — and we want to be careful about that; we've learned about the perils of naturalism.
And what I've taken as a core principle — and, you know, I mean, again, principles are always wrong — but what I've taken as a core principle is a principle of network design, essentially, which says that a design of a network that keeps it dynamic, reactive, changing, growing — in other words, responsive — is good, and designs which cause networks to move to a single state are generally bad, because such a network is never responsive. Once you reach that state, it's done; that's the end of history, right? So an example of that is death. You know, when we die, our brain is no longer sending signals, updating neural connections, etc., and arguably that's also the point at which consciousness is gone.
Some people disagree. You know, we can depict, in the case of networks, some kinds of network topologies where this happens. For example, if every neuron is connected to every neuron, then we will end up in that static state, because every neuron will be in the same state as every other neuron.
Similarly, if no neuron is connected to any neuron, nothing would cause any neuron to change, and so again we're in a state where there is no change. So somewhere in the middle of this connectivity is that middle point where there's enough connectivity, but not too much connectivity, that allows for this dynamic to happen.
That's just one example of such a property. Another example: suppose every neuron is connected to one neuron. We'll call that the God model, right? The one neuron is God, and when it says something, everybody listens and adopts that state. Well, again, we've reached the static state again.
So it's not only an ethical principle, right? It's a suggestion about a design of a network that might be more responsive to changes in society — so, more able to make decisions as a network — than one that is less responsive, that would respond only to more particular states. But, you know, that's very loose; I guess it can't be thought of as a general principle in any strict sense.
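The degenerate topologies described here — full connectivity, no connectivity, and the "God model" — can be sketched in a few lines of Python. This is purely an illustrative toy, not anything from the course materials; it assumes a simple majority-rule update (each node adopts the majority state of its neighbours), which is one choice of dynamics among many.

```python
import random

def step(states, neighbors):
    """Each node adopts the majority state of its neighbours (ties keep the old state)."""
    new = []
    for i, nbrs in enumerate(neighbors):
        if not nbrs:                      # unconnected node: nothing can change it
            new.append(states[i])
            continue
        ones = sum(states[j] for j in nbrs)
        if ones * 2 > len(nbrs):
            new.append(1)
        elif ones * 2 < len(nbrs):
            new.append(0)
        else:
            new.append(states[i])
    return new

def run(neighbors, states, rounds=20):
    for _ in range(rounds):
        states = step(states, neighbors)
    return states

n = 9
random.seed(1)
init = [random.randint(0, 1) for _ in range(n)]

# "Every neuron connected to every neuron": collapses to one shared state.
full = [[j for j in range(n) if j != i] for i in range(n)]

# The "God model": every node listens only to node 0, which listens to itself.
god = [[0]] * n

# No connections at all: the network is frozen at its initial state.
empty = [[] for _ in range(n)]

print(len(set(run(full, init))))        # 1 -- everyone ends in the same state
print(run(god, init) == [init[0]] * n)  # True -- everyone copies "God"
print(run(empty, init) == init)         # True -- nothing ever changes
```

All three degenerate topologies end up static — the "end of history" state the principle warns against — while the interesting, responsive behaviour lives somewhere between these extremes.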
Almost certainly, in individuals, a topology develops where there are clusters and organizations that we can identify — different parts of the brain, left and right hemisphere, all of that; what Jerry Fodor would call the modularity of mind, though he was using that in a different sense. And in society as well, right? We have denser clusters and less dense clusters — what Joe Clark in Canada used to call a community of communities.
We get that as well. And all of this is constantly changing, constantly dynamic, and whatever, you know. And to the extent that we can make it a better world is the extent to which we can make it a better-functioning network, whatever that happens to be. And that's the hard question of morality, right?
Are you willing to countenance a global social morality — if we want to call it that — that is different from your own? And that's the hard question, to me. And so, I have a question before we run out of time. Again, already — I've been babbling. I'm sorry. No, that's fine.
So the question I brought today, based on the videos — the one question is: how will AI, or this broader ethics, handle deviance? Handle deviance? Oh, right. As it seemed from the description, you know — the ones and zeros, less than one, more than one, whatever.
I'm — you know, I took basic math. So it seems to me that the AI, again designed by a very small subset of people, would tend to zero out deviants. And I'd like to talk about the scale of deviance. Or, I want to say, there's a range of deviance, from negative to positive. You know, we've pretty much put murderers at the negative end of deviance, but we very seldom talk about the positive deviants, which I would posit lead the culture in new directions but are considered, like, outliers.
Yeah. And so part of my question is: wouldn't AI tend to zero out all deviants, whether positive or negative? The short answer to that is no. The long answer to that is that AI doesn't simply work by averaging. You know, in some of the algorithms that I've been talking about, even in the intro part to AI, it works by extracting features, and some features might be more common.
Some features might be less common, and the less common ones are what we might mark as deviant, and that's an important realization. So — sort of a design problem — does the group designing the AI decide what counts as deviant? Well, in a sense. You know, when you're doing feature detection, what really matters is what features you're looking for.
Now, in some of these networks, you know, they basically eliminate all the semantics of it and represent this purely mathematically by means of what might be called filters, right? So you take the state of affairs in the input, you apply a filter to it — which is really just a series of ones and zeros that you match against your series of ones and zeros — and then that gives you a filtered result, right?
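As a toy sketch of that filtering idea — illustrative only, a one-dimensional stand-in for the two-dimensional filters real convolutional networks use — a "filter" is just a small pattern slid along the input, and the output marks where the pattern matches:

```python
def apply_filter(signal, kernel):
    """Slide the kernel across the signal; each output is the match score at that position."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A tiny binary input: a run of zeros, then a run of ones, then zeros again.
signal = [0, 0, 0, 1, 1, 1, 0, 0]

# A minimal "edge detector": responds +1 where the signal steps up, -1 where it steps down.
edge = [-1, 1]
print(apply_filter(signal, edge))   # [0, 0, 1, 0, 0, -1, 0]
```

A line detector or circle detector is the same mechanism with a different kernel; the filtered result singles out wherever that feature occurs in the input.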
And each filter basically can be thought of as a way of identifying a particular feature. So you might have, like, an edge detector, you might have a line detector, you might have a circle detector, and these are just different filters that you apply to the input. So, when we talk about deviance — by analogy, but it's not just an analogy —
Basically, what we're doing is we're looking at the whole of society using some filters. If I wanted to play fast and loose with the metaphors, I might even say that there's something like what George Lakoff calls frames — but I won't play fast and loose with the metaphor. Just think of it purely mechanically.
We're looking at the world through a series of filters, right? And one of the filters is, say, people who kill people versus people who don't kill people, right? Another of the filters is men who prefer men versus men who don't prefer men. Another of the filters is people who wear blue versus people who wear red, right?
We could come up with any number of these filters. In all cases — almost all cases; there are some exceptions — there will be a majority and a minority for any of these features. Okay. And, you know, I've just given you some binary filters here, but I can have, you know, a multi-valued filter, no problem. So I get, you know, 16 possibilities.
Think of the Myers-Briggs test, right? The INTP and so on — a 16-way filter, right? So we look at our filter: there are going to be some people that are only a very small percentage of the population, and other people who represent larger percentages of the population. By definition, the people who are part of the small percentage are deviant, right?
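The "deviant by definition" point is just counting. A minimal sketch — the population, the groups, and the 10% cutoff are all invented for illustration:

```python
from collections import Counter

# A made-up population, each person described by one filter value (shirt colour).
population = ["blue"] * 90 + ["red"] * 9 + ["green"] * 1

counts = Counter(population)
total = len(population)

# Flag any group below an arbitrary 10% threshold. The label is purely
# statistical: it says nothing about whether the deviance is good or bad.
deviant = {group for group, count in counts.items() if count / total < 0.10}
print(sorted(deviant))   # ['green', 'red']
```

Whatever filter we pick, the arithmetic alone produces a majority and some small minorities; whether the minority matters is a separate, contextual judgement.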
But the question is: do we think that deviance is bad? Do we think that deviance is good? Or do we not care? Right — shirt colour? We probably don't care, although in certain areas in Los Angeles it matters a lot. All right — killing people? Well, it turns out that's more contextual, but for the most part we find it bad, and then there are contexts in which we find it good. And then again, that depends. Now, each person has these filters. Each person takes the results of that filter and, applying the rest of their own neural net — the rest of their previous knowledge, everything they've learned, etc. — makes a determination whether that deviance is bad or good, or some point in between; it doesn't have to be binary. And then each of us feeds that into whatever networks we're connected to.
So if you feel murdering is wrong — say, you know, in other words, if you feel that that deviant percentage of people who kill people without the proper paperwork is in fact bad — you will pass on your belief: through your beliefs, through your statements, through your actions, through your own behaviours.
I've argued elsewhere that the modelling of the behaviours is generally probably the most effective, rather than just saying it — but that's an aside. But you'll pass that on through the network. And it's a small network effect, but if enough people — or, you know, well-placed people in the network, depending on how the network is organized — also pass that along, or propagate that belief or idea, then that becomes a property, you know, a cultural pattern that we can recognize. We look at a society and we say: as a whole, it appears from that network that that society feels that killing people without the right paperwork is wrong. It may also say that coming up with really unorthodox ideas about the placement of the Earth is good. And that, again, can change through time, right? Depending on how each individual makes their own decision. So, you know, what I'm trying to get at here is that the complexity of the processing is at least as complex as the complexity of the data, right?
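That propagation step can also be sketched as a toy model — deliberately the simplest possible one, a nudging-toward-the-average update, which (as the transcript immediately notes) real networks are far more complex than. Each person holds a degree of belief in [0, 1] that some deviance is bad, and at every round moves a little toward the views of their connections:

```python
def propagate(beliefs, neighbors, influence=0.3, rounds=30):
    """Synchronously nudge each belief toward the average belief of its neighbours."""
    for _ in range(rounds):
        beliefs = [
            b + influence * (sum(beliefs[j] for j in nbrs) / len(nbrs) - b) if nbrs else b
            for b, nbrs in zip(beliefs, neighbors)
        ]
    return beliefs

# A small ring network: each person listens to their two immediate neighbours.
n = 6
ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
beliefs = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # one strong believer, five indifferent

final = propagate(beliefs, ring)
print([round(b, 2) for b in final])   # everyone drifts toward a shared view
```

Even one strong believer shifts the whole ring toward a shared position over time; an outside observer reading the final pattern would call it "what that society feels."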
It's never going to be a simple averaging or anything like that. In fact, that's one of the real distinctions that I make between group-based decision-making and network-based decision-making: group-based decision-making is based on weights, in the sense of how many people think such-and-such. Democracy is a group-based decision-making system, right?
You win the election when most people vote for you. But network-based decision-making doesn't depend on that. It allows that a lot of people simply won't care, and then it takes the organization of the people who are interested and involved in the matter in some way, and runs that through a series of processes.
A series of, say, AI processes or algorithms and such, and then results in an overall pattern of behaviour, such that we then, as third parties looking at that pattern, say: okay, that is the network behaving in such-and-such a way. Does that make sense? And I should never ask "does that make sense." My interpretation of the design? I'm just terrified.
Hmm. Yeah. I mean, it's worth being terrified about this, particularly when you throw artificial intelligences into the mix, because they won't — and this is what you were saying earlier — they won't just reflect what we all believe, right? That's what's going to kick it off, but then they'll start feeding back into the process.
You know, just like anything else that we've built — everything we build feeds back into the process. You know, people built nuclear bombs; that fed into a lot of opinions about war, etc. So, yeah, I think that is a good analogy, if not for AI then for the dogs.
So here we are. It seems the process is similar: you have a secret group designing a technology, and then it's demonstrated, and there's a certain section of society — but it's very small, I would argue — that thinks, wow, that's great. Yeah. And I would imagine — I guess that's the only word I can use — that if you could poll — and I think that a subset of people is not connected to — I like Teilhard de Chardin's term "noosphere," which is the conscious envelope, the field, as separate from the rest of the biosphere.
Yeah, another term that came later. Right. There's a much larger subset that's not connected to this technological noosphere than just some remote islands and the substations at the poles, who will never be considered. But I think — first of all, you'd have to explain all this to them, which would be very hard, for them to become informed — and then, if you polled them, I would imagine the vast majority of humanity would say that robot dogs are a bad idea.
And yet, as I said, they are controlling defences in the state of Washington and in some of the Arab Emirates today. This isn't my imagination. And I know that those kinds of technologies filter down into society. Well, luckily nuclear weapons haven't, but all the other technologies filter down into societies, and in the United States those military technologies get to our local police forces.
So I'm already imagining demonstrations being broken up by dogs — and this is terrifying now, this application of AI. And then, when you point out how undemocratic AI is in itself — it seems to me we're having enough problems with our democracy, such as it is, that I just don't see a way back from that.
What I meant was: how AI is being unleashed on a world that can't understand it. And it will only be from the top down, and so it will be imposed. So — we need to end this reasonably soon, like within a few minutes.
But I want to key in on one thing you said, which was: the majority of people, if they knew about this, would believe such-and-such. And that's a hard thing to be able to say. First of all, it's a counterfactual, right? In fact, the majority of people don't know about this, right?
So we're left with the problem of determining how most people would react if indeed they did know, and that's impossible. I agree. And it's not simply impossible — it's problematic, because there have been studies that suggest that just telling people, just giving people the knowledge, isn't enough to change their behaviour.
Lots of people knew, for example, that smoking cigarettes would kill them, and they did not want to die, but they kept smoking cigarettes. Similarly, you know, there are certain political factions that believe things like: Robert F. Kennedy will come back from the dead — or no, John F. Kennedy will come back from the dead — or rather, I think, JFK Junior; what they believe is kind of either way. Right. In fact, he has not come back from the dead, and yet they continue to believe this, right? The second coming of Christ has been predicted for 2,000 years and still hasn't happened. Yeah, well — and people actually come up with dates, and they're confident. Anyhow, you get the idea, right?
So that's one part. The other part is the bit where you said it doesn't matter what they think, and that's a weighting problem, and that speaks to individual agency. When I say that's a weighting problem, what I mean is that the strength of the connection between that person and the rest of society is too weak, right? Their views, even if they're communicated, are simply not felt. And I think that's — I mean, in one sense that's a structural problem, but in a sense it's a learning problem, in the sense that our society hasn't yet learned how to learn from itself. And that's why we use crude methods like votes, which is really, you know — I mean, a vote is kind of like artificial weighting, where we just set everybody's weight to one on a very narrow question.
Voting with your dollars, which was also mentioned, is another weighting solution, right? We set everybody's weight to be the amount of dollars that they have and give them votes proportional to that — not the best weighting solution, in my view, right? So what we know is that networks that are able to better manage weighting are able to make more informed decisions — or, no, let me say it a bit more carefully: more nuanced decisions, or more decisions about more things. The democratic weighting system that we have really only works for, you know, a few decisions a year, but each of us, every day, needs to be making dozens of these decisions.
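The two weighting schemes just described — everyone's weight set to one, versus weight proportional to dollars — are the same aggregation with different weight vectors. A minimal sketch, with invented voters and numbers:

```python
def weighted_decision(opinions, weights):
    """Aggregate yes/no opinions (+1 / -1) under a given weighting scheme."""
    score = sum(o * w for o, w in zip(opinions, weights))
    return "yes" if score > 0 else "no"

# Three voters: two ordinary people against a proposal, one wealthy person for it.
opinions = [-1, -1, +1]
dollars  = [10, 10, 1000]

# Democratic voting: everybody's weight is one.
print(weighted_decision(opinions, [1, 1, 1]))   # no
# Voting with your dollars: weight proportional to wealth.
print(weighted_decision(opinions, dollars))     # yes
```

Same opinions, opposite outcomes — the weighting scheme, not the opinions, decides the result, which is why managing weighting well matters to a decision-making network.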
So, but all of that taken in, right? We might still decide — and this is the possibility that we have to countenance — even with the best decision-making mechanism in the world, we might still decide that dogs are good. Now, at a certain point, the question becomes: on what basis can you argue that dogs are bad, if the best possible decision-making process we've been able to build in society says that dogs are good, right? For your part, sure, you're still perfectly free to believe — and encouraged to believe, if you believe this — that dogs are bad. Totally your right to do that. But the sum total of the overall weight of people who believe the same as you do in society as a whole loses on this issue.
And that's the deviance, then. Yeah. So it occurs to me that we probably need two things to happen. The first one is to expand the structure of communications so that more people can participate — but that brings with it surveillance, so, you know, there's that problem. And then the second thing — it seems to me, and I'm not being facetious here, I don't want anybody misunderstanding — it seems to me that the best thing that could happen is for those dogs to be purchased by a non-political group and reprogrammed to not attack anything under a certain height — so that no toddlers are harmed — and set loose on every continent.
So that saves the toddlers. And then people will have an opinion about whether this is a good technology — that's a feedback loop. That's a direct practical application of back-propagation, in a kind of network. The work we try to do — it's all theoretical, yeah, and most people won't ever hear of it. But when those dogs are unpacked in their locale, and charged up and turned on, I'm still sticking with my opinion.
I think most of them are not going to enjoy it. Well — and we'll wrap it up on this note — but there is a famous quote, I forget who said it, but it came after the Sandy Hook massacre, and it was something like this — I'm paraphrasing it: when the majority of Americans decided — I don't think they used the word "decided" — when Americans decided it was okay for children to be massacred at a school, that was the end of the debate, right? It might be the case that even if dogs go out and kill some kids, people might be fine with it, and at that point the argument that you can make that they're morally bad begins to founder. You can still believe they're morally bad, but society as a whole, as it stands —
now remember, it's a constantly changing, dynamic, evolving thing — maybe not "evolving," because it's a dynamic, changing thing — so it might change its mind, all right? But if it does, it will almost certainly not be based on single instances of dogs killing kids. It would be an overall global change — like what they call a sea change in society, you know — and it would be related to tons and tons of other thoughts and beliefs, you know, ranging from how important we think kids are, how we value the life of an individual in society, how much we love our machines, whether we feel afraid — you know, 60,000 factors. And that's — yeah, power is the one that's going to dominate this discussion. Well, the powerful rig it in their favour, right? Yeah. If we can unrig the network, that would be a step forward, but we still don't know what the network's going to decide in all these cases.
Let's end it on that, because it is now 1:24. A long meeting — that is fine; I don't mind, you know, as long as people don't mind listening to it. Well, we've got one more discussion, which is this Friday. And me, I'll be continuing to produce videos — I already see how some of these will be relevant, just based on the discussion that we've had.
So I'll finish off the "decisions we make" videos, and then the rest of the set for this week, probably sometime next week, leading towards Christmas, by which I hope to have them all done. But once I do the last one, we'll have a call scheduled, and I'll put it up in the newsletter.
One last wrap-up — drink it if you've got it — kind of discussion, to wrap up what we thought of the whole course. But I hope you found it interesting. So far, it's great. And by the way, I have to teach Friday, so I'll miss it — I don't want you to think, well, you didn't like this. Yeah, this has been fabulous. Thank you for welcoming me, keeping it open; I'm fascinated by your thoughts. Thank you, right? So everybody's looking forward to the next iteration of the course, because I think it'll be — yeah, it's an interesting factor: I don't know why, but the course has become more popular as it's gone along, which is unheard of for MOOCs. But yeah, it's a communication problem, you know? I think if we focus on the communication before the next iteration — yeah, I think it's just going to help to have all this material already. All those videos will already exist; I won't be focused on making videos, so, yeah, I'll be able to look more at some of the backup materials and that, and yeah.
And then actually having people be able to talk about these issues — that would be good. Yeah, because, you know, there's that lag that has developed. Yeah, and perfectly understandable — it's not a criticism at all. But yeah — to sort of have all the material from day one, all this material, all there.
Wonderful. Yeah, really. All right, then, I'm going to wrap it up. So: bye to the YouTubes, bye Sherida, bye Jim — I know you wanted to be here — bye to people in a future iteration of this course, bye podcast listeners, bye future generations in a world that was shaped and influenced by this small discussion.
So remember.