Module 4 - Discussion


Unedited audio transcription from Google recorder.

There we go. Now I'm recording audio, and of course we're streaming live video as well, so you should be aware of that.

And Mark, welcome again. Hope you're unmuted. I think your unmute didn't work. Yeah, you're muted. Oh yeah, okay, let's crank up the volume a wee bit so that my recording can pick it up. Just in case you're wondering, I do use the audio recording like this, and there's a reason why I don't just simply record the Zoom presentations.

Although I suppose I could, I haven't been, because I'm live streaming to YouTube, so there's no need; YouTube will capture the video. But I record the audio, first of all to have a backup just in case, but secondly because the Google recording also produces a transcript as I go along, and what that allows me to do is, almost immediately after our session, post the audio instead of having to wait for Google to produce the video.

So I can download the audio and post it right away, and I can post the transcript right away, and that lets me get the full spread of the content out as rapidly as possible. And I'm working on the assumption that there are other people out there who are following what we're doing along with us.

Not taking part in our video conferences, and that's actually been the case for some of the things. So, yeah, unless you've got your whole family tuned in, I don't know. Somebody is definitely following this. No? Yeah, yeah. I see some evidence of, yeah, somebody else watching, somebody's stats.

Yeah, I think some people are indeed watching, which is good, although, you know, for every live session I always have a presentation in the can just in case nobody shows up for a conversation, so that I have something to record. But, you know, the result, with this focus on video recording for this course, which is a bit unusual for me.

Well, it's a lot unusual for me. The result is I'm producing a ton of video, probably too much for most people to wade through, as Jim, you've found, stuck partway through the queue. But again, my longer-term plan isn't to have a course that is just a whole bunch of videos.

You've seen already some of the graphing stuff that I'm trying to do with this, and as well, I'm treating the process of recording the videos as a way of producing all of my texts for this course, with an eye to taking that text, taking the transcriptions, cleaning them up, and then assembling them into a longer work after the course.

Oh, and there's Sherita. And so we've got a season high, four people. So this is awesome. That's the other thing in the back of my mind: I'm thinking, well, maybe this will be one of these courses that gets more and more popular as it goes on. That would be pretty funny.

So hi, Sherita, welcome. So we've got Jim, Mark, Sherita, and of course myself; that's for the benefit of the people listening on audio. It's interesting, with all these different recording formats, I always have to make sure that the different formats are supported. So, the recorder: it's an app that's produced by Google, and it's the reason I bought a Google Pixel 4 rather than a Samsung or an iPhone or anything else.

Because I knew that Google was including this app on the phone natively, and I knew that it would probably be pretty good, and it has turned out pretty good. You know the transcripts? Yeah. And what I like is, when it saves the recording, it offers the transcript, and then it offers to transfer it wherever you want, like to your laptop or to your desktop.

Yeah, or to a Google note in your Drive, with one click. Yeah. So it's very easy for me to make use of these recordings. The same with the audio as well: it saves the audio, and I can open it up right away in Audacity, and then save it from Audacity as an MP3 with all of the metadata, put the metadata in, for posterity.

And just for the record, we're up to 29 separate recordings. That's a rough count; it's plus or minus three or four, but my numbering is up to 29. But, yeah. And it's funny, because this process has been so easy that I've been forgetting that I'm recording.

And I've had more than a few occasions where I've created three or four hour videos. You know, I'd finished my presentation or whatever, and then the video is me eating, me watching Colbert on YouTube. It's horrible. So I'm trimming those, of course, cutting them down. Or, in one case,

Yesterday, I deleted a four-hour video because I had accidentally started it up again after finishing a presentation. So it was my lunch, and then I was gone for a while, and I came back and it was still going; it recorded a couple of hours of an empty room.

So this is all streaming out on YouTube, all streaming out on YouTube Live. It's okay, it just sits out there, and everybody can still see them all. But that video, yeah, it's gone now, I deleted it. Although, you know, I'm sort of, well, actually, it's not gone, because it was also being recorded.

I use something called OBS, Open Broadcaster Software, when I'm doing my presentations. In fact, I'm using it right now, even, so here you all are. That's what I'm seeing, and I can put myself into what I'm seeing: I can put myself in as a picture-in-picture, I can do TV style, "in the news today."

Full PowerPoint, and you see it. Like I said, I have a PowerPoint presentation in the can just in case nobody shows up. Here's the normal view that I use when I'm presenting PowerPoint. I even have one for the terminal, although I don't have a terminal open at the moment. And so, yeah, anyhow, I use OBS, and when I'm doing the presentation I turn on not just the streaming but the recording, because that's what I use.

When I'm not in Zoom, I use OBS to stream to YouTube. So I actually have a recording of that four-hour session on my computer, and a few of these others as well. Sorry, could this be an Andy Warhol strategy? I was thinking, yeah, like releasing a video: "Stephen Eats Lunch."

"Stephen's Empty Chair" as well. "Stephen's Empty Chair," yeah. "Stephen Watches Colbert," you know, and "Stephen Watches Warhol." Yes. And people watching that, that would be something. Yeah, I think you guys might. I have to confess, I do most of this at the end of the day, on my phone, and only this morning is the first time I actually had a keyboard to alt-click and drag a connection.

So that was the nursing one, just before this morning's session. But, yeah, I frequently end up having to rewatch part of it, and then, I was listening to an audio, it must have gone on to play something else, and I woke up just a little after midnight to Mark talking, from a conversation before.

So that's my process, trying to catch up to this with all the other things during the day. I want to thank my teaching and learning team, who shifted our staff meeting so I could be in this this morning. Oh, that's nice. So how have you been finding that graphing task?

Not fun. Not fun, because, for some reason, maybe I physically haven't learned how to connect properly, because I keep on trying to do it and it doesn't quite work. What's happening instead? Well, the thing that happens the most is I can't get a line.

Okay, yeah. And it took me a long time to figure this out as well: you actually have to click on the round node, or, you know, in some way highlight it, then press alt, then drag your line. It would be more intuitive without that, and if I can figure out how to rewrite the code to allow it, I'll do this.

I'm borrowing, of course, Matthias Melcher's code, because it's quite a complex piece of work, as I'm sure you can imagine. But if I can fix the code so that it detects when you're hovering over a dot, then if you just click your mouse and drag, that should create a line.

Right now, if you do that, it moves the dot, which is so annoying, right? So what I want is, instead of moving, I want it to draw a line. I should be able to just flip those instructions; they're in there somewhere, but I have to find the instructions first. But it's on the list of things to make better, because I think it's useful.
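The flip being described here can be sketched as a tiny decision function. This is a hypothetical illustration, not Matthias Melcher's actual code; the function name, the mode names, and the event model are all invented for the example.

```python
# Hypothetical sketch of the graph editor's drag behaviour, showing the
# "flip those instructions" idea. Not the actual code; names are invented.

def drag_mode(over_node: bool, alt_held: bool, flipped: bool = False) -> str:
    """Decide what a mouse drag should do in the graph editor.

    flipped=False models the current behaviour: a plain drag on a node
    moves it, and you must hold alt to draw a connecting line.
    flipped=True swaps the two branches, so a plain drag over a node
    draws a line and alt-drag moves the node instead.
    """
    if not over_node:
        return "pan"  # dragging on empty canvas just pans the view
    if flipped:
        return "move-node" if alt_held else "draw-line"
    return "draw-line" if alt_held else "move-node"

print(drag_mode(over_node=True, alt_held=False))                # prints "move-node"
print(drag_mode(over_node=True, alt_held=False, flipped=True))  # prints "draw-line"
```

Swapping the two return expressions is all the "flip" amounts to, and it would also help on phones, where there is no alt key to hold.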

It's a useful exercise, too. I've got no problem with the exercise; I have a problem with the execution. Yeah, right. I thought it really interesting to try and figure out. You're looking at that nursing program: nothing about facial recognition, say, nothing about surveillance, and I'm not even sure it mentions some of the other ones, but they're kind of implied there.

So, yeah, and that's an interesting thing to observe, because it shows that some of the associations we draw here are very much a matter of interpretation. You know, it's reading the code and seeing that content in the code, and different people might interpret it differently.

That's why I wish I had like a thousand people doing these graphs. And, you know, my intent is to keep this up over time and see if people do indeed create these graphs, or even try to just push it out into social media as a little micro-exercise, just into Twitter, you know, like: click here, do the graph, something like that.

I think that might be interesting, except most people use Twitter on their mobile devices, and I don't know if you can do the graph on a mobile device. I don't know either, actually; yeah, I didn't find any way to do that. I looked at it for a couple of minutes, actually, sitting here today. It's good work, you know. I'm thinking, because you made this course without registration.

I can take people from our college, or from our own learning, and look at a particular aspect when we're considering something about ethics. So I have something, and we can go there together. Absolutely, yeah. And that's one of the advantages of not requiring registration: anyone can jump into any part of the course, including the tasks, and try anything.

I'm just trying to see if it does work here. Is it too much for your ego, though, Stephen, if you don't know how many people, if you can't prove how many people were there? Yes. Well, actually, I did a presentation on setting this up without registration a couple of weeks ago, and it's one of the things I mentioned.

Yeah, it's kind of a, you know, a humility check. It keeps you humble. Now, all right, all I need to do is click on a node here. So I'll click on a node, and now, graph the issues, okay? So, all right, I'm in the graph, but, yeah, all right, first of all, it's tiny.

There's no alt button on a phone, I guess. Yeah. Can I even drag them around? Whoops, maybe I can't even do that. I can't even do a shift-enter when I'm texting on Google either. Yeah. Oh, and I can't... okay, there we go, I'm making it bigger.

Still working on this. It's riveting video, isn't it? I actually have a tool that allows me to share my phone screen with viewers online, so if I was desperate to do that, I could. Now I've just scrolled this, this is what I've got; now I've scrolled it right off, and all I have is a blank window, so I'm going to call that a fail for now.

But if I could figure out a way to make that graph, first of all, just show up full screen on the phone, that should be doable, and then fix that alt button thing, then you should be able to do it on your phone, and that would be pretty cool.

And then fire these things off on Twitter: say, okay, here's today's ethics exercise, try this out, and maybe collect a few thousand of these. I'm going to be on mute while I dictate some replies and text to a colleague. Oh, okay. Yeah, like that: dictate some replies. I love artificial intelligence.

I really do. I shouldn't say that, I suppose, with all the ethical issues, but, I mean, I'm using it half a dozen ways in this course alone, just to make it work. And, you know, I use it to create the content, I use it for photos, to clean up my photographs.

I guess I'm not using that in this course, but I use auto-translation. I was at one point just going to go through a list of all the different AI applications that I use, but I never did get to that; that's another exercise. Maybe I could retroactively add extra slides to the course for previous modules. We often don't even know that we're using AI.

Sometimes, a lot of times, we don't, I'm sure. You know, I'm certain that when we go into Twitter or Facebook or whatever and look at the feed, that's being created for us by an AI. There may be an AI in our car; certainly in my car there is, because I have adaptive cruise control, and I also have that thing that keeps you between the lines, which it does, but it does it like this:

It keeps trying to move to the edge, and it just weaves back and forth. That'll get you pulled over. Yeah: it was my AI, it wasn't my fault, don't blame me. The person you want to talk to is in Japan. That gets to one of your videos: who's to blame?

Who's responsible, who's accountable? Yeah. Who's responsible and who's accountable, or, I realize, two very different questions. And they are two different questions, and, you know, accountability is a tough one. I was sitting in on an IEEE special interest group, it was 7007 or something like that, but basically for ethics in artificial intelligence, and there was a person in that group, or maybe it was 7000.

I can't remember, I still have references to it. There was a person in that group who basically was trying to push the line that, at a certain point, these artificial intelligences are autonomous, and therefore the responsibility for what they do is separated from the person behind them, because, you know, the reasoning is, you can't be responsible for the actions of something that's acting autonomously.

And so for this particular committee there was a push to, you know, define what they mean by autonomous, define the scope of responsibility around autonomous agents, to basically make the AI responsible. And now we're getting too close to the singularity for comfort. I pushed back, as I'm sure you can imagine, because, yeah, I think that certainly for the foreseeable future responsibility

ought to reside in a human and not in the AI or the autonomous agent. Because otherwise that would allow you to, you know, put machine guns on those robot dogs, send them out into the community, they shoot whoever they will, and they are responsible, not you. That seems wrong to me.

And I think it seems wrong to most people, although apparently, based on the discussion in this group, not everyone. So, once again, we're running into this barrier against consensus. Some people think that, no, really, if it's an autonomous agent, you shouldn't be responsible for what the autonomous agent does. But there are plenty of parallels, right?

What about your children? Parents are responsible for what their children do. To a point, though, all right, and it's not equally applied in all societies. And, you know, the point of view of some people, and to a degree I agree with this, is the edict that children will be children, right?

Parents can't control children all the time, and children are going to do stupid things, and it's unreasonable to hold the parent accountable when a child does a stupid thing that the parent really had no way of controlling for or preventing, particularly if the consequences are, you know, really expensive. You know, a child who's just wandering around the neighbourhood, as children

do, gets into a bulldozer, turns it on, and plows through a house. It's hard to say that the parent ought to pay for the house. You would almost think of that as more like an act of God than an act of parental irresponsibility. It seems that way to me.

Anyways, I'm not sure there would be unanimity on that. I don't know, what do you think? I think Stephen's frozen. It's usually me that freezes. I thought I froze? I froze. Oh, man. Oh, it says my internet connection is unstable. That's annoying. Am I back? Yeah, yeah. Okay. How could my internet connection be unstable?

Well, it's probably downloading something in the background. I got Windows 11 over the last weekend, and not everything is configured the way it should be, but I'm not uploading anything. Thank you for leading the way on that and thinking about this, because I'm ignoring that blue button on mine.

Yeah. It's, you know, I mean, it's okay, but I am noticing some things, like the PowerPoint that plays audio on me. That was pretty weird; I noticed that yesterday, in yesterday's recording. Yeah, it wasn't just me, because I watched the recording afterwards. I kept stopping the recording, looking; yeah, there's no sign of where the audio is coming from, and you're not expecting it to be embedded in the slide.

Yeah, it's got to be the slide; it came along from another presentation. Yeah, just totally unexpected. I should have expected it, but I didn't. The oddest thing with Windows 11 is the keyboard. It doesn't change the keyboard, but, I think, when you type on a key on a computer system, what happens is your operating system

does basically what's called a keyboard scan, keyboard scanning. So it's basically constantly watching for you to press a key, and then it scans what the input from that key was, and then it processes that input. Windows 11 does a lot more processing of that input than Windows

10 did, a lot more, which I find a bit suspicious, because, you know, key logging and key tracking are the sorts of functions that sometimes happen in the whole key scan process. But the effect I've noticed is that it often misses when I capitalize a word. Because, you know, I type fairly quickly, and, yes, the shift key is down when I press the letter. But if it takes half a second, or a few milliseconds, to recognize that the caps key or the shift key is down, right,

I might press the shift key and press my letter, and then move on too quickly for it to realize that I've pressed the shift key when I've pressed the letter key. That seems to be what's happening. Very annoying. Okay. But to circle back, yeah, speaking of, well.
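The timing effect being described can be modelled with a toy event replay: if the system needs some latency to register a modifier key before the letter arrives, a fast typist's capitals get dropped. This is only an illustration of the timing explanation, not of how Windows actually processes input; the event format and the latency value are made up.

```python
# Toy model of the dropped-capitalization effect described above.
# Not the real Windows input pipeline; the 50 ms latency is invented.

def process_events(events, modifier_latency_ms=50):
    """Replay (time_ms, key, action) events and return the typed text.

    A 'shift' press only takes effect once modifier_latency_ms has
    elapsed, modelling an OS that registers modifier state slowly.
    """
    shift_down_at = None
    out = []
    for t, key, action in events:
        if key == "shift":
            shift_down_at = t if action == "down" else None
        elif action == "down":
            # The shift counts only if it was registered early enough.
            effective_shift = (shift_down_at is not None
                               and t - shift_down_at >= modifier_latency_ms)
            out.append(key.upper() if effective_shift else key)
    return "".join(out)

# A fast typist: shift goes down only 20 ms before the letter.
fast = [(0, "shift", "down"), (20, "t", "down"), (40, "shift", "up"),
        (100, "h", "down"), (160, "e", "down")]
print(process_events(fast))                          # prints "the"
print(process_events(fast, modifier_latency_ms=10))  # prints "The"
```

With a slow (50 ms) modifier latency the capital is missed; with a fast (10 ms) one, the same keystrokes produce the intended "The."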

Another way to look at it, yeah, is corporate autonomy. Yeah. Human creations, like artificial intelligence. Yeah. That raised a big red flag while you were talking, because, at least in the United States, corporations have rigged the legal system to absolve them of almost all responsibility. Yeah. That's true. And it seems to me that that would be the first thing tried by a corporation that is building armed quadruped robots:

I would reach for corporate law to absolve myself. That's a really good point, and not very comforting. Yeah. And actually, if you think about it, technically you could create a corporation that actually is your autonomous agent. That's what I was thinking. Yeah, each agent would become a corporation.

Yeah, with all that implies. Yeah, and really the only barrier to that is the cost of incorporating, and you just go to Delaware; it isn't that expensive. The thought came, and went, but, yeah, that's not very comforting, and I find it changes the whole way I think of it.

But it is so powerful. And, interestingly, as was pointed out, at various times it can consider so much more data than even a large think tank of humans could, and correlate it accurately, that there's no way people are going to abandon it just because, oh, it might get it wrong.

There might be some collateral damage. To bring it back, Stephen, I was going to ask, in that discussion about who's responsible, and humans are not responsible for AI, was there any talk of consequences? I'm not sure what you mean when you say that. Okay, so if I do something wrong, if I speed, the consequence is I get a speeding ticket.

Yeah, I can't say it was my self-driving car that did it. If AI is going to be, as I wrote down here, "AI responsibility is separated from the person behind it," then if the AI has that responsibility, that accountability, was there any talk of what the consequences would be for an AI that caused harm?

There was none. Presumably the consequence would be that you shut off the AI, so, in essence, the AI death penalty for every offence that exists. But, other than that, we'd just, what, reboot the AI? Because, yeah, well, then replication number seven is a different corporation. Yeah. That wasn't even a thought that I'd had.

I've been an advocate, I'm in the United States, in California, I mean, and where I grew up, and since the 70s I've of course been against the death penalty, except in one instance, and that's for corporations. Since corporations are a human creation, I have no problem with the death penalty for corporations: they're created by law, and they can be killed by law.

I disagree with killing humans by law. So that's always been my bright line, and that position would be consistent with, yeah, corporate AI. So that was pleasing, that consistency, but it's still terrifying in this application. And I'm just imagining that there are some of those quadruped robots patrolling somewhere in the world

right now. I can't remember where, but I'm certain that it was at an arms fair, and I'm fairly certain that they're patrolling somewhere. And, like Jim said, there's no going back. And the thought also, the thought is, the law is always behind the technology, and as the technology accelerates, the law falls further behind.

Yeah. So this looks like a real legal problem, then, not merely an ethical problem. Interestingly, a concept that comes up not only in ethics but also in law is the concept of intent, and that's used to distinguish an act of malice or malfeasance from an accident.

And we can imagine some of these. We'll use robot dogs with guns because it's a good test case, unlike Sherita's dog, which is unarmed, we hope. We can imagine such an autonomous agent accidentally shooting someone; there's a wide variety of ways that could happen. It could be hacked, in which case the responsibility lies somewhere else.

We don't know where. It could just bump against something and accidentally go off; I mean, that happens to humans all the time. It could be aiming for something, you know, aiming to disarm the opponent, but its aim isn't very good. Or it could have just been deployed carelessly, without any intention of

killing anyone, but, you know, they didn't really take precautions, and it did. How does that affect our considerations?

At the risk of being targeted, I would say this: I believe that if you have a corporation that produces weapons, mm-hmm, the weapons have a purpose, and it's to kill. And I don't understand why corporations that produce weapons are not responsible for the products that they produce.

I've never understood that; it doesn't make sense. Again, that's interesting, in the sense that such a principle could be applied to weapons manufacturers today, such as gun manufacturers, and yet I haven't actually heard of a case in which a gun manufacturer has been held accountable for a gun death. I've heard of people suggesting that as a means of addressing the problem of gun deaths.

But I haven't heard of a successful action, nor of anybody attempting one. Yeah, in the US, again, I'm on the left, so, yeah, I'd like an alternate route around that. Yeah, for sure, the NRA is a very powerful lobby that would fight it. Yeah. And against that, my mind goes to the Winchester mansion, where someone did, yeah, take on, you know, assume some accountability, some responsibility. But there's also a code in law, and I know it applies in Canada and it probably applies in the US as well, under the heading of what they call man

traps, such that if you set a man trap, you know, generically, a human trap, on your property, and it kills someone, you are legally liable for that death. And that's why you can't set up booby traps in your house. Well, you can, but you shouldn't. But what about some of the other ethical values?

Let's see now, because there's more than just accountability that we need to consider. Let's see, what have we got here: pursuit of knowledge. Do you think that's an ethical virtue or an ethical value?

Kind of a puzzler; I'm having a hard time connecting it to ethics. Unless, you know, there could be ethical motivations for the pursuit of knowledge and unethical motivations for the pursuit of knowledge. Interesting. Well, I mean, think of it in the sense of an ethical code, and the ethical code is describing what is ethical to, say, a research professional.

And the thing that is ethical, or what makes it ethical, is that it is in the service of the pursuit of knowledge. And this comes up on research ethics boards sometimes, where there needs to be a purpose to the research that's being undertaken. You know, you're not just asking questions of people or taking samples or whatever just for fun; you're doing it in the name of the pursuit of knowledge, and that's what makes it good.

Do you think there should be a special case for actions that are undertaken in the pursuit of knowledge?

As opposed to the pursuit of a cure? Well, or as opposed to curiosity. For example, in the early days of the World Wide Web, there used to be random web page browsers, they were actually on search engines, and you'd just click on the button.

It would send you to a random homepage, and I would just sit there clicking that button over and over again, looking at all of these different homepages. So I'm collecting data, arguably, right? But I'm just doing it for fun, you know, for jollies. Now suppose there were ethical implications to me looking at that data.

Suppose, instead of looking at web pages, I'm looking at individuals' personal health data, and I'm hitting that button and looking at one person's health data, then another person's health data, and I'm not trying to find out anything, I'm just looking at it for fun. Do you think that's something different from looking at individual health data for the purposes of research, or pursuit of knowledge, or discovery of new drugs?

Well, when you put together a research proposal, one of the things that you look at is the reason for doing it. Is the reason for doing it, you know, a benefit to somebody? Yeah, right. So that's one way, you know, that's what happens in terms of research.

Mm-hmm. You know, does research get done just for the hell of it? Sure it does, but you don't put that in the proposal, you don't tell the ethics board that. Yeah, that's pretty interesting, because I agree, and a lot of research is curiosity-driven. Oh, absolutely. You know, my advisor in my master's program did her PhD research on how researchers end up doing the research they want to do, no matter what they say.

Yeah, they find a way to do the research that they want to do. So curiosity, you know, we talk in education about curiosity being the great motivator that gets people engaged. And so, curiosity, pursuit of knowledge: what's the difference? Sherita, I think I may have cut you off.

No, no, it's fine. And that brings up ethics at a couple of levels: there's the ethics of the curiosity, then there's the ethics of gaming the proposal to accomplish the goal despite the ethical limitations, and possibly committing fraud going forward. I was just looking at a case.

So I think it's on point here, this morning: there's a doctor in China, an American-educated doctor, He Jiankui, my Chinese is terrible, who's in prison, okay, in China, because he genetically modified embryos that were HIV-infected and then implanted them in humans. And there are two children in China that were genetically modified before they were born. And he faked his proposal to get it through the review board, and actually what he's imprisoned for is fraud, in that the signatures on the approval were forged. So that's what he's in prison for.

But what he's done is, there remain these two humans in the world, and he violated all the ethical principles. Yeah. Yeah, it's interesting that they thought the fraud was the thing that they should imprison him for. Well, we get into the law, right? Yeah. There's no law against releasing genetically modified humans.

Yeah, there ought to be, actually. So they got him on what they could, but, you know, in another year he'll be back out. He certainly was curious. Yeah. I think, you know, the other side of that, too, because that's the pursuit of research, and the other side of the pursuit of knowledge is for the purposes of education, for the purpose of teaching people.

And I think that people do draw a line between what is allowed for the purpose of education and, again, what is allowed just because you think it's fun. Copyright infringement is the classic example: it's explicitly stated in US copyright law that using a work for educational purposes is one of the purposes that can qualify your use as an instance of fair use, and a similar provision in Canada applies to fair dealing. But it's not so clear that this applies to all things.

For example, saying something offensive in your class maybe in the past might have been justified for the purposes of education, but it has recently resulted in professors or teachers being suspended or fired, because it's no longer viewed as acceptable for the purpose of education. And so, you know, it's interesting here, and we go back to intent, right?

The purpose of your action does seem to play a role in the moral value of your action, and we see that a lot throughout ethics. Although ethical codes, just thinking about this out loud right now, don't seem to clearly draw out that distinction between the purpose, or your intent, and the result. In fact, at the end of the video on values, you know, I had just done this hour-long video, and I'm sitting there reflecting on it live while I'm doing this.

Because, you know, I realized, or came to the thought, that the ethical codes, the way they describe what is ethical and what is not ethical, are very focused in terms of outcome and process, as opposed to, say, intent. And it felt at the time, after spending an hour going through all of these things that the ethical codes found valuable, worthy of value, that it was all very technical and mechanistic.

And so that approach was kind of technical and mechanistic, as opposed to perhaps a non-technical or non-mechanistic approach that might take into account intent, might take into account feelings, although it's hard to explain how you can take that into account. There are a couple of things: like, when we're talking about non-maleficence, what counts as a harmful act isn't just described in the act itself, but in how people react to that act, how people feel about that act. What counts as harmful, you know, it's not just your opinion as to whether something is harmful.

It's also the opinion of whoever you've harmed, and different people feel harmed by different things. And that seems to be more on the human side of it. That's what I felt, anyways. We've been doing this ethical codes thing now for four weeks; do you feel, did you sense, that distinction coming out?

I was surprised, I have to say, this week, that the ethical codes were restricted, that's not the right word, but that they were from professional organizations and limited, because I take a broader approach. And so what I'm wondering, hoping, actually, is that in the future we can take your graph and the way you distilled the use cases and applications, and I'm wondering if we can distill out

more of the values and mores that obviously underlie these professional codes, because that's what I found missing in all of them. Yeah, they've been very technical. They're really strict in their organizational definitions. And what I found missing were values. And you can say, well, they're all Western, right? They come from this association or that association, or whatever, you know?

So you can say, well, they're based on Western values, and I think that's probably true. I do want to make my list of codes more broad. I did try to include international sources as much as I could, but my knowledge of them was obviously limited. But these core values and priorities that I talked about in that video, that's what these codes contain.

That's what they say are the values underlying these codes. So if you were looking for these underlying values or mores, this is what an analysis of these codes finds, you know. Now, I have another video upcoming on the bases for these values and principles, right?

So we've got this long list of values, and what sort of reasoning underlies that list of values? And so I look at things like universality, the idea that a principle should be universal; fundamental rights, natural rights; or, for example, fact, which is an interesting thing. You know, it's a fact that if you blow up an atomic bomb in a city, you will kill most of the people in the city.

Simple fact. The question of balancing risks and benefits is something that comes up a lot. Social good, or social order, comes up, but not nearly as much as you might think. It comes up in the sense that the professions believe that the practice of the profession contributes to the social good and the social order.

So it's sort of like a "what's good for us is good for society" sort of approach. Fairness comes up a lot, and we're going to talk a lot more about fairness. I actually listed it as one of the values, but it's also a value underlying a lot of the other values.

Another factor is epistemology: what we can know, what we can reasonably expect to be the case. You know, you can't be responsible for a bad consequence if you could not have predicted that this would be the bad consequence. It's like the butterfly effect, you know: you can't be held responsible for something

that's not reasonably expected. You kill a butterfly and a civilization falls a few years later; you weren't responsible for the fall of that civilization. I think trust comes up a lot, you know: mechanisms for obtaining trust, keeping trust, the need for trust for society to function. And then, finally, defensibility:

can you make an argument for this value or that value? So, those are the things that underlie these ethical codes, and to the point of studying these codes themselves, in and of themselves, it doesn't get any deeper. And my own considered conclusion, and it only gets reinforced

the more I look at these codes, is that there is no set of values, or even bases for these values, that is common across all of these codes, or for that matter common across society, much less common across global society. I mean, even inside fairly cohesive societies we don't see this commonality, and I know a lot of people say yes, there is this commonality, but if you actually look at things like ethical codes, it's not there.

That's why I put in that Fjeld study. I don't know if I'm pronouncing the name properly, but I really, really wanted people to see that, because this is one of those things that suggests that oh yes, there is a commonality. And in fact, I'm going to share my screen here, right screen.

There we go. So this is that analysis, by Fjeld, or maybe it's pronounced "yeld," I'm not sure, and some others. So you have all of these ethical codes around the outside, and then the key themes, and they go: human rights, human values, professional responsibility, human control of technology, fairness and non-discrimination, transparency and explainability, safety and security, accountability, privacy.

If you look at this chart, it looks like, oh yeah, everybody agrees with these things. Now, they've only studied codes of ethics or statements of principle for analytics and artificial intelligence, so that's one thing, and that's kind of what prompted me to look at other disciplines. But if we scroll down a bit: these are the ethical codes that they studied.

So, a pretty respectable list, and many of these are in our list as well. But if we come down here and look at one of them, accountability, which is something we've already discussed: the consensus, as expressed by these numbers, doesn't exist. Can I make that bigger?

Sure I can. So look at this, right? So for accountability: verifiability and replicability, is that part of accountability? 36% say yes. Impact assessments, is that part of accountability? 53%. Environmental responsibility, only 17%. Ability to appeal, which we hear about a lot, 22%. Remedy for automated decision, covered in only 11% of these ethical codes. Liability and legal responsibility, which we talked about, 31%. And even just accountability as accountability:

only 69% of these codes. So there is no consensus. Even in the places where they say there's consensus, there really is no consensus. And those sorts of results, I mean, they go through all of those areas, you know, all of those values, areas of interest, and the charts are the same for each one.
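To make the arithmetic behind that kind of chart concrete, here is a toy tally in the same style: each code either mentions a sub-theme or it doesn't, and the coverage percentages fall well short of consensus. The codes and coverage flags below are invented for illustration; they are not the study's actual data.

```python
# Toy illustration of the consensus problem: even when most codes nominally
# endorse "accountability", the sub-themes underneath it diverge sharply.
# (These codes and their coverage sets are made up for illustration.)
codes = {
    "Code A": {"accountability", "impact assessments"},
    "Code B": {"accountability", "liability"},
    "Code C": {"impact assessments"},
    "Code D": {"accountability", "ability to appeal"},
}

sub_themes = ["accountability", "impact assessments",
              "liability", "ability to appeal"]

# For each sub-theme, count how many codes cover it and report a percentage.
for theme in sub_themes:
    covered = sum(1 for themes in codes.values() if theme in themes)
    pct = 100 * covered // len(codes)
    print(f"{theme}: {pct}% of codes")
# accountability: 75%, impact assessments: 50%,
# liability: 25%, ability to appeal: 25%
```

Even this tiny example shows the pattern from the chart: the headline value looks widely shared, while each concrete interpretation of it is covered by only a fraction of the codes.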

You never find a consensus. It looks like consensus if you talk about it at a high level, you know; you just use words like freedom, responsibility, accountability. Yeah, everybody loves accountability. But when you drill down to what it means, there's no consensus, and that's why I structured the course the way I did. All right: we looked at all the applications for AI and analytics in learning.

We looked at all the issues that have been raised from different sources, and now we're looking at all these ethical codes, and it seems just painstakingly and mind-bendingly dull to go, you know, this thing and this thing and this thing. But if you study it at that level, the conclusions that people have drawn looking at it at, you know, a more general level

just turn out to be false. I think that's a really important thing to say, personally.

And it raises a hard question: what do we say about ethics when there's no consensus on ethics?

Okay, how do we solve the problem of unethical AI, of, you know, unethical practices in learning and learning analytics? That's pretty much what I'm trying to address in this course, that question. But I figured it's useful to know what all these issues are and what all these values are anyways, right?

I mean, I think it is for us, certainly; having this list is useful. And I think it's useful enough that I'm setting up the course in such a way that it will produce JSON-formatted data dumps of all of these things, which other people can use in other applications or other courses.

So if you want a list of all of the values that come up in ethical codes, or in ethical discussions generally, access this JSON document and feed it right into your application: open data. But I think the questions raised are significant. Thoughts on that? I don't know that there can be any one answer to that question.
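As a sketch of what consuming such a dump might look like: the file shape, field names, and entries below are invented for illustration, since the actual export's schema isn't specified here; the point is just that a JSON list of values can be parsed and reused in a few lines.

```python
import json

# Hypothetical shape for an open-data dump of values from ethical codes.
# The field names and entries are assumptions, not the course's real export.
sample = """
{
  "values": [
    {"name": "accountability", "related": ["responsibility", "liability"]},
    {"name": "privacy", "related": ["autonomy", "consent"]}
  ]
}
"""

data = json.loads(sample)
for value in data["values"]:
    # Each entry carries a value's name plus the related values it touches.
    print(value["name"], "->", ", ".join(value["related"]))
```

Because it's plain JSON, the same dump could feed a course site, a visualization, or another analysis without any special tooling.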

It's something. I think the process of looking at all these things is the value; coming up with an absolute answer, I don't think you can do that. Yeah, I think I agree with both parts of that. Yeah, it isn't a yes or no, it isn't either/or, it's something else.

Okay. You mentioned a little bit of this way back at the beginning, and you mentioned a mesh. So the discussion is the mesh, right? Right. And then,

I don't know if you figure it out individually, or you figure it out contextually, but you come up with some kind of qualitative answer for what you're doing at the moment with what you're using. Which again, you know, I guess is human.

Now, I'm inclined to agree with that, although of course fleshing that out is the hard thing. Yeah, but this is the end of our time. Thanks, Jim, and goodbye. This is the end of our time, but Mark? Okay. So as a sociologist, then, you've convinced me that there's no way to control the game here, that unethical behavior is inevitable.

Then as a sociologist, now I just want to talk about mitigation, how to remove the harm, and yeah, how do you escape the harm? Yes, AI is in the world, as are all those technologically altered humans. And so now I just want to mitigate. You've convinced me, you know, about my life and everybody else's from this point forward.

I'm convinced that there will be an AI, a person, that attacks; so, yeah, I won't use that word, but there will always be a completely unethical and dangerous being there, forever. And so then the discussion for me, the one I want to have with you, is: how do we protect the common minds?

How, I would say, or is this not achievable using regulation? And I think you've convinced me that's impossible at this moment. Okay, I'll change my mind later, but I'd say at this moment, given the application of chaos theory, and these ethical codes, and what I know about human behavior, let's say at this moment I'm convinced that extremely unethical things will take place, whatever we do.

And so then the discussion becomes, yes: can it be regulated? Apparently not. And yeah, because some people don't even think of it as unethical; on the internet there will be bad actors. I don't think that's the same. Yeah. So this is the pivotal turn of the course, I think. All right, I think we've made the argument, and now the question becomes: what do we do about that?

Yeah, right. And starting with next week, the process is a bit different, where now we're going to work our way carefully, painstakingly, step by step, because that's the level of analysis in this course, toward a solution. You know, toward something we can say is reasonable and addresses these issues in a way that is satisfactory, at least to us.

I think that's possible. Now, the problem has been set out, and it's pretty much intractable: we can't agree on ethics; there's no way to define even the underlying values. Meanwhile, this technology is taking off, and there are robot dogs with guns out there already, and it's only going to get worse.

What do we do? So we'll leave it on that note. You had one more thing? At least in translation: "What must be done?" Yes, "what must be done" here. Yeah, very good. That's where we are. Yeah. So next week, we begin to find the answers. Okay, we're going to call it there.

Hopefully we've left our viewers in suspense here. Yeah, yeah. All right. Bye, everyone. Bye.