Module 5 Part 2 - Introduction


Unedited audio transcript from Google Recorder, which may include AI-generated profanity.

Okay, so we're just a couple of minutes past the set start time. I've got the live session up and running for Ethics, Analytics and the Duty of Care. We are beginning Module 6, which is focused on the duty of care. And I've got Mark with me here, sort of, in the live Zoom meeting. I say sort of because I see his picture, but I haven't heard his voice or seen his face beyond the picture; but I trust he'll make it in when he can.

And now his picture's changed to blank purple. So Mark is obviously having issues on his end, but we can work our way through that. Oh, there you are. Are you not hearing me? Let's see, why would that be the case? Okay, you hear me; you can't speak. Is that right?

Let's go to the chat.

And because I'm notoriously bad at interpreting sign language: you hear me okay. Good, awesome. But I can't hear you, and that is probably not your fault. Oh yeah, my... there we go, let's try that. I hear you beautifully now, okay. Yeah, I thought it was me.

No, it wasn't you. Every once in a while my sound resets to off; I don't know why, but there you have it. So, how's it going? I don't know, it's been exceptionally busy. Hmm. And I see you snuck another video in on Saturday. I just saw it.

I mean, I just saw that it was there, so I haven't gotten into it yet. Yeah, and I'm now three videos behind; that's how many I need to do to finish off the previous module. There's a lot of content in that module, and so you mentioned perhaps pushing it a week. Yeah, I think that might be a good idea. Let's make that decision right here.

Thanks. So you don't think it'll bother other people? I was worried that extending the course a week would extend it beyond what they had committed to. Of course, that's mostly you, and... yeah, yeah, we can have a democratic outcome. But I agree; that was so much information last week.

Yeah. And admittedly, you know, I only spent an hour or two; usually I spend more. Yeah. But now you say you have three more videos. So to me, that says this is the proper time to improvise. Yeah. And I don't have a problem with that either, because that would allow me to catch up.

Yeah. And, you know, it's the mid-whatever break. Yeah, that's it. Yeah, yeah. We're not rational animals, but we are rationalizing animals. Yeah. And I'm pretty happy to rationalize, and that would be good. So let's see what Sherida says if she pops in; she's normally a Friday person more than a Monday person, but I don't think she'll object.

I don't think so either, and she's retired, so... whatever that means. And especially since I really would like to nail down this section on approaches to ethics, because there's stuff still to say there that maybe we haven't gotten to yet. Yeah. Yeah, this is the turning point in the course, you know. And, you know, I'm trying to sum up all of ethics in a week.

Yeah, that's a bit of an overstatement, but... So, for those who are listening to the video, here's what we've done so far. We introduced the topic of approaches to ethics, and to this point I have done three videos. The first one was on virtues, that is, virtue ethics: the idea that ethics is about developing the best character that you can, whatever that amounts to. The second video was on the concept of duty, which is a pretty core concept in this course, given the name of the course.

And we're actually going to revisit the concept of duty a bit in the next module, but this was the treatment of the ethical concept of duty. And then the third one, which I snuck in, as you say, on Saturday, was on consequentialism, and that's mostly utilitarianism,

but more broadly the idea that the ethical value of an act is based on its consequences. And yeah, both duty and consequences ended up as hour-and-a-half-long videos, and that was the minimum I thought I could give them to give them a proper treatment. And even then, you know, I look back on those videos and I say, well, I missed this, and I missed this, and I missed this. But what are you gonna do?

Especially since I didn't want to just give, you know, the standard philosophy-text version of these theories. I also wanted to give them a contemporary perspective, linking them where I could, for example, to the stuff that happens on the internet, some modern-day tropes and memes, and of course the topic that we're looking at: ethics and AI.

And I think I've done that with those. So, what I need to do next... so yeah, I think we'll spend this week doing that, and also I can upgrade my code a bit, because I know Sherida especially wanted to see the explanations along the sides of the grid.

So this will give me time to upgrade that and get it working nicely. But the next video I need to do is on social contracts, which is more of a political theory than an ethical theory, but there are huge ethical implications, and there's certainly a not-small school of thought that says that ethics are determined by social contract. Then, a video on meta-ethics.

Or, as my old philosophy professor used to say, "metter-ethics." I can never get that out of my head. He was from Britain, right? And the British always pronounce their A's with an R... well, not always, but certainly at the end of a syllable or word.

So, you know, the Queen always says "Canader" instead of "Canada." And it's one of the things the Queen does that drives us nuts, because there's no R in Canada. But anyway, meta-ethics is basically the study of what it is, exactly, that grounds our ethical theories.

You know, and I think a lot of... well, it's about 50/50: some ethics courses do meta-ethics first, but it's so abstract and so theoretical that it's really hard to get a handle on, right? So a lot of them do it after talking about the individual theories.

Because by the time we get to meta-ethics, you've got four candidates, basically, as approaches to theory, right? Virtue, duty, consequentialism, social contract, and you can probably throw in a few outliers there as well. And so the question is, well, how would you ever decide between them? Or, you know, if you want to mix and match, on what basis would you mix and match?

So maybe meta-ethics is kind of important. But then the key part of this module is the one I'm calling "the end of ethics." And this is one of those double meanings for words that I love so much, because we can say "the end of ethics": what is the goal of ethics?

But also, "the end of ethics": is this the end of ethics? And I think there are elements of both in that. And that'll wrap us up, and that puts us in a good position to do the stuff on the duty of care, which is a whole, completely different approach.

And that's why I wanted to do the course: you know, it takes this whole discussion of ethics and turns it on its head. But to see how it turns it on its head, we need to go through all this. This course is something like 90% preliminaries and then 10% of:

okay, now that we've got all the preliminaries out of the way, let's see what work we can do. So, yeah, that would give me three videos to do for this week: Tuesday, Wednesday, Thursday, or something like that. Usually I'm falling into a pattern where I allocate Wednesday as the day when I do programming and the other days as video.

So it might be Tuesday, Thursday and Friday, but either way is fine. So, yeah, that way you'll get your content in the course, rather than, say, at the end of the course going, you know, "I need to do those three videos, but all I have left is this."

Yes, that's what happened today. So I think this would be good. And then, you know, for those of us that didn't study philosophy, it actually catches us up, yeah, on the fourth approach, so that we can then participate in the meta-ethics discussion. Exactly. And that's part of the purpose of this, because most people in this field haven't studied ethics, or philosophy in general.

They just haven't. So in most cases they're working from their intuitions about ethics, and their intuitions are fine. But, you know, there are all these alternatives that maybe they haven't thought about, or maybe haven't even been exposed to. And so, yeah. I did spend a lot of time on ethics in my 20s,

yeah, as a young person, you know. And, you know, I surveyed the religions, and I was from a non-religious family, except my grandfather was a preacher. So, one month a year, I lived in a very religious household, right? Because we visited them for a month every year, right?

And the other eleven months, it was in a household where, yeah, we might go to church at Easter, you know. Yeah. So, cultural Christians. And, as an aside, I grew up in a Quaker town, which I just returned to, and for my other class... there's my phone. Yeah.

For my other class, the college writing class I'm taking, yesterday I attended my first Quaker meeting. Oh yeah. So, it's very cool. Anyway, I don't want to get distracted. So the Quakers are meeting by Zoom, yes, even them. Yeah, yes. And to get there you walk past a prosperity gospel church, right?

It has been meeting in person every day, and they have events, and they bought an old business building, it's like... yeah. Originally I was gonna go there; I had checked them out for my class, because it's the closest church and there's so much activity, right? Then I checked it out and realized it was prosperity gospel.

"God wants you to prosper," yeah, and the way you prosper is through your church and its community. Yeah. And they put five giant screens behind what used to be an altar; now it's a stage, you know. Anyway, so I went to the Quaker meeting. Yeah, not really distracted, though. So: in my 20s I was aware of the Quakers, but they were conservative, old, white Republicans. And this meeting was, I think, actually in Nixon's hometown.

It was during Nixon's presidency that I was looking at all this, after avoiding the draft, not dodging. But yeah, I love going to the ground and looking at a different thing. So, you mentioned Lao Tzu too; I looked at that. But, you know, Buddhism attracted me more, so I stuck with Buddhism for, let's count... decades now.

Yeah. As a philosophy. I never joined any, yeah, sure, sangha. But I have studied it; it's been my moral teaching, my... sure, whatever. And the most attractive feature is the very first thing you learn: don't take things on authority; check them out for yourself. Yeah. There's a word I hate so much I couldn't even say it.

And so that, you know, that attracted me in the beginning, and I stuck with that; and that's the primary teaching: don't believe me, check it out yourself. Yeah. And so there was a point here... oh, and so, I spent a fair amount of time on it compared to most Americans, and compared to most college-educated people.

Yeah, right, there too. And I think that what you're calling intuition, or whatever precedes it... all right, I keep pointing to class as a possible distinction. Yeah. And now I'm sort of clear on why I think it's important to bring that up. And now I see that it is, because with different starting points we're obviously going to end up with different ethics.

Yeah, and I wouldn't deny that. The only caveat that I would throw into that is that class is just one of many starting points. And you've mentioned the whole intersectional discussion of ethics in the past, and that's to come up again in the next module as well.

Because, you know, all of these issues, all of these backgrounds, come into play. In care ethics, background really is critical and important, unlike the systems of ethics that we've been talking about so far, where the suggestion is: well, there's one system of ethics that applies to everyone, and basically we just need to figure out what that system is. And, you know, I mean, it's a pattern in reasoning generally, in thinking about a lot of these subjects generally: the idea that there should be one X for everyone, where now you name your X. You know, one tax system,

I mean, one... you know, one law, one web, one diet, yeah. Yeah, one... You know, or even a limited range of allowed options, you know, like clothing: people should dress a certain way, etc. And that still exists. You know, even before this session I had to go through the thinking: well, do I keep wearing this shirt?

You know, seeing as I'm on video and everything, or do I put on the nice shirt? And usually I put on the nice shirt, because there are standards. But today I decided, no, I think I'll wear this shirt, because I like this shirt and I'm comfortable in it. And, you know, I'm not going to be impressing anyone with my appearance anymore.

I've long since passed that stage of my life; not that I ever impressed anyone with my appearance. But it's interesting, you know, when we think about background and experiences. You know, the ethics of beautiful people are different from the ethics of non-beautiful people. The ethics of men are different from the ethics of women, which is a key point in the next module. The ethics of the rich, insofar as they have any, are different from the ethics of the poor, or the working class,

as you've pointed out. And we could go down the list, you know. In some of the work that I did on care ethics, I also looked into indigenous ways of knowing, and indigenous ethics, which again constitutes another range, another classification, of ethics. And, you know, I don't feel like I'm in a position to say about any of these that they're wrong.

Even the rich person's ethics, because, you know, if you're a rich person, you're seeing the world in a certain way, and in many respects it's not your fault that that's how you see the world. How could it be? You know, I mean, on the one hand I say, you know, wealth

very often is arrived at on the basis of luck, either that or larceny. I should write a book: "Luck or Larceny: How Wealth Is Created." That might be a great title. But, you know, I'm quite willing to criticize the larceny. But the luck aspect? I mean, you know, what do you do, right?

You know, somebody was born early on... like even Donald Trump, given, I don't know how many millions of dollars by his father, right? 400 million, okay. Now, admittedly, he lost it all, but, you know, it wasn't his fault that he was given 400 million dollars. And, you know, most of us, I think, would take the money.

And I don't think, you know, from the perspective of Donald Trump, I don't think we can say it was wrong that you took 400 million dollars. We could say, in a general sense, maybe, that it's wrong for any one person to have 400 million dollars. Yeah. Well, yes, yeah.

But, you know, even then: why do we think that applies to everyone? You see how easily we fall into this discourse, right? "Nobody should have 400 million dollars." And, you know, you could argue it's just that. Or, to move away from the personal: I don't think the Pope should have control of as much wealth as the Catholic Church has.

Again, here we have a case of a small number of people in charge of a vast quantity of wealth, and in so doing tipping the scales, if you will. And that has political implications. That's a point I made at the end of my consequentialist ethics video, because, you know, consequentialist ethics says that the goodness or badness of an act or a rule lies in, you know, its consequences.

Well, rich people produce more consequences than poor people; it's just a fact of their wealth. So this kind of theory sort of automatically gives rich people more ethical standing, and that doesn't seem right to me. Now, we could calculate where ethical responsibility lies, but nobody talks about that. Well, but that's, you see...

responsibility isn't part of consequentialism, right? That's a totally different theory, you know? Okay. Yeah. I mean, because in a sense it doesn't really matter, say, what percentage of his wealth Jeff Bezos spends, right? What matters is the result, you know. And we can say he has an obligation to do good and produce good consequences.

But does he have an obligation to produce more good consequences than you or I? He could do that with his loose pocket change, right? He could do that with money he's lost underneath the cushions of his couch. Yeah, he could end homelessness. Well, he has so much more than you or I.

More than you or I could do, yeah. You know, there's hundreds of millions of dollars... And I saw a document from the United Nations this morning saying that to end illiteracy around the world would cost 17 billion dollars, and that that would produce literacy for something like 770 million people, in that range.

And I don't know if I have the exact number of people who are still illiterate. So, you know, a bit less than one seventh of the world population, maybe one eighth of the world population, right now are illiterate. 17 billion dollars to solve that. So he could... it wouldn't really put a big dent into his fortune.
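As a quick check on the figures just quoted, here is the arithmetic in a few lines of Python. The 17 billion dollar cost and the 770 million people are as stated in the conversation; the world population of roughly 7.9 billion (circa 2021) is an added assumption.

    # Back-of-envelope check of the quoted literacy figures.
    cost_usd = 17e9    # quoted UN estimate to end illiteracy worldwide
    people = 770e6     # quoted number of people who would gain literacy
    world_pop = 7.9e9  # assumed world population, circa 2021

    print(f"Cost per person: ${cost_usd / people:.2f}")            # roughly $22 each
    print(f"Share of world population: {people / world_pop:.1%}")  # roughly 9.7%, closer to a tenth than a seventh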

He could end illiteracy on the planet. And the United States government just increased the defense budget by more than that over what the Department of Defense requested; I mean, Congress just gave them more money. Yeah, yeah. So, yeah, it's all relative. Yeah. So the thought crossed my mind:

if Bezos gets rich enough, can he become a country too, and join the United Nations, like the Catholic Church? Well, no, no. But again, you know, if we look at illiteracy: you or I could... well, let's give ourselves a lot of capacity and say that over our lifetimes we could help a hundred people become literate, you know, taking them from illiteracy to literacy.

If we really focused ourselves on that... I mean, it would probably have to be our life's work, and we would think that's pretty ethically noble, right? We're definitely producing good consequences. So on what basis do we require more from Jeff Bezos? Well, there would still be 769,999,900 illiterate people. But he didn't cause that, right?

Yeah. These people were illiterate... well, I mean, not before he was born, because they were born after he was born, but you know what I mean, right? It's nothing to do with him, it's unrelated to him. So if he goes out and makes a hundred people literate, he's done as much for the world as you have, right?

That's what consequentialism tells us. Okay, so I still haven't seen... I have to watch that video. Yeah. Well, it basically doesn't say much more than what I've just said right now. It says other things too, covers other aspects. But this is the thing that bothers me with consequentialism, right?

Any sort of progressive contribution to society, of, you know, of the sort that, you know, the right would say is basically theft, or, you know, "taxation at the point of a gun," or however you want to phrase it, right? Or just unfair. I mean, the right wing typically calls for a flat tax and feels that that's pretty generous, because the flat tax is a percentage of income.

But we'll leave that aside. Any progressive allocation of responsibility for solving a problem like illiteracy isn't based on consequentialism; it's based on something else. And that's what creates the difficulty. Because, you know, I was reading Matthias Melcher, who said, you know, I don't really see what's wrong with consequentialism.

And at first blush, there's nothing wrong with consequentialism: you produce good consequences; that's ethically right. The problem is that it's an accounting kind of theory, you know? And it's not based on percentages, or, if it is, how do you justify that? How do you justify asking, you know... or, you know, maybe there's a way of counting.
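To make the "accounting" worry concrete, here is a toy sketch in Python; the agents, the numbers, and the two scoring rules are invented purely for illustration and are not from the course materials. It shows two ways of counting good produced, which rank the same two agents in opposite orders:

    # Two invented agents and two invented ways of counting the good they produce.
    agents = {
        "billionaire": {"capacity": 100_000_000_000, "good_produced": 3_000_000},
        "teacher":     {"capacity": 50_000,          "good_produced": 30_000},
    }

    for name, a in agents.items():
        absolute = a["good_produced"]                  # raw consequences
        relative = a["good_produced"] / a["capacity"]  # consequences relative to capacity
        print(f"{name:12s} absolute={absolute:>9,}  relative={relative:.3%}")

    # Absolute counting ranks the billionaire first (3,000,000 vs 30,000);
    # proportional counting ranks the teacher first (60% of capacity vs 0.003%).
    # Consequentialism by itself does not say which ledger is the right one.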

But then, if you have different ways of counting how you've created good, how do you choose between those? It just becomes this quagmire, where... I agree, producing good consequences is ethically good. Yeah. So, we're halfway through the hour, and I actually brought a little agenda. Hmm. So perhaps I can ask a couple of questions? Absolutely, unless you have something else you want to get to first.

Oh, I mean, why have these live sessions unless we can do something like that, right? And, you know, I mean, I can produce videos until I drop from exhaustion, and almost have. But yeah, this is why we have the live sessions, okay. So again, you know, I haven't watched the last video, but... yeah, that's fine... the one before, then.

So I'll start with... continuing from my first comment: as a youth I put in some time trying to decide on my big picture, okay? And despite that, I ended up in Silicon Valley; at the time, it seemed like a good idea. And then I went back to school as an adult, middle-aged and not 24 years old, which is what they call a "mature student," in there with the 23-year-olds.

Yeah. I was asked in my program to examine my ethics, and so I went through another review. So, you know, I've been at this again, more than most people I bump into. And in school I did have one particularly good professor who went through ethics. It was in a school leadership program, but it was the ethics component, and he boiled it down... he's a big Star Trek fan.

So he had one prime, or key, directive: what he called the life ethic. So for him, all the other ethical considerations or systems came under: does it promote life, or does it not, and is it right? And then everything under that got sorted. So that was, you know, a good touchstone. Keep going.

Yeah. And then I was looking through the slides. You know, I watched the presentation and then went through the transcript and flipped the slides. And so, thinking about it, I can't get the autonomous AI out of my mind. "Every substance seeks the preservation of its own being, according to its nature," right?

So I selected a little piece from the points: every substance seeks its preservation. And then we, you know, talked about autonomy. And then Kant said there's no good without will, and his categorical imperative. Yeah. No, that's fine. So this, you know, again points at autonomous, intelligent vehicles.

Yeah, we were talking about vehicles and the like. And so, if I add those together: that thing is going to preserve itself, and, you know, it's been given autonomy by its maker, right? It's going to preserve itself, but without will it can do no good, right? So, you know, that got me thinking, and to bring it to AI:

it's, you know... so, my intuition is that AI needs oversight. My intuition is that autonomous robots, autonomous intelligence, is a bad idea. That's my intuition. I probably won't... I don't know. Anyway, we came to the conclusion: it's too late, it doesn't matter, right?

Right. Intuition, autonomy... you know, autonomous things are out there. But a perfect illustration: I was looking at the transcript, and my favorite AI mistranscription was "the tragedy of the comments," right? Which I think is a book title, certainly a blog post. Yeah. And the one that I'm going to mention here without saying it points to the need for all transcripts to be reviewed by a human, because Google transcribed the philosopher Kant as "cunt."

Thank you. We don't want that on the internet, and it occurred several times. So there's an example of the need. And then a person could say, well, you know, it's learning; it's a young AI, and it will eventually recognize the context and realize you're talking about the philosopher, because it also misspelled other things and so on.

And so, you know, in that case it would improve. But I think that points to just a basic problem with AI: it's constantly improving, right? By its nature, right? It can never be perfect, right? If it's constantly improving... I'm not a philosopher, but I think we can safely say it will never be perfect, depending on your definition of perfect.

But, you know, our definition of perfect is probably always going to exceed whatever an AI is capable of. Well, and then, yes, the life ethic, that is, the prime directive. To me, that requires oversight, to ensure that autonomous things, objects, do not kill or, you know, cause harm. Though, well, what counts as harm, then?

But if we go just to the prime directive, that autonomous things do not kill: if they do kill, then, you know, something has to be done. So I'm for a death penalty for human-created objects. Not for the death penalty for humans, right? But I'm for the death penalty for human-created objects, like corporations.

I've been for the corporate death penalty for a long time. But, you know, every time I bring up the conversation with regular people I have to explain it. But it seems obvious with corporations like Pfizer, a currently much-discussed corporation, which has over the years racked up billions of dollars in fines for harm to humans, fines for regulatory misconduct, I mean, on and on, right?

Yeah. So I would be for the death penalty for Pfizer. Now, of course, you know, practically, it would split off into multiple corporations; that's why... I mean, we did that one time with American Telephone and Telegraph. Yeah, that happened. Yeah. It was arguably a good thing. It has since reassembled itself in a new manner, and arguably it could be done again.

And that may be what regulation requires. I don't think so. And then, so that led me, to wrap up (yeah, that's fine), to the idea: these artificial objects, the machines or computer programs or whatever, are they then slaves? Exactly. And that brings in that whole discussion.

And since slavery and capitalism co-developed, you know, and capitalism is basically based on property rights: religion or convention, capitalism or property, however you want to say it, they both allow for slavery of different sorts. Then I think that points to this: before we even put these objects in the world, I think we should have that discussion, about property for sure, and slavery,

maybe. Yeah. So, yeah. And so, the form of argument, just for fun, that you've offered here is what's known as a reductio; the full name is reductio ad absurdum. Yeah. So you've taken the premises, you've followed them through to their implications, and you've come to the conclusion that the implications are absurd and you can't accept them, right, in some way. Or at the very least, they present intractable contradictions that are really hard to untangle, you know?
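For readers who want the form spelled out, the reductio pattern being described can be written schematically in standard logical notation (a generic sketch, not something from the session slides):

    \[
    \frac{\Gamma \cup \{P\} \vdash \bot}{\Gamma \vdash \neg P}
    \]

Here \(P\) is the premise under test (say, "the preservation of life is the highest good"), \(\Gamma\) is the set of background assumptions, and \(\bot\) is the absurd consequence; deriving the absurdity licenses rejecting \(P\).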

And it's interesting, you know; we could formulate your argument even a slightly different way. So let's again begin with the premise that all life is good, right? Based on the idea that anything living will attempt to preserve itself. And let's take that as true, even though in some cases it's obviously not true, like suicides and that; but we can just say, well, suicide is bad.

All right? So let's just say that all life seeks to preserve itself, and so the preservation of life is the highest good. Now let's take a shortcut, because you went through and talked about artificial intelligence, and artificial intelligence can't be trusted. Well, the same is true of humans, right?

There's no perfect human either; I mean, that's the whole point of a lot of religions. So, you give humans autonomy and will... and just as a bracket, I think we would have to retranslate that as agency or something like that, but we can set that aside.

Yeah. So we know that humans who are free and autonomous will kill each other. It happens; look at the murder totals. Especially if you give them weapons: they're even more likely to kill each other, and the deadlier the weapons, the more it happens. And in groups they're even more deadly.

So clearly, and there have been any number of science fiction stories along these lines, clearly, if we take life, the preservation of life, as our highest virtue, then, except for a very few enlightened philosopher types, the rest of humanity needs to be enslaved. That's the only answer: actually put them in chains to prevent them from violating the principle, right?

Okay, see, right? So that would be consistent with the principle; in fact, that's the only way we can actually carry out the principle, right? And in fact, you know, I mean, there's an awful lot of wasted opportunity for life in humanity, right? And, you know, once they're enslaved, we can remove the whole...

There's a huge period in a young man's or young woman's life where they're still trying to find their way, to meet a mate, etc. We can circumvent that: we'll just match them and require that they procreate and create more life. And why not just do that? Yeah, yeah, I mean, that's what we'd do.

So if that's our purpose, you know, "be fruitful and multiply," we can create a lot; we can create hundreds of billions of people. And of course, with slavery, really the only limit to how many people we have would be the carrying capacity of the planet, right? And we could extract every last calorie out of the planet

in order to feed the hundreds of billions of people that we produce. And, you know, the idea here too is that you set it up in such a way that it's basically perpetually self-sustaining: as long as the sun is shining, as long as the rain falls, we can have these hundreds of billions of people, and we just carry on like that for all time, preserving life, in some sort of stasis.

Yes, yeah. And presumably that's a bad thing. Doesn't sound very good, doesn't sound very good. So this culture developed law, not to prevent the taking of lives but to, well, originally, imprison and rehabilitate the takers of life, though now we've gotten to punishing the takers of life.

There's a very famous trial going on right now in the United States, in Wisconsin, about a 17-year-old who illegally acquired a weapon and shot people. Yeah, I heard about that here too. We've seen it, in the United States, yeah. But so, I agree with you that enslaving everybody to prevent the taking of life is a bad idea, because, you know, and I'm not a philosopher, but we can never predict which slave would do it, right?

Well, that's why we assume that they all would; that's the only way. That assumption... because I was talking about AI, but you turned it back on humans. Yeah, I know, that was kind of nasty of me. But we know that humans, who do have some level of agency, well, of course, they don't all kill.

Yeah, only a small percentage, a very small percentage, but we don't know which ones; that's why we have to enslave all of them. Well, that's one approach, you know? But the approach this culture has taken is a different one: remove them from the group. Yeah, but that approach is another failure.

Look how many murders happen every year. So from this, you know, the thing to get us out of this is human slavery? Yeah... you know, to move that discussion another step: in this culture we took a different approach. Yeah. And so, punishment, if you want to call it that, or whatever it originally was supposed to be.

Yeah, redemption. But again, my concern was with AI; these are not organic. Well, again, you know, they came out of the same universe that we did, so there's that. Yeah, but they're not an organic life form, and at this point they don't self-engineer. So there's another issue: you know, it won't be long and there'll be something.

Yeah. And we already have AI that writes code; I mean, that exists now, so they can reproduce in software. Now, to me, that has the potential to end humanity, and almost everything else, if they can reproduce enough and if somehow they can learn how to make collective decisions. They might decide we're...

Mm-hmm. Yes, vermin. Yeah. But the question here now is: on what basis do we say they are not life? If they have a desire to preserve themselves, which we say is the hallmark of life, and they have autonomy, or agency, and will, you know, they can carry out their intentions in the world, right?

And maybe some other tests; I mean, we could give them an IQ test or whatever, you know. Certainly, if they've learned to reproduce, they've probably satisfied certain minimum conditions. Yeah, there's the Turing test, right? Could they pass for human? But, I mean, a lot of that...

As we've learned on the internet, a lot of humans can't pass for human. I know I can't pass those things where you pick out the street lights in the grid. What, how much of a street light counts as a street light? I don't know, but a machine trained on the ground truth can.

Yeah, yeah, exactly. So, you know, and for that matter, I mean, okay, they're not organic life, meaning specifically they're not carbon-based; they're silicon-based. But what about animals, then? You know, I mean, if all life is good, is the life of animals good? A lot of people...

Yeah, a lot of people argue, not without justification, that, yeah, if the idea here is to preserve life, the idea should be to preserve life in all its forms. But we eradicated smallpox; and, you know, we wiped out that entire species, and we've wiped out other species as well.

And we continue to wipe out species, and that's not even talking about eating them, you know, or using them as slaves. Now, sure, they're not as bright as we are, but that wasn't the criterion, right? Maybe we could reformulate the original proposition, right? You know: all ethics are based on preserving the right of human life to exist.

We have to say human life, because we can't use intelligence as a criterion, since that allows us to kill stupid people and, again, that's not on, right? So: all human life. But that seems really arbitrary. Yeah, yeah. I mean, what if really smart aliens landed? They're not human, but presumably they would be allowable.

You know, they would be covered under this right to protection of life. What if these aliens were silicon-based? You know, I mean, are we gonna get out a meter or some other device, right? Are we gonna give them, like, a carbon test to determine whether they're persons and allowed to live?

I mean, that again seems ridiculous. So, yeah. I had to suppress myself from shouting, "It's a cookbook!" Yeah. "To Serve Man." Yeah, it's from the old Twilight Zone. Yeah, yeah, yeah. So, I know this is very murky, and here we are, coming up on the hour again.

Yeah, and so, that's why I brought these things up. Thank you for, you know, pointing out the reductio. But I don't want to leave here in despair. Oh, sure. Well, the worry being that, as I chose to be a craftsman in Silicon Valley and supported the development of silicon chips, I have contributed to the ending of probably all carbon-based life on this planet. Or I might have,

if my intuition is at all correct that once these silicon-based platforms learn to self-reproduce, there will be no stopping them. Yeah, it's the Terminator scenario, basically. And, you know, I think, first of all, it's not your fault if it happens. You know, I mean, we can't ever say that we knew for sure, or even had a reasonable degree of certainty, that this would be the outcome.

Even now, as I work in this field, I don't have a reasonable degree of certainty that this would be the outcome. I can certainly imagine a world in which humans and AIs live side by side and both have rights as persons. And therefore, knowing humans, and probably AIs, I can certainly imagine race conflicts between humans and non-humans, species conflicts,

I guess that's more accurate, because of our long history of racism, which we'll probably pass on to AIs. But that doesn't mean extermination necessarily follows. And, well, we've had competing... you know, races; those are within a single species, but apparently we have absorbed other species altogether.

Yeah. But anyway, that's mine. So wouldn't the species tend to be monocultural? Wouldn't it tend to... I'm trying to think of the guy who says we'll all come into one pool. Oh, you're thinking of the singularity. Yeah. So wouldn't the artificial species, with its, in a sense, infinite capacity for change...

Well, right... wouldn't it tend toward a singularity? And wouldn't it seem that reaching a single point in space and time is its logical conclusion? Isn't that a possible logical endpoint? I'm just talking about possibility, I mean; I'm not talking about the Terminator.

Yeah, I don't... it's a possibility. But, you know, it's a possibility that could happen with humans too. And as you point out, talking about other species of, I'm not sure what we would call them generally, hominids, probably: humans, Neanderthals, etc. I mean, they've either been wiped out or, as you suggest, absorbed. And, you know, maybe the ultimate form of life becomes the cyborg; that's also possible.

But I don't think that there's any reason to suppose that artificial intelligence moves toward a single point. Now, if you're a follower of the philosopher Hegel, then yeah, you think it all comes to a point. But I'm not, you know; I don't think that history has a direction.

I don't think that moving toward a single point is in any way inevitable. You know, it would be equally likely to say that it is inevitable that all of humanity will evolve toward a single world government under a single world leader with a single philosophy. Nobody would take that at face value, and I think there's enough diversity in the world of machines that the same holds there.

I mean, we're developing distributed AI systems, right? Not one big global supercomputer, because one big global supercomputer is a really bad idea: all you need to do is unplug it and you're done. So there are going to be different flavors of machines, and there are going to be different flavors of machine philosophy.

Probably somewhere along the line there will be a flavor of machine that is genocidal, because we saw that in humans. But with any luck we can prevent that from resulting in billions of deaths, and I think machines, other forms of machines, will probably help us in that endeavor. Because, in the end... there's a little section in the consequentialist paper,

I think it's the consequentialist paper, it's in there somewhere, where I quote Bob Dylan, along the lines of "you gotta serve somebody." What is it that makes something good, right? You know, developing your own capacity without any regard to why you're doing it is kind of pointless. Creating good in the world only for yourself is kind of pointless.

You know, you don't get any joy from that, you don't get any happiness from that. To actually achieve happiness, our efforts at promoting good need to be outwardly directed in some way, and that's probably going to be as true for machines as it is for us. Which suggests that even a consequentialist ethical system is going to require some kind of altruism.

And if you have altruistic machines, then you have the response to genocidal machines. And again, there are other science fiction stories that have been along those lines, right? The ultimate purpose of a machine is to serve humans; the ultimate purpose of a human is to be served. Still dystopian, but at least we get to live.

Yeah. This is just one instance... artificial intelligence, and I think this course... yeah, it's just one instance of creating things where we can't possibly know the outcome. And, you know, they have, in this case, the potential for tremendous destruction, and we can't even deal with the destructive capacity we've already got.

Yep. We're politically unable to give that up. And then you bring in the power structure, you know, back to Bezos, you know, the ethics of the powerful, which is: who will control the original production of artificial intelligence? And, well, my intuition is this will not end well, and I'm just glad I'm old. As a carbon-based life form,

I have an end date. That's another problem. That is a problem. Well, the problem with carbon-based life forms... I mean, I consider that a design flaw and not a design feature, personally, and I could be convinced otherwise, but that's a different discussion. But silicon-based life forms have, or will have, lifetimes that to us would seem geologic.

Yeah. Now, they're not, they won't be, but to us they'll seem nearly geologic. Yeah. So they will be with us always. And, you know, yesterday the Quakers were very rational; nothing came up about revelation, that's old school. But perhaps, you know, you never know, perhaps there were some revelations about the end of the world, you know, exceedingly old life forms and an ending in a firestorm and all like that.

I don't really know, but, sure, I'm not happy. Now, we've got to end this, because I've got something else coming up at one, but this was good. I think, unless I hear any objections, you know, we have a hundred followers or so on the newsletter, so unless I hear any objections, I'll follow that advice.

I'll extend this week to another week, extending the course by a week. I think that will just make more sense altogether. Yeah, I agree. So now I'll proceed under that assumption. All right. So, yeah, Friday. All right, talk to you Friday.

Have a good one.
