Duty


Audio transcript from Google Recorder

Welcome once again to Ethics, Analytics and the Duty of Care. I'm Stephen Downes. We're in module five, Approaches to Ethics, and today's talk is going to focus on duty. Now, you might think that this is kind of an appropriate topic to pick, given that today is November 11th, 2021, Remembrance Day here in Canada.

And the topic of duty often comes up when we talk about our military obligations and our annual day of reflection on the service of people who've given their lives for our freedom and our democracy. I've chosen to go with a slightly different motif for the cover of this presentation, though, and I've picked police.

And it looks like they're actually Australian police, but I couldn't say for sure. But I could have picked any number of professions: doctors, lawyers, you know, even technologists, accountants, even researchers such as myself, perhaps, academics and professors, all of us who feel informed in one way or another by a sense of duty.

And so I didn't want to just go into the standard trope of duty as a military thing, as though that's the beginning and the end of it. But I'm certainly not going to ignore that aspect of the concept of duty, and around it some of the associated concepts such as honor and courage and sacrifice.

I think these are all interesting aspects of ethics and morality in general, and there's a whole history behind that, which I want to talk about today. So, the subject of duty: it's the idea, first and foremost, I suppose, of a requirement to moral action. Now, I'm kind of a free spirit, I'll admit that right off the bat, and so, you know, being required to perform moral actions is not the sort of thing that's ever appealed to me.

But by the same token, that would certainly give other people grounds to say that, you know, my being a free spirit is rather a selfish and unethical way to live, so we can look at that from both sides. The branch of ethics concerned with duty is called deontic ethics.

The word 'deontological' comes from the Greek word 'deon', which means duty, and so basically duty-based ethics teaches us that some acts are right and some acts are wrong simply because those are the sorts of things that they are.

We usually think of ethics in terms of what the outcome is. You know, if you do something and you kill a person, then that something, whatever it was, was a morally bad act. Deontology doesn't work that way, and there are a few reasons for it, and I think these are actually pretty good reasons.

One thing that deontologists will say is that we can never really know what the outcome of an action is going to be. Here we could be appealing to something like chaos theory or the butterfly effect, to the effect that we don't know what could happen down the line.

We step on a butterfly, we change the weather in China; how could we know? Sometimes it also has to do with the idea that, you know, the outcome of the action is in a certain sense irrelevant, given our intentions to perform or not perform an action.

Suppose I shot at somebody, but he collapsed of a heart attack while the bullet was in flight, and the bullet ended up missing the person. There's no bad consequence, but the guy's still dead; he would have been dead anyway. But, you know, arguably my action was wrong, because what I had in mind was to kill the person.

And it works the other way around too. You know, you can intend to do good with your action, and sometimes bad things result, but the act was still morally good because it's that kind of action; it's a morally good action. Well, we could talk about that. The concept comes from... well, the concept has probably been around since forever, right?

You know, we can go all the way back to the Ten Commandments or the law codes or whatever you want. There have always been rules that are basically brought forward as guidelines or edicts of ethical behavior, simply on the basis that these are the rules. Sometimes they're justified theologically, but more often they tend to be justified perhaps on the basis of human nature.

But even more, just on the idea of what we might call natural law theory. It's this idea that we can know about the laws of the world just by thinking about them. A triangle's interior angles sum to 180 degrees; we know that it's a law of triangles.

We know it just by thinking about it. Two plus two equals four: we don't need empirical proof, we know it just by thinking about it. And similarly, we can know moral truths in the same way. They're inherent in the concept of morality, if you want to put it that way.

And some of these will seem really intuitively obvious to you. We have Aquinas, who was a proponent of this, and he says, for example: 'every substance seeks the preservation of its own being, according to its nature, and by reason of this inclination, whatever is a means of preserving human life, and of warding off its obstacles, belongs to natural law.'

Living beings have a natural inclination to seek to continue living, and so an ethical law based on protecting our ability to keep on living seems to be a pretty obvious moral law: life is good, and that's the foundation. We can embed this in a system.

I mean, as the diagram shows, we can test our moral intuitions by creating predictions, or creating moral principles out of them, applying them to specific cases, and testing them against our intuitions; there's a whole process we can come up with here. But what it really boils down to is this idea that, you know, just the very fact that we are living beings and ethical beings gives us the moral intuition that life is good, and that things that preserve life are good.

There are different ways you can set this up, and one way is to distinguish between act intuitionism, which is the idea of knowing whether a particular act is a good act, and rule intuitionism, where the focus isn't on the individual act but rather on the rule that governs the act, so that the intuition is that this particular rule is a good rule, that following this rule is morally good. 'Don't lie' is a rule, and the intuition here is that following that rule would be good.

Natural law theory persists to today; it hasn't gone away. And actually, you know, I haven't done an empirical examination, but I would imagine that a good percentage of the population adheres to some form of it or another. John Finnis and Germain Grisez are contemporary writers, writing out of Notre Dame, who have offered us a set of seven basic self-evident values from which moral norms could be derived.

These seven are, basically: life, health and safety; our capacity to know about reality and appreciate beauty; our capacity to be excellent in work and in play; our desire to live at peace with each other, neighborliness and friendship; our capacity to have aesthetic experience and a feeling of harmony and inner peace (and I can personally speak to that one); harmony between our choices, our judgments, and our performances, walking the talk, if you will; and then finally religion, the pursuit of ultimate questions of meaning and value, or perhaps of the cosmos and the nature of the universe. A lot of people are going to look at those principles and say: yeah, those are the kind of principles that I agree with.

But, you know, one of the issues of natural law theory, this idea of moral intuitionism, is the plurality of systems of morality. I mean, if you can just intuit it, who's to say whether your intuition or my intuition or someone else's intuition is the right one or the wrong one?

There are all kinds of ways of cashing this out in different kinds of intuitionist systems that may be more or less supported by the way humans actually, naturally are. There's a whole discussion, in fact, about what is natural for a person and what isn't. Is it natural for a human to fly?

Well, clearly not: we can't flap our arms and become airborne. But on the other hand, we can build airplanes. You know, what is natural, on the one hand, seems to be what falls into the domain of what's possible, so anything a human can do with their body is natural. Or, on the other hand, natural may have to do with an overall purpose, objective or goal, right?

And if the purpose of a human is to stay alive and reproduce, then you can come up with a narrower definition of natural. Similarly, values, those values that correspond with our moral intuitions, can be described in any number of ways. I've illustrated just one such system here, Schwartz's value theory, which talks about four dimensions of values: openness to change, self-transcendence, conservation and self-enhancement, with subcategories of things like stimulation, hedonism, achievement, power and security.

Now, are these good things? Are these bad things? Or is any combination of these things, as values and non-values, acceptable? Or I could bring in, in this context, Maslow's hierarchy, the five stages of needs that people have; and needs are certainly something that seem to flow from who and what we are.

So we could begin, as Maslow does, with physiological needs, and then, once those are met, look at safety needs, love and belonging, esteem, and then finally, at the pinnacle, self-actualization; or, as one wag put in there as well, Wi-Fi. Got to have Wi-Fi. So that's a weakness of, you know, the natural theory of ethics and value theory.

So we can go back to: what is the human? What is human nature? And a good place to start for this discussion, in this context and in this day and age, is with Jean-Jacques Rousseau, the French philosopher of the 1700s, who observed at the beginning of his book that 'man is born free, but everywhere he is in chains.'

And the idea, let me just caricature here rather than strive for precision, is that the human is naturally good and virtuous, but society, with the constraints and the artificial demands and artificial needs and desires it brings to us, constrains that. You know, it's funny: as I express thoughts about Rousseau in this way, I'm thinking as well of Kalle Lasn of Adbusters magazine, or Noam Chomsky of Manufacturing Consent, talking in the same way about how the structures and actions of society create artificial desire, and in so doing impinge on one's dignity and one's freedom. And this is based on the system based on capital and self-interest.

And here I'm quoting from the article, not from Rousseau himself: 'the hope of creating a stable and just political society on the basis of narrow self-interest is a soul-shrinking and self-destructive dogma masquerading as a science of politics.' For Rousseau, what was important was the meaning and importance of human dignity, the primacy of freedom and autonomy, and the intrinsic worth of human beings.

And let me be careful here: when we use the word 'worth' in this context, we're not talking about numerical worth, or thinking of it in terms of finances, how much money a human could get, or value as in a person being more or less valuable. You know, we live right now in an environment where virtually every concept that we have, in every discipline that we have, ultimately breaks down to some description in terms of money and finance. But Rousseau didn't live so much in that world.

And he wasn't using words like 'worth' in that sense, and I don't think we should either. So, influenced by Rousseau, and influenced by, you know, people like Saint Thomas and rights theorists, we have Immanuel Kant, who lived in what is now called Kaliningrad, and never left the city.

Kaliningrad is at the far western edge of Russia, on the Baltic Sea, in what is now a Russian exclave, which is kind of interesting; but back then it was Prussia, and the city was called Königsberg. So Kant talks about duty and the rightness of ethics as derived from reason, out of the concept of necessity.

So there are different ways we can get at this, but we'll get at it this way. Kant says: nothing in the world, or outside the world, can possibly be conceived that could be called good without qualification, except a good will. Now, by 'good will' he isn't meaning charity, or Goodwill stores, or something like that, but 'will' more in the sense of maybe Nietzsche's will to power, or Schopenhauer's will to live: the action of a rational being to project oneself, one's ideas, one's thoughts into the world. So he says a good will is good because of how it wills, and it wills ethically; a good will is good in itself.

And he's also saying, and this is where he parts ways with the naturalists, that morality should not depend on human nature, and should not therefore be subject to the fortunes of chance or the luck of empirical discovery. Here he's responding not only to people who think: well, something is natural, therefore it's good.

He's also speaking to people like David Hume and others who want to find what counts as good empirically, by the evidence of the senses. But, you know, Kant looks at this and says it reduces morality to accident, to luck. Just as the shirt on this slide is completely irrelevant to anything we're talking about,

so also is human nature or empirical discovery, because morality is something that we know through and by the pure exercise of reason and the will.

Okay, so where does that take us? What Kant came up with is something called the categorical imperative, and even if you're not familiar with this phrase, you're certainly familiar with it in everyday life. It's like when you grab for the cookies on the table and try to take them all for yourself.

And your mother says: what if everybody did that? And obviously, you know, everybody can't do that, because there would never be any cookies. Same kind of thinking, right? So we can distinguish between a hypothetical imperative and a categorical imperative, to give you kind of an idea of how this works.

So a hypothetical imperative is something like: if you want something, then you must do something. If you want to be a doctor, then you have to go to school and get a medical degree. If you want to get to Regina, you have to go to Saskatchewan. You see how that would make sense, right?

And it's an imperative in the sense that if you want to do the one thing, then you have to do the other. And that was the structure of natural ethics: if something is a human, then it needs to live, for example, right? Well, Kant comes up with the categorical imperative, which basically simply drops the 'if' part.

So instead of saying 'if you want A, then you must do B', the categorical imperative simply says: do B.

And how do you arrive at the categorical imperative? Well, it's through a process of pure reason, and the pure reason that Kant offers is this: 'act only according to the maxim by which you can at the same time will that it would become a universal law.' So think of a rule: I should take all the cookies for myself.

Could you make that a universal law that governs everyone and everything? Well, no, you couldn't, so the maxim 'I should take all the cookies for myself' is not a categorical imperative. That doesn't necessarily make it wrong. I mean, it might be; taking all the cookies for yourself seems to be wrong. But not all of our actions are covered under the condition of becoming categorical imperatives.
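Just to illustrate the shape of that universalization test, here's a minimal sketch in Python; the maxims and the 'self-defeating' list are invented placeholders for illustration, not anything Kant or this talk provides:

```python
# A minimal sketch of the universalization test, for illustration only.
# The maxims and the 'self_defeating' set are invented placeholders.

def coherent_when_universalized(maxim: str) -> bool:
    """Could everyone act on this maxim without the practice collapsing?"""
    self_defeating = {
        "take all the cookies for yourself",    # no cookies left for anyone to take
        "make false promises when convenient",  # promising itself becomes meaningless
    }
    return maxim not in self_defeating

def passes_categorical_imperative(maxim: str) -> bool:
    # A maxim you cannot will as a universal law fails the test.
    return coherent_when_universalized(maxim)

for m in ["take all the cookies for yourself", "tell the truth"]:
    verdict = "passes" if passes_categorical_imperative(m) else "fails"
    print(f"'{m}' {verdict} the universalization test")
```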

We do all kinds of things on a day-to-day basis. You know, I twiddle my pen, right? It doesn't matter whether everybody in the world does that or not; that's not the kind of thing that is meant. Nor even, you know, something like 'you should always twiddle your pen when you make a point about the categorical imperative.'

Yeah, sure, everybody could do it, but it wouldn't matter, right? So it's a bit deeper than that. The idea is that it's a maxim, it's a principle, it's a rule of conduct, where this rule of conduct is the imposition of an ethical will on the world. If everybody thought that this was an ethical thing, could that happen?

And according to Kant, and according to those who follow Kant, all of our specific duties, which may or may not include twiddling pens, can be derived from this one imperative. Kant actually expresses this in more than one way; there are three major ways in which he says it.

The first way is kind of an ontological way of saying it: act only according to the maxim by which you can at the same time will that it would become a universal law of nature. You see how he's flipped that around, right? Instead of nature imposing itself as a universal law of ethics on us, it's us coming up with this maxim and applying it to nature.

And then we ask: could this be a law of nature? So, you know, could 'preserve one's life' be a law of nature, such that everything that lives tries to preserve its life? Well, arguably it could, right? But there's another way: 'act as to treat humanity, whether in your own person or in that of any other, in every case as an end and never merely as a means.'

And here we go back to Rousseau and inherent worth: not monetary worth, but the inherent worth of every person. Here we think of every person as an end, and we can cash that out in different ways as well, but I think a good way of thinking of it is this: every person is valuable in and of themselves, as an end, in the sense that every person has this will, this capacity of reason, to create their own moral reality, their own understanding of ethics; and the idea is that they would all see it the same way. You know, it's just like saying every person is valuable because every person can see mathematics in the same way. And then there's a third way of putting it: 'act that your will can regard itself at the same time as making universal law through its maxims.'

Here we're not just talking about a universal law of nature; we're saying universal law, and we can think of law in the terms of, say, the laws of God and man. Could the edict 'don't steal' become the law of the land? Again, arguably it could; it wouldn't result in the collapse of society. And we can see the appeal of this. I've sort of applied it to machine learning in the diagram on the left.

Okay, I stole the diagram from an article in Towards Data Science, but it's still the same sort of principle. So take a human or machine action, run it through the deep learning network, and ask: is the action within the ethical AI intuition scale? And we give feedback.

And here's where our own intuitions come in: yes, it is, or no, it isn't. If no, then the action is prohibited; if yes, then the action is allowed, the machine executes, or the human action is validated. And that feels a little odd, doesn't it?
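To make the loop that diagram describes concrete, here's a toy sketch; the scoring function and the threshold are invented stand-ins for the trained network, not the article's actual model:

```python
# A toy sketch of the 'ethical intuition filter' loop described above.
# The scorer and threshold are invented for illustration; the diagram's
# actual deep learning network is not reproduced here.

from typing import Callable

def ethics_filter(action: str,
                  score_action: Callable[[str], float],
                  threshold: float = 0.5) -> bool:
    """Allow the action if its score falls within the 'ethical intuition
    scale' (here, simply at or above a threshold); prohibit it otherwise."""
    return score_action(action) >= threshold

def toy_scorer(action: str) -> float:
    # Stand-in for a trained network's output in [0, 1].
    return 0.9 if "help" in action else 0.1

for act in ["help the student", "sell the student's data"]:
    verdict = "allowed" if ethics_filter(act, toy_scorer) else "prohibited"
    print(act, "->", verdict)
```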

And I think that oddness is an intuition that we need to respect here. So let's look at what Kant says, act as though something could become a universal law, and let's ask it the other way around: what would prevent something from becoming a universal law? Because, as I said with respect to naturalism, anything the human body can do is natural, which leaves pretty slim grounds for objecting to something on the basis that it's not natural. And a similar thing happens here.

Pretty much anything that you can do is universalizable, you know, even in some trivial senses, right? Like: 'all beings who are sitting in Stephen's office at this moment should twiddle their pen.' Well, there's one and only one, and that's me. It's universalizable, and there's no logical contradiction in that.

Well, maybe it should be more generalized, but, you know, what would that be? Is it all people at this time? Or any person twiddling any pen? You know, it doesn't make sense. Logical contradiction is too weak a restriction here. Anything we could all do as humans falls under this; anything and everything falls under this, because nothing we do, nothing that anyone does in the world, is a logical contradiction, for the simple fact that you can't do a logical contradiction. So that's why we need a different definition of what prevents something from becoming a universal law, and we can invoke, say, the concept of the teleological contradiction, which is something like: being contrary to a purposeful and organized system of nature.

I mentioned that before with respect to natural law; now I'm bringing it up here with respect to contradiction. So we could say it's a contradiction to a purposeful and organized system of nature to act in a random and capricious fashion. Well, that seems to be pretty much the case by definition, right?

And we can work from there. What would be random? What would be capricious? What would not lead to purpose? What would not lead to an organized system of nature? And so on. So we could say, you know, a maxim like 'love anyone you want to love' could raise a teleological contradiction, because now love is no longer purposeful, is not directed towards some end; it just is what it is. And people make that argument, and a lot of people oppose that argument, and it doesn't seem that that sort of contradiction is going to be sufficient on which to base moral law.

You can adapt it even to a practical contradiction, along the lines of: it would be ineffective for achieving my purpose if everybody did it. So that's the cookie principle, right? If my purpose is to get as many cookies as I want, and my maxim is 'take all the cookies you want', then I am not going to get as many cookies as I want; I might not get any cookies at all. And so my maxim contradicts my practical purpose. And we see this actually in practice quite a bit, even during these days of the pandemic, which is why these images are here on the slide.

Sure, people would like to not wear a mask, but what if nobody wore a mask? Well, then we would have a case of a widely spreading pandemic. And there's a whole ethos around that kind of thinking, and it extends far beyond ethics. We have this concept in economics of what we call the free rider; you've probably heard of that, right?

And it plays out in things like the tragedy of the commons. The idea here is that, in a commons, in an environment where everybody is contributing, one person might decide not to contribute but only to take. We'll call that person Donald Trump, just hypothetically. What if everybody behaved that way?

Well, then nobody would produce anything, people would only be taking, and society would collapse. And so even the person who wants to take and take and take can't continue to keep on taking. And we come back to that quote about Rousseau's philosophy: basing a society on self-interest is nonsensical, and it's nonsensical particularly for this reason, that if everybody acts only in their own self-interest, we don't get to have a society.

Similarly with the tragedy of the commons. If somebody goes into the commons, let's say the commons is an apple orchard, and they pick all the apples for themselves and take them away and sell them on the open market, just like John Locke says they should, then nobody else gets any apples, and over time the commons becomes overused. There are no apples, even for seeds, and of course nobody's tending to the trees, because this one person's taking all the apples, and the commons eventually collapses.

And even the person who was taking the apples doesn't get any more apples. It's the cookie jar kind of logic all over again. That's the tragedy of the commons, and it results from a failure to recognize the contradiction in that sort of selfish act, which in turn acts as justification for division of the commons as private property among all the interested people, which ends up all in the hands of the person who took all the apples.

But that's a different issue.

Well, how does that line up with real life? Well, you know, look at the actual practice of the actual professions. One author, for example, writing on ethics in the professional context, writes: 'religion, financial gain, reputation, personal character, social context, geographical location, severity and the nature of disease, the climate of fear: these are all influential factors in doctors' decision to treat, perhaps more so than in any other period.' So basically it's an argument that says that doctors, based on all these factors, can decide whether or not to treat a person. And the question we ask in this context is: is this a universalizable principle? Would it work if all doctors were like that?

Well, we've seen environments where all doctors are like that, where all of these things actually do play a role in whether a doctor treats a person or not, especially money, but also all the rest: you know, people refusing to treat people because of religion, people unwilling to treat people because of the severity of the illness, etc.

And the result is that many people are left untreated. And so arguably this creates a practical contradiction. It's a weakness, a wound in society that continues to fester and fester until you really can't fix it. It's sort of like the doctor equivalent of choosing not to wear the mask.

And at a certain point, you know, just in order to be consistent, just in order for the whole profession of doctoring to make sense, you have to take away some of these conditional and arbitrary and luck-based factors and go back to the principle: doctors treat everybody, regardless.

And that's where you get things like the Hippocratic Oath, and that's where you get organizations like Médecins Sans Frontières, Doctors Without Borders, who, as I speak, are treating people in Syria and other places other doctors won't go. So there's something to be said there.

Kant comes up with a number of examples. You know, what if I make a false promise so I can get myself out of difficulty? Maybe a person's on a deathbed and they say: honor my last wish, give all my money to my kids, I don't care what my will says. And you say: yeah, I'll do that.

But what if everybody did that? Then nobody's last wish would ever be respected, and nobody could trust that, when they died, their wishes would be respected, and so people wouldn't leave anybody in charge of their wealth when they died. They'd do something else with it: maybe just waste it, maybe just burn it, maybe just take it with them, like the tombs of ancient Egypt.

Or committing suicide: again, what if everybody committed suicide? There goes the concept of a continuing society. And that's a principle that's actually been instantiated. You know, we had Jim Jones at Jonestown with the drinking of the Kool-Aid, and that cult ended.

We had David Koresh and the Branch Davidians, who went down, you know, all in flame; it wasn't quite suicide, but it wasn't not suicide. Etc. There are any number of suicide cults, and one of the main results of suicide cults is that the cult ends with the suicide.

Or neglecting one's talent: what if everybody said, yeah, I don't really need to develop my own talent? Well, nobody would get anything done, would they? And so on; you get the idea. Kant comes up with more examples like this, and based on these examples, and then generalizing over them and generalizing over them, you can come up with something like a system of morality that we're able to think about as though it were the interior angles of a triangle.

Just something that's based on logic and rationality, and that's it. Well, there are of course criticisms of Kant's approach, and I'll mention four of them here that have been brought up in various publications. Mandating trivial actions: I covered that earlier, and I don't really think that's an objection.

Endorsing cheating: I've actually illustrated that with a completely unrelated publication, but I thought it was pretty good, because if we look at the factors that go into whether or not, you know, very well educated, very reasonable people actually decide cheating is okay, you know, social responsibility, or a mastery approach to their goals, isn't enough to get them not to cheat.

You have to actually get them to agree to some kind of self-transcendence values, some idea that society is worth more than just whatever is good for me. But even so, you know, we can put this into a different context; let's put it in the context of sports. The similar sort of argument would be: well, for the good of the game you shouldn't cheat, because if you cheat, it just breaks down any trust in the game.

And we think of baseball: baseball almost ended when the members of the Chicago White Sox, who became known as the Black Sox, were caught, not betting on games, but throwing the games to assist people who were betting on games. But that didn't eliminate cheating from baseball. We've had in recent years examples where the Houston Astros and the Boston Red Sox cheated: they used electronic devices in order to figure out what pitch a pitcher was going to throw, and then signaled that to the batter.

It probably still happens even in the game today, and Major League Baseball is kind of: meh, if you do it and get away with it, fine. And we sort of wonder, you know: here we have a case where even 'for the good of the game' doesn't give us an argument against cheating, and that seems like a pretty fundamental value to be indifferent about, certainly in the academic world.

If cheating becomes a value, as it certainly seems to be in places like Harvard Business School, then somehow academic value is undermined. Other criticisms: prohibiting permissible actions. So the example I read is: I flush the toilet at precisely 3:14 this afternoon. What if everybody did that?

Well, especially in the small town where I live, but even in a large city, if everybody flushed the toilet at 3:14, the water pressure would drop to zero and we would have a bad impact on the water system. You have the same sort of thing with these rules about the use of electricity during peak periods.

But flushing the toilet at 3:14 is not wrong, even though, if everybody did it, it would be a problem. And so the permissibility of an act comes precisely from the fact that there's no reason to expect that everybody's going to do it, even if you can hypothesize a scenario in which that happens. Then the worst of these is mandating genocide.

And again, the same article I read suggested: what if the principle was 'kill Americans'? What if everybody lived by the maxim 'kill Americans'? Okay, well, this would be bad for Americans, but from the point of view of the rest of the world this could actually be viewed as a good thing, particularly if you're of the belief that Americans are overall a bad influence on society.

Well, you might say Americans aren't really a bad influence on society. But what if you really believed they were? Or pick your other ethnic group and say: well, suppose these people are really a bad influence on society. The principle allows you not only to allow genocide but to require it, and intuitively that would be a bad consequence.

And these are the sorts of things you have to think about when you're coming up with a principle like the categorical imperative, where, you know, you're just using your mental processes to come up with ethical rules, particularly if you don't care about the results, because at a certain point you find yourself endorsing rules that are, sure, universalizable, but seem somehow to be wrong.

And that's one of the major criticisms of an ethic of duty: that it is basically inhumane, inflexible. We could ask: was Jean Valjean wrong to steal bread to feed his starving sister's children? He got, I don't know what it was, 20 or 24 years of hard labor for doing it. But was he wrong to do it?

Would it have been wrong to lie to the Gestapo if you were hiding Jews from them? Well, if your moral principle is 'never lie', then, well, I guess you just have to tell them the Jews are there, and let them take them out and have them executed. A lot of people wouldn't be comfortable with that, including myself, and that not being comfortable is, to a significant degree, the recognition that actions like this really do seem to depend on the consequence, and not just simply on following the rule.

But maybe we just don't have the right rules, right? Because there's also that principle of treating people as ends, and in pretty much all of these examples that we talked about, the bad result that we got really was a result of just treating people as disposable. You know, the suicide thing, the cheating thing, the genocide thing: all of these are cases where we actually didn't take into account the dignity of human beings at all.

So really, maybe it's a two-step process, right? If we have a moral principle, the first question we should ask is: does it involve violating the dignity of a human being, or human beings generally? And if it does not, then we ask whether the maxim can be universalized. So now we have a test, right?
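Here's what that two-step test might look like as a minimal sketch; both predicates are hypothetical placeholders rather than a real moral-reasoning procedure:

```python
# A minimal sketch of the two-step test described above. Both predicates
# are hypothetical placeholders, not a real moral-reasoning procedure.

def violates_dignity(maxim: str, context: str) -> bool:
    # Step 1: in this context, does acting on the maxim treat some person
    # merely as a means, ignoring their inherent worth?
    return maxim == "never lie" and context == "the Gestapo asks where the Jews are"

def universalizable(maxim: str) -> bool:
    # Step 2: could everyone act on this maxim without it self-destructing?
    return maxim not in {"make false promises when convenient"}

def permitted(maxim: str, context: str) -> bool:
    """The dignity check comes first; only then the universalization check."""
    return not violates_dignity(maxim, context) and universalizable(maxim)

print(permitted("never lie", "the Gestapo asks where the Jews are"))  # False
```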

You know, should we lie to the Gestapo officer? Well, telling the truth doesn't really respect the dignity of the human beings who are hiding from the Gestapo officer, and so the answer would be: that's not a principle in this case. Lying to the Gestapo is not morally required, or rather, not morally objectionable, I should say, and that's a good thing.

But now, you know, our principle has become more complex. We have the question of, you know, what does it actually tell us about how we should treat people? What does it really mean to violate, or not violate, the dignity of a human being? Well, there are different ways of expressing this.

One way is another one of these universal principles that people cite on a constant basis: the Golden Rule. The Golden Rule is basically the principle of treating others as you would like to be treated yourself, and, as Wikipedia says, it's a maxim that's found in most religions and cultures. Lots of people repeat this mantra. It's a terrible rule, I'll say that right now, and I won't even be equivocal about it.

First of all, how do you know how other people want to be treated? You could ask them, but they might lie, or you might misunderstand them, or they might not actually know themselves. So there are plenty of ways of getting that wrong.

Well, okay: just treat them how you would want to be treated yourself. But you are not them, and their tastes and your tastes might be very different. I see that in online forums all the time, where somebody comes into a forum and is rude, is direct, is, you know, littered with obscenities, attacks people personally. And you say something to him, and it's always a him, and he says: well, I don't mind if other people treat me like that; it's just a dog-eat-dog world. And that doesn't seem like a good defense of that sort of conduct. It seems like a misapplication of the Golden Rule, you know?

And even more to the point: what about hypothetical situations? I mean, the Golden Rule is basically 'treat someone as you would want to be treated', right? 'You would want to be treated': this is what's called a counterfactual, and it's basically asking for your description of what would happen in a possible world, not the real world, but the possible world where that counterfactual is actually factual.

And it's hard to get that right. It's hard to know what you would want in a particular situation unless you are in that situation. We see that all the time, where people say: well, I would not want to have them pull the plug on me if I was in that deathbed situation. And then you're in that deathbed situation, and you realize: oh, well, yeah, no, maybe I do want that kind of thing, right?

You know, or: I would not take the million dollars from that company, even if I knew I could get away with it. And then you're in a position where there's a million dollars on the table in front of you, all you have to do is pick it up, and a lot of people end up picking it up.

So the Golden Rule isn't a good principle. It's a nice idea, you know; it's an appeal to some kind of equity, and the recognition, as I used to like to say, that other people are as deep as you think you are, so a recognition of their humanity and their value in a non-monetary sense. But it's not a recognition of their uniqueness, their distinctiveness and their autonomy.

And that's a problem. There's another principle that's almost as widespread: 'from each according to his ability, to each according to his needs.' And for today's times, we can have a gender-neutral version of it, or we can even extend it to include things like animals and robots. It's, of course, the core principle of socialism.

It's kind of like a golden rule of economics, because it is a description of a society in which the dignity of each person is respected. You know, a society that gives to each person only according to their ability is very unfair, and I'm thinking here of people who don't have very much ability, like babies, invalids, the elderly, people who are disabled, etc.

But again, I need not say that there's been considerable objection to the socialist principle as well. So that comes back to: what does it mean to treat people with dignity? What does it mean to respect the individual value of each person? And there's a more meta problem with the Kantian approach, and that's just the idea of defining the good itself as reason.

You know, if reason is what makes what we decide good, then we have to ask: are those who have more reason than others intrinsically better? Now, one of the appeals of natural ethics is that it's the sort of ethics that any common person can come up with just by thinking about it for a bit.

So pretty much anybody, except perhaps infants and invalids, could come up with principles like 'you shouldn't lie', 'you shouldn't kill', etc., as morally good principles, just by their own innate capacity of reason. But what if you can't reason, and so are not really a moral agent in that sense? Are we better than that person? Or, for that matter, are we better than animals, because we can reason and they can't?

Do our conclusions somehow have greater ethical purpose or ethical worth than theirs? Is our struggle for survival inherently, ethically superior to, say, your dog's, or your horse's? Or flip that around: suppose the super-galacticans came to us and were demonstrably better at logic and reason than we are.

Because, after all, it's not like logic and reason are one unified systematic whole; there are all kinds of ways of doing logic and reasoning, which is a whole other issue, but we could talk about that. And so it's easily imaginable that super-galacticans could come, and they've solved logic and reason, and they have one single unified system.

Unfortunately, as in the Douglas Adams books, it means that Earth must be eliminated to make way for a bypass. Is that ethically right? Would we have to accept that? You know, the principle that Kant brings forward seems to suggest that we should. But that would be the end of humanity.

And that seems to me to be bad. But even more, it's just this idea of accepting reason as a value in and of itself, accepting reason as the locus of ethical good: that whatever is ethically good is so because we can arrive at it from reason. I've put a Ralph Steadman illustration in this slide to illustrate the opposite of that.

And again, there's no good reason to put a Ralph Steadman slide in there, but I did. But even more to the point: Ralph Steadman is the gonzo artist the way Hunter S. Thompson is the gonzo journalist, and the whole point of Hunter S. Thompson's journalism is that it's incredibly subjective and arguably insane, certainly drug-informed, and yet undeniably brilliant.

And that's the problem with reason: it's not the only game in town, and it's not simply that the alternatives are just, you know, luck and chance. The alternatives might produce the moral equivalent of a Ralph Steadman drawing, and, you know, from the point of view of reason we look at that and it seems repugnant.

But at a certain point we say: well, wait a second, that's a Ralph Steadman, and there's a lot more going on there than we thought. Another aspect to think about is autonomy. What's important in Kantian ethics is that we do not depend on an external moral authority to unveil moral law for us.

We discover it for ourselves. And yeah, I personally really like that principle; that's a big one for me, because I don't like to be simply told what's right and what's wrong. That goes back to the objection to duty that I raised at the beginning of this talk.

But how autonomous are we, really? And here I reference the Stanley Milgram experiment, where people were basically convinced to apply greater and greater and greater electric shocks to victims. Now, spoiler: they couldn't really administer electric shocks to the victims, but they thought they were, and that's what counts here. And if it's that easy to convince people to administer electric shocks to people, then how trustworthy is the autonomy of individual moral agents? You know, I mean, we think that they're going to come up with good ethical principles just by reflecting on them.

But, you know, folks' reflections can be manipulated; they might come up with actually very bad moral principles. And there are plenty of examples through history where that has happened, where entire populations have been swayed to believe moral principles that, objectively and with the hindsight of history, we now say were in fact very unethical. And that might even be happening now.

And part of the problem is: how can you know, how can you tell, how can you be sure that the ethical principle you think you understand and apprehend intuitively hasn't actually been fed to you slowly and carefully through an advertising campaign run by Bill Gates or the Koch brothers, or pick your villain, right?

And that's a problem. Now, we can manage for autonomy; we can develop social structures that preserve and promote real autonomy. I'm not sure if this diagram captures that. Does autonomy require trust? Does it require responsibility? You know, it's not clear that either of those is the case, and part of the difficulty is coming up with a good account of just what we mean by autonomy.

But certainly the lack of it would be fatal to, you know, any sort of theory of moral intuitions. Part of the problem with all of these theories is also being able to pick which theory applies. I talked earlier about the inhumanity of some of these moral principles, and we tried to address that by talking about viewing each person as inherently valuable.

But even so, you know, if we've got, say, a list of seven principles like we saw earlier on, which one applies? You know, it turns out that if you have a principle like 'do not lie', and you have a principle like 'do not murder', or 'do not let somebody be killed' might be a better way of putting it, then that can't be your morality, because those two principles can't both be followed at the same time without some, shall we say, contradictory outcomes.

And the Gestapo case is a perfect example of that, right? Let's say, through some trick, we thought that, no, overall, we're respecting people more by following the law and telling the truth than we are in lying, in this case, right?

So if we have a principle 'don't let people die', and we have a principle 'don't lie', or 'don't break the law', and we're faced with this Gestapo situation, then we're stuck. And so W.D. Ross came up with a concept known as prima facie duties. The idea here is that they're not strict laws in the sense of commandments or something like that; rather, the expression 'prima facie' means, you know, 'at first glance.'

So at first glance, it looks like it's a duty; you know, before considering anything else, this is a duty. But then, with a plurality of principles, in any given situation one might be overridden by another, and it really depends on the situation as to which one of these principles will ultimately take hold.

In the case of the Gestapo, the 'don't let people die' principle will be more important; but in another case the 'don't lie' principle might be more important, or the self-improvement principle, or the fidelity principle, or showing gratitude, or any of these others. That still leaves us with the problem of being able to find, you know, these seven principles or whatever.
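To make the contrast with strict rules concrete, here's a minimal sketch of prima facie duties as situation-weighted principles; the duty names and weights are invented for illustration, not Ross's actual theory:

```python
# A toy sketch of prima facie duties: each duty holds 'at first glance',
# and the situation determines which one overrides the others.
# The duties and weights below are invented placeholders.

def strongest_duty(situation_weights: dict[str, float]) -> str:
    """Pick the duty that carries the most weight in this situation."""
    return max(situation_weights, key=situation_weights.get)

# In the Gestapo case, 'don't let people die' outweighs honesty.
gestapo_case = {"honesty": 0.2, "non-maleficence": 0.9, "fidelity": 0.3}

# In an ordinary promise-keeping case, fidelity and honesty dominate.
promise_case = {"honesty": 0.7, "non-maleficence": 0.1, "fidelity": 0.8}

print(strongest_duty(gestapo_case))   # non-maleficence
print(strongest_duty(promise_case))   # fidelity
```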

And that difficulty, the issue of the genesis of these principles, is a problem. It's easy to say, oh yeah, they just spring into mind intuitively, but it's hard when different principles spring into different minds.

But if we accept the idea that any of these principles, in any person, are prima facie principles, and that we can sit down as reasonable people and discuss and determine what the most reasonable outcome would be in the face of these conflicting principles, then we could continue with a deontic system of ethics, a system of ethics based in reason, while at the same time finessing the problem of the origin of principles and of the organization of the priority of principles. So we come back to professional duties, which is where we landed when we were talking about ethical codes.

And we can look at these duties in kind of a different light. If we look at the illustration on the right, we see that we have 15 duties to clients that CFP professionals must follow. The primary duty, of course, is fiduciary, because we live in a world of finances and economics, but then we have the professional obligations of integrity, competence, diligence, etc.

There are duties around client interactions: to disclose and manage conflicts (that's their version of the conflict of interest policy), to provide information, to represent compensation appropriately, etc., right? And we can think of these not as laws, but as prima facie duties. They describe the sorts of things that ought to be important to a professional, but in such a way that a person in the profession is able to evaluate and weigh these principles and select the most important.

Even the primary fiduciary duty might take second place to some other duties, like, say, 'comply with the law'. And in fact, one of the issues that comes up in business ethics in general is that people interpret the fiduciary duty as being the overriding duty, actually overriding other duties such as complying with the law, and that is arguably a misunderstanding of business ethics.

And so, when somebody becomes a professional, or, to bring us back to our subject, when somebody undertakes the practice of using AI and analytics in a learning context, we have these values, we have these principles; we don't need to justify them, we don't need to argue for them, because everybody knows what they are.

We can sit down and reasonably think about them, and we don't even need a definitive list, because, you know, in any circumstance a reasonable person can come up with: here's a principle that applies in this case, right? We're collecting data; a principle that should apply here is consent. We all know this, right?

And here's another principle that applies in this case: accuracy; our data collection should be accurate. And now these two things are going to conflict with each other, right? If we get consent from people, that might impact the accuracy of the data. So what's more important? Well, it really depends on what we're collecting the data for.

And so the argument goes, right? That's how this kind of argument applies in the case of professional duties, and in the case of the ethics of analytics and AI, the ethics really of any practical, discipline-based system of ethics. And I think that's a good argument.
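As a toy illustration of that context-dependence, here's a sketch where the purpose of the data collection decides which principle leads; the purposes and the priority orderings are invented, not drawn from any actual policy:

```python
# A toy illustration of the consent-versus-accuracy trade-off described above.
# The purposes and priority orderings are invented for illustration only.

PRIORITIES = {
    # For grading decisions affecting individual students, consent leads.
    "student grading": ["consent", "accuracy"],
    # For anonymous, aggregate course improvement, accuracy may lead.
    "aggregate course analytics": ["accuracy", "consent"],
}

def leading_principle(purpose: str) -> str:
    """Which principle takes precedence depends on why we collect the data."""
    return PRIORITIES[purpose][0]

print(leading_principle("student grading"))             # consent
print(leading_principle("aggregate course analytics"))  # accuracy
```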

The place where it's not a good argument, I think, is in the idea that we can rely on reason alone in order to come up with these determinations. Because as soon as you say things like, well, it depends on the context, it depends on how important the research that we're doing is, etcetera, now we're appealing to something, some facts, outside our particular discipline, and that's where the problem comes in, right?

Kant would say: well, now you're just depending on accidental circumstances; if you're right in this case, it was just purely by luck, right? And you're making morality conditional and contextual, relative, he might say, and certainly I've seen that expressed.

But what's the resolution here? There isn't a way to simply use reason in order to weigh these prima facie duties. I mean, as soon as you start saying 'prima facie', you know, the reason part gets you to the first glance, but then you have to check your answer, and that involves looking at actual cases and actual people.

And yeah, that kind of gets at what is probably an overall problem with duty-based and rule-based systems generally, and it's that morality seems to be about more than that. Morality is more than simply following the rules, even if they're really good rules. You know, Hursthouse asks: if right action were determined by rules that any clever adolescent could apply correctly, how could this be so?

Why aren't there moral whiz kids the way there are mathematical and quasi-mathematical whiz kids? You know, I mean, why don't we see evidence of these super-reasonable, super-moral people that, you know, we could just treat as our moral authorities? If anything, when such a person shows up and claims to be such a person, we think of them more as cult leaders than anything else.

But more to the point: just simply following rules, following the dictates of reason, seems to go against our moral intuition. I mean, this is really effectively brought out through the narratives of Star Trek. Spock is this purely reasonable person, and he makes his determinations of the ethics of a situation according to various principles, and some of them are articulated through the course of Star Trek; 'infinite diversity in infinite combinations' is one of them, and there are others.

And even though Spock's reasoning comes from a really good place, you know, as a way to put behind them the violence of the original Vulcan race, it nonetheless seems to ring hollow to people, as just not capturing or grasping the humanity of ethics.

And we see the same kind of scenario play out in Star Trek: The Next Generation, except instead of Spock we've got an android, Data, who again is ruled by algorithm and principle. And again we have people suggesting this doesn't respect the humanity of the situation, despite Data's own efforts over time to become more human, to find, as he says, 'the human equation'.

And I think there's actually an argument offered by the writers of Star Trek in this situation to try to convince us that a robot actually could pull it off, although he might need an emotion chip.

I think that, you know, the arguments are well made, and it's not the case that no robot could ever be ethical; it's not the case that no AI could ever be ethical. I don't think that's what follows from this. But I think that what does follow from this is that no system of ethics based simply on reason, duty and principles could ever be ethical, just because, you know, like the ethics slide in this diagram, it feels too much like plugging text into a template and hoping that ethics pops out the other side.

As Fjeld and others write: 'the concept of ethical principles for AI has encountered pushback, both from ethicists, some of whom object to imprecise uses of the term in this context, as well as from some human rights practitioners who resist the recasting of fundamental human rights in this language.'

You know, it does go back to: how do we respect human dignity? How do we respect the worth and value of each person? I happen to believe that that's a good principle, that each individual human, each individual life for that matter, and by that I even include trees, has inherent value and worth: not in the financial sense, because that's just a stupid way to measure the value of life, but just in the sense that it has a right to exist. It determines its own value, and it's not the sort of thing, really, that we should be using or commoditizing for our own purposes.

But you can't capture that with a rule, or any number of rules. It's not the sort of thing that you can just pull out of the air with an algorithm. It's going to require something more. And I think that in a lot of the debates about the ethics of artificial intelligence, one of the key shortfalls of many of these discussions is that, coming from a certain technically oriented, machine-oriented perspective, the proponents don't necessarily grasp that ethics needs this thing that is more.

And again, that's why, in the discussion of ethical codes, I went well beyond just the ethics of artificial intelligence and analytics and went into other professions, like accounting and health care and teaching and journalism and the like, because in these professions the need for that something more, whatever it is, is that much more evident than it perhaps ever would be when you're working with and building purely artificial systems.

So that's what I've got to say on duty. I think it's a really interesting way of approaching ethics, and I think it says a lot of good things, but I think that the discussion of ethics does not begin or end with the principle of duty. I'm Stephen Downes; thank you for joining me, and I'll talk to you again next time.
