Unedited audio transcription by Google Recorder
Hi everyone. I'm Stephen Downes, and we're back again with another video from the course Ethics, Analytics and the Duty of Care. We're in module five, which focuses on approaches to ethics, and as you can see from the title, this is the video on consequentialism.
I'll touch my nose and adjust my glasses and get ready for the glare of video, and try to make this as interesting as possible. Although I admit, if you're not inherently interested in it, it can be pretty dull stuff. I personally think this is all fascinating stuff, and it's because I think it's fascinating that I want to go on about it for a while.
Normally, in a traditional connectivist course, I'd go out, find some resources for you, throw them into the MOOC, create newsletters out of them, and invite people to discuss those resources, and that would be that. That's a perfectly legitimate way of doing it.
And if you look at the slides, I have been doing that: there are resources on every single one of these slides, or pretty much every single one, all of which can be recommended, and eventually they'll all be incorporated into the MOOC website. But I also want to add to those resources, because I want to do something over and above simply pulling these things together.
I think we're working on something that's a bit new here, in the sense that we're bringing together three distinct topics: ethics, analytics, and the duty of care. And so when we visit some old subjects, like consequentialism in ethics, I think we're going to have some new things to say, or at least a new perspective to offer.
And that's why I want to do these videos. It does change how I go about doing these MOOCs. Maybe in a second round of this MOOC, and I'm thinking there will be one, we'll go back to the original way of doing it, and the videos will be available as a resource. We'll see how it goes. Maybe I'll never do this again, who knows, but I want to get these thoughts on the record for now.
So with that as a preliminary: today we're looking at consequentialism. Consequentialism is a catch-all phrase for a host of ethical theories, including some of the most widespread and well-regarded ethical theories today.
And as you might expect, they all stem from the concept of a consequence. As you can see from this definition, which I got from Google, which gets its definitions from Oxford Languages, a consequence is "the result or effect of an action or condition," or alternatively "relevance or importance." For example, the saying here, "the past is of no consequence," or another example, "he wanted to live a life of consequence."
Consequentialism is what we might call a kind of teleological ethics. Anything that's teleological is something that has an end, or an intended outcome, in mind. "Teleological" means goal-directed. There's a whole study of teleology, which concerns the goal or the meaning or the purpose of life, the universe and everything. Indeed, one of the big differences that I draw between a network and a system is that a system is teleological: it's a whole bunch of interacting parts moving with a goal or direction in mind, whereas a network is just a bunch of interconnected parts with no inherent goal, no inherent purpose to it. That makes it kind of hard to have an ethics. And so, to me, it's not surprising that people want their networks to be teleological, to be systems. In other words, they want society to be teleological (we unite around a flag and a way of life), and they want their ethics to be teleological. And I get that.
So, the concept of consequentialist ethics has its origin, at least in Western philosophy, with people like Epicurus, who articulated what might be called the pleasure principle.
Here I'm quoting from the Stanford Encyclopedia of Philosophy: a view of the goal of human life as "happiness, resulting from absence of physical pain and mental disturbance," combined with "an empiricist theory of knowledge: sensations, together with the perception of pleasure and pain, as infallible criteria." And you kind of need both parts, right? You need the sensation itself, otherwise you have nothing to build on. And then you need to say that some of these sensations are good and others of them are bad. The most obvious candidates here are pleasure and pain. So you say pleasure is good and pain is bad. Or you could say pain is bad and the absence of pain is good, or maybe pleasure is good and the absence of pleasure is bad. It's not altogether clear, even on first blush, how to articulate this.
So Epicurus was what they call a hedonist, which means that he taught that what is pleasurable is morally good and what is painful is morally evil. But as Wikipedia points out, "he idiosyncratically defined pleasure as the absence of suffering and taught that all humans should seek to attain the state of ataraxia, meaning untroubledness, the state in which a person is completely free from all pain or suffering." And that's not the same as hedonism as we understand it today, right? It's not hedonism as we understand it today. Now, I was going to put a sexy picture on the slide here, and I decided to go with sexy Greek men. Hedonism, as we now use the word, is a philosophy that much more reflects our idea of pleasure and the pleasures of the senses, the physical pleasures, for example.
And that doesn't include the absence of pain, unless pain is your pleasure, in which case it includes pain. But it sees pleasure as something more of a positive, something to be gained. There's a lot of discussion by Epicurus and others around that concept, around the original hedonism formed by Aristippus of Cyrene. His followers were called the Cyrenaics, and they went for this idea of pleasure as, you know, the physical pleasures. And you can see the appeal: who doesn't like a nice cold beer and a ball game, and maybe some popcorn or peanuts, or a nice warm sunny day? How can that be anything other than good? And something that produces that result could certainly be seen as something that's good. But there's this sense in which pursuing that for its own sake, is that really what hedonism is all about? Or is preventing the pain and the anguish that comes with just being a human what's more ethically good? And I don't think we ever get past this one particular distinction right here, but we'll keep plugging away nonetheless.
This idea of the absence of pain also reminds me of the Buddhist concept of dukkha, a Pali word most commonly translated into English as "suffering," or something like suffering.
That's the basis of the four noble truths of Buddhism, which include the existence of suffering, the nature of suffering, and how to end suffering. According to this philosophy, we are living beings trapped in a cycle of existence known as samsara. And in samsara we experience unbearable suffering because of the tight grip of our grasping self. It is wanting permanence in a world that is forever changing that results in suffering. It is wanting to be an unchanging, eternal being that makes us afraid and suffer at the thought of death. And the secret to escaping suffering is to cease this endless clinging.
That's not an uncommon sort of approach in philosophy, and indeed in many religions either: this idea that happiness is attained through mechanisms other than the pure physical pleasures, in effect through abstinence from the pure physical pleasures. Abstinence from clinging to these physical pleasures is what actually produces pleasure, or at least reduces pain. So I think there's a point to it, and I think that strand of reasoning, as well as the hedonist strand, is with us today, and we'll come back and talk about that more later on.
I thought about putting a slide in here with all the varieties of pleasure, but in a context like this it's actually a bit more accurate to think about the range and varieties of suffering.
Because there are different kinds of suffering, different degrees of suffering, and they impact people in different ways. It's interesting: I know some people who, if they're slightly hurt, their suffering is extreme. And on the other hand, I know people you could cut off their foot and they'd sort of go, "a minor inconvenience, to be sure," but you couldn't say they're suffering. And there's everything in between, right? I mean, it's interesting: the way we approach suffering, the way we allow it to impact us, also has this ethical dimension, I should say. I talked earlier about Doug Gilmour playing while hurt, and clearly he was suffering, but his ability to work toward a higher goal despite the suffering gives him ethical value, at least in some eyes, and I may as well say my own as well.
So we have these ranges, everything from impatience to annoyance to desperation to misery and agony, and we look at this list and you sort of want to ask: where do we draw the ethics line, if we were going to draw a line? I mean, is it unethical just to irritate someone? If I do this, is it unethical, or just annoying? But what if you cause sadness? What if you offend someone, but it wasn't something that would have offended you, is that unethical? I could say things right now, and I won't, but I could say things right now that would offend precisely half my audience and not the other half. And it's not clear to me that we can make a determination, one way or another, whether one of these is ethical or unethical. But okay, there's an intuitive idea here, right, that is worth pursuing: that the prevention of suffering and the promotion of pleasure does seem to be an overall good thing. That's why we have doctors. If it wasn't a good thing, we wouldn't really think there was any purpose to having doctors. So it does seem to matter.
The other aspect, or another aspect, of this entire discussion can be couched in terms of moderation.
Now here I cite Abu Bakr al-Razi, who is recognized in various sources as having a theory of pleasure. Now, the interpretations vary, and there are two interpretations that I present here. For our purposes it doesn't matter which one is the correct interpretation of al-Razi, but rather the fact that these positions exist. One is the idea that moderation becomes a value because the way to have the most pleasure is through moderation: going too far with a pleasure is more painful in the long run. And certainly there's no shortage of people who follow that philosophy, except, say, for Robert Heinlein, who says moderation is for monks; live to excess. On the other hand, al-Razi can be interpreted as saying pleasure is not the good to be sought in itself; pleasure can be had only as a result of a process of removing a harmful state. And that seems to be the more likely correct interpretation of him, given his stance on spirituality, and also given that his training and influence is as a physician, a doctor influenced by people like Galen, whose life's work is to remove harmful states.
So it's a positive act. It produces something that we might call pleasure, but it's the pleasure of living without pain. And there's an observation here that I think is relevant, and that is that in certain respects it's actually impossible to conceive of pleasure without corresponding pain. Indeed, we could argue that a person would not know what pleasure is without having experienced some sort of corresponding pain. This is the sort of thing we see in society. It's like, you know, the rich kid who's never known lack of anything in his or her life: they don't recognize what it is like for somebody to have to go without a meal, or not be able to fly to Paris in the spring. They just don't have a conception of that. Or even supporters of sports teams that have been very successful: they don't know what it's like to have an unsuccessful season. But on the other hand, one of the appeals of sports to me is the reality that your team's not always going to win (especially this year), and the idea that this makes it much more satisfying, that much sweeter, when you actually do win. Winning is more than just the absence of losing; winning is something that's a positive in its own right.
So that's the other side. And the problem with this idea of pleasure as only the removal of pain: if there is no pleasure, how can you know when the pain has ended? I don't think it's clear that you can. So you're going to have this balance either way, and you're going to be making this calculation either way. So in the end it doesn't matter which of these you want to support; you're still doing the same kind of thing, and that's why we lump them together under the heading of consequentialist theory.
So, what might be thought of as the next major move in consequentialist ethics is the representation of the objective, or the value, not as pleasure specifically, but rather as happiness.
And that's attributed to the Irish philosopher Francis Hutcheson, who says "that action is best, which procures the greatest happiness for the greatest numbers." That's probably the first original expression of the philosophy that has come to be called utilitarianism. The term itself is attributed to David Hume, who uses it to describe the pleasing consequences of actions as they impact people.
So now we have two concepts that we're working with. One is pleasure, which is directly tied to the sensations. And then we have something else, happiness, which is also tied to the sensations, but not quite in the same way, not quite the same connotation as pleasure. Again, though, we're looking at the outcome of an act, specifically that it produces happiness, as conferring ethical value on that act. Okay? No problem.
So that gives us utilitarianism, and there are two basic principles here. First, the consequentialist principle: the rightness or the wrongness of an act is determined by the goodness or the badness of the results. And second, the utility principle: the only thing that is good in itself is the specific state of pleasure, happiness, or welfare, or... and I'll just let that trail off there.
I got this little pigeon graphic from Google; I guess it was created by some thesaurus bot or something. One pigeon asks, "What are other words for utilitarianism?" and the other pigeon replies: pragmatism, advisability, benefit, convenience, effectiveness, fitness, helpfulness, opportunism.
Now, none of those are synonyms of utilitarianism, or even of utility, so that bot isn't exactly very smart. But they all express one or another aspect of this concept, and this list is particularly useful because, in the modern context, we see these things all the time. I don't talk about American pragmatism in this presentation specifically, but American pragmatism, that is to say the philosophy of Charles Sanders Peirce, William James, and John Dewey, forms the basis for a whole line of thinking about the pragmatic way of knowing: what is true is whatever works, right? So again, it's the outcome of the act, in this case whether it's possible, feasible, practical, and so on.
In business, in business writing, we see words like "benefit" and "fitness" used a lot. Sometimes they also use "efficacy," "effectiveness," "efficiency." These are all consequentialist principles being applied in certain circumstances, and if they are applied in a normative sense, that is to say, if they are applied in a sense where you can infer the rightness or wrongness of an action based on them, then they are expressing a kind of utilitarianism. So if you see an argument that an action is good because it produces a benefit, whether to oneself or for the corporation or whatever, that's consequentialism; that's utilitarianism.
We also see it discussed in terms of convenience. That often comes up in market studies: people pick the convenient option, or offering a product that provides greater convenience is a good thing. We see that presented a lot as a justification for a lot of products and services. And indeed, when we come back to artificial intelligence and analytics: benefit, helpfulness, convenience, fitness, all of these words come up over and over again. There's a very wide swath of utilitarian justifications and arguments in favour of AI and analytics. I mean, go back to the beginning of this course: it was my purpose in the module on applications of AI and analytics to show the benefit that people believe they realize from this technology. It wouldn't be an ethical issue at all, I argued and I still argue, if there were not beneficial consequences, if these technologies did not produce, in one way or another, pleasure, happiness, goodness of some sort.
So yeah, utilitarianism is a widely held theory today. But how do we calculate this? Because this is the thing. Go back to the definition.
The rightness or wrongness of an act is determined by the goodness or badness of the results. Well then, we need to be able to determine the goodness or the badness of the results. And how do we do that? I threw up a couple of diagrams here. One of them is from a paper suggesting that utilitarians, or at least Machiavellians, would approve of uploading brains to computers. I honestly don't know what they thought they were proving with this, but that's what the paper said. The other diagram, this one on relationships and environmental ethics in higher education, just shows the dense causal web of actions and interactions in a fairly narrow space. We've got cost-benefit accounting and all of that, but we've also got things like gratitude, social relationships, emotional safety, and all of that, in a context of complexity, uncertainty and challenge. How do we calculate all of that? It's a mess. And one of the major arguments against utilitarianism is that no person, nor indeed any society, is capable of making such calculations.
But let's take it as a hypothesis, just as a hypothesis for the sake of moving forward, that in the world of artificial intelligence a computer could do it, because the volume and the complexity of the calculations don't matter to a computer, especially one equipped with AI. So hypothetically, we could put the question to a computer and give the computer all the data it needs. It would come out with the result: x amount of happiness will be produced. And then that, in theory, should tell us the rightness or the wrongness of the action. So if we accept that as a hypothesis, we can dismiss the complexity argument as an objection to utilitarianism. And to be frank, I don't think anybody seriously advances the complexity argument as an argument against utilitarianism. It's one of these things, you know: they've got their other reasons, and then they'll pull out this reason too, just to add to the pile of reasons. But I don't think it's an actual objection, because I think, off the cuff, people know whether their actions are producing happiness or not.
And I don't think we need the calculation down to the last dime of happiness to know whether or not we're doing it. So I'm not so concerned about the calculating-utility argument. In any case, we have all the parameters. Jeremy Bentham, who you see here preserved in his dead state, came up with something called the felicific calculus (you can see the play on words: "felicific," "scientific"), also known as the hedonic calculus. And it's often commented that in utilitarian circles the unit of happiness is known as the "hedon." So one hedon is one unit of happiness, and we'll come back to that in a bit.
So there are seven variables that he brings forward. Intensity: how strong is the pleasure or the happiness? Duration: how long will it last? Certainty: how likely is it to happen, or is it really a long shot? Propinquity: how close is it? Are we going to get immediate gratification, or is this a case of deferred gratification? Fecundity: the probability that the act will be followed by sensations of the same kind. And if you wonder about that, think about taking drugs, right? You take drugs, they give you this high, pleasurable, but then you go into withdrawal, which is miserable. So that's not a good thing. The question is, if you take drugs, what's the probability that you'll keep on feeling happy, that the effects will be long-lasting and you won't be thrown into the pits of depression? That's similar to the idea of purity: the probability that it will not be followed by sensations of the opposite kind, regret, remorse. And then extent: how many people will be affected?
Now, with seven variables, here is the calculation problem, right? We can come up with all kinds of different ways of writing the calculus, and it's almost certain, no, it is certain, that not one of them will be the calculus. There is no E = mc² of happiness. And I think, back in Bentham's time, they really did think that there would be, you know, maybe not an E = mc², but certainly a Newton's principles of happiness. Because they were looking at this method being applied in other areas and it was working so well. Why wouldn't a similar sort of scientific approach to the calculation of ethics work? Why wouldn't it be the same? Now, we would probably say something very different; we have very different intuitions. But back then, coming up with a scientific formula was new for everything, so there wasn't any reason to suppose that we couldn't come up with one and couldn't run these calculations in some way to determine the ethical value of an act.
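To make the calculation problem concrete, here is a purely illustrative sketch of what one version of a hedonic calculus might look like in code. Bentham never specified a formula, so the linear scoring rule, the weights, and the example numbers below are all my own assumptions; only the seven variables themselves come from him.

```python
from dataclasses import dataclass

@dataclass
class Prospect:
    """One anticipated pleasure (positive intensity) or pain (negative),
    scored on Bentham's seven variables. The scales are assumptions."""
    intensity: float     # hedons per person; negative for pain
    duration: float      # how long it lasts (arbitrary time units)
    certainty: float     # probability it occurs, 0..1
    propinquity: float   # nearness in time, 0..1 (1 = immediate)
    fecundity: float     # chance of being followed by more of the same kind, 0..1
    purity: float        # chance of NOT being followed by opposite sensations, 0..1
    extent: int          # number of people affected

def hedons(p: Prospect) -> float:
    # One possible scoring rule: expected value, discounted by distance in
    # time, boosted by fecundity, damped by impurity, summed over everyone
    # affected. This particular form is an assumption, not Bentham's.
    base = p.intensity * p.duration * p.certainty * p.propinquity
    return base * (1 + p.fecundity) * p.purity * p.extent

def utility(prospects: list[Prospect]) -> float:
    """Net happiness of an act: sum the pleasures and pains it produces."""
    return sum(hedons(p) for p in prospects)

# The drug example from the text: a near-certain, immediate high for one
# person, followed by a likely, lingering withdrawal (negative intensity).
high = Prospect(intensity=8, duration=1, certainty=0.9,
                propinquity=1.0, fecundity=0.2, purity=0.1, extent=1)
withdrawal = Prospect(intensity=-6, duration=4, certainty=0.8,
                      propinquity=0.5, fecundity=0.7, purity=0.9, extent=1)

print(utility([high, withdrawal]))  # net hedons; negative means the act fails the test
```

The point in the text survives the sketch: any particular choice of scoring rule here is arbitrary, and a different but equally defensible rule can give a different verdict on the same act, which is exactly the "no E = mc² of happiness" problem.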
Well, if we don't get these calculations right, we produce some results that maybe are counterintuitive, and one example is Machiavellianism. Machiavelli long predates Jeremy Bentham, so Bentham should have thought about this. Basically, Machiavellians are characterized by the manipulation and exploitation of others, with a mocking disregard for morality and a focus on self-interest and deception. A recent American president could be characterized as Machiavellian, if not an especially effective one, but it's still the same idea here. A Machiavellian will say, basically, the end justifies the means, and that that's the reality of political life. And you certainly do hear that a lot, even among people that might otherwise be regarded as ethical and upright people. You know: they're great people, but they go into this political situation, and the end justifies the means, and they're going to do what they need to do, because that's politics.
And there are other people who just see all of their engagements in life this way. I put up a little graphic here, the signs of gaslighting, and I could have picked any number of different examples; I picked gaslighting because it was handy. Think of all the things that somebody who gaslights somebody does: their actions contradict their words; they break promises; they erode your self-esteem; they try to make you believe that something is the case even when your senses say it's not; they manipulate you; they deny that conversations or events ever happened, even though you know they did. Well, that's the ends justifying the means. That's consequentialism. Somebody who gaslights is trying to pursue something that they perceive as a good, namely their own happiness, and the ethics of it is, well, the ethics is whatever works. It's pragmatism. It's "all's fair in love and war." And that, I think, is an outcome no small number of people would find, to say it mildly, not ethically strong.
There are different kinds of pleasures, and John Stuart Mill, following up on Jeremy Bentham's work, famously wrote: "It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied." I think this is interesting because here we're going beyond a fairly straightforwardly sensory or sensation-based concept of pleasure or happiness and making it a broader concept. And on one hand, it's a concept that sometimes feels more intuitively appealing, right? I mean, you look at somebody who really is what we think of as a hedonist today: all they do is live for pleasure, that's it. And we think, yeah, they're happy, but it doesn't seem very meaningful. And we look at somebody who, even though they struggle, seems to be pursuing a higher good through writing literature or art. We have this concept of suffering for your art, of working for a higher outcome; even the Doug Gilmour example could sort of play out here. And I think a lot of people believe that; John Stuart Mill certainly did.
But at the same time, it creates more of a measurement problem, because we can directly determine whether or not we have sensations of pain or pleasure, but our sensations of whether we're happy from the higher pleasures are a bit less reliable, shall we say. If we're challenged by a difficult work of philosophy, are we really enjoying it, or do we just think we're enjoying it because we know we should be enjoying it, even though all we're feeling is pain? I think that's a good question. And that's what the Machiavelli example brings up: a Machiavellian or a gaslighter has some kind of higher pleasure in mind, and it overrides the pig-like sensations of pleasure and pain experienced by their victims or subjects. It doesn't matter if people are in pain because of starvation; we're working toward the higher value of a good society, or however they justify it in their heads. Now, we could just chalk this up to a calculation failure on their part, but on the other hand, it's really hard, within the context of utilitarianism, to come up with an argument against them. And the next case will show even more clearly why.
John Stuart Mill, in On Liberty, said: "The only freedom which deserves the name is that of pursuing our own good in our own way, so long as we do not attempt to deprive others of theirs, or impede their efforts to obtain it."
On one hand, this is something I embrace, because I think it is a matter of empirical observation that different people define the good in different ways. What is good for one person is not good for another person. For example, I enjoy cycling, but not everybody enjoys cycling. I know some people who do not enjoy cycling, and indeed even wonder why I would enjoy it. Other people enjoy cooking. For me, cooking is something I do in order to get food, and I do the minimum of that to get my food, but it's not something I enjoy for its own sake. Here we have, in the image, a man preferring that pleasure to his own: mowing the lawn. "It's more fun to mow with a Reo," from an old advertisement. Despite the women in the fancy car, no, that's not for me; I just prefer to mow my lawn.
For the other side of that, though, I've put up a little image of Fleetwood Mac's Rumours album, because there's a song on it, "Go Your Own Way," and it's a song about separation. And, you know, when each person defines their own good in their own way, there isn't this coming together toward a common good anymore.
Each person has gone off their own way, pursuing their own good. So although it's true that we each have our own good, maybe it's not good that we all have our own good, because here's the result, or at least one result: egoism (and the dude-bro on the right here). There are two kinds of egoism that we can draw out. Psychological egoism is the idea that the motive for all of our actions is self-interest, period, end of story; that just is a fact, according to psychological egoism. Compare that to ethical egoism, where the argument is that the motivation for all our actions should be self-interest. And you see the distinction between them, right? I think that psychological egoism is probably demonstrably, empirically false. I do think that, as a point of fact, some people perform actions which are contrary to their self-interest, or at least indifferent to their self-interest. A mother caring for her child, for example, isn't doing this just out of self-interest. Although, you know, you can rationalize anything, and there's no shortage of people out there who would argue: well, yeah, but she feels good, and she's satisfying herself when she takes care of her child, and that really is why she does it. Or: it's the innate instinct to care for a child, and by caring for the child she satisfies that innate instinct, and that is serving her self-interest. So you can twist and bend the argument around. But I think, in point of fact, not everything everyone does is for their own self-interest.
I have a thesis that I've talked about on various occasions in the past. It's called the butterfly thesis (not that butterfly thesis, it's a different one). You drive around, or cycle around, places here in Canada, and no small number of people have wooden butterflies attached to the front of their homes, just as decoration. And they actually spend money, because these are hard to make, so usually people pick them up at the local craft store or whatever. They're not getting anything out of doing that; nobody's paying them, nothing like that. They spend most of their time inside their house, so it's not like they're looking out and seeing these beautiful butterflies. They're doing it because it makes the neighbourhood nicer. And to me, that's a good example of an unrewarded action that people do anyway.
The other side of this is ethical egoism, the idea that all of our actions should be based on self-interest. And, you know, that has become a much more common argument in recent days and deserves some discussion on its own.
When I was young, someone called Iron Man was becoming popular and not so much in philosophical circles because philosophically, she's well, not believable, but, in political circles, and, and other discussion circles, and the argument was that basically promoting your own selfish, your own self-interest is good or in the words of the movie, Wall Street greed is good.
I certainly heard that now, in the movies, of course, the greedy person gets their comeuppance and spends time in jail. I never really enjoyed the fruits of their greediness, but he, when I know that the real world is not like the movies in that selfishness and greed is often richly rewarded.
And again, I didn't think of a recent present and it's hard to argue strictly on, utilitarian grounds against egoism. Particularly if you allow for a relativem of happiness and value, why should it you work towards your own self-interest? What actually obligates you to work for the good of someone else?
I mean, if I work toward my good and you work towards your good, arguably the maximum of good is being served. Certainly there's no easy way to say that it isn't, right? And in fact, if I sacrifice myself for you, there might be, in fact there will probably be, certainly from my perspective, an overall reduction of happiness in the world.
And, you know, there's no guarantee that what I'm doing is actually leading to your happiness. I mean, I might think I'm supporting your happiness, but probably I'll get it wrong, right? You know, the only person who can really decide what's good for you is you. And we see this argument made with respect to government all the time: government spending imposes its own value of what is good on a person, and really what should be done is to eliminate all taxation.
Let each person decide for themselves what to do with the money, because what they decide for themselves is most accurately going to reflect what they believe is good. Government is always going to fall short in this regard, or any charity, or any sort of common pool, or whatever. You know, it doesn't rule out interacting with other people, of course you do, but from an ethical perspective, your interactions with them are perfectly
ethical if they are motivated by self-interest. That's how the argument goes. It's a very strong argument, and it's an argument that has swayed a lot of people in the present day. I think that in the end it's unsuccessful, because I think in the end it's not possible to simply work only in your own self-interest.
But how do you couch that in terms of happiness and utility and consequences? It's not clear, and it's certainly not a slam-dunk case that you can just go out and say, well, look at what you've produced, isn't that terrible? Because what's been produced is, to a lot of people, not terrible.
This is especially the case if you combine egoism with a concept that Robert Trivers came up with in 1971, the idea of reciprocal altruism, which is a type of enlightened self-interest. The idea of enlightened self-interest is that you understand what deferred gratification means. I mean, you're not always trying to get the advantage in the exact present moment.
You can play the long game. And a lot of our characteristic examples of egoists, including the former US president, don't do that. They can't think beyond the next interaction with the next person and how to leverage that into some sort of benefit. But someone who has enlightened self-interest will work with other people, will use things like friendship, go beyond contractual obligation, perform altruistic acts,
get the good feelings that come from that, and, even more to the point, create this virtuous circle where all of our interactions lift all of us together, you know, where a rising tide floats all boats, something like that. Of course, a lowering tide lowers all boats, but that's what happens if you're not enlightened, right?
So again, this is how the corporate world works, because companies have a fiduciary duty to act in their own self-interest. And how can they justify that? How do they make it work? Well, through a process like this of cooperation: forming consortia, forming supply chains and networks, forming product-based or domain-based ecosystems, market
ecosystems, with the idea that this cooperation (certainly not collaboration; the cooperation) helps them all earn more money in the long run. And again, where on utilitarian grounds is there an argument against this? It is very difficult to come up with an argument against this. Well, here's how this worked out for me, and your results may vary.
In fact, just this morning, in a different context, I posted the following to Mastodon (Mastodon is like Twitter, but without the evil). And I wrote: the funny thing about time is that if you spend it on yourself, it will always feel like a waste of time, and it's only when we are doing things for others that our use of our time
seems meaningful. I put here an image from Bob Dylan's album Slow Train Coming, in particular reference to the song "Gotta Serve Somebody." And that's an important principle. And that makes, I think, a difference between the corporate practice, which I think most people would say isn't ethical or unethical, it's just amoral;
it has no moral value at all; all you do is seek to improve your finances, and there's no ethical value in that. But serving someone, whether as an individual or as a corporation, when I'm, you know, actually doing things for others, working towards some noble purpose, becoming, as they say, part of something that's bigger than ourselves: that's where this ethical feeling of value comes from.
Now, the question here is: is this a real kind of happiness, or is it just something I made up? And by that, what I mean is: does it correspond to real sensations, or something that I could at least in principle measure empirically, or at least recognize empirically? Or is it one of these things where I could never know for sure whether or not I was actually having that experience?
There's no way to know, no way to falsify, at least personally, a claim that I'm having that experience. And I think that's a good question. For me, personally, it made all the difference. You know, it's the difference between studying just to become smarter and studying so that I can apply the results of that studying to a good cause. You know, just becoming smarter seems pointless.
Applying it to a good cause is not pointless. And that is consequentialist thinking, but it's not egoist thinking, right? The value of the action, the ethical goodness of the action, the happiness of the action, comes not from serving myself but from serving someone else. Now, is it going to apply to everyone? Probably not.
I mean, indeed, it might be the basis for one of the fundamental divisions in our society, in that you can't achieve unanimity on that question. For some people, serving others creates pleasure; for other people, serving others does not create pleasure. And so you have two competing ethical systems with no real way of deciding between them.
Well, to address the problem of measurement, and to address even the problem of, you know, how are we going to calculate happiness, how are we going to distinguish between the value of egoism and whatever else, there is a principle we can appeal to called rule utilitarianism. This, again, goes back to Mill.
The idea here is that instead of evaluating the goodness or badness of actions on an act-by-act basis, which for one thing is difficult to do (nobody does it, really), and for another might lead to some unintuitive results, instead of doing that, you come up with rules, where if you follow the rule, that will result in more happiness overall than if you don't follow the rule.
So that relieves us of the pressure of doing all these calculations; all we need to do is get the rule right. And then, just as in the case of duties, we can have strong rules and weak rules. A weak rule is kind of equivalent to a prima facie duty: it's a rule, but it's kind of a recommendation, and might be overruled by other things, as compared to a strong rule, for which there are no exceptions, period, end of story. And clearly you can see, anytime you get into a rule-based system, you're probably going to want a little bit of fudge factor around the edges of a rule, because language really is a blunt instrument.
Well, there is the danger that a retreat, we'll call it that, into rule utilitarianism leads us almost immediately, inevitably, to moral conservatism. This is an argument that Kai Nielsen advanced, and it's the idea that there are some rules that it would always be wrong to break, no matter what the particular consequences.
And I've got the image of protesters in Texas, because the recent anti-abortion law in Texas is an example of this, where they just say abortion is wrong, period, end of story. Doesn't matter if you were raped, doesn't matter if the child would not be viable, doesn't matter if your own life is in danger.
The rule is the rule. And the thing with this sort of approach is that there's always a higher good that can be appealed to. Again, it's a consequentialist position, and it's the ultimate long-term bad consequences that argue for the inflexibility of the rule. Now, in the case of abortion, the principle here is life, right?
That's why they call them pro-life people, right? It's about preserving the sanctity of life, and the argument is that if you allow things that end life, you are eroding the sanctity of life. Now, that's a core principle of, especially, the Catholic Church: life is sacred. The Catholic Church has had, over the years, you know, prohibitions against, for example, suicide,
and of course the longstanding prohibition against murder, which dates back to even before Catholicism. So there is a higher good here, a higher kind of consequentialist good, and the good in this case is being used to justify the idea that this rule should never be broken.
Well, unfortunately, it's not obvious that that results in a position that is ethically defensible, because you need agreement on this higher good. And even if you agree that life is sacred, you know, even the people who oppose abortion can find exceptions to that: many of them will support the death penalty,
many of them will support the use of force by police, many of them will support the use of the military in international conflicts, just to name a few examples. So it's not the case that there is unanimity about this higher good, and it just becomes something that's very convenient, less and less about the consequences and more and more about the conservatism. Another aspect that gets raised
often in this context, and often from the perspective of moral conservatism, is the idea of responsibility. And we could do, you know, an entire course on the subject of responsibility. But essentially it amounts to the idea that individuals, and maybe corporations, and maybe governments, and maybe whatever, are ethically accountable for the consequences of their actions. And responsibility goes hand in hand with a consequentialist theory of ethics, right?
Because if you're not worried about the consequences, then you're not so worried about responsibility for the consequences. Everything depends on the intention of the act and not the result. That's what leads to unintuitive ethical consequences in other fields, right? We're talking about virtue ethics, or duty, which can result in, you know, somebody sticking to a particular duty, or promoting a certain aspect of character, and being completely ethical,
but ending up killing somebody, a bad consequence. And that's unintuitive, but it doesn't matter in those principles of ethics. But in utilitarianism and other consequentialist theories, it would matter; it does matter. And so people have to own up and take responsibility for their actions, which means dealing with the consequences, whatever that means. Sometimes it means accepting the punishment, because you can't fix the consequences; other times
it means paying reparations; sometimes it just means saying you're sorry, or accepting that, yeah, I did this, and it was wrong, and I promise not to do it again. You know, it varies; what we mean by taking responsibility varies a lot. But when we're talking about responsibility, it's relevant whether or not the consequences were predictable, whether or not the person intended the outcome, or, conversely, whether or not the person displayed indifference to a bad outcome.
So there's, you know, there's consequences and there's consequences. I've mentioned this before: if I step on a butterfly and cause it to rain in China, and then flood, I am not personally responsible for the costs of the flooding in China, and nobody would expect that I am, even if it was a direct consequence, and even if we could trace the causal path from that
butterfly to the flood in China. Nonetheless, nobody's going to ask me to pay for it, because, you know, it wasn't something that I ever thought or ever intended would happen.
And because intent matters a lot, even in a consequentialist theory, you can assign responsibility to someone even if the consequence never happened, because the intent does matter. It takes more of a rule-based approach, you know: pointing a gun at someone and pulling the trigger is an action that should be considered ethically wrong.
That's an example of a rule, which is why we can assign a penalty to attempted murder even though the consequence did not happen. We have a case here where it could have happened, it was predictable that it would have happened, and it was intended to happen by the person who pulled the trigger.
So responsibility doesn't include accidental consequences, and does include intended consequences that did not actually happen. And I think people are generally happy with that concept. There's always going to be someone who argues around the edges, but I don't think, for the most part, that people feel that they should be responsible for things that happen completely by accident.
Well, then, what are the problems with utilitarianism? I read Matthias Melcher's post yesterday or the day before, reviewing this and asking, well, what is it that bothers people about utilitarianism? What is the problem? And, you know, at first glance, it seems to make a lot of sense. Even, you know, even with the problems of Machiavellianism or egoism, we can work our way through that. And I think that's how we approach it, mostly: by trying to show that, in the long run,
Machiavellianism or egoism produce bad results for everybody. They produce more unhappiness than they produce happiness, whether in the simple sense that, you know, being selfish doesn't make you happy, or in the longer, broader sense that being selfish produces a dog-eat-dog kind of world that isn't really very pleasant to live in and witness, right?
So we can address those. But there are some really intractable problems for utilitarianism and consequentialist theories generally. I couch them here in a couple of sweeping generalizations. One of them is the question of the one versus the many; at least, that's how I'm characterizing it. I put that in the form of a few questions.
Here's one: is it better to give one person a million dollars or to give a million people one dollar? Well, yeah, the answer to that is they're both equal, right? But they're not, obviously. You give a person one dollar and their happiness is really only marginally improved, not by very much.
In fact, they might not even bother to bend over to pick it up. You know, on the other hand, a million dollars is life-changing, and allows that person not to worry about money for the rest of their life, and to spend their entire life doing good, however that may be conceived.
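The million-dollar comparison can be made concrete, and doing so shows how much the answer depends on the model you pick. Here is a minimal sketch, assuming a logarithmic utility-of-wealth function and a uniform baseline wealth of $10,000; both assumptions are invented for illustration, not anything claimed in the talk.

```python
import math

def log_utility_gain(wealth: float, transfer: float) -> float:
    """Happiness gained from receiving `transfer` dollars, under the
    (assumed, contestable) model utility = ln(wealth)."""
    return math.log(wealth + transfer) - math.log(wealth)

BASELINE = 10_000  # hypothetical starting wealth for everyone

# Option A: one person receives $1,000,000
one_big = log_utility_gain(BASELINE, 1_000_000)

# Option B: a million people each receive $1
many_small = 1_000_000 * log_utility_gain(BASELINE, 1)

print(f"one person, $1M:    {one_big:.2f} utils")
print(f"million people, $1: {many_small:.2f} utils")
```

Under this particular model the diffuse transfer wins by a wide margin; under a "life-changing threshold" model, the lump sum would win instead. The point is not which answer is right, but that the ranking is entirely an artifact of the utility function you choose, which is exactly the measurement problem under discussion.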
We see this argument used a lot by people arguing against taxing rich people, because, according to the argument, we could tax these rich people and collect a certain amount of money from them. But then, if we turn around and spread that money around the rest of the population, the amount is so small for each individual person that we're not really doing any good.
So there's no point taxing the rich person. We might as well let them keep the money and let them do the good that they're able to do with it. That's an argument, and it's not a bad argument. On the other hand, does that mean that having rich people in society is ethically good?
That's something, I think, a lot of people find a little less intuitive. But okay, maybe we can work our way around that. But let's try this one: is it worth the sacrifice of one life in order to save five? Now, this is Philippa Foot's trolley problem, of course. The trolley problem is: if you pull the switch, you're going to kill one person.
If you don't pull the switch, the trolley will continue on its path until it kills five. So the stickiness here is that you actually have to pull the switch; you've got to kill the person. And if you don't like it put that way, well then, you know, there's another example
I read in the Pojman and Fieser book. You come into a small town where there's an execution about to take place, and a bunch of people are lined up against the wall, and the firing squad is there, and they're all ready. And the captain comes up to you and says, oh, it's a special day that you're here.
I'll tell you what: these people here, they're all guilty, but since it's a special occasion, if you shoot one of them (we'll have you shoot one of them), we'll let the rest go. So do you shoot them? Well, I mean, most of the people up against the wall are going to say you should shoot them.
The captain, obviously, is going to say that you should shoot them. The firing squad, even, would say that you should shoot them, if only so that they don't have to be responsible for shooting people. Is it worth it? It's not a perfect example, but it's a hard question, because it's hard to actually put a measure of the value, of the happiness, of a human life.
Indeed, the question I ask is: is the hedon a common currency? And if you're wondering, those are two silver Heaton pennies that came from a place actually called Heaton in the UK. So there is a Heaton currency, but there are only three coins in existence.
Is the hedon a common currency? Can we, for example, trade a life to slightly improve the happiness of everyone else? We're not saving anybody's lives, just making them a bit happier. You know, for example, we could argue that everyone would be happier if I went and shot... I'm trying to think of somebody who everybody hates; I really shouldn't.
So let's just pick Charles Manson. Let's suppose, you know, everybody will be happier if Charles Manson doesn't exist anymore, so I'll go shoot him. And let's suppose that the calculation, which is done by our AI happiness calculator, actually works out to, yeah, it'd produce a lot of happiness in the world if I did that. Is it then ethically
right for me to go shoot Charles Manson? Well, you know, by that calculation, maybe I could just shoot an innocent person to make everyone happy. Especially an innocent rich person with no will: I'll shoot them and then take their money and give it away to people. With that, you know,
supposing that made people happy, would that work? Or, by contrast, are there things, like, say, a human life, that we can't express as a value, that we can't trade off in that way? And this is the difficulty of utilitarianism: it does invite the possibility of these tradeoffs.
You know, it's kind of like carbon pricing for the soul. Because, you know, we can start trading. You know, maybe we're going to agree, no, a life is, you know, infinite happiness. Okay, well, how about freedom? If I enslave a certain portion of the population, that would certainly make other people happier, because they'd be richer, because they'd get all this free labor. Would the economics of that work? Is that okay?
For a long time, the economics of that did work, and at the time, people thought of it as ethically fine to have slaves. Today we don't think so, but it's not just because the calculation changed. You know, freedom of speech: our society would be a lot calmer, a lot more harmonious, if we didn't have freedom of speech. That argument could be put forward, and has been put forward.
In many cases, you know, freedom from arbitrary search and detention, you know, or any number of other actions, you could run the numbers and get the calculation to come out your way, right? You know, maybe you don't deny freedom of speech to everybody, you just deny it to a certain subgroup of society, and that could produce the result.
You know, if I squelch the freedom of anti-vaxxers to be anti-vax, that increases the happiness in society, because it makes it less likely that people will resist being vaccinated, say. Oh, now that argument's sounding a little bit better, isn't it? What if I shot the anti-vaxxers? That would also have the same effect.
Maybe that's too strong. See, that's the problem, right? It's hard to think of ethics in those sorts of terms. So it's not the question of the one versus the many, either way I've depicted it. It's calculating this versus that that creates the problem, and it seems like ethics shouldn't be that kind of thing.
And those were cases where we agree about the calculations. What about cases where we disagree? And there are two types of this: one, where it's an internal disagreement, and two, where it's an external disagreement. The diagram on the right demonstrates an internal disagreement. Same government in both cases: on one hand, the government is saying "peace on earth,
good will toward men"; on the other hand, the government is saying "war ammunition for sale, orders filled promptly." So, to that particular government, both of those are ethical values. Both of those produce good and benefit, right? Peace is good for everyone, but so are good sales. So we have this conflict, and it's not clear how to resolve this conflict.
Similarly, we can have two distinct people with conflicting calculations of happiness, and that's the case in the anti-vaxxer example, right? Some people will agree: yes, we should shut the anti-vaxxers up, because that'll produce more happiness in society. Other people will say: no, shutting people up, in the long run, will produce less happiness in society, because we as a democracy depend on being able to express these minority opinions.
How do you do the calculation? That kind of question comes up all the time. It has to do with, you know, any time people are talking about political correctness, or being cancelled, or burning books, as, again, Texas brings us another example. It's a question of balancing these two objectives: on the one hand, the speech, or the book, or whatever, seems to produce a harm, but on the other hand, squelching the speech or burning the book also produces a harm.
How do we decide? And underlying this is the question of whether there really is an objective standard of happiness, an objective standard of what counts as good. It really does seem to just depend on your point of view, and that's a problem for anything that presents itself as an empirical approach to addressing the question of morality.
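That worry, that two evaluators can run the very same numbers and reach opposite conclusions, can be sketched directly. In this toy Python example, two hypothetical "happiness calculators" score the same two policies from the anti-vaxxer case; every policy score and every weight here is invented for illustration, and the point is only that the disagreement lives in the weights, not in the arithmetic.

```python
# Each policy's (invented) impact on two values society cares about.
policies = {
    "restrict_speech": {"public_health": 0.9, "free_expression": 0.1},
    "allow_speech":    {"public_health": 0.5, "free_expression": 0.9},
}

# Two "AI happiness calculators" that differ only in how much
# weight they give each value. These weightings are assumptions.
calculator_a = {"public_health": 0.8, "free_expression": 0.2}
calculator_b = {"public_health": 0.3, "free_expression": 0.7}

def score(policy_values: dict, weights: dict) -> float:
    """Weighted-sum 'happiness' of a policy under one calculator."""
    return sum(policy_values[k] * weights[k] for k in weights)

recommendations = {}
for name, weights in [("A", calculator_a), ("B", calculator_b)]:
    best = max(policies, key=lambda p: score(policies[p], weights))
    recommendations[name] = best
    print(f"Calculator {name} recommends: {best}")
```

Both calculators do flawless utilitarian arithmetic over identical data, yet one recommends restricting the speech and the other recommends allowing it. Nothing inside the calculation can settle which weighting is the "objective" one.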
Even if we could have our ethical AI system run the numbers, different AIs will produce different results, and that leaves us with a problem, unless we can somehow, all of us, get together and determine what the actual objective standard for happiness is. Now, in our society, it's money. And I've been presented, in my own work, with that argument a lot of the time, right?
I need to show what the benefit of my work is, and the only way to show what the benefit of my work is, is to show how much people are willing to pay for it. Now, happily, that hasn't been the prevailing sentiment over the 20 years I've worked for this one organization, but it has come up from time to time.
It's certainly something that I see expressed a lot. But it sets the stage for what I think is the final, ultimate objection to consequence-based theories in general and utilitarianism in particular, and that's the question of moral luck. All right, think about it this way, because I see this happen
a lot: a person goes out, gets drunk to the gills, gets in their car, drives down the highway, and kills somebody, and spends several years in jail. On the other hand, on the very same road, from the very same bar, another person gets drunk to the gills, gets behind the wheel,
drives down the highway, and nothing happens. No time in jail. The two acts are identical; the only difference between them is a matter of luck. Why do I say luck? Well, because they're drunk to the gills. They're not capable of hitting or avoiding anyone. I mean, that's why drunk driving is a crime, right?
Because you are not, in fact, in possession of the ability to drive. So it is a matter of luck whether you hit someone or didn't hit someone. But we address these consequences differently, and that seems odd. We address these cases differently because the result was different, and we put one in jail.
We don't put the other in jail, unless somehow they're caught on something unrelated. And that seems like luck, and it doesn't seem to me that ethics should depend on luck. I put a diagram here as part of this final slide: how self-made billionaires got their start. Right? So we have Bill Gates, whose mom sat on the same board as the CEO of IBM and convinced him to take a risk on her son's new company.
Or we have (I'm not sure who that is, who started Amazon) Jeff Bezos, who started Amazon with $300,000 in seed capital from his parents and more money from other rich friends. I think this is Warren Buffett, but I'm not sure: the son of a powerful congressman who owned an investment company. Or Elon Musk, the self-made
billionaire whose dad happened to own an emerald mine in apartheid South Africa. Now, I raise these examples because people like this, first of all, just in and of themselves, are very often depicted as instantiating ethical goodness. But certainly, they take their money and they do things like start foundations
with it, or even pay their taxes with it, and people applaud the ethical virtue of this. But they are in their position simply because of luck. The fact is, one person is super rich and can spend a ton of money addressing disease, and another person is dirt poor and couldn't spend a dime doing that.
There's no ethical difference between them. One person was just lucky enough to have all that money to spend; the other person wasn't. And that's generally what characterizes utilitarianism, one way or another: the difference between an ethical action and an unethical action, when it is evaluated strictly according to the consequences of that action, or even of that type of action, is a matter of luck, no matter what a person's intents were,
no matter what a person's means were. It's a question of luck. And I find that coming up with a system of morality that is based on consequences, and therefore assigns outwardly extra-large ethical value to the extra-large actions and contributions of the rich and powerful, is very convenient for the rich and powerful. When you only look at the outcome, it allows you to translate being powerful into being good.
In a consequentialist theory, power becomes goodness. And that doesn't seem to me to be an ethical theory. It's a theory, I won't deny that it's a theory; it's a way of calculating how much, maybe, society finds worth or value in an act, or in a person, or whatever. But, you know, I don't see it as determining the ethics of an action simply to look at where that person just happened to find themselves
and what that person just happened to do as a result of that. So, as I say here on the slide: whatever utilitarianism is, it's not ethics. So that's my presentation on consequentialist theories. I hope you enjoyed it. I hope it was informative, first of all, and filled in some of the background on where this line of thinking comes from.
And I hope it also offered some thoughts about where this line of reasoning can go wrong, and why, and, you know, if we take this theory now and apply it to artificial intelligence, why it won't work for artificial intelligence, like the other theories. And so I'll leave the discussion there, and I'm sure we'll have more to say on this as time goes on and as we get into the other sections of the course.
So thank you for joining me. I'm Stephen Downes, and I'll see you next time.