
How to Add Your Blog to the Course

Transcript of How to Add Your Blog to the Course

Okay, so now I'm recording the audio. And I'll just put a comment in here. If you can't see the video, reload this page, I'll put that in. And that'll make sure that people who may be watching can see this page.

So, as usual, there are hitches. You know, like I say, this is a one-man operation, sometimes held together by binder twine. But on the other hand, it's costing almost nothing to do, and it's open and accessible to everyone.

You know, I could have a nice all-in-one solution that works really well. (If you can't see the video, reload the page; I'm trying to do too many things at once.) If you have a nice all-in-one application, sure, then it looks seamless. But if you're trying to do a bunch of different stuff, less so.

Okay, so what do I want to do right now? I'm just going to minimize some stuff here. What I want to do today is talk about how you can participate in the course. And to participate in the course, there are really two steps... well, three steps, but really two:

  1. first of all, create your blog, if you already have one, great;
  2. and then secondly, submit your blog RSS feed to the course. So we'll show you how to do that;
  3. and then the third step is, of course, actually write something in your blog relevant to the course.

So first of all, to create your blog, I'm going to use something that I don't normally use: Microsoft Edge. And I don't want "what's new", so I'll just try to ignore that. So a good place to go to create a blog is Blogger, https://www.blogger.com. And you just click Create your blog. And you can choose an account, or you can use some other account, if you already have a Google account.

I'm going to use my existing Google account. And so I'll enter my password. If you don't have a Google account, create a Google account. And if you don't want to create a Google account, then you'll want to use a different blog platform. And I'll show you a couple before we get too far into this video.

So I'll enter my password. Notice it's a pretty long password. That's a good habit to get into. You know, for each extra character in your password, you've made it that much more difficult for people to crack it.

So these are my existing blogs. But if I wanted to make a new one (trying to figure out how to do that here, I probably should have created a new account, shouldn't I have, but yeah, I don't know where to create a new blog...) There must be a way... new post... new blog! And that's what will show up right at the top if you're brand new.

So here's a new blog. I'll just call it "Ethics, Analytics (and I'll try to spell analytics correctly) and the Duty of Care". Okay, so we'll go next, and we'll choose a URL. So let's try 'ethics21', because that would work really well. No, it's not available; someone already took it. 'ethics2021': that one's available, so we'll go with that. And sorry if you were planning to use that, but that's it.

That's all we need to do to create my blog. Now, in Blogger, in your blog, let's just create a quick new post: "Welcome to my blog". All right, I really can't spell: "Hellow world". And I'll publish. I always felt that hello should have a 'w' on the end. So okay, welcome to my blog, and I've published it.

Now I can view this post. And notice the URL here, ethics2021, which is what I chose. And my blog URL, which you can get just by clicking on the title of the blog, is https://ethics2021.blogspot.com.

My blog is created; that's step one. Now, if you don't want to use Blogger, or a Google property, you could use Tumblr (oops, not with an 'e': tumblr.com, no 'e', right). And again, you can sign up or log in, or you can continue with a Google ID or an Apple ID as well. Wordpress.com is another blogging platform you can use. And there are other blogging platforms; I won't give you a whole list, although it might be a good idea to provide a list. You can choose any blogging platform you want. There's no limit; you're not locked into a particular platform here. You could even write your own blogging platform and use that. In fact, I've done that in the past.

So anyhow, step one: create a blog on any one of these platforms. So, step two. We're going to go to the course, ethics.mooc.ca, and notice I'm still using Edge. The reason why I'm using Edge here is that I'm not logged into anything on Edge (well, except for my blog now), so I'm not benefiting from special access that I have and you don't.

So I'm showing this to you. So here, down here, is where we're going to submit our blog to the course. So I'll click on Submit Feed. Okay, whoops, that went a bit too fast there. So here's the list of blogging software; you can also use Edublogs or Medium. Now, if you can't find yours here, that's fine.

And you might not know where your RSS feed is, or even what an RSS feed is; it doesn't matter. Click on this link right here, the OERu Feed Finder. Okay, now, here, get the address of your blog. Copy it, and then what we'll do is go to this page and put it in here. So I'm just using Ctrl-V to paste it in, and then submit. So it's going to verify that it's a real site, it'll look for the feed, and it's found two: Atom and RSS. It won't matter which one you use. So I'm just going to use this one, which is Atom... oh, it won't matter; I'll use RSS, because I just feel better with RSS, and because I tested it and I know it works. But the other one also works. Copy. All right, we'll come back to that page. Okay. So here is the link, the RSS link.
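
As an aside, the reason a tool like the Feed Finder works is that blogging platforms advertise their feeds in the page's HTML using "autodiscovery" links. Here's a minimal sketch of that mechanism in Python; this is illustrative only, not the OERu tool's actual code, and the sample page is invented to resemble a Blogger blog:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# Feed types conventionally advertised via <link rel="alternate"> autodiscovery
FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkFinder(HTMLParser):
    """Collect RSS/Atom autodiscovery links from a page's HTML."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link"
                and a.get("rel", "").lower() == "alternate"
                and a.get("type", "").lower() in FEED_TYPES):
            # Resolve relative hrefs against the page's own URL
            self.feeds.append(urljoin(self.base_url, a.get("href", "")))

def find_feeds(html_text, base_url):
    finder = FeedLinkFinder(base_url)
    finder.feed(html_text)
    return finder.feeds

# Hypothetical page modelled on what a Blogger blog's head typically contains
page = """<html><head><title>Ethics, Analytics and the Duty of Care</title>
<link rel="alternate" type="application/atom+xml" href="/feeds/posts/default"/>
<link rel="alternate" type="application/rss+xml" href="/feeds/posts/default?alt=rss"/>
</head><body></body></html>"""

print(find_feeds(page, "https://ethics2021.blogspot.com/"))
```

Blogger exposes both an Atom feed and an RSS variant of the same content, which is consistent with the two options (Atom and RSS) the Feed Finder turns up in the video.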

Now what's the title? Well, the title is what I called it, right? Ethics, Analytics and the Duty of Care. So I'm going to copy that. And I'll put that into my form. Just making this bigger so you can see.

And then the web page. This is the web page. So I'll copy that. And I'll put that in my form.

And then, optionally, your name. It can be your name, it can be some other name that isn't really yours; it can be a pseudonym (I'm not really going to be checking for that). I'm just going to put my own in, so that you know it's me. And then submit. And you're done; that's all you need to do.

So let's go back now to the course outline. And you'll notice the course tag is ethics21. So this is the third step. Now we'll go back to the Blogger application where I'm writing my blog, and I'll create a new post. So we'll just call it "New Post", and I'll write "test new post", and then put the tag (whoops, that's not the tag... so, put the tag) either in the post, #ethics21, or in the title, or as one of the labels for your post. In other blogging platforms it might say categories or something like that. Any one of these three things will work; they each correspond to different data elements, and the course platform checks for each of them. And then publish your post.

And it's done. So I'm not going to actually harvest this post, so you don't get this ugly post in your newsletter. But that's all you have to do.
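
In RSS terms, those three data elements are the item's title, its description (the post body), and its category elements (Blogger's "labels"). Here's a minimal sketch in Python of the kind of check a harvester might do; this is illustrative only, not the actual course platform's code, and the sample feed is invented:

```python
import xml.etree.ElementTree as ET

COURSE_TAG = "ethics21"

def has_course_tag(item):
    """True if the course tag appears in the item's title, its body,
    or its labels (RSS <category> elements)."""
    title = item.findtext("title", default="")
    body = item.findtext("description", default="")
    labels = [c.text or "" for c in item.findall("category")]
    return ("#" + COURSE_TAG in title
            or "#" + COURSE_TAG in body
            or any(l.strip().lower() == COURSE_TAG for l in labels))

# Invented two-item feed: the first post is tagged in its body, the second isn't
rss = """<rss version="2.0"><channel>
<item><title>New Post</title>
  <description>test new post #ethics21</description></item>
<item><title>Unrelated</title>
  <description>nothing to see here</description>
  <category>cooking</category></item>
</channel></rss>"""

items = ET.fromstring(rss).find("channel").findall("item")
print([has_course_tag(i) for i in items])
```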

If you do those three things, then in the course newsletter, the link that you posted will show up, and I might change the format ever so slightly, but you see here, Matthias Melcher already did the process that I've just shown you; here's the link. And when I click on this link, here it is, and now I can read his article.

Similarly, if you're receiving the newsletter by email, you will see this link; you click on the link, and you can go straight to his post. So that's all you have to do to participate in the course. So you might want to start doing that. I'm going to assign this as a task later on, and I'll talk about tasks in one of the upcoming videos, so you don't need to feel you need to do that today. But if you do, then you will be in a position to take part in the course using your platform of choice, such that what you contribute to the course will be visible to everybody else who's taking the course.

And that's it for today. That's all I have for the course for today. So I'm going to wrap up this video and we'll save that and we'll login and we'll make sure that this video goes out in the newsletter as well. Because that's how the course is going to work. I'm going to present you know these small short videos that will show up during the week in between the live sessions. So thanks a lot for joining me and I'll talk to you soon. Bye.

Transcribed by https://otter.ai and moderately edited by me (the transcription was pretty good).


How to Follow the Course

Transcript of How to Follow the Course

Auto-transcribed by Google Pixel Recorder - Not yet edited

Okay, welcome everyone. It's 12 noon, and I'm afraid I tried to quickly eat an egg sandwich before I started, so now I have the hiccups. My apologies for that. So you may hear hiccuping sounds as I do this short talk presentation. If you're watching, I'm watching the comments.

If you're watching on YouTube, I'm watching the YouTube comments, and I've already got one from Tyler. I'll also be watching the comments in the activity center, where you can also be watching this; and the activity center, of course, is right here. Let me put that on... there it is.

I'll just leave the sound off so that I don't get echoing, but you can just access this stream in the activity center. Of course, if you're already here, you know that already. Anyhow, this session will cover how to follow the course. Now, if you're already subscribed by email... I'm just going to come back here.

No, not there. Here we go. If you're subscribed by email, then of course you can just follow the course through your email newsletter. But maybe you don't want to use email; maybe you want an alternative to email, or maybe you just like different ways of following the same thing.

That's fine. There are many different ways you can follow the course, and I'm going to talk about those in this session. So, the newsletter: click on the course newsletter, and then click on this link here to subscribe. It'll take you to a subscription form. You fill in the form.

All I really need is the email address; first name and last name are completely optional. And then you need to check 'email' to give me GDPR permission to send you email. Then just click subscribe, you'll get a confirmation in your email, and you're subscribed. This is a mailing list that you can unsubscribe from at any time.

There's a link at the bottom of every email, and all you need to do is click on unsubscribe and you're gone. So that's email, and again, I think most of you probably already know that. But let's come back: there are also course feeds. So, the course feed can be seen here. In this feed, what I do is harvest other people's feeds, including my own, where I may contribute my own information, and you can subscribe to the course feed via RSS itself. In fact, I don't think I have a page for that.

That's interesting; I ought to make a page for that. Oh my, let's go to the course newsletter again. Here we go. I should mention the newsletter is available on the web. Here it is: it's just ethics.mooc.ca, slash, course newsletter dot htm. And it's also available via RSS; I really should make this more prominent here.

So this is the course RSS feed, this feed; we can see it here. It's updated every day... actually, it's updated every hour, and it contains the posts that you, the participants in the course, have contributed via your blogs (we talked yesterday about how to contribute your blog to the course), plus any presentations; for example, this presentation will be available and accessible through the RSS feed.

And then the other course materials: there's a bunch of different materials that may be offered, that will be offered in fact, as we go through the course, and they will all be made available through this RSS feed. So the question is, how do you subscribe to it? Well, what you need is an RSS reader, so I'll just type 'RSS reader' into Google.

There are various readers. Okay, here are "the best free RSS reader apps in 2021" according to Zapier. Let's see which ones they recommend: Feedly, NewsBlur, Inoreader, The Old Reader, and Feeder. So, Feedly is the one that I use personally, so I'll click on Feedly. Actually, I'll open Feedly in a browser where I'm not currently signed in to Feedly, so you can get a sense of what that looks like.

Feedly... and so here's what it looks like. Oh, I am signed in; how about that? Okay, let's try Internet Explorer. Really, no; I haven't used Internet Explorer in a while, and I don't even have it loaded here. Chrome, let's try. I might be signed in here as well, but we'll see.

So, Feedly, same URL. Okay, I'm still signed in. Okay, let me log out here and we'll see what it looks like when you first sign in. Where can I log out? Ah, Chrome, so I don't know where anything is. Here we go, down here: log out. All right, this is what Feedly looks like.

When you just go to it to begin with, you just click on 'Get started'. You can create a free Feedly account (they are free) and either continue with Google, or continue with Apple, or choose other ways to sign up; it also offers a way to sign up with your Microsoft account.

What I'm looking for is a way to sign up without any of those. There we go: just enter your email address, then you'll create a password, and you can sign in. So, Feedly looks like this; this is my Feedly. To add an RSS feed to Feedly, you click on the plus.

On the left-hand side of the page here. So click on that. Now let's come back to the course, which is here somewhere.

Okay, this is the RSS feed. So I could either copy the address from the top or, from this course newsletter page, I can just right-click where it says RSS and copy the link; or I could just enter the URL for the course itself. We come back into Feedly, submit it, and you'll see the source that comes up; click subscribe.

Here it is. And it gives you the option to follow, and you now have an option to put it in one of your categories. I'll just put it in favorites, because it is my favorite, and you are now subscribed. So what happens then is, as new content is added to the RSS feed,

we can look at that new content here. So here I am in favorites. I'll just make this bigger; it's actually too hard for me to work when the text is small. So I actually have 42 favorites, and I'm just trying to find it in the list. It'll be down at the bottom here,

probably. There it is. So here's the feed; notice you can easily see the course icon. So we click on that, and here is the item from yesterday, "How to Add Your Blog to the Course". Here's the post that Matthias Melcher wrote yesterday; here is another piece of information.

Here's another post that someone else has already written. Here's the session for the course; you can watch the video. Anything in the course will be available. The Old Reader is another one that I've used; it works very much the same way: click 'add a subscription', again the same URL, paste that in, in this case click plus, and now I'm subscribed to it.

And here we are; you can add your own content. Matthias Melcher has already submitted stuff. He has an RSS tool that he uses on his laptop; it's called QuiteRSS, so you can download it and install it. So I'll download it here, just using the installer for Windows, and we'll save it.

It should be here now. There it is. So I'll set it up. So now I'm installing QuiteRSS on my desktop. I'm doing this in real time so that you can actually see this happening, because it is very quick to install. And here it is; it opened up for me automatically.

Now, here's my desktop RSS reader solution: not even an account, no tracking or anything like that. So if I want to add a new feed,

here's the URL; it actually remembers it from my clipboard. That's pretty cool. There's no authentication required for this feed. I'll click next, it'll go in under 'all feeds', I'll click finish, and here it is. So now, in this desktop application, I can read all the contents of the course,

and of course follow the links, and it actually displays right in the feed reader. Isn't that nice? So you don't even need to use a cloud-based service; you can use this desktop application to access all the contents of the course through RSS. One more thing to add to this, and it's another way of following the content of the course.

So we'll come back to the course, and that's through the course podcast. Yes, there is a course podcast; of course there is. Oh, I need to update this page. Well, let's go to the course outline. And the podcast is listed here, right under videos. We'll click on that.

So this is the web page for the podcast, and each one of these entries, each episode if you will, is listed here. Now, right now the podcast only includes one presentation, and it's the presentation from yesterday; here it is. Tomorrow it'll have two: it'll have the presentation you're listening to right now,

plus the one from yesterday. Now, the thing about podcasts is that there is an RSS feed for podcasts, and there it is. So this is the RSS feed for the podcast, and the way podcasts work is they look for a file in what's called an enclosure, right in here in the RSS feed.
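
Concretely, an enclosure is an element inside each RSS item giving the media file's URL, size, and MIME type; that's what podcast apps look for. A small sketch in Python (the sample feed is invented for illustration; the episode URL is hypothetical, not the course's real audio file):

```python
import xml.etree.ElementTree as ET

def enclosures(rss_text):
    """Return (episode title, media URL, MIME type) for each item
    that carries an <enclosure> element."""
    out = []
    for item in ET.fromstring(rss_text).iter("item"):
        enc = item.find("enclosure")
        if enc is not None:
            out.append((item.findtext("title", default=""),
                        enc.get("url"), enc.get("type")))
    return out

# Invented feed snippet; the episode URL is illustrative only
rss = """<rss version="2.0"><channel>
<title>Ethics, Analytics and the Duty of Care</title>
<item>
  <title>How to Add Your Blog to the Course</title>
  <enclosure url="https://ethics.mooc.ca/audio/episode1.mp3"
             length="12345678" type="audio/mpeg"/>
</item>
</channel></rss>"""

print(enclosures(rss))
```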

So then, the way you subscribe to a podcast is to find your favorite podcast application. Okay, podcast app. So I'll Google podcast apps; Google Play and all that won't help me here. Mostly people use their mobile phones for podcasts. I wonder if I can just do that really quickly.

Let's see: mobile connect, and dashboard mirroring, "see your phone screen on your PC". Send request. Okay, activate. Whoops. Activate. I need to send... there's a... let's see. Now I'm trying to open it up on my phone, and there we go.

Okay, send request here. We'll start now. There we go. All right, so now we're looking at my mobile phone screen; sorry for the delay in putting that up there. So, personally, I use Pocket Casts. You can see some of the things that I subscribe to: the Vergecast, and Front Burner, and a bunch of things.

So, if I want to add a new podcast,

I should be able to just add it. No... I see: discover, search podcasts; they just have their own list of podcasts. There's going to be a way for me to enter the URL; I'm sure there is, I just don't know what it is.

No. What's this? Nope. Okay. Well, these are the perils of doing unrehearsed things on live television. There should be a way; the problem is it's hard for me to see behind that screen sharing icon there.

Okay, no. Okay, here we go. Ah, that's 'cast to'. So I could cast it to my television or something like that.

Well, I'm sorry, I don't know how to do it with this application. I thought there was a way, but apparently there isn't. So let's find another podcast application on Google Play, because I use Android; if you use an iPhone, use your own applications, I can't help you. So, we use Google Play.

And we'll look for podcasts.

So there we go, and we'll look for podcast players, podcast apps. So, install this... this is Castbox. That was an ad; maybe I shouldn't use that one. Podcast player... here we go, this looks good, so install it. Okay, well, we're still installing it... okay. So here we are in the app, and we'll go through some garbage; we'll allow it, because otherwise it can't actually access anything.

What's this, the region? That's not good. Subscribed, plus. Okay, and URL; see it at the bottom there, it's kind of hard to see: 'Add URL'. And we have to wait for it here. It's thinking, it's thinking, come on.

Okay, 'Add URL'. There we go. Okay, so now I have the URL here. The URL for the podcast for this course (come back here) is ethics.mooc.ca/audio.xml. I'll type that in. "Confirm: URL is incorrect"? It must want an HTTP at the front.

HTTPS, that's right; everything is HTTPS these days. So there we go. Oops, we're not seeing that anymore. So, here we go. Confirm.

It keeps saying the URL is incorrect.

And that is very strange.

Yeah, "unable to control your phone from your PC". That's probably a good thing.

Okay, this is turning out to be a failure. It should work; it's a perfectly good podcast feed. Okay, here we go. Okay, so it finished doing that. Let's see if it shows up here.

And it did not.

Okay, well, I'm going to end that attempt; at the moment, that was a failure, but this should work with your podcast application. I will say it's getting harder and harder to simply subscribe to podcasts using RSS these days, but that is the podcast URL. And in an upcoming episode of Ethics, Analytics and the Duty of Care,

I will show you a successful subscription to a podcast, just to show that it can be done. However, I seem to be having issues right now, and it's in the middle of a live broadcast, so it's too hard to fix. So, I'm sorry about that. Well, I do have one viewer, so to the one viewer, my apologies for that.

Nonetheless, the RSS, we saw that work, and the newsletter, we know that works; so those are other ways that you can follow this course. So with that, I'll end this session at this point. I'm trying to keep these sessions, at least the introduction ones, relatively short. Tomorrow, Thursday, we'll talk about tasks and other ways of participating in the course.

And then Friday, we'll have our first live session, using Zoom; who knows what will go wrong. We'll see; it should work, everything's tested, so it should work, but you never know. And we'll talk about what it means to participate in a connectivist MOOC, as opposed to one of the ones you may have seen on Coursera or edX or whatever,

which are just like a university course, except online; connectivist MOOCs are a bit different. So, that's this session for right now. Thank you for your time and your interest, and I'll talk to you again.

Activities, Tasks, Badges

Transcript of Activities, Tasks, Badges

Auto-transcribed by Microsoft Office 365. Unedited.

Hello and welcome to Ethics, Analytics and the Duty of Care. We're in module minus one, 'Getting Ready'. This is Thursday, and the title of this session is 'Activities, Tasks and Badges'. So we're still talking about how to participate in the course.

We've talked previously about how to publish your blog and supply an RSS feed for the course. Some of you have done that, and we can look at the results of that on the 'Your Feeds' page. We've got these feeds now being harvested, and why don't I just check, while we're live, whether I've got any more feeds that have come in? So we'll just check and see. And 'Random Access Learning', that one is new, so we'll approve that one.

And this one is new, so we'll add that one. I don't know why he, or she, made the title "uh, staff education"... oh, I see what you've done, OK. The title: so, the harvest... let's check the harvest. That's correct, so whatever. So for the title, it's going to be whatever the title for this blog is. So let's just go have a look: 'Learning Matters'. So we'll copy that, and we'll go back. Whoops, that's not where we were; too many windows open again. Here we go, so I hope that's all right. There's currently an unknown author for this blog, but this will make it look a lot better in the list.

So, any more? 'That's My Ethical POV': that's new, awesome, so we'll check that one; we'll confirm that one is approved. So, make sure... ethics analytics... so, no, OK. I believe I have them all.

So we'll just reload this and get the new things, OK. And so now, this page unfortunately doesn't automatically update. Oh yeah, well, this one... oh, it did update; it looks like it updated. It only updates once a day, which is probably why you wouldn't be seeing it, so let's update it manually so everybody can see this now. So: list the page, and 'course feeds' is what I call it, and let's publish this. And let's have a look at it.

You see the title, 'Your Feeds', so: Learn Tech George, Ethical POV, Random Access Learning, Ethics 21, Ethics Analytics MOOC, and... oh, 'Learning Matters' isn't there. I forgot to... right: I changed your title but never approved you as an edit. So let's list the feeds again, and let's list all of the feeds: 'Learning Matters', and let's approve 'Learning Matters'. And now let's publish that page again. So I'll just publish that.

So, I'm giving you a bit of a view of what happens behind the scenes in this course, because if you see what I'm doing, you have a better sense of what you're doing in order to make the course work. At least, that's the theory; you can let me know if that's not true. As we progress through the course and get into the actual topic of the course, rather than just setting up for the course, you won't see all of this behind-the-scenes stuff. At least I hope not, because I'm hoping everything really works, and we'll just be looking at content. But you know, it's nice to see this as we set up.

So here we go: here are the feeds currently being harvested for the course. It'll take, you know, a little bit now for the harvester to gather all the data for those and insert it into the course RSS feed, for you to follow and for us to see in the newsletter. But anything that's been written in any of these blogs over the last 24 hours, or longer for some of the new ones, will show up in the course feed, and will show up here in the course newsletter, so you know that your contribution is being seen by other people in the course. And that's the whole idea, right? The whole idea is that you write things; you have your opinions, or resources you might want to add, and these are shared with everybody else, as well as whatever I write and whatever I share, so that we get a really good mix of things happening. And so it's not just me adding content to the course; it's you people as well.

All right, so, activities. I want to reacquaint you with the course activities. So, course events: here we are. All of the course events are available for viewing in the list of course events here. One thing to note, because I forgot: next Monday is Thanksgiving Day, so the Module 1 introduction will not take place on Monday; it will take place on Tuesday. If you click on the event, you can see the event, and as well, if you wish, you can add the individual event to your calendar. If you want to add all of the events at once to your calendar, click on 'course events' and you can load the full calendar. This is an ICS file, so you click on it and then you open it with whatever your calendar application is, and that will add all of the course events to your calendar. Or, depending on how your calendar works, it might make a new calendar just for the course, and then you would display that alongside your regular calendar. This is up to you. So, all events: every event here is viewed in the activity center, which is where we are now.

So here's the activity center. This will be true of the Zoom conferences: if you don't want to actually participate in the Zoom conference, then you just come to the activity center, and you can participate. You can watch the Zoom conference call happening live, and comment on it, and see anybody on Twitter who's commenting on it. Alternatively, when we do the actual event, you can join the event by clicking on Zoom; you just click here. So for tomorrow (tomorrow is Friday), go to this page and click here. And I'll make sure as well that this is available in your newsletter.

Well, for today: I had a bit of a glitch with the newsletter yesterday; the page didn't update properly, and as a result I had to send it out manually. But I can do that, at least, so you should still have received the newsletter. If you have not received your newsletter yet... you should be receiving them, but if you have not, check your spam folder, or you may need to whitelist stephen@downes.ca, which is marked as the sender of the newsletter.

So that's the activities and the calendar information. Another thing that's happening in this course is tasks. I'm going to go back to the activity center just to see if there's any chat happening. Nope. And I see there's no chat happening on YouTube, and that's because we have no live viewers. I love doing these online courses where there's no live audience. All right?

Oh, just because there was a question about times being presented here: the calendar files, or the calendar entries, should show up in your local time zones. Everything in here is specified according to time zone. So the ICS file does specify that this event is taking place in the America/Toronto time zone, which is Eastern; right now it's Eastern Daylight Time. The calendar, the ICS file, even specifies when and how we make the shift to Standard Time: we move out of Daylight Saving Time and into Eastern Standard Time on November 7, so the first date that will be affected by that is Monday, November 8th, and the ICS file takes that into account. Is that not fun? So your calendar should display the correct time locally for you. But if you see it as 12, that's 12 Eastern Time,
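
For the curious, here's roughly what that looks like inside an ICS file: a VTIMEZONE block defines the offsets and the recurrence rules for the March and November changeovers, and each event refers to it by TZID, so your calendar application does the conversion to local time. This fragment is a representative sketch, not copied from the actual course file, and the event summary and date are illustrative:

```
BEGIN:VTIMEZONE
TZID:America/Toronto
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
SUMMARY:Module 1 - Introduction (illustrative event)
DTSTART;TZID=America/Toronto:20211012T120000
END:VEVENT
```

The BYDAY=1SU rule in the STANDARD block encodes "first Sunday of November", which in 2021 fell on November 7.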

whether it's Eastern Daylight Time or Eastern Standard Time. All right: course activities.

So here we are in module minus one. 

And so here we have the the discussion and you can access any of the activities from the module page. 

You see, it's the same page here that we go to as well. 

In each module there will be one or more tasks assigned. 

These tasks are completely optional, obviously because I'm not offering grades. 

Excuse me, but but there's a bonus right, so click on the task and you get the task page. 

And as well, the tasks, each task, of course, is associated with a module. 

But the task as well is associated with a badge. 

Now I don't have the badge displayed here, I really should. 

I'll edit this page later. 

Oh, actually, I'll edit the template later so that the associated badge shows up, but I'll take a look at the bag. 

See if I remember the badge number correctly. 

Yeah, so here's a badge and there's the related task right that we just saw and I'll make sure the link works the other way as well. 

So if you complete this task. 

And provide you know the the appropriate. 

Blog post that is harvested by the course. 

Then you can receive the badge. 

If you receive the badge. 

It'll show up on the link page for your post, so here for example are now it doesn't. 

This doesn't display very nice and I might change the format, but we'll see is the harvested version of the post that Matias? 

Meltzer created, and you can see down here. 

Here's the badge now. 

Like I say, this is all a work in progress, so forgive me here, but. 

Here's the idea. 

The what should show up here is the actual badge image, and when you click on the actual badge image you will be taken to. 

The the badge page now. 

You'd probably don't want to, you know. 

Have your link on my page. 

I certainly wouldn't. 

So what will be available? 

And it's not ready yet, but it's almost ready. 

Is the badge that you can put on your webpage that will link back to this task? 

So there's a couple of ways that I'm working on it. 

I've been working on on it with Badger. 

So here it is in Badger and there is a way for me to award the badge in Badger and So what will happen is when I read your item, I'll just click a button to award the badge and it will be awarded and so you can get the badge from Badger and put it on your page. 

I don't care who you are, and I might not have a name to put in there, and I won't be identifying you by email. I'll be identifying the recipient by URL. So the URL of the submission that you've made, the one that was harvested by me for this particular task, will be what's recorded in the badge information, and the award narrative will come from the description of the badge on the website. 

The additional evidence may be a link, or a textual narrative, but maybe not. You know, the main thing is that the URL, your link URL, will be placed here. And I'm not going to expire it, so you'll actually receive that badge. 
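The badge record described here — a recipient identified by the submission URL rather than by email, a narrative taken from the badge description, and no expiry — can be sketched as a simple data structure. This is purely illustrative: the field names are made up and do not match the actual Badgr or Open Badges schema.

```python
# Sketch of a badge award keyed by submission URL rather than email.
# Field names are illustrative, not the real Badgr/Open Badges schema.

def award_badge(badge, submission_url, evidence=None):
    """Build an award record for a post harvested by the course."""
    return {
        "badge": badge["id"],
        "recipient": {"type": "url", "identity": submission_url},
        "narrative": badge["description"],  # narrative comes from the badge page
        "evidence": evidence,               # optional link or textual narrative
        "expires": None,                    # the badge never expires
    }

badge = {"id": "ethics-task-1", "description": "Completed the module 1 task."}
award = award_badge(badge, "https://example.com/blog/my-task-post")
```

The point of the URL-based recipient is that the award stays tied to the harvested submission itself, so no name or email address is required.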

You don't have to do anything, although if you want to help me: if you're completing a task with something that you've posted on your blog, indicate on your blog post that you're completing that task. 

That's not as tight and automatic as I would like it to be. 

In the future, my vision is that people would use a personal learning environment, so they'd have all of this data on their own computer, and when they create a blog post they could just select from a drop-down list and say, you know, I'm associating this with that badge. And then the MOOC system that I'm running would see that in the harvest of your link and automatically award the badge. And then, you know, I'd put some AI in there to look at your post and see if it actually satisfies the criteria. That's a ways down the line, but you can see how the AI would play a role there. 
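The harvesting step envisioned here could, in the simplest case, just look for a task marker the author placed in the post. A minimal sketch, assuming a hypothetical "Task:" tagging convention (blog platforms don't emit anything like this on their own, so authors would have to include it):

```python
import re

# Minimal sketch: scan a harvested post for a declared task marker.
# The "Task: <id>" convention is hypothetical; authors must include it.
TASK_MARKER = re.compile(r"Task:\s*([\w-]+)", re.IGNORECASE)

def detect_task(post_html):
    """Return the declared task ID, or None if the post declares none."""
    match = TASK_MARKER.search(post_html)
    return match.group(1) if match else None

post = "<p>This post is for Task: module1-task2 in the ethics course.</p>"
task_id = detect_task(post)
```

A harvester could then map the detected task ID to its badge and queue the award for one-click (or automatic) approval.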

So my headphones are about to die, but good thing I'm not using them for anything here. 

So, uhm. 

That's how the tasks and the badges work, so again the tasks are completely optional. 

They will always involve creating a post. 

So really, what I'm using the tasks for is to offer you writing prompts for the things that you do in your posts. But you know, these writing prompts may involve you actually doing some activities. 

For example, uh, in one case I'll be looking to find cases of using different types of artificial intelligence technology for online learning. I'll be looking for cases where issues come up. I'll be looking for examples of using artificial intelligence for whatever; we'll talk about that when we get there. And more. So the idea here is that these tasks involve you in some of the things directly related to the subject that we're talking about. 

They won't just be "write an answer in your blog"; it will be recommended that you actually do something. That's what makes this a course rather than just a series of lectures: this idea that you go out and do things. 

These things aren't at all going to be done on the course website. You do them off in your own environment, and maybe tell me about them. I might see links, I might see resources, whatever. What I'm trying to do is gather all of that, bring it together, and make all of that part of the course resources, because by the end of this course, the course itself will be a huge resource base for everything you might want to know about, or do, related to ethics, analytics and the duty of care. 

Tons of stuff. 

I'm sort of wrestling with how to present that. 

That's what I have for today. 

Uhm, so normally I'd see if there are any questions in the chat area, and we'll have a look at the activity center to see if there are any comments or questions there. 

Nope, and that's probably because right now there's no one viewing this video, but that's OK. I imagine people will watch this video later, and if you have a question, just pop it into here or... oh, that's what happened, right? 

Somebody entered a comment and then clicked post to Twitter, but when you click post to Twitter it shows up as a comment under my name. 

I should make sure that only I am posting to Twitter. 

Uh, that's something to look at. So I'll be making sure that my pages are updating properly, I'll be making sure that posts to Twitter only come from me and not from people pretending to be me by accident, and I'll be cleaning up that badge display stuff. 

And I'll continue to assemble and organize the content of the course. There's a great deal of content in there that we'll be looking at, starting next Tuesday. 

Live session tomorrow. If nobody shows up to the live session, and I know that's a possibility (I've learned to live with rejection; it's hard), then I'll just wing it and talk about how the course setup process has progressed, and point you to some resources related to that. 

So that's all I've got. 

Thanks everyone for joining me. 

I see someone has just joined on YouTube. I don't know who you are, but it's OK; it does say there is one person watching, uh, so hi to the one person watching. 

We're just finishing up, so if you want to see this whole session, give it a few seconds for it to load in Google. 

And then... I forgot to turn on the transcriptions. Rats. OK, well, I'll have to do that manually. But anyhow, reload the activity center page and you'll be able to play this video back from the beginning. 

So thanks, thanks a lot everyone. 

Talk to you tomorrow. 

 

How to Participate in a cMOOC

Transcript of How to Participate in a cMOOC

Audio file
2021 10 08 - Ethics.mp3
Transcript

Auto-transcribed by Office 365 - changes made to remove all the duplicates of "Speaker n" at the beginning of each new line

Speaker 1  
So I can see if I can hear you.

Speaker 2
Nice to see you Stephen.
Ah, nice to meet you guys.

Speaker 1
Excellent. Margaret, hi.
So OK, this is awesome. So I've been having horrible, horrible web server problems, so much so that I turned off all generated page displays.
I'm hitting my resource limits. Uh, I believe it's probably my fault. There's probably a programming error in there somewhere causing a loop, but I don't know what it is, and I can't seem to find the problem.
So there we go. So people who expected to go on the website and watch in the activity center, etc., aren't able to watch this meeting, which is unfortunate.
I'm sort of trying to step my way through the problem, but it's really difficult when I'm able to get one page load every five minutes.
So I have no idea what's causing it, but it's very weird.
Uh, enough about my server problems. So instead of me launching into something... oh, it's Juan Domingo Farnós. Welcome.
Let me get reactions from you guys.
Up to this point, how have you been finding the course?
And I realize we haven't actually launched into the subject of the course yet, but just the setup and the mood and the initial instructions.
How have you been finding that?
A thumbs up from Matias.

Speaker 2
Yeah, it's been good, if I may say. I haven't put enough time in, Stephen, so I apologize, but I watched the video you had on setting up a blog and subscribing it, or adding it, to the course newsletter.
So I'm working on that, and I hope to have that to you today, and I'm listening to this.
So I'm looking forward to being fully engaged as we go through here.
Speaker 1
Something I've noticed about this Zoom setup is that if I talk while you're talking, it cuts you off, so that's interesting.
I wonder if it works in reverse: if you speak while I'm speaking, does it cut me off? I'll bet you it doesn't.
Yeah, that's interesting.
So the main purpose of this session, or what I had in mind talking about, is just the concept of participating in a cMOOC generally, because it's quite a contrast to your regular everyday course, where you step through learning activities as a group and everybody does the same thing and there are learning objectives and all of that.
So a number of you have already taken... well, actually, I see Matias and Margaret, you both have your mics muted.
Are you able to speak and just don't want to, or are you not able to speak?
Speaker 4
I think I can speak, I just don't have anything intelligent to say.

Speaker 1
OK, I'm sure you do.
So I'll put that down as choosing not to speak at the moment. Natalia, how about you? Are you able to speak?
You're not able to speak. OK, so that'll limit your participation somewhat. That's too bad.
I'll keep the chat open and keep an eye on what it says, so if you have a comment to make, just make it in the chat.
You're switching off your camera because you're getting bandwidth warnings? OK. Well, yeah, that's from Margaret.
Yeah, that's the problem with this whole concept of online learning: bandwidth. And I still don't think it's been solved, personally.

Speaker 3
Uh, so.

Speaker 1
So basically, Bernie, it's you and me conversing, with Margaret interjecting now and then. So, have you taken any of these cMOOCs from me before?

Speaker 2
What, in order? No, but I saw you at a conference about 10 or 15 years ago.
And yeah, it influenced me back then. Stephen influenced me back then.
And I've been subscribing, and I follow your writings, and they've influenced my teaching constantly, or continually.
So I really appreciate the opportunity to say hello to you, and I'm indebted to you.
Uhm, so I'm looking to just stay sharp here, and you're going to force me to come out of some of the patterns maybe I've fallen into.
I'm not a bad online teacher, but I know there's always something to learn, and you're definitely, you know, looking at all kinds of things.
So I'm looking forward to participating.
It's important to say I'm a teacher, so I need to be effective online. I can't just sit back; I need to be engaging people.
So it's a real pleasure to be here with you on this journey.

Speaker 1
Well, I appreciate that. I always worry whenever I do any of these things, because I do things quite a bit differently from, shall we say, best practices.
And I was worried: does that make me a bad online teacher?
Uh, but I run that risk. Margaret, have you taken any cMOOCs that I've offered previously?

Speaker 4
No, I haven't. This is my first foray into a MOOC in general.

Speaker 1
Oh wow. OK, well, that's interesting.
So you haven't been spoiled, or yes, I'll use the word spoiled, by Coursera or edX or anything like that.

Speaker 4
I've done some, like, asynchronous stuff with Coursera, and I'm currently in a part-time

Speaker 1  
Oh yeah.

Speaker 4
you know, formal university online program. Nothing, nothing like this.

Speaker 1  
OK.
Technically, I think everything offered by Coursera is a quote-unquote MOOC, but of course that's less and less true, perhaps, as time goes by.
Now I'm just going to shift my screen here.
Sure, OK, we have one person watching the YouTube stream, so we do have a fifth person, but they're just not visible to us.
So I'm just... whoops, I've minimized our chat. So I'm just moving my stuff around so that I can see the chat on the YouTube channel.
The YouTube channel is at least a minute behind us, maybe more, which I find very interesting, so I guess there's some signal processing happening there.
I'm learning all kinds of stuff about how all this works. I really didn't expect Zoom to have a live stream mode, but as soon as I saw it I knew I wanted it, because my original plan was to do the Zoom conversation on the desktop and then use an application called Open Broadcaster Software to pick up my screen activity and transmit that to YouTube. That would have meant twice the bandwidth on my end, and that would have probably ruined the whole effect.
But this seems to be working pretty well.
This is for Bernie and Margaret especially; I know Matias has taken many of these MOOCs before. He's an old hand at it, and he spends more time correcting me and telling me that there are things broken on the site than anything else.
But taking a cMOOC is very different, and let me outline a couple of the major things that make it different.

Speaker 2
If I may, can I ask a question here? Yeah, even I did take a MOOC, five years or so ago, on how to be a good online teacher.
It was by some Italian university or something. It was really good and I enjoyed it.
So I have taken a MOOC, and it was good.

Speaker 1
OK, I wonder... I wonder if that was through

Speaker 2
Yes, yes, EMMA.

Speaker 1
OK, OK, yeah, right. That's wonderful, because I did some work with them. They're based in Naples.
They're really good people down there, and I had the opportunity to visit with them a number of times, and I really miss Naples.
I miss the pizza especially, in Naples. I'm now a pizza purist.
All good. OK, so.
Uh, we have a list of topics, and you know, those are the 8 modules of the course, 9 if we include this one, but it's not really a curriculum, traditionally defined.
Uhm, it's probably better to think of the list of modules, or the list of topics that we're covering, as a list of things that I want to talk about, and I'm inviting all of you to talk about them with me.
Because there's no real centralized control, there's no requirement that you talk about these particular topics.
You know, I mean, because the cMOOC sort of brings together a bunch of people who are sharing resources with each other, etc.
And you might go off on another tangent.
And there's nothing I can do to stop that, and nothing I want to do to stop that.
So if the participants in the MOOC decide to abandon the course outline completely and go in a different direction, that's actually fine with me.
I'll still keep talking about the things I want to talk about; I'm stubborn that way.
But you know, I'll probably have comments on what you're talking about too. Or maybe not, who knows?
So in a sense, you know, although there's a list of things I want to talk about, it's like I'm an equal participant in a conversation with you, even though I am talking about a bunch of certain things.
And this was modeled... yeah, the original cMOOC that we created back in 2008 explicitly used this model. It was modeled on courses that were offered way back, you know... I don't want to say the early days, because the early days go back to the 1500s, but before now, at places like Oxford and Cambridge and wherever, at least as I've interpreted that process. Of course, I was never there, and I haven't done deep reading on that, but the idea was that students attended these universities and they formed themselves into learning societies, and in some cases secret societies, but mostly learning societies: the society for analytical philosophy, or the society for consciousness and thought, or whatever.
Most of my knowledge of this process comes from the world of philosophy.
And so they self-organized their own studies.
Now, one of the things that Oxford and Cambridge could offer that I can't is that the students also worked with an individual professor who would be their main mentor and leader through the whole process.
But other than that... so, I can't do that. But the way these societies would work is they would convince one of the professors to offer a series of lectures on a topic, AKA a course of lectures, which to me is where the word "course" comes from: it's just a course of lectures, a series of lectures on the topic.
And the professor would, uh, reluctantly agree, because it takes away from their research and their one-on-one work with students, and show up in the classroom, or more accurately in an auditorium hall, and deliver these lectures.
And so, through history, you've had some very famous courses of lectures offered by these philosophers, like the Ludwig Wittgenstein lectures.
I believe they were at Oxford, but I'm not sure. It's either Oxford or Cambridge: wherever Bertrand Russell was, and G.E. Moore, and a bunch of the others.
You know, there was one apocryphal story where he walked into the lecture theatre, and of course there were some people in the lecture theatre, G.E.M. Anscombe and a bunch of others.
Uh, he walked in, he went to speak, he sat there, he thought for about half an hour, and then he abruptly turned around and left.
I won't do that to you. But I reserve the right.
And, you know, these things sometimes descended into popularity contests. There's the story of, uh, Hegel, who would attract huge audiences to his lectures, while in the same university at the same time Arthur Schopenhauer would attract a small, paltry crowd of people.
But that's the idea, right?
Then what the students would do after that is they'd take the content of these lectures and do whatever they wanted with it.
You know, they would take copious notes, and most of what we have of Ludwig Wittgenstein's philosophy today, in fact almost all of it, comes from the notes of his students, because Wittgenstein, like me, uhm, wasn't big on writing books. But he would write a whole series of notes and then organize and rearrange all of these notes. So after his death, students had access to their notebooks, and to Wittgenstein's notes, and that whole pile of stuff, and so they organized his books.
I'm sort of like that with my OLDaily. So I've got 30,000 of them; he did it too, a little.
Uh, if I ever become as famous as Wittgenstein, somebody can sort it into real thinking.
And this course kind of works that way too, where I'll throw out notes and thoughts and things like that.
But again, because it's a MOOC, a cMOOC, I invite you to do that too, so we actually get not just one person's set of notes and individual thoughts, but a group of people's sets of notes and thoughts, and then we can reorganize them, respond to them back and forth, et cetera.
So that was the overall model of the cMOOC when we first started, and it actually worked really well when we had 2,200 people. It doesn't work as well with a smaller MOOC, and so then we get more pushed to do something more formal and structured. But you know, I always remain hopeful.
I read this morning about someone doing AI in a MOOC, where they cheated a bit, because they wanted to test how the AI could respond to what people were saying in the MOOC in order to meet learning needs.
And that's a good idea. But instead of actually offering a MOOC, doing it in real life, they created a simulation of a 1,000-person MOOC.
What a great idea. I wouldn't have to actually offer MOOCs at all in the future. I'd just run a simulation of my MOOC, and it would have 1,000 people and therefore be successful, and then I'd draw all kinds of conclusions from that.
That was pretty funny.
That's the first aspect of it, and that's why I ask people to create their own blogs and to contribute, you know, by writing in the blog.
It doesn't matter, really, to me whether people write in a blog or in Twitter. People haven't been writing in Twitter, so I guess it matters a little bit. But actually, in the early days, and I've continued to this day, we encouraged people to, you know, use whatever forum works for them.
And I've sort of settled into blogs over time because, and it's sort of funny, a lot of these alternative forums have sort of fallen into disuse.
For example, in the early days people created groups in Google Groups and had nice threaded discussions where they argued with each other and with us and with strangers who would come by, and that was really good.
I'm not sure I could do it now, but I was able, back then, to bring in the RSS from the Google Groups as well and put it in the newsletter, and I did that.
It wasn't as structured and as neat as the blog posts, but it was still pretty interesting stuff.
We had another group that even created their own island in Second Life. That was back when Second Life was a thing.
There was no way to aggregate that, but you know, there are limits to everything.
I still feel that way, you know. I want people to use any platform that they want, and if a method exists to bring the content from that platform to the course, then I'll make that happen.
Sometimes it means I have to write some software, but happily I can write the software to do that, and it sort of makes an interesting sideline to the whole MOOC experience, at least for me.
Uhm, again, it works better with more people rather than fewer people.
Now we're up to about 120 people signed up for the email newsletter. Of course, I said explicitly you don't have to sign up for the email newsletter, so I'm hoping some people took me at my word on that and are subscribing to the course through RSS.
I have no idea how many people have subscribed through RSS. I could look it up on Feedly, maybe, but you know, I imagine it's fewer than the email newsletter subscribers.
That gives us an idea of the size of this course; it's kind of smallish.
So in other words, it's a MOOC in affordances only, as opposed to achievement. But you know, to me it's not about how many people have signed up. It never is.
I was perfectly prepared, and I would have done it, and still will do it in the future if it comes to that, to do this session live on Zoom all by myself.
It would not be the first time I've done a session with zero people in the audience, and I'm sure it won't be the last.
So that's the first part of this. Any thoughts and questions on all of that so far?

Speaker 2
No sounds good, yeah?

Speaker 1
OK.
So the second thing I've added to this, and I experimented with this for the first time in 2018, which was my previous MOOC. It's been that long since I offered a MOOC; I keep meaning to do one, and I keep not doing it. So I was sort of kicking myself and forcing myself to do this one.
Uhm, so: a big thing about learning online is that it needs to be more than just consuming content, and more than just seminars like this, even.
Yeah, yeah. I actually have breakout rooms enabled, though, so I'll be breaking you into four breakout rooms later.
No, I won't.
Hi Keith. We have Keith joining us, connecting to audio at the moment.
So, I'm not sure how that worked out, with all those dire warnings from Zoom coming in my email. I'm still waiting for these interlopers on the web to come and bomb our Zoom meeting.
I've never experienced Zoom bombing, so I'm kind of looking forward to that, but it hasn't happened yet.
Unless, Keith, you're a Zoom bomber, but I don't think you are, because you immediately came in with no video and your microphone muted.
So, muted, not muted... oh, there you are. Hi Keith. I can see you now on video, and you're not presenting as a Zoom bomber; you're presenting as an interested and engaged participant. So welcome.
Uhm, so again, feel free to jump in with audio at any point.
Uhm, and as I mentioned earlier, I've discovered that if I speak, it sort of... the technical term is "ducking": it reduces the volume of anyone else who speaks, so I can talk over them.
It's interesting that Zoom has auto-ducking for the presenter.
Uhm, anyhow. So a course should be more than just discussions like this; it should be more than just reading content. There should be activities.
So I was inspired by Jim Groom, who offered a course called Digital Storytelling 106, DS106, at Mary Washington University in the US somewhere. Kingston's in Maryland or something; I should know this, but I don't. I mean, it doesn't matter where it is.
What matters is that it's a real university, so he always had a, uh, large group of people who really participated, because they needed to get course grades. I have no such inducements to offer.
But what he did for activities in the course is he set up an activity bank. Now, DS106 is all about digital storytelling, so the activities all revolved around that.
But the idea is that participants in the course could contribute activities to the course, where if people did those activities, or a selection of those activities, that would count as part of their course grade.
So I tried that in E-Learning 3.0. Now, in E-Learning 3.0 I had about the same participation as in this MOOC, which should have been a warning. So what happened was, I created a bunch of activities, and those were the only activities in the course.
Nobody contributed more. Maybe 10 years ago I would have taken that personally, but I think that's a really common phenomenon.
You know, it's hard to get people to participate in that way, unless, you know, it's giving them a better way to get grades in your course toward a degree.
Nonetheless, I've... or I have... I should be more accurate with my tenses: I am in the process of creating activities for this course as well.
So for each module, over time, you'll see a list of activities. I will set up a form so that people can contribute their own activities, so that we'll have the equivalent of an activity bank in this course.
It's something that I want my platform to support anyway, and I think it's a good idea, a great idea in fact, and that's why I wanted to steal it from Jim Groom.
So, but how do you induce people to do the activities? Because that's the other side of it.
Now, a lot of people just did the activities because they were engaged and interested in the course, and I'm hoping people will do that here. I gave them really hard activities in E-Learning 3.0, you know, like: set up a Brave browser, and use it to publish content into the Interplanetary File System, then access that content through a distributed file reading tool, stuff like that.
It was horrible, but people did it, and, you know, I was really encouraged by that.
I'll probably do that course again sometime in the future, and I think the tools will have improved by then. I'm sure they will have, at least some of them.
But I still wanted to provide some sort of inducement, and at the time, in 2018, everybody was talking about microlearning and badges.
So I spent some time integrating the course with Badgr, and created a mechanism whereby, as people submitted their blog posts or whatever through the RSS feed, I could read those and award them a badge for a particular module.
And I'm setting that up again here for this course.
Uhm, now, in my perfect world, a lot of this would happen automatically. And specifically, two things would happen automatically. Number one: you, working in your own environment, would not need to indicate that this work applies to this task in this module; the system would just detect that that's what you're doing.
But if you're working on, you know, WordPress or Blogger or whatever, these systems don't do that. They haven't a clue what you're writing about.
So it's helpful if you indicate what module, what task, you're working toward, if that's what you're doing.
Even if you don't do that, as I read your posts, and obviously I'll be able to easily read everybody's posts because we don't have 2,000 people contributing posts, if a post qualifies for a badge, I'll still award it the badge.
Uhm, so you might be getting badges you didn't actually apply for. I'm proactive that way.
But you can see why, even in a MOOC, every personal intervention is a bottleneck, right? Because a person who's looking at things and deciding whether or not to award badges has an upper limit on how many things they can look at. You're all in education; you know about that.
If you're like me, you've probably sat down once upon a time with the proverbial vertical feet of marking to do, and that is a vivid illustration of the upper limits of how much of this sort of stuff you can do.
So, you know, ideally, in a proper MOOC, the MOOC itself would determine whether or not your contribution is a submission for a task, and there are two ways to do it: either the system itself recognizes it, or you indicate that it is and then the system reads that you've indicated it.
So that's in the back of my mind as something I want my system to do eventually, but right now it doesn't do that, and that's largely because you guys are using tools that are outside my control, so I can't make them do that.
The other thing is the actual marking of these things. By marking, what I mean is deciding that it qualifies for a badge, or it doesn't.
And again, I'll just hit the button to award the badge; I could make that automatic.
Uhm, but you know, eventually it would be something, and I don't know if I could ever write this, but it would be an artificial intelligence of some sort, asking: does this piece of writing satisfy the conditions of that badge? And matching them up.
And see, that's an interesting problem, and that's actually touching on some of the content of this course, even. So I may be exploring that, at least in concept.
I don't know if I could do it in actual practical reality in the scope of time that we have, but, you know, that sort of thing is something worth thinking about.
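One very crude stand-in for the AI imagined here — not anything the course actually runs, and far short of real natural-language understanding — would be to score a post against the badge criteria by vocabulary overlap:

```python
# Crude stand-in for the envisioned AI marker: score a post against badge
# criteria by word overlap. Purely illustrative; a real system would need NLP.

def satisfies(post_text, criteria_text, threshold=0.5):
    """True if enough of the criteria vocabulary appears in the post."""
    post_words = set(post_text.lower().split())
    criteria_words = set(criteria_text.lower().split())
    overlap = len(post_words & criteria_words) / len(criteria_words)
    return overlap >= threshold

criteria = "describe an application of artificial intelligence in learning"
post = "I describe an application of artificial intelligence in online learning"
```

The threshold and the bag-of-words scoring are both arbitrary choices; the point is only to show where an automated judgement would slot into the award pipeline.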
So I've added that aspect to the course and part of what's causing my system to crash at the moment is making all of these pieces work together.
'Cause here's the other thing about a connectivist MOOC, and now we're onto the third topic: it's nonlinear.
And I know it seems linear, because we have a series of modules, one through 8, and that's because the MOOC takes place in time, and time, at least as we know it, is linear.
You know, I do philosophy, so I'm perfectly prepared to contemplate the existence of nonlinear time, however that would work. But there are certain practical limitations to thinking of time as not linear, so for the purposes of this course, time is linear.
How many other courses will give you that, right? "For the purposes of this course, we will say that time is linear."
And so there is an unavoidably linear element to it, but after that, the course is nonlinear.
The course is structured or set up as a graph.
And the extent of that graph... I'm not sure how that'll work exactly yet. The graph sort of grows.
Now, when I say "graph", what I mean by that is that there are a bunch of different entities in the course, and I have different types of entities. I have modules; I have posts, in fact different types of posts: presentations, uh, events; people; authors; links, which is what you're providing (you show up in the course graph as authors, because you offer stuff, and as links, because that's what you author); and more. There's a whole bunch of different types of entities.
So, my application, which is called Grasshopper, what it does in the background is it takes these entities and draws a link between them. So anytime you submit something, it actually creates a link, which is the thing that you submitted. There's a feed, which is your blog, or your site generally, and an author. So there are three things there: author linked to feed, feed linked to link, and then back to author.
So those are three entities which are connected. So you have entities, and connections between these entities, and that's what forms a graph. A graph just is entities connected together, so the whole course is structured like that.
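The author-feed-link structure described here can be sketched as a tiny graph of typed entities plus edges. This is a simplified illustration of the idea, not the actual Grasshopper code, and all the IDs are made up:

```python
# Simplified sketch of a Grasshopper-style course graph: typed entities
# plus directed connections between them. Not the real implementation.

class Graph:
    def __init__(self):
        self.entities = {}   # entity id -> entity type
        self.edges = set()   # (from_id, to_id) pairs

    def add_entity(self, entity_id, entity_type):
        self.entities[entity_id] = entity_type

    def connect(self, a, b):
        self.edges.add((a, b))

    def neighbours(self, entity_id):
        return {b for a, b in self.edges if a == entity_id}

# When a post is harvested, three connected entities appear:
g = Graph()
g.add_entity("alice", "author")
g.add_entity("alice-blog", "feed")
g.add_entity("post-42", "link")
g.connect("alice", "alice-blog")    # author linked to feed
g.connect("alice-blog", "post-42")  # feed linked to link
g.connect("post-42", "alice")       # link back to author
```

Module, application, and issue nodes mentioned later in the talk would just be further entity types added to the same structure.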
Now, in the past, that structuring has been limited to the actual practical presentation of the course. I have in my mind the idea of extending that to the content of the course as well.
So, for example, in the second module we'll be looking at applications of artificial intelligence. So I'm going to make each application, properly so called, a node or an entity in the graph, and there will be, I don't know, 40 or 50 of them. And we'll talk about those.
So we'll have each of these applications, and then the posts that you create, including but not limited to them, will have links that are associated with these applications. So maybe I shouldn't call them activities; call them applications.
So there will be an application of AI connected to a link, connected to a person, connected to maybe an external feed, whatever.
Then in the next section we'll be doing issues related to AI. So again, my thinking is I'll make each issue a node in the graph, and then issues might relate to applications, might relate to resources, etc. Something like that.
Now, this is kind of meta, right? Because the topic of the course is specifically these applications, these issues, these theories of ethics, etc. that we're going to be looking at. In the third module we'll look at ethical codes; I've got something like 40 or 50 ethical codes that I've looked at and analyzed over the years. Each one of those would be an entity, and then we can link them together.
What that will give us, first of all, is a way of thinking about this material which is kind of nonlinear. We can think about it in a more global or comprehensive way. My intention is to provide ways of accessing these that make them more accessible, easier to comprehend. Because right now, if you ask what the issues of artificial intelligence are, it's easy to get mushy in a hurry — and by mushy I mean vague, imprecise, not really sure what the domains are.
You know what I mean?
But after the course, ideally, you will be able to go: well, there are four major types of issues, two of which are caused by this, two of which are caused by that, and we break these down — or whatever. You can talk about these issues intelligently, and in such a way that you're able to discuss them comprehensively, rather than saying "the issue in AI is this particular thing", which is what we see mostly, in my experience. By looking at them as this graph — that is, more holistically — we're able to talk about all of the issues, or what the issues in general represent, or whatever.
I don't know what it will turn out to be, because that's how a cMOOC works, right? I don't know what we'll learn. We have this topic area; we're going to look at it; you're going to wrestle with it. And by thinking of the content as a graph, and by thinking of ourselves as part of that graph, what we learn will — and the technical term is — emerge from the graph. We will begin to see patterns. We will begin to see irregularities.
The idea is that we come out of this course being able to recognize themes, ideas, etc. in the literature or whatever. And I have found through my own experience, when I've worked this way, that when I sit down to read, say, a new article, or a new publication or new study, I'm sitting there going: oh yeah, that's one of those; that's one of those; that's one of those. I recognize them, and I'm able to place them in the larger model simply by virtue of my previous work in this field. That's how I've worked generally, and that's how I work specifically in any particular domain. And it's that sort of practice that I'm trying to engender in the course.
And yeah, that part of my software is broken right now. Specifically: I use the graph where I have, say, a presentation, and I use the graph to associate it with a module and then present that. That's a simple, direct association, and that part is broken. I have a typo, or some badly phrased bit of code, somewhere in the piece that presents web pages, and it's probably sending it into an endless loop. But that's the thought.
Thoughts on that?

Speaker 4
Sounds interesting, let's see how it builds.

Speaker 1
It's again not the usual way of presenting a course, because the usual way of presenting a course is: I would give you some learning outcomes, right? Then I would structure the activities to produce these outcomes in you, and then you would achieve them. I can't do that. I just don't know what the learning outcomes are. And actually, for the activities, you sort of think about what the activities could be. Well, if there were 2,000 of us, I'd say: OK, each person in the course, draw an association. Do some associating — look at, say, an ethical code and map it to the issues that it addresses — and I'd give you a tool to do that. And I still might do that; I think that would be fun.
So, things like that. Or, as always: here's the topic area — say, a type of application — find examples of it out in the world, list them in your blog, and when you feed them in, when you write it up, the system will analyze your blog, find these examples, and map them to that application. It's hard to describe that now. When we get to that point, part of my responsibility is to make that particular task as clear as possible.
So, um. Of course, again, the utility of these tasks is greatly increased with a larger number of people, but we'll nonetheless still be able to produce interesting results even with a small group of people doing some of this work. And even if nobody does any work, I've got a whole bunch of links that I can associate with things anyway, so you'll still get this really interesting resource that you can use.
I see Bernie is now taking a call.
Uh, we're almost at our time, but there's a fourth thing as well about this — and Matias is well aware of it, and possibly Keith as well — and that's the openness of it. We're live-streaming this conversation, for example. Maybe I should talk about that. I got a thumbs-up from Matias — or from Keith, rather.
Cool. So, I was just thinking we could have openness level 1, openness level 2, because educators love levels of things. (Maybe I'm overdoing the levels, but OK.) This is level 1 openness: the actual conversation we're having — which is, sorry, a bit one-sided, but we'll live with that for now; later on, I'm hoping all of you tell me much more than I tell you. But that's, you know, pretty immediate, obvious openness.
We're live-streaming on YouTube; any number of people could be listening or watching on YouTube — in fact, two are watching now. Bonus: we double our audience. And any number of people can watch the video later on, into posterity.
So that's one type of openness. Another type of openness is that the — if you will — assets of the course are intended to be available for reuse by other people in the future. By "the assets" I mean that every individual entity, or artifact, that's a part of the course is available for reuse. Now, I always use a non-commercial license, but that's just me; honestly, I don't care. It just slows them down a bit.
So, you know, all of the posts, the videos, the contributions. Now, your contributions — that's up to you. You can license them however you want, and this course will respect that license. Presumably, when you put something in the feed and allow the course to harvest it, you're at least giving me permission to link back to whatever it is.
I do have copies of the resources in the course database, but ultimately what really matters is the original that you've produced; the copy I have is simply to make it easier to do some functions with it. It's not like my intent is to display your content on my website. That's not the point — in fact, it's actually the anti-point. I don't want to do that. I want the content to be distributed and out there in the world, so that if for some reason Grasshopper, or my website, or whatever, blows up or has a catastrophic failure on the server side — which could happen — all this other content exists somewhere, and continues to exist after the course is finished. So that's the second level of openness.
There's a third level of openness, which is the metadata about the course. Some examples of that: the list of feeds that's harvested. This list of feeds is made available in a format called OPML, and what that means is that any person with an RSS reader can get this OPML file — which is the list of feeds — load it into their own RSS reader, and follow everybody's feeds, including the course feed, without ever actually interacting with the course itself. So that's a different kind of openness; it takes it even a step further, right?
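To illustrate, here is a minimal sketch of reading such an OPML subscription list with Python's standard library. The OPML content is inlined and invented here; a real reader would fetch the course's actual OPML file by its URL.

```python
# Sketch: extract feed URLs from an OPML subscription list.
# The OPML below is an invented example, not the course's real file.
import xml.etree.ElementTree as ET

opml = """<opml version="2.0">
  <body>
    <outline text="Course Feed" type="rss" xmlUrl="https://example.com/course.rss"/>
    <outline text="A Participant" type="rss" xmlUrl="https://example.com/participant.rss"/>
  </body>
</opml>"""

root = ET.fromstring(opml)
# Each <outline> of type "rss" carries the feed address in its xmlUrl attribute
feeds = [o.attrib["xmlUrl"] for o in root.iter("outline") if o.get("type") == "rss"]
print(feeds)
```

An RSS reader does essentially this when you import an OPML file: it walks the outline elements and subscribes to each `xmlUrl`.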
The same goes for the list of links in the course, the list of posts in the course — I would say not the list of people, because I don't know who's registered, but the list of those who have volunteered to be authors — et cetera. I'm not sure what other lists there would be, but for any type of data in the course there's a list of entities of that type — the list of applications of AI, say — and it will be made available. There's no OPML for that, but there is a standard format called JSON, JavaScript Object Notation, and so all of these lists will be made available in that.
What can be done with that? Well, I don't know — but something, right? A simple example: somebody who's creating a website could take the list of all the links, or the list of all the applications, or the list of all the issues, and create their own website such that it reads that JSON list, formats it, and presents it as a nice list. Or they could set up a little search website that reads that list, so you can type in a search term and it finds it in the JSON. And they could do that without really knowing a whole lot of programming. They wouldn't even need a web server to do it; they could probably do it right on their own desktop.
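As a sketch of that "little search, no web server" idea — the field names and records below are invented for illustration, not the course's actual JSON schema:

```python
# Sketch: load a JSON list of course entities and filter it by a search term,
# entirely on the desktop. The records and field names are invented.
import json

data = json.loads("""[
  {"title": "Automated essay grading", "description": "AI scoring of student writing"},
  {"title": "Learning path recommendation", "description": "Suggesting next activities"},
  {"title": "Chatbot tutors", "description": "Conversational AI support for students"}
]""")

def search(items, term):
    """Case-insensitive match against title or description."""
    term = term.lower()
    return [item for item in items
            if term in item["title"].lower() or term in item["description"].lower()]

print([hit["title"] for hit in search(data, "ai")])
# -> ['Automated essay grading', 'Chatbot tutors']
```

In practice the list would come from a file or URL (`json.load(open("applications.json"))`), but the search itself is just a few lines.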
So that's another level of openness, right? And all of this will be available to you, either as a web resource or as a search interface or whatever. So, in the future, if somebody asks, "Is there an issue about AI or analytics that has to do with diseases?", we can do a search on the list of things that we have and come back with yes or no.
So, what was that — the third level? I've lost track of my levels. But there's a fourth level, and the fourth level is the graph itself: the list of all these links between types of entities. That also is open, and that's published as — well, there are different ways of publishing it. I will publish it as JSON. There's also a thing that I use called Graph Markup Language. Matias — one of the things that he does is take these graphs and import them into one of his own programs, which produces all kinds of diagrams and tables. This information can also be imported into a graph database so that you can do searches on it. Now, we might do that, we might not; I don't know. It might be taking the whole thing a bit too far.
We do need to focus on the topic, as opposed to meta things to do with graphs — which is a different course. But the idea is that when you're doing your search, you're not just searching through the properties of the list of items — like, say, the list of applications of AI — but also through the properties of the things that list is related to (a first-order search), and the things those things are related to (a second-order search), and so on. So you can get very sophisticated searches of these graphs.
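A sketch of what first- and second-order search means in practice. The entities, labels and relations here are invented for illustration; the idea is only that a match can be found not just in an item's own properties, but in the properties of things it connects to.

```python
# Sketch: order-n search over a tiny graph of course entities.
# Nodes and edges are invented examples.

nodes = {
    "issue:bias": {"label": "Algorithmic bias"},
    "app:grading": {"label": "Automated grading"},
    "code:acm": {"label": "ACM Code of Ethics"},
}
edges = {
    "issue:bias": ["app:grading"],   # the issue arises in this application
    "app:grading": ["code:acm"],     # the application is addressed by this code
    "code:acm": [],
}

def search(term, order=1):
    """Return ids of nodes whose label matches the term, or whose
    neighbours up to `order` hops away have a matching label."""
    hits = set()
    for node in nodes:
        frontier = {node}
        for _ in range(order + 1):   # hop 0 is the node itself
            if any(term.lower() in nodes[n]["label"].lower() for n in frontier):
                hits.add(node)
                break
            frontier = {m for n in frontier for m in edges[n]}
    return sorted(hits)

print(search("ethics", order=1))  # direct matches plus one hop away
print(search("ethics", order=2))  # ... plus two hops away
```

A graph database does this kind of traversal natively and at scale; the sketch just shows why a second-order search can surface an issue ("Algorithmic bias") from a term ("ethics") that appears nowhere in the issue's own properties.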
Part of the objective of the course is to create this graph of applications, issues, codes, etc. related to artificial intelligence — artificial intelligence slash analytics — in e-learning, and to make that open. And then there's probably a fifth level of openness beyond those four, but I don't know what it is yet. It'll be something.
So, you know, George Siemens and I and others — Dave Cormier and a whole bunch of people; it wasn't just us — built the first massive open online course. And it was massive, but "massive" is just an aspirational thing. It doesn't have to be massive; it can be massive by affordance. This course was online, definitely, and it was a course in the sense of a series of lectures. But the key thing for us was "open". We were using open resources, bringing in open resources — maybe that's level minus-one, that we use resources that are open to actually create the course itself. But then the ongoing process of the course, and the production of artifacts for the course, is all open as well, to produce a longer-lasting, more robust resource.
And those original connectivist courses are still available, and they're still used by people as resources. Ideally, this course would be used as a resource by people working on these topics in the future — if it's a good course. If it's not a good course, it'll be forgotten and languish in obscurity until eventually somebody unplugs the server. Well, that's the risk we run in academia, but it's a risk worth taking, to my mind. That's the process of taking a MOOC, right? You're not taking a course to learn a bunch of stuff, although you probably will. You're involved in this conversation, in this project, in this act of network building — both among ourselves, but also of resources and topics and all of that. What do you think?

Speaker 2
Good to see you, Stephen, and nice to see the other people too. I'm really looking forward to connecting with the other people in the course in any way possible, and I like the fact that we don't know where it's going to go. That's exactly what I like. I like that idea.

Speaker 1
Any other thoughts?

Speaker 3
Yeah, well, I was involved with both the first connectivist MOOC — the first one that you and George and Dave did — and E-Learning 3.0, and I thoroughly enjoyed the experiences. E-Learning 3.0, as I described it to the people I work with: it really hurt my head. It's important to have learning experiences that hurt your head, because otherwise you coast through in a big echo chamber. One of the things I like about the model that you provide, Stephen, is that you have to challenge yourself, because it's too easy to just consume. You have to consume in a way where you can create something new from it. So I've got high hopes for this, because I really enjoyed the last two that I've been involved in, and I'm looking forward to this.

Speaker 1
Margaret, any thoughts?

Speaker 4
I'm really just seconding what the others have said. I'm looking forward to looking at the dynamics — different people, different perspectives, different ideas.

Speaker 1
OK, great. That takes us to 1 o'clock, and I want to be, if nothing else, punctual — because for the purposes of this course, time is linear. So the next live session will be the introduction to Module 1. Not on Monday, because Monday is a holiday here in Canada, but on Tuesday, October the 12th — just verifying... yes, Tuesday, October the 12th — at 12 noon Eastern Daylight Time. So we'll see you all then.

Speaker 3
OK, bye.

The Search for the Social Algorithm

Transcript of Module 1 Introduction - The Search for the Social Algorithm

(Transcribed by MS Office 360, edited to correct errors and make it readable)

Stephen

Hello and welcome to the first live session in the first week of Ethics, Analytics and the Duty of Care. We're no longer on the "how to take this course" segment; we're actually onto the real content.

With me I've got Bernie here in the Zoom chat, an unknown number of people watching the live YouTube stream either on YouTube directly or in the course activity center, and any number of people following either the video, the audio or the text transcript, all of which are being made available as part of this. With just the two of us here, Bernie, you should feel free to jump in anytime you want.

Bernie

OK.

Stephen

So, because why not, right?

Bernie

Well, I can tell you that. At our meeting last week, you reminded me of the MOOC I took from Emma, and I went back and looked at that — and yeah, it was even you who directed me to that, through a Facebook post, and it turned out to be a really effective course for me. A lot of it was based on what you said when I first saw you a number of years ago, when you talked about, you know, going out, finding some information or searching something, digesting it or making meaning of it, and then...

And that's what I've been trying to do ever since, and it's been good, and I'm looking forward to being in this course with you. And of course, we're busy doing what we do, and I've already read one person's blog post, and it looks like it's going to be good. It's a good start for me. So I appreciate the opportunity to be here with you, and I'm committed here, 'cause I'm working on it.

Stephen

Yeah, it's been easy so far. It's going to get a lot tougher; there's a lot of content in this course. But you know it's a MOOC, so pick and choose. I mean, the idea isn't to remember it all. The idea is to, you know, change the way you see the world — I guess that's one way of putting it. Or maybe inform the way you see the world.

What I want to do with this session is introduce not just this week but this course as a whole. So if you were thinking of this whole course as a book — which you should, because I am recording all the transcripts (so think about that) — then you should think of this session as the foreword. It's not the actual content, but what comes before the actual content, and what I want to do is set up the course and the topic and put it into context.

And I have slides for it, with a provocative title, the scandalous title... not really scandalous, but you know this could be a bit controversial if we think about it: "The Search for the Social Algorithm." And if you're wondering, yes, that is me in the picture, and I'm at Occupy Wall Street almost exactly 10 years ago today (actually October 29, 2011 - ed). So you might not think it, but this course actually does have a genesis in Occupy Wall Street. Certainly a lot of the thinking that I've put into the course starts with the thinking around Occupy Wall Street. And the people who were involved in that should know their activism had a wider and a longer influence, of which this is just one of many, many outputs.

So here we are, 10 years later, and we've reached a point in history where we don't know how to govern ourselves. Look at what's happening in the US, what's happening in Europe, even China, Japan — I could go around the world and point to examples. We're struggling. We're struggling individually with fake news, disinformation, too much information, information that's triggering, information that is oppressive, things we can't say anymore, things we should say now.

And all of that in a world that's getting increasingly more difficult to thrive and survive in — you know, simple things like the way wages have not kept up to inflation, let alone productivity, and the arguments and fights over minimum wage — and as parts of communities who are living in a world of me-too and black-lives-matter and similar movements around the world.

And also at a time with refugees coming in from conflict zones around the world, and as a society we're struggling with issues of power, of disinformation, propaganda campaigns, and with the global crisis of global warming, the global supply chain breakdown, and of course, everybody's recent topic, the pandemic. And that's just three things, there's much more than that.

It's a time of complexity and chaos, right? It's a time of rapid change, events piling up on each other. You know, it's like the hurricanes: we're done with one, and we've got the next one coming down the Atlantic freeway. Information literally moving at the speed of light. When an event happens in Turkmenistan, we know about it right away, or virtually right away.

We're in a world of globalization. I mentioned the supply chains earlier. Global information networks. People use the term "context collapse" to describe it. What we say isn't just heard in our own communities anymore. It's heard around the world by people we never intended the message to go to. We're seeing division and polarization. You know, left and right. Environmental oil industry. Vax, non-vax.

Every society, every country around the world, is doing this in its own different way, facing the breakdown of communities and institutions. Look at the struggles the university has faced over the last two years with the rapid transformation to remote learning. How do we cope in that sort of environment? But that's nothing, I think, compared to what's coming over the next two or three years, after we recover from the pandemic and start to figure out as a society how we're going to pay for it all.

Then there's the mismanagement of complex events. In the Guardian, either yesterday or today, I'm not sure which, they're talking about how the mismanagement of the pandemic in the early days in the UK cost thousands of lives. And of course, in the United States, 600,000 people are dead, again arguably because of mismanagement.

So there's a challenge, and it's within this wide context of challenge that this course takes place. You know the topic is "ethics, analytics and the duty of care." But let's not for a minute think that that's all we're talking about. Indeed, as a community - by 'community' now I'm talking about the online learning community, the Learning, analytics community, etc. - our response has been far too limited. That's why I call it the paucity of our response. The poverty of our response.

Our understanding as a community of, shall we say, analytics needs to be expanded much more than it is right now. We're looking at analytics as a way of looking at how students progress through courses in order to predict outcomes. But we need to think about this much more broadly: using data about students and their activities not merely to understand and improve educational processes, but to support learning itself.

And you know, I've done a study of the applications of learning analytics and artificial intelligence over the last two or three years; we'll see that in the second module. There's a huge range of applications that people don't even touch when they're talking about this, and we're beginning to see in some sources now the suggestion, at least, that we need to think more broadly in terms of what we mean by learning analytics, and what we mean by artificial intelligence in education.

It's not all bad. It's not automatically wrong. This wouldn't even be an issue for any of us if there weren't a huge upside to using this technology and using it precisely to address some of the problems I've just pointed out.

And we haven't as a community come to grips with the concept of ethics. We're presenting them simply as rules and principles. We're focused on a few issues, such as diversity, equity, inclusion, which to be sure are important issues, but do not constitute the broad sweep of ethics.

And we aren't even, I would argue, coming to terms with the changes that have happened in our understanding of ethics generally. It's no longer simply teaching about rules and principles, despite what we might see in the academic response. Sternberg, whom I quote here, says we should be teaching ethical reasoning rather than just ethical principles. But what does that mean? I mean, people can't even agree on what constitutes critical thinking, much less ethical reasoning. How do we decide — or do we decide — what's right and wrong? Are right and wrong even the right concepts that we ought to be applying here?

You know, we think of ethics in, shall we say, the old-fashioned way: as a set of principles for deciding, using reason, what constitutes a right action and a wrong action. Well, that's a definition that doesn't work anymore, precisely because we live in a rapidly changing, dynamic, complex world. And in fact the breakdown of the institutions and the social structures that I've described is precisely because that kind of reasoning doesn't work anymore. So what do we do?

And all of this, arguably — and I will argue it shortly — is happening in a climate of change, huge sweeping social change that we don't yet fully grasp. Now, it's not just, "hey, we've introduced artificial intelligence, now the world changes." It's much more than that. If I had to characterize it in slogan form, I'd say that society is transforming from a tree to a mesh.

And Occupy Wall Street, in its beginning, was pointing out what was wrong with the tree — what was wrong with the traditional structure and organization of society. We can represent it here with this model of a traditional social network, and you can see what really is a fairly familiar hub-and-spoke kind of construction. And we can see that reflected in society as a whole, whether it's business and industry — you know how Apple, Facebook, Microsoft, Amazon might be these hubs — or we might think of it in terms of websites (I guess we'd list the same list of websites), or other industries, other major companies, or perhaps global social structures, with Russia, the United States, China and all their vassal nations.

And the individuals who are in this network are profiting disproportionately. We see in the lower right-hand side of that slide a characteristic power law of the distribution of influence, and therefore also the distribution of wealth, in the society. When you have this kind of structure, that's the kind of distribution that you get. Also, when you have this kind of structure, it's much more vulnerable to disruption: for example, to disruption by pandemic (not something that Occupy Wall Street was talking about, but it was still there as a possibility), disruption of the supply chain, disruption by war and conflict.

Target the nodes and you can break down society. Control the nodes and you control society. And that's why everybody is going after Facebook. I heard someone say — it was on one of Leo Laporte's podcasts on the TWiT network — "People aren't trying to stop Facebook, politicians aren't trying to stop Facebook, they're trying to control it." I think that's true. They're working within this structure, and if you can control the node, you can control society to your own benefit. That's the way it works. That's why people were protesting.

The alternative toward which we are inevitably moving is a mesh structure. A mesh structure is the sort of structure that characterizes road networks, email networks, anything peer to peer, anything place to place, anything where you don't have to go through the hub to get from one place to another place. It's more distributed. It resembles discussion more than a lecture. It's more balanced in terms of power. And arguably, it's more reflective and more democratic.

I've made this argument in the past and I'll continue to make this argument. And if we look at or analyze power, wealth and influence in a mesh structure, we no longer have the power law. We have a distribution which is much more along the lines of what people, when polled, think is appropriate. Not absolute equality — nobody argues for that — but a reasonable range of influence from the most influential to the least influential, where instead of one person having millions of times more power or influence than another, they might have 10 times or even 100 times. People are actually pretty comfortable with that, especially when we see the other lines represented on the chart here — especially when people who are in, shall we say, the long tail, or, shall we say, making minimum wage, aren't below the poverty line anymore, aren't struggling to make a living.

So we're moving into that organization, but by fits and starts, and not uniformly. Many of our technologies are already mesh structures — I mentioned the road system; I could talk about the power grid, etc. But they're being managed by hierarchical structures, and therein lies the dissonance. Therein lies the clash of cultures within our system.

What I'm wondering, in this course and other work, is: what is it like to live in the mesh? We know what it's like to live in a hierarchy: you follow rules, you do what you're told, you rise up through the ranks. That's how it works.

In the mesh even our values, goals and objectives change. In the hierarchy these are pretty clearly defined: power, money, wealth, influence. But we're seeing more and more different values expressed by different people.

How do we know? In the hierarchy, we're just told what to believe. In the mesh there are no authorities anymore, and you can't just go around picking authorities. In many ways, unless the authorities are in roughly the same position you are, they're going to misunderstand your perspective — that's how we get arguments about colonialism and cultural imperialism. But even more to the point, they'll lie, because they're in it for power, wealth, influence, etc.

What can we do? What are the practical steps we can take? What is it like to thrive, and, shall we say, live ethically, in a mesh? We're only beginning to learn that, and frankly, I am not going to be producing an answer to that question, despite looking at it deeply for 8 weeks. Anyway, that shouldn't be the output; that shouldn't be the outcome.

How do we learn what it's like to live in the mesh? There are two major approaches that I'm going to take as starting points.

One of them, as suggested by the "analytics" in the title, is the use of AI and neural networks. I'm going to characterize these as connected sets of entities with inputs and outputs — and therefore an input layer and an output layer. The study is of the algorithms that add, strengthen or weaken those connections, and of related topics such as activation functions, network topologies, and labeling. There's a whole bunch of factors that go into the design of a neural network.

And I'm understanding this endeavor as the intent to produce the set of algorithms that produce the best result. That's how they approach it. They'll take a challenge like "can you translate text from one language into another?", you get the result, and you're looking for the algorithms that produce the best translation, for example. That's how I'm going to look at it.
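The idea of "algorithms that strengthen or weaken connections toward the best result" can be illustrated with a toy example. This is not the course's software or any production AI system: it's a minimal single-neuron sketch using the classic perceptron update rule, trained to compute logical OR.

```python
# Toy illustration: a single neuron with two input connections, trained by
# the perceptron rule. Each training error strengthens or weakens the
# connection weights; real networks have many layers and subtler rules,
# but the principle is the same.

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

# Inputs and targets for logical OR
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def predict(x):
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0            # step activation function

for _ in range(20):                      # training epochs
    for x, target in data:
        error = target - predict(x)
        # adjust each connection in proportion to the error it caused
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([predict(x) for x, _ in data])     # -> [0, 1, 1, 1]
```

The translation example in the text works the same way at vastly greater scale: compare the network's output to the desired result, and nudge the connection weights toward whatever produces the best output.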

The other of them is the study of neural and social networks as they exist in the world. What's important here is that (to this point, anyway) we don't have the liberty to just go in and start tweaking the algorithm. The algorithm is the algorithm, whatever it is. The brain is the brain. Society is society. And so this is the study of these networks in the world.

It includes things like the identification of the entities. So we could talk about that — we probably will. What is the right identification of entities in society: the individual, the community, the cultural group, the linguistic group? Or do we take an intersectional approach? And what does that mean for network analysis?

It also means the study of network topology; the growth and development of networks; how selective attraction, for example, gives people more power and more privilege in a network; how these hub-and-spoke networks develop, and why. And the objective here is, kind of, to explain why things are the way they are.

That may sound straightforward. And again, the work of creating the network, or the series of algorithms, that will produce the best result may seem fairly straightforward and fairly simple. But they're not, because there are no easy explanations. There are no easy prescriptions. Things are going to change from context to context. In a world where there are multiple simultaneous interacting variables, you can't just give a simple cause-and-effect explanation anymore.

So I'm structuring my work over the next year, not just in this course but overall, this way: I'm looking at the networks and I'm looking at the analysis, the two subjects that we've just talked about, and these resolve into work, on the one hand, about ethics, and on the other hand, about literacy. So I'm looking at what we value, what we want, what we desire, what's right, what's wrong, and then how we go about reasoning toward these things. How we manage in this world of data to come up with mechanisms that produce the best result not only in computer systems, but also in ourselves and also in society.

And is there a way to do that? I don't even know if there is a way to do that. I think we can approach one. I'm not sure if we can ever ultimately get there.

So here's the work that I've been doing over the last couple of years - this is the current snapshot of what that looks like:

- the MOOC that we're looking at now;
- I've been working in a Government of Canada subcommittee on AI learning;
- I'm a member of the NRC Research Ethics Board and all that that entails;
- I've been participating in the NRC Data Equity working group;
- I've been participating in things like the Creative Commons ethics of sharing report;
- and I've published on ethical codes and learning analytics.

So that's the one side of it. The other side of it, which will be next winter, February to March 2022, will focus on data literacy. And I construe that pretty widely to include data management, etc. Again, it's an equally large topic.

- Data Literacy MOOC
- I've been involved with DRDC, which is Defence Research and Development Canada
- I've been involved in something called the Fair's Fair book projects, about findable, accessible, interoperable and reusable resources.
- A presentation on what it means to enroll in a course
- Even a series of presentations accompanying this course about how to build a MOOC
- A thing called CovidEA which addresses a lot of these topics
- and even the work that I've been doing in blockchain and consensus.

All of these inputs are coming into these two courses. So let's look at that.

When I think of reasoning generally, I think in terms of critical literacies, and this is my background as a philosopher speaking here, not so much as an ethicist, but as someone who's studied how we learn, how we think, how we create. And I've drawn up (I don't want to say a taxonomy; that's not the right word) a set of overall approaches which I'm grouping here into three categories: applications, values and practices. We're going to look at all of these in some detail, though not necessarily under these headings, but this kind of thinking informs the background to a lot of what we're talking about.

The applications are simply the mechanics of how things work, and there are two sides of that. There's the syntax, which is the mechanisms being created by artificial intelligence theory and neural network theory. This is where AI is now: we have pattern recognizers, we have systems that spot regularities, systems that classify, et cetera. And then there are the current issues in AI, including things like value, meaning, goals, the ethics of AI, reference: what are we talking about? What kind of models of the world are we creating? All of that.

But moving beyond that, where we really need to be thinking for a topic like ethics, analytics and the duty of care, especially in a learning context, but really in any social context, are the values. First of all, how do we use these technologies? What kind of actions do we take? Do we persuade, do we interrogate each other or the environment? What about scientific method, propaganda, all of these things? And also there is the context in which these applications take place, and how we define that, and how we describe that.

And that leads us to the practices: how we take these things and bring them together to give us a story about how learning, inference and discovery happen in society. I break these down, somewhat arbitrarily, into cognition and change. Cognition is about how we argue or explain things. Change is about how we recognize and work toward progress and development in society (or just spin around in circles, whatever the case may be).

This course is basically here on this slide. It's a comprehensive study of what analytics actually are and how they're established in our field, and maybe generally. So we're looking at the applications, how we apply AI, and that'll be module 2. And then later on, in the second half of the course, we're going to be looking at what decisions we actually make when we apply artificial intelligence, analytics, neural networks, to any of the applications that we've been talking about.

Because we do; we make a series of decisions. People talk about, for example, the need to avoid bias in the selection of the population that we study. Quite so; I agree. But I'm looking at this from the perspective of: we are selecting a population to study; what are the decisions that we make when we do that? Because we're still in old-world thinking: we say "bias causes bad results," and we make simple explanations. But there's a range of decisions that we make when we're selecting a population for a study as input data for a neural network analysis. We need to know what they are. Then we apply the ethical dimension to all of this.

Module 3 looks at ethical issues. And, you know, I'm nothing if not dogged and comprehensive. Some people do literature surveys where they narrow the list of papers down to a small number of methodologically valuable studies. I just inhale everything like a vacuum cleaner. What's interesting to me is whether something exists: if somebody raises an ethical issue, it doesn't matter what the context is, that issue exists. Now we can argue about whether it's salient or not, but the existence proof is simply the fact that there's a piece of writing or an infographic or a video in which this issue is raised. So that's what I've done. I've spent the last two years inhaling ethical issues.

Similarly with approaches to ethics. The discussions around ethics and learning analytics, and ethics and artificial intelligence generally, sort of skip over this step. They assume that the ethics have been solved - "we know what ethical uses of AI are, and we just shouldn't do what's not ethical" - but I'm going to argue, and more to the point, I think, pretty conclusively, that these issues are not solved, that the 2,500-year-long quest to find reasons for deciding what's right and what's wrong was ultimately a failure, and that we haven't been able to find reasons to make these determinations. We can certainly rationalize things after the fact, and we've done a lot of that, but the manner in which we actually determine what's right and what's wrong is not a rationalist project.

And that leads us to the duty of care. The duty of care is a feminist theory that has its origins in recent years in the writings of people like Carol Gilligan and Nel Noddings and others, approaching ethics from the perspective of practices, from the perspective of context, and especially cultural context, and from the perspective of putting the needs and the interests of the patient, or more generally the client, first.

And there's a whole discussion there. This is not a rationalist case of "I reasoned out that this is the right thing to do in all cases." It's nothing like that. It's not universal. It's not argued for. It's based on - well, it's hard to say what it's based on - the caring intuition, the specifically female capacity and need to show care toward the young. I think there's a reasonable argument there, and I don't think it's specifically a feminist argument. I think we all have a capacity to make ethical decisions for ourselves in a non-rationalist way. And this is a way of approaching that subject.

And that leads us to the practices. Ethical codes are what we do now, and so I've studied that practice closely. I've analyzed, I don't know, sixty, seventy, eighty different ethical codes, and they're still coming in. And I'm still looking at them. People say, "well, there are common things about the ethics here that we all agree to," but if you look at these ethical codes, you find very quickly there is no such shared definition of ethics across the different disciplines and in different circumstances. There is some overlap - fairness is something that comes up a lot, for example - but what we think is fair varies a lot from one circumstance to another. Similarly with equity, diversity and other ethical values. Justice - you know, people think, "yeah, ethics should be about justice," but the understanding of justice is very different, not just from one society to the next but from one person to the next.

So that leads us to the question: if not ethical codes, what are the ethical practices? And that's the section that I'm going to use to finish off the course and take all that stuff that we looked at before, and think about it. How do we actually decide what's right and wrong? What are the processes of this? What do we actually do?

And looking at this from this mesh perspective that I talked about, we get an understanding of how we can move from ethics as determined for us by an authority or by an ethical code or by a set of rules to something that we can determine for ourselves as individuals and as a society. That's the objective.

It will be followed in February and March with a similar sort of approach. I've done an analysis of data literacy models, as well as an analysis of elements of data literacy. I've done needs analyses, and looked at other needs analyses, for data literacy and for things like digital literacy and other kinds of literacy in general: information literacy, computer literacy, even emotional literacy.

And then we look at the practices: first, how do we measure and assess literacy? Based on what we've seen so far, we know that it's not just going to be "Can you do this? Can you do that?" Literacy is not knowledge of a set of specific facts; it's something else. And in fact, if we think about ethics and we think about ethical literacy, the same model can be applied to literacy more generally, I think.

And then we talk about enhancing data literacy. How do we become a more data-literate society? And that even loops back to how we become a more ethical society. But all of that is in 2022.

So that's the story. That's what I have in mind. That's the background. I probably shouldn't be doing this, but I can't help myself. I think the issues are as huge as they get. The need is as persistent as it gets. And I think there's something unique in the value of this approach that's worth sharing.

Bernie

I like the fact, Stephen, that you say you can't help yourself. I've noticed that you don't settle for the status quo in technology. You're constantly trying out new things and not settling for just what Google or somebody gives you. You'll use whatever tool, and if the tool isn't there, you'll make the tool.

One of the reasons I enjoy following you is 'cause you've got this sort of lifelong drive to keep going. And when I'm trying to do what I do with my students, I'm just trying to get them slightly - you know, some of them are struggling. Like, I got an email from one saying "I'm not feeling well, so I'm not going to connect today." And I think your approach, I'm hoping, is going to help me with those students, to somehow, through osmosis or some other way, catch this virus you have of constantly seeking stuff out.

Stephen

I'm not going to be able to solve that particular problem, but I think we know what the story is that can be told here. And it's not a story of just you and the student; it's not even a story of what the students should be doing or shouldn't be doing, what you should be doing, what you shouldn't be doing.

You're both working in the hub-and-spoke kind of model for learning. But if we think about the perspective of the student more generally, they're not in a hub and spoke. They're in a community, they're in an environment, they're in a culture where calling in like this is appropriate behavior.

Now, we know that because that's what they did, right? It's John Stuart Mill: you can judge what people think is good by what they actually do. You don't need to come up with a version of 'good' for them. They already have their own definition.

And that's why an intervention at your level is so hard. Because you're working against all of that. And maybe in a classroom you can intimidate them enough, but when you're online you lose that power.

And that's what's been happening in society as a whole. It used to be, and it still is the case in some societies, that "we'll just intimidate people and they'll do what we say." But this is working less and less. And there are good reasons for that: global connectivity, all of that, but also just this consciousness that people don't want to take that anymore. I think it's a great thing, although it results in your student calling in sick when they're probably not even sick.

So that's why I say you can't come up with a solution to some of these problems, because there is no solution to some of these problems, and the very idea that you think there's a solution, that's the mistake. There are so many ed reform movements based on this sort of solutionism (I guess other people have talked about this as well) without realizing that.

In an environment of authority-based information and power transfer, it's something different. But how would you change it? And here's the question, right? How would you change, at least in part, that particular ethic that that particular kid has, knowing that that ethic is created by and fed by their entire community and cultural surround, of which you are a tiny fraction, and not even important on that child's scale of important things?

And the best answer I have, the only answer I have, is to model and demonstrate. Which is where this doggedness comes in, where this curiosity comes in. And my thinking is that people see that, and the results that it produces, and over time more people emulate it. So the practical thing, if I had to offer a practical thing in this case, is for that student to be exposed to models of good practice, ethical behavior, etc.

Which, as an aside, society is providing exactly the opposite of. And therein lies our problem. You know the sorts of activities that we think we should value: everything from hard work, curiosity, persistence, resilience, fairness, justice, equity? All the examples that this particular student who called into your class has are the opposite of that. Their politicians, their business leaders, maybe even their parents, their friends? Hopefully not their school, but who knows, right? School is not the most just and equitable place in the world.

Yeah, so that's my answer. And, you know, it's the old Clinton thing: it takes a village. It does take a village. And that's the problem. The village right now isn't really up to the task. And we can't just will it, or give it a set of rules to follow. Change has to be more fundamental than that. That's why this is so hard. Fascinating, but hard.

Bernie

Fascinating is a great word. I like it. Yeah, they are fascinating.

Stephen

So what am I missing? What am I overlooking?

Bernie

What do you want me to do next? I'm supposed to start reading here. I gotta dig in. OK. I'm supposed to put a blog post together, a minimum of a blog post. Yeah. How do I do it?


Stephen

OK, if you haven't done a blog post for the first part of the course yet, module minus one, then yeah, you want to do that. You know, get your blog being harvested. Submit your blog.

Write a blog post so that it could be included in the minus one module. I'll keep harvesting posts from every part of the course all the way through to the end of the course, so it doesn't matter how late you started.

There will be tasks for each part of the course, each module. They'll come out on Mondays, so there'll be one that comes out in your newsletter today, and it'll be of the form, "Write a blog post about your thoughts on ethics and analytics at this point in time." What questions do you have? Like the example you gave me with the student who calls in: how does that apply, right? That's the sort of question that should be talked about in the blog post. How can what we're doing address that?

Something like that. I haven't actually written the task yet, but it'll be something like that, and it'll come out in today's newsletter. (Update - it wasn't. Tomorrow. -ed)

That's the thing with this course, too. I'm building it as we go with binder twine and cobbled-together code, you know. As I said, it has many moving parts and they don't always mesh. Sure, I could just use Moodle, but then it wouldn't be the kind of course I want. Because, like back in the early days of connectivism, the 2008 course, we created the course to model the kind of thinking that we were doing, and that still continues to this day. And I'll be getting people, hopefully, to do more than just write blog posts, but actually go out, find things, share things.

I plan - though I don't know how much of this I can carry through - to actually take all of these concepts and put them in a big graph, put them in a big network, and see what the relations actually are. In other words, to try to do a little bit of network analysis as we progress through the course. And then, if I can possibly figure out how to do it, maybe even do a little AI as we go through this course.
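
A very small version of that kind of network analysis can be done with nothing more than an adjacency list. The concepts and relations below are placeholder examples, not the course's actual graph; the idea is just to show how counting connections reveals which concepts act as hubs.

```python
from collections import defaultdict

# Hypothetical concept-to-concept relations (illustrative only).
relations = [
    ("ethics", "analytics"), ("ethics", "duty of care"),
    ("analytics", "neural networks"), ("analytics", "surveillance"),
    ("duty of care", "feminism"), ("ethics", "ethical codes"),
]

# Build an undirected graph as an adjacency list.
graph = defaultdict(set)
for a, b in relations:
    graph[a].add(b)
    graph[b].add(a)

# Degree centrality: rank concepts by how many relations they have.
ranked = sorted(graph, key=lambda c: len(graph[c]), reverse=True)
print(ranked[0], len(graph[ranked[0]]))
```

From here, swapping the toy relations for links harvested from actual blog posts would be the obvious next step.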

Tomorrow, when I do my video, I'll actually do several video segments, and one of them will be about some of the ways I'm already using AI for this course, or even just in general. So I'll probably try to get people to do some of that right now: instead of writing your blog post, speak your blog post and get it transcribed. I should do that!

Bernie

OK. Catch up on your past activities and prepare for your future activities, and there'll be some readings and videos and such in the newsletter as it comes out. OK, super.

 

The Joy of Ethics

Transcript of The Joy of Ethics

Ethics should make us joyful, not afraid. Ethics is not about what's wrong, but what's right. It speaks to us of the possibility of living our best life, of having aspirations that are noble and good, and gives us the means and tools to help realize that possibility. We spend so much more effort trying to prevent what's bad and wrong when we should be trying to create something that is good and right. 

Similarly, in learning analytics, the best outcome is achieved not by preventing harm, but rather by creating good. Technology can represent the best of us, embodying our hopes and dreams and aspirations. That is the reason for its existence. Yet, "classical philosophers of technology have painted an excessively gloomy picture of the role of technology in contemporary culture," writes Verbeek (2005:4). What is it we put into technology and what do we expect when we use it? In analytics, we see this in sharp focus.

Ethics is based on perception, not principle. It springs from that warm and rewarding sensation that follows when we have done something good in the world. It reflects our feelings of compassion, of justice, of goodness. It is something that comes from inside, not something that results from a good argument or a stern talking-to. We spend so much effort drafting arguments and principles as though we could convince someone to be ethical, but the ethical person does not need them, and if a person is unethical, reason will not sway them.

We see the same effect in analytics. Today's artificial intelligence engines are not based on cognitive rules or principles; they are trained using a mass of contextually relevant data. This makes them ethically agnostic; they defy simple statements of what they ought not do. And so the literature of ethics in analytics expresses the fears of alienation and subjugation common to traditional philosophy of technology. And we lose sight, not only of the good that analytics might produce, but also of the best means for preventing harm.

What, then, do we learn when we bring these considerations together? That is the topic of this essay. Analytics is a brand new field, coming into being only in the last few decades. Yet it wrestles with questions that have occupied philosophers for centuries. When we ask what is right and wrong, we ask also how we come to know what is right and wrong, how we come to learn the distinction, and to apply it in our daily lives. This is as true for the analytics engine as it is for the person using it.

And as we shall see, these are and continue to be open questions. It may seem that many writers approach the subject as though we have solved ethics. But we have not. There are multiple perspectives on ethics, and each issue that arises in learning analytics - and there are many - is subject to multiple points of view. We cannot simply say "solve this problem and we have solved the problem of ethics in learning analytics."

Perhaps, it may be argued, we should focus specifically on outcomes. This is a common line of reasoning in education circles, focusing for example on 'what works' (Serdyukov, 2017) and 'effect sizes' (Hattie, 2008). But as we shall see, it is no simple task to define successful outcomes, nor how to cause them. Will it work next time? What happens when we can't predict what the secondary effects will be, and what happens when we can't repair bad consequences after the fact?

Perhaps, it may be argued, we should focus specifically on rules or principles. This is a common line of reasoning in ethical circles, and especially professional ethics, where ethics in such fields are typically defined in terms of obligations and duties (Jamal & Bowie, 1995). We shall see, however, that universal principles do not take into account context and particular situations, they do not take into account larger interconnected environments in which learning analytics are used, and they do not take into account how analytics themselves work.

As we shall see, the key to understanding both ethics and analytics is to understand that they are not about something abstract and abstruse, but instead are about us - who we are, where we live, how we connect, what we believe, how we see the future. This is felt as a sensation or feeling of rightness and wrongness. In this context, what defines 'ethical' is a 'duty of care', the same sort of care that we have learned through the day-to-day experiences we have throughout our lives, through our interactions with others, through being connected, dependent and responsible for others. 

And we shall see that the same sort of mechanism is at work in learning analytics, where it is neither possible nor desirable to over-rule the learning algorithm. We cannot, or at least should not, expect analytics to create a somehow corrected version of ourselves. If we want better learning analytics - whatever that means - then we have to become better people. Not 'better' in the sense that we conform rigorously to rules or principles, not 'better' in the sense that we always succeed, but 'better' in the sense that we care, where this means something like, being kind, being open, embracing diversity, and living in harmony.

Ethics and Analytics: Getting a Feel for the Subject

Transcript of Ethics and Analytics: Getting a Feel For the Subject

(Unedited auto-transcription by Google)

Hi everyone. I'm Stephen Downes. Welcome to the latest episode in Ethics, Analytics and the Duty of Care. Today we're looking at ethics and analytics, and the purpose of this video is to give us a feel for the subject. The way we'll do this, to begin with, is we'll look at a few examples of where ethics and technology have clashed.

Here's one example. Consider this case: a patient is required to see a healthcare robot instead of a human. And what's interesting about this case is that it's not simply a choice made by the patient; rather, it's a requirement. So the element of choice is removed: they can either see the healthcare robot or they see nobody.

And the ethical question that's raised here, I think, is a question of access. The same sort of thing can happen in education. If you look up robot tutors on Google, you will see dozens and dozens of results. And there's even one case, the case of Jill Watson, where students were taught by robot tutors without being told that they were being taught by robot tutors. And that again raises a question of ethics, with respect to their choice and with respect to how much information they should get.

Here's another case. A little while back, Google revealed something called Project Nightingale, where they were accused of secretly gathering personal health records. This is reminiscent of the Cambridge Analytica scandal for Facebook, where again records were secretly gathered and used for research purposes. Now, Google also offers a classroom application, and it's relevant to ask:

Are they secretly gathering classroom records? Are they not so secretly gathering classroom records? And what are the ethical implications of this? Should they tell people? Should they do it at all?

Here's another example: analytics data is being used to adjust health insurance rates. The insurance company looks at what you're doing online, maybe watches your videos, perhaps you're skydiving or bungee jumping, and then adjusts your health insurance according to what they see. Now, in a country like Canada, where everybody receives health insurance, this isn't the case, because we don't have health insurance rates. But we do have tuition and other costs for education, and it's no stretch to imagine companies adjusting education, whether it's the costs, or access to education, or any other factor related to education, based on the analytics data that they can get from trolling through social media sites.

And again, this raises ethical questions. What data are suitable for use for educational purposes?

Here's another one. This involves Facebook again, where the company experimented with the use of news feeds and other data to actually alter the emotional states of users. When this came out, of course, it was a scandal. But what if we know ahead of time that companies are doing this?

And what if, ahead of time, we're able to identify beneficial purposes for this? We can easily see, for example, experiments on educational data feeds allowing researchers to alter or adjust the emotional states of learners so they're more receptive to the education they're receiving. Is this right? Is this wrong? Under what conditions would we countenance doing such a thing?

Here's something more down to earth: a cafe and deli using facial recognition software to bill its customers. There are a number of stories like this in the media, stores where you no longer have to go through a checkout. They just use cameras to watch what you take off the shelf and put in your bag, and then charge you for it, based on things like facial recognition.

School districts have been using facial recognition for some time now. The most ostensive purpose of this is security. The US has a problem with school shootings, and as a result they're screening everybody who comes into the school. Is this ethical?

It's certainly a good purpose, right? Charging people for what they take, preventing violence. But should facial recognition software be used to do this? What about facial recognition software used by an examination company to proctor exams? Now the ethics are a bit different, aren't they?

And sometimes it's not the use of the technology but the refusal to use the technology. In many cases physicians, perhaps for religious reasons, have refused to apply certain technologies on the grounds of ethics, some of them even life-saving technologies. We've certainly heard of cases where physicians don't want to perform certain operations, don't want to perform blood transfusions, don't want to perform transplants on people who have already had covid. Educators may also refuse to use learning analytics for similar reasons. If an educator believes, for example, that video proctoring is ethically wrong, they may refuse to use it. Is the educator ethically right in such a case?

Well, let's take that another step further. Some technology companies are refusing to provide services to customers that they believe are ethically wrong. That was the case with Google Cloud services, for example, declining a contract with an abusive government or agency. They may put their finger on the scale, if you will, to, for example, equalize error rates across protected classes of people.

There are all kinds of practices that a company may take based on the ethics of actions undertaken by other people, and specifically their clients. And are the companies ethically entitled to do this? Is it up to the company to decide on the ethics of a certain action? I recall in the most recent federal election, we had a political candidate here in Canada

who was flagged by Twitter for posting what Twitter thought was a misleading or misrepresentative video. It was an advertisement saying that the other candidate held a certain position, and Twitter said no, they did not, and they flagged the video. It's very arguable that the candidate did hold that position, and that Twitter was wrong. And let's suppose that they were wrong: is it up to Twitter, an American company, to apply ethical standards to politicians running in a Canadian election? It's a good question. So what do all these cases have in common?

There are a number of things that they all have in common, and this will define the scope of our study. First of all, and most obviously, all of these are cases where a company or individual or government or institution uses advanced computing applications and learning analytics, which we'll call "analytics" just for brevity.

And they may vary; we'll talk in a later video about the types and applications of these technologies. But that's what they all involve: they're instances of this intersection between advanced computing technology and ethics, and they raise similar questions. In each case, the specific question is different.

But the questions overlap, in the sense of asking how we address these practices, whether these practices are ethically acceptable, what would constitute "ethically acceptable" in educational circumstances and in wider circumstances, and on what basis we should decide one way or another.

These cases also aren't simply cases of individual ethics. They aren't simply cases of "is this company doing the right thing or the wrong thing, is that person doing the right thing or the wrong thing." These are all cases where the use of analytics, artificial intelligence, data gathering, and the rest of the infrastructure that supports all of this may be pushing society as a whole in a direction that we're uncomfortable with.

And we sometimes label this, for example, the surveillance society or the data society or the information society, and these terms suggest that the fabric of society is changing as a result of the ethical decisions, or the unethical decisions, that we are making with respect to this new technology.

There's also the sense in which there may be misuse or deliberate harm caused by the people who use these technologies. Sacha Baron Cohen, whom many people know better as Borat, argued recently that the platforms created by Facebook, Google, Twitter, and others constitute the greatest propaganda machine in history.

Recently, in testimony before a committee, a Facebook whistleblower argued that it's not the informational content of the disinformation that Facebook produces that's the problem. Rather, it's the algorithm itself: the use of these particular technologies for the purpose of nothing more than making money. There's a structural problem here. Either way, we're looking at not simply ethical lapses, but deliberate harms that are being inflicted on society, either by individuals, by companies, or by the overall structure of the system that we've put together.

Collectively.

And finally, these technologies are a lot like people, and I mean that in the most literal sense. These technologies are either already able to perform tasks that humans have traditionally performed (and we'll look at some of those in future segments of this course), or they're potentially able to do so. In the case of robot tutors, arguably, we're not there yet, but we can imagine, based on what we've seen so far, where robot tutors could replace teachers.

It's conceivable. It might be technically impossible, I don't know, but it's conceivable. And the ethics of robots, if you will, aren't the same as the ethics of humans. One of the consequences of using analytics or artificial intelligence is that they may make their ethical decisions differently than we do. A driver might swerve to avoid a deer on the road.

A machine might not. A human teacher might find grounds for forgiveness for a student who, for some reason, skipped a question on their test; a machine might not. And so the replacement of humans by intelligent machines poses a whole class of ethical questions, and they break into, I would say, two categories.

One is: what were the ethics that the humans applied in these cases? And second: what should be, and how will we create, the ethics that the machines use if they're replacing humans? And of course, there's a third question: overall, should machines do the ethical tasks that humans have done in the past?

So that's a broad sweep of some of the issues that are involved in this course. It's by no means comprehensive. We hope to be very comprehensive in module three with the issues, but the idea here is to give us a sense of the scope and the scale of the problem that we're wrestling with.

So, I hope this causes you to think about it, and I hope looking at some of these issues in particular gives rise, perhaps, to new thoughts about the ethics of using analytics in these particular situations. So that's it for this video; I'm going to stop it now. In fact, this is the end of this video, and if you're watching, here's what's going to happen.

I'm going to turn everything off, I'm going to set up the next video, and then we'll do the next video. And what you should do is give me a couple of minutes and then reload the activity center; just, you know, refresh the screen, and the new information should show up.

So, thanks a lot and see you in just a few minutes. 

Ethics and Analytics: What We Mean By Ethics

Transcript of Ethics and Analytics: What We Mean By Ethics

Unedited Google transcription.

Hi everyone. I'm Stephen Downes, with S's at the beginning and end of my name. Welcome once again to Ethics, Analytics, and the Duty of Care. We're still in module one, and this is the fourth video in module one. We're looking now at what we mean by ethics, and we're going to take a broad look at it and try narrowing down what we think ethics actually means, or at least what ethics means for the purpose of this course.

Now, just a terminological note about the word ethics. In English, we could think of it as a singular: what is ethics? We could think of it as a plural: what are ethics? We could say something like "ethics is one of those subjects that each of us feels we have an understanding of," or we could say that "ethics are important in a human life." Either way, we'd probably be grammatically correct. So I'm going to be a bit indifferent as to whether I treat the word ethics as singular or plural; it's really going to depend on context. I'll default to the singular:

ethics is a single subject. But sometimes we'll talk about it in the plural, as in the sense of "there are many ethics to consider here." It really depends on context. So what are ethics? What is ethics as a subject? Well, it's one of those subjects that we all think we know what it is.

And usually, for most of us, it involves some combination of integrity, principles, morality, honor, choice, conscience, fairness, responsibility, etc. But it's a volatile mix of subjects, and it's not always a mix that we come to agreement on. Still, we all think we have a pretty good grasp on the role that ethics plays in life.

But again, some people consciously seek to live deeply ethical lives, while others just notice that voice in the back of their head reminding them what they should or shouldn't do, but it's not really a top priority. What we can say about ethics is that, as a subject, it is 2,500 years old or more; that's about how far the written record goes. There's a history of deep and often contentious discussion on the subject, and that discussion continues to this day.

What do we mean by ethical? There are different ways we can look at it. It can include or describe an outcome: that was an ethical outcome. It can describe a type of process. It can describe the set of values that we hold, or the set of principles that we follow:

These are ethical principles.

Ethics can be arrived at, or reasoned about, in different ways. At first glance, we might just think, well, everyone knows what right is and what wrong is. But we arrive at this understanding in different ways. For some people, ethics is something that is discovered in the world: maybe they're looking at the natural state of humanity, the way Rousseau did; maybe they're looking at it from a biological, evolutionary perspective. For others, ethics is something that we create; J.L. Mackie, for example, wrote a book called Ethics: Inventing Right and Wrong, and maybe ethics is just something that we create in order to produce, perhaps, social cohesion, or for some other political purpose. For others, ethics is something that is revealed, something that comes down quite literally on tablets from the mountain, and there is no rational basis for ethics; there just is a sense of right and wrong.

Ethics might find itself applying to many different topics and coming from many different domains. Ethics might involve rights and fairness, justice, equality, diversity, access. Or it might involve biology, religion, science, psychology, the humanities, law; numerous different subjects are all going to both be influenced by ethics and have an impact on ethics.

There's another, entirely different, stream of thought that describes ethics as being based on virtue and character, perhaps as described by Aristotle in ancient Greece. And we think of the ethical person as the person who displays such qualities as wisdom, courage, humanity, transcendence, justice, moderation, or pick your set of qualities: reasonableness, rationality, kindness, care.

Ethics also (because we're not done yet) is generally thought of as speaking to what actions to take, and there are different ways of expressing this, and these all correspond to different ways of thinking about ethics. It might describe actions we should take: you should give to the poor. It might describe actions that we ought to take: you ought to pick up after yourself in the cafeteria. Or sometimes it talks about actions we must take: you must be fair in hiring decisions. And counter to that, ethics also describes, or can describe, the opposite of these: things that we should not do, things that we ought not do, things that we must not do, ranging from simple infractions, like perhaps lying or leaving out aspects of a story, to more significant infractions, like stealing or killing or copyright infringement.

That last one was a joke, just in case you didn't get it. We can also talk about what ethics is by talking about what it is not. And here I'm following an analysis from 2009, though I agree with these points. One thing ethics is not: it's not the same as a feeling. For example, we would not say that, simply because something is repugnant or offensive, it is unethical. Something can be repugnant, like, for example, pugs, without being unethical, or offensive without being unethical. I'm making a joke about pugs, but I'm thinking about, you know, maybe dirty trash in the back yard, the downstairs part of an outhouse, etc.

All of these are repugnant and offensive, but they're not necessarily unethical. And more to the point, simply because we find them repugnant or offensive, it does not follow necessarily that they are unethical. We want a bit more of a story there, whatever that story might happen to be, but we want more of the story.

What else is ethics not? Well, it's not the same as religion, for one thing. We think of ethics as being broader than religion, in the sense that most people, I think, do not believe that ethical behavior is limited to those of one particular denomination or one particular religion. Nobody believes that only Zoroastrians can be ethical. At least, I don't think they believe that. But there are also other reasons why we would say ethics isn't the same as religion. It's arguable, as Kai Nielsen argues, that we can be ethical without religion; Ethics Without God is the name of his book. Conversely, it seems to me certainly possible that a person can be religious but be unethical.

And even more to the point, the domain of ethics very often, it seems, extends beyond the domain of religion, beyond the domain of spirituality. Think of, for example, copyright: you wonder what the Bible would have said about copyright, and honestly, I don't think we know. So they're not the same. That doesn't mean that there are no religious arguments for or against propositions in ethics. It doesn't mean there isn't a religious dimension to the subject; there certainly can be, especially for people who are religious. But they're not the same.

Ethics, similarly, is not the same as culture or cultural norms. Again, just like religion, cultural norms may influence our ethical decisions, and our ethics may influence our cultural norms (one would hope that they do), but they're different. Different cultures, for one thing, define ethics differently. For example, we can distinguish between compliance-oriented cultural perspectives, as opposed to value-oriented perspectives, as opposed to, say, libertarian perspectives where it doesn't matter. Also, arguably, some cultural practices appear obviously wrong: slavery is a good example; cannibalism is another; human sacrifice. These are generally thought of as bad things, at least today, by us, in our society.

But there were cultures and times when these practices were deemed ethically okay, and indeed sometimes even ethically required: to appease the volcano, it's the only right thing you can do, right? So there is a distinction to be drawn between what is normal in a culture and what is ethical.

Additionally, cultural norms may go well beyond ethics. In our culture, it's normal to wear blue jeans. Is it wrong to wear blue jeans? Not in our culture. In other cultures, in other circumstances, it certainly is.

So ethics is not the same as culture. Ethics is not the same as science, either. And again, we have the situation where science may influence ethics and ethics may influence science. Now, certainly the evidence matters. Well, maybe I shouldn't say certainly; maybe we can just sit here and make up ethics, or, if ethics are revealed, perhaps the evidence doesn't matter. But arguably, the evidence matters: certainly, if we're talking about ethics, we need to take into account how we should regard evidence and what counts as evidence.

But even that said, there's a longstanding argument in philosophy that what ought to be the case does not follow from what is the case. For example, someone might argue that it is against human nature to fly. It does not follow from that that flying is ethically wrong. Now, you could substitute your own practice or behavior for the word fly; we still have the same form of argument here. And the argument suggests that what is the case does not inform what is ethically right or wrong. One might say that what is the case informs what is ethically possible. And there's another dictum in ethics, the expression "ought implies can": that you ought to do something can be true only if you can do it. You ought to save the drowning person, for example, only if you can save the drowning person. There's a good argument for that, and there's actually an argument against that as well.

But we can see how what we know about science, and what we are able to reasonably predict as consequences, can be something that informs what ethics apply in a particular case.

To wrap up in a final segment of this talk, let's think about a few things that ethics might be. And by "might be," what I mean here is that maybe this is a good approach for thinking about ethics, maybe it's not. These are the sorts of questions that we're going to want to think about. One that I see quite a bit is that ethics is a framework for making decisions.

So it's a tool, essentially, and the idea here is that analytics and related technologies pose dilemmas for practitioners, and ethics is a framework or a tool that allows them to make the right choice, whatever that might be, when they're facing these dilemmas.

Other people (and here I'm thinking of Neil Selwyn, but also many others) might argue that ethics is inherently political, and there's a point to be made here. Certainly there's a strong relation between ethics and politics: we would like our politics to be ethical, and sometimes we'd like our ethics to be political.

Sometimes we'd like our ethics to inform policy. But I think there's a divide between them: I don't think ethics should govern everything in politics, and I don't think politics governs everything in ethics. There may be a cause-and-effect relationship here, or there may simply be an overlap, or a commonality of topics being discussed or categories being considered.

Ethics, to wrap up, might also be rational, and, you know, that implies that ethics might also be irrational. We'll explore that. But certainly there's this sense that ethics includes some sort of sense of rationality, reasonableness, or decision making. Certainly the philosophy of Immanuel Kant makes that suggestion explicitly: that ethics is within the domain of reason, within the domain of practical reason.

And the idea of being able to make the right or wrong decision about something is something that is inherent to rationality, and indeed inherent to being a human being, in particular a rational human being. We certainly use reason a lot when we're talking about ethics. And it's interesting, because the topic of artificial intelligence or analytics also brings the concept of rationality into question, right?

The concept of rational agents is central to artificial intelligence. So ethics often depicts its subjects, whether us or machines or institutions or systems, as rational agents trying to determine, reasonably and responsibly, what is right and what is wrong. So my question, which I'll raise to wrap up this video, is: can we do that?

Is it reasonable to suggest that we use reason in order to address ethical questions? Or is there something more subtle at work here? And I'm going to suggest, through the weeks and months of this course, that yes, there is something a lot more subtle happening here. I don't think ethics really is any of these three things: a framework, a type of politics, or a branch of reason.

I think it's something different. And one of the things about the duty of care that appeals to me is that it gives us a mechanism that allows us to get at that sense of what's different about ethics. Of course, we've got a lot of thinking and talking to do before we get to that point.

 

Ethics and Analytics: What We Mean By Analytics

Transcript of Ethics and Analytics: What We Mean By Analytics

Unedited Google transcription from audio.

Hi. I'm Stephen Downes. Welcome to another episode of Ethics, Analytics, and the Duty of Care. Today's video, part of module one, which is the introductory module to the course, is Ethics and Analytics: What Do We Mean by Analytics?

So analytics generally is thought to be related to data and related to decision making. For example, one definition describes it as the science of examining data to draw conclusions and, when used in decision making, to present paths or courses of action. But this is by no means the only way of thinking about analytics. We can also think of it as the overall process of developing actionable insights through problem definition and the application of statistical models; that's what Cooper writes in 2012.

The focus of this course is going to be learning analytics, that is to say, the application of analytics (which we'll continue to talk about here) in learning or educational contexts; so, as applied to learning and education. And even looking at this definition, we see there are different aspects of analytics that we can focus on, everything from data, environments, and contexts, to the objectives of analytics in learning, the methods, and who is involved, the stakeholders.

Learning analytics is typically described in terms of its objective, which overall is to increase the chances of student success. But in a practical, day-to-day sense, it might mean anything from basic reports on log data, through experimentation and the results of trials, to the organization of students and faculty, to the transformation of an organization: the way it offers its classes, the way it presents materials, even through to a sectoral transformation. This model here is called the maturity of learning analytics deployment model.

But there's also what might be called a scientific goal to learning analytics, looking more deeply at the subject, which is to say the learner, and trying to approach an understanding of how that person learns: studying the mechanisms of analytic systems in order to understand the mechanisms of human development and human cognition. There's this idea that these might work hand in hand to develop, shall we say, a science of learning.

But generally, we want to do more than just understand; we want to optimize learning. We're looking to do what we're doing better. And so this involves, as George Siemens says, the measurement, collection, analysis and reporting of data about learners and their contexts.

In this course, I want to take analytics to mean something very broad. There are different ways of thinking about analytics, and it's easy to get distracted by focusing on a fairly narrow perspective. But let's look at some of the different questions we can ask. Gartner, for example, offers a model of analytics that moves from basic information management through to optimization. At the basic level, we ask: what happened? Then we get a little more diagnostic: now we ask, why did it happen? Then we try to predict: what will happen? And then finally we look for efficacy or agency: how can we make it happen? Of course, the answers to these questions are going to depend a lot on who you ask. Typically, when we look at learning analytics (and by typically I mean generally, or in the majority of the studies and reports that I've looked at), the focus of learning analytics is described from an institutional perspective, and we read things like Sclater and Tait here: learning analytics offers the potential to provide educators with quantitative intelligence to make informed decisions about student learning.
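To make the four Gartner-style questions concrete, here's a small sketch of my own (not from the talk) that runs each level of analysis on a toy list of weekly quiz scores. The specific rules used here, a week-over-week trend check, a naive linear extrapolation, and a tutoring threshold, are illustrative assumptions, not anything a real learning analytics product prescribes.

```python
# Toy illustration of the four levels of analytics on weekly quiz scores.
scores = [72, 70, 65, 61, 58]  # one student's scores over five weeks

# Descriptive: what happened?
average = sum(scores) / len(scores)

# Diagnostic: why did it happen? (here: a simple week-over-week trend check)
weekly_changes = [b - a for a, b in zip(scores, scores[1:])]
declining = all(change < 0 for change in weekly_changes)

# Predictive: what will happen? (naive extrapolation from the average change)
average_change = sum(weekly_changes) / len(weekly_changes)
predicted_next = scores[-1] + average_change

# Prescriptive: how can we make it happen? (a made-up rule-of-thumb intervention)
intervention = "offer tutoring" if declining and predicted_next < 60 else "no action"
```

Even this trivial example shows why the levels get progressively harder: description is just arithmetic, while prescription requires a policy decision about what to do with the prediction.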

So they're used by the people who provide, organize, and present educational or learning opportunities. But what we mean by learning and education might be very different depending on who we're asking.

For example, here we have an institutional leader saying: it might be the case that we keep the students, we retain them, but also that we're able to provide them with better support. They're looking at it from this institutional point of view. The teacher might be saying: I can reflect on what works and what doesn't; what should I keep doing? What do I need to change? The student, meanwhile, is looking at analytics from a more personal perspective: I'm always curious about which areas I'm struggling in and which areas I'm doing better in. These three domains are important to an understanding of analytics as a whole: not only the institutional domain, sometimes called academic analytics, which looks at operational and financial decision making, student retention, topics like that; not only teaching and pedagogy: learning design, curriculum, recommendation of materials, course paths, etc.;

but also learning from the learner's perspective: learning strategies, feedback, dashboards. All of these aspects of learning analytics play a role in the subject that we're talking about in this course. Another way of looking at the different types of learning analytics is to look at the different areas in which analytics is used, and here we're going to see a similar tripartite division of the field. The UC Berkeley Human Rights Center research team, for example, divides AI tools into three categories: learner facing, teacher facing, and system facing (or institutionally facing). It should be clear, though, from our experience with the learning management system, that the same tool might face all three of these sectors at the same time; it's just three ways of looking at the same data. In fact, as we look at the history and development of learning analytics over the years, there are numerous types, applications, and domains of analytics research in education.

We can look at online systems, neural networks, students, papers, learning, education, study, virtual learning, and more. All of these are different perspectives, different frames, we can attach to learning. And in fact, you know, as I prepared for this particular piece of work, looking for these frames, looking for these ways of characterizing analytics, I saw model after model after model, lens after lens after lens. There are many ways of categorizing and typifying learning analytics. None of them is probably best for any given application; you could probably choose the one that fits your purpose most tightly. But if we're going to understand the subject of the ethics of learning analytics, we want to construe this as broadly as possible. The wider definition avoids the difficulties of trying to come up with a narrow definition, but it also makes sure that our look at the ethics of the subject is complete, that we're not ignoring potential ethical applications or ethical implications simply because the practice is outside the scope of learning analytics. We're attempting to avoid that.

If the question comes up, we'll include it as part of learning analytics, and then we'll sort it out from there.

Now, analytics is part of artificial intelligence, and artificial intelligence has its own subdivisions and ways of breaking it down. This is a useful way of breaking it down. We can begin with artificial intelligence itself as a term, meaning something like building machines and software that can mimic intelligent behavior.

Maybe mimic is the wrong word. A subset of that is machine learning, where instead of providing the AI with explicit instructions or explicit rules, we focus on giving computer systems the ability to learn from data without being explicitly programmed. And then a subset of machine learning is deep learning, which uses neural networks to, shall we say, learn a representation of a data set. What makes deep learning deep is the idea that these neurons, these connected units, are organized in layers, and it's the layers that make the learning deep.

So "deep" is a description of the topology of the network, and not, say, the idea that it can have deep thoughts or something like that.
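The "layers" idea above can be sketched in a few lines of plain Python (my own illustration, not from the talk): each layer transforms its input and hands the result to the next, and it's this stacking, not any notion of deep thought, that makes the network "deep." The weights below are fixed, made-up numbers for illustration; a real system would learn them from data.

```python
import math

def dense_layer(inputs, weights, biases):
    """One layer: a weighted sum per neuron, passed through a sigmoid."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid activation
    return outputs

def forward(network, inputs):
    """A 'deep' network is just layers applied one after another."""
    activation = inputs
    for weights, biases in network:
        activation = dense_layer(activation, weights, biases)
    return activation

# Two layers: 2 inputs -> 3 hidden units -> 1 output.
network = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[1.0, -1.0, 0.5]], [0.0]),                                  # output layer
]
result = forward(network, [1.0, 0.0])
```

Adding more entries to the `network` list makes the network deeper without changing `forward` at all, which is exactly the topological sense of "deep" described above.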

So we're going to take analytics broadly, to include artificial intelligence. We're going to think of artificial intelligence, broadly, as software (and possibly hardware) systems designed by humans that, given a complex goal, act in a physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on or processing that information, and deciding on the best action, where that action could be any of a number of things, including a prediction, a categorization, a representation, and more. We're not going to limit it to action as in action verbs. And we're going to keep the focus a wee bit narrow, in the sense that we're going to focus much less on AI that's based on symbolic rules and much more on AI that's based on neural networks; in other words, machine learning and deep learning. And the practical reason for that is that most of the field has turned away from symbolic or rule-based systems. Now, there's a caveat there, and the caveat is that in many applications you'll find a blended approach, with a neural network being used and then rules being employed to apply constraints on the results.

We'll certainly consider such systems, because such systems have ethical implications; if you apply, or don't apply, a rule, that clearly has an ethical implication. But we're not going to be thinking of the rule-based systems from, say, the 1970s and the 1980s as examples of current artificial intelligence or analytics.

Now, generally through the course, I'm going to use the global terms analytics and AI (or artificial intelligence) interchangeably. When I say AI, I mean analytics; when I say analytics, I mean AI. When I use either term, I'm talking about learning analytics more specifically. In general, we can think of these general uses as being fairly loose; when we need to be precise, we will be. So if we need to say machine learning as opposed to other methods (for example, as opposed to expert systems), we will. If we need to make the distinction between supervised and unsupervised learning, we will. If we need to make the distinction between convolutional neural networks, deep learning, etc., we will.

But generally, when I'm using the global terms, they'll just be loosely applied. Finally, what makes this different? Why does the topic of analytics and artificial intelligence in learning create a range of ethical questions we haven't encountered before? Well, back in the 70s, Ware et al. wrote that the dangers of digital technology stem from three major effects, and that pretty much captures a lot of it.

First of all, there's scale: computerization enables an organization, or today an individual, to enlarge their data processing capacity substantially. We look at modern artificial intelligence systems and they're looking at a billion data points. This is something that we just could not do even 10 years ago, even 20 years ago; we just could not do that.

Except, perhaps, from the perspective of a human brain, which does do that. So the things we can do with computers are, in that sense, different from the things that we could do with machines in the past. Yeah, it's kind of like the difference between weapons and weapons of mass destruction.

There's a significant difference between what you can do with the former and what you can do with the latter. The second thing that makes digital different is access. We talked earlier about the idea of context collapse, which is to say the case where something that you have written, perhaps intended for a specific audience, becomes available to a worldwide community, and then you hear back from that worldwide community whether you want to or not.

That's the sort of thing that digital technology enables. So people can access data much more freely and easily than they used to be able to. We read of a case where an analytics engine is generating faces by simply gathering thousands, millions, of photographs of people on the internet and using those as input data.

So this access is what makes this kind of AI possible. It's kind of the same thing for individuals: we now have access to things like Wikipedia and Google and maps and more. And so we have at our fingertips masses of data that we never did before. And also, as we'll see, we have access to the technology that allows us to process that data, to regard that data intelligently, and to draw inferences from it.

We'll be looking at many examples of that. Finally, third: computerization creates, as these authors, Ware et al., originally said, a new class of recordkeepers. We have a much better idea today than they did in the 70s of what this new class of recordkeepers looks like. And it's not just companies like Equifax, which handle our credit data, or health companies, which handle our health data.

It's companies like Facebook and Twitter that handle our messages back and forth to each other, which used to be secret, used to be private, used to be something that we did (except in the rare case of a wiretap) without anybody else looking in. Now there's this class of companies that has custody over all of our interactions and all of our information.

There's really no way to avoid that, in a certain respect, and it's the creation of this new class of recordkeepers, with corresponding power and responsibilities, that creates a whole new class of ethical questions. So that, overall, is what we're looking at: analytics, artificial intelligence, learning analytics, technology generally. We're looking at these different categories of machine algorithms, but we're looking at it as broadly as possible. And I hope I've given you a sense, with these short descriptions, of the variety of different applications and systems that are out there. In the next section, in module two, we're going to focus specifically on applications of analytics in learning, applications of AI in learning, and we're going to begin with the characterization offered by McKinsey, looking at the questions that we're answering.

We'll find that McKinsey's characterization falls short of what's actually happening in the field, and we're going to identify and classify a large range of potential applications. Now, the reason why we're doing this isn't to create some theory or model that best helps us classify and categorize applications of learning analytics.

I know there are a lot of theories that do that, but that's not the point here. The point here is to capture a sense of what the benefits are that we obtain from the use of learning analytics. And it's important to keep in mind that these benefits are what generates this entire inquiry into ethics in the first place. If there were no benefits to the technology, then nobody would care; we simply wouldn't use it. But the fact is, there are benefits, and so our discussion of ethics is going to look at these benefits, and look at the ethical issues in the light of these benefits.

So we have the applications, we have the benefits, and we have the issues. We're going to try to map those out and I've I even now I have no idea what that map is going to look at. And a lot of this course consists of taking a lot of these entities.

Things like applications and issues and ethical codes and so on: putting them in a chart and seeing what we see. And as we get to the later sections of the course, we'll get more of what my impressions are. But also, importantly, you will be able to develop what your impressions are, and I'm sure they'll be different from mine, and that's the beauty of organizing a course this way.

So that's it for this section of the course. There's one more video in module one following this, which is the wrap-up discussion held on Friday, and then we'll take the weekend off and get back to it on Monday with module two. So thanks a lot. I'm Stephen Downes.

Module 1 - Discussion

Transcript of Module 1 - Discussion

Unedited Google transcription from audio. It's pretty awful in places.


Are you able to hear me at all? I can't hear you. I wonder why. Because I'm muted? Well, that would do it. So, a few more clicks. Yeah. So, as you can see, you're the only one here. This is not the first time that's happened. No, but I was counting on this one being the biggest group so far.

Yeah, me too. Or someone, anyone... Let me go yell at my neighbors; I'll be right back. I was thinking about that, because, you know, especially the live participation has been limited, to say the least, although I've now got more than 120 people subscribed to the newsletter. And it occurred to me

That probably, right now, there are more alternatives for people to attend online meetings, conferences, sessions, webinars than there have ever been before. And though the size of the audience has probably grown somewhat, it probably hasn't grown nearly at the rate of the size of the offerings. That's my story and I'm sticking to it. And I think that's right.

Yeah. I also think it's a holiday week this week in the US. I'm not sure. Yeah, not so much, not so much. Okay. Yeah, Columbus Day has become very contentious, so for some it's not much of a holiday anymore. But for the majority, they just, yeah... do they get it? Unless you're Italian?

Yeah. Do they get the day off? That's a good question. It's a federal holiday. I'm not sure, I'm not sure about that. I'm not a state employee, so... yeah, I know. Yeah. You know, all the sanctioned holidays? Yeah. I don't think so, but I'm not sure. And I guess also, with so many alternatives available to people in their own time zone, people in Europe, in Britain, you know,

and points east, I guess, don't really feel the need to stay in after work and keep working. Yeah, well, plus you've got them right at the end of their day, right? Yeah. They're in transit, probably. Yeah, either that or evening supper; it depends on whether they work at home or not.

Yeah. So, I see your hat. That's the LA Dodgers hat, I'm assuming, not the Angels hat, so you must be pretty happy right now. I am, I have to say: beating the Giants, and having two teams, one with a hundred and six wins and one with a hundred and seven.

I know, it was truly something. Yeah. And it's kind of, you know... now the Dodgers have to travel to Atlanta. Yeah. The team with ten fewer wins. Yeah, it's evil. So there's a problem here; that's kind of weird. Yeah, I'm a bit surprised, well, because the Dodgers were the wild card, right?

Yeah, exactly. First time the Dodgers and the Giants have ever played for the division. Really? Wow. Yeah. Because usually one's out, right? Yeah. Right. So yeah. At least I believe I heard them say it was the first time; I don't spend a lot of time on it anymore. Yeah, I believe it.

Yeah. I mean, there have been some classics with the Giants and Oakland, and the Dodgers and Oakland. But yeah, this is the first time they've crossed. All my Giants friends are all whining about the called third strike. He swung; it was two out, one on, Scherzer pitching.

I don't think it changed the outcome. No, never. No, absolutely not, in my mind. You know, I mean, the way to look at it is: if you're blaming a called third strike for losing the game, you probably haven't done enough during the game to win. Exactly. So that's always been my position.

Yeah. If you're complaining about the calls, you just haven't scored enough runs. Yeah, that was my concern with the Dodgers, and still is: well, shut out, because you can't squeeze out one run. A bunch of millionaires can't squeeze. Yeah, I know it. Yeah. So yeah, I'm happy today. Tomorrow's another game.

Yeah, you're happier than I am. Well, yeah, but you live in Toronto, so you're almost like a customer. Hey, well, I guess they won a Series, but we won the Series twice back in the 90s. So not quite as bad as that, and we have a fun team too. And I've always kind of liked the A's;

I've always thought so. Yeah. New James Bond, one of those is fun. Hey, we've got nothing on these... Anyway, we got 91 wins and finished fourth. Yeah, that's a tough division. Yeah, a lot of teams, and one team won its division with 89 wins. Oh yeah. That's... well, the West had 107 and 106.

Yeah, sure. That's up there now, I think. Yeah. Yeah, that was crazy. Yeah, you'd hate to be any of the other teams. Poor San Diego: all that money, and that was their result. They were supposed to be almost as good, supposed to be the second-best team.

Yeah, didn't happen. Oh well. So, I have one big question. Yeah. I tried to install gRSShopper in 2018... You're not alone. ...and gRSShopper again last week, and failed, and put in a ticket to Reclaim. I know the guys at Reclaim, yeah, you know, in Virginia, and nobody answered. Oh yeah, that's surprising, that nobody answered. Because it's the first time; yeah, I've been with them for years and always get an immediate response.

Yeah. So it's been a busy week, so I'm going to circle back. Yeah. It's good, you know, Tim's right there, and Jim's... yeah. And I'm assuming they're going to help me fix it. But I just wanted to say, I may be asking more questions about gRSShopper.

Yeah, that's no problem. And clearly there's time to do that; it's not like you're interrupting other people asking about ethics. If they wanted questions about ethics, they should have joined the chat. But which way did you try to install it? Just as a cloud install, or using Docker, or... Okay. So that's what I should do.

Yeah, that's a lot more reliable. Especially the... like, there are two versions, right? There's the PLE version and the course version. The PLE version is more advanced, and I've got various instances of it running even now. So it should install without too much difficulty if you install it using the cloud. And have you gone to the... you've probably gone to the GitHub site, right?

Yes. Yes. Right, downloaded it. Yeah. So there are instructions there on how to install it using the cloud, using Docker. And... yeah, I hear you. Yeah. And, you know, especially, I'd say from experience: if something goes wrong doing it in the cloud, it's so complex,

it's really hard to figure out what went wrong. But when it goes right, it's beautiful, because it's a one-click install. Well, not quite one click, but virtually a one-click install, and then you're up and running. The way I recommend people do it is to use the Docker cloud environment they have.

It's... I think it's just 'cloud dot'... no, not a Docker cloud environment, the Reclaim cloud environment. I think it's just cloud.reclaim.com. So you'd need to get an account there. Okay. All right, that's a thing. Yeah, exactly. And then, you know, I've tested it on Reclaim dozens of times,

so I'm pretty confident with that installation. Still not perfect; there's no end to the bugs that can crop up, especially when people do something different. But it's working a lot better than it ever did, and it's a lot more reliable than just trying to install it in a typical instance, like on a typical web server.

In fact, the login function won't work properly, I can tell you that right now, at least not on Reclaim's cPanel sites. And it's because one of the requirements for encryption doesn't install properly on the Reclaim site, and I don't know why. It reports that it's installed, but then when you try to run it, it says it can't find it.

But I really felt, you know, that it needed better security on the login, you know? 2021 security as opposed to 2001 security. Yeah, big difference. Yeah. So that's what I did, and I'm happy I did it, but it really means, you know, you have to use the cloud.

I plan to migrate all of my stuff to the cloud, and even this course. And the idea will be that people will just be able to install a cloud version of this course in any Docker container that they want, or more accurately, any Docker cluster of containers that they want, and I'll never know.

They took it. Yeah. That's interesting, but apparently necessary. Yeah, yeah, but it's fine with me. I mean, in the background, I've got a presentation coming up next Tuesday at the open learning conference. And the title is, what does it mean to enroll in a course? And the thesis is that if you want it to be open, then you shouldn't have any sort of registration.

But what does that look like? And so this course is meant as the example of what that looks like. And I'm discovering some things. I'm discovering it's hard on the ego, because you don't really see the people who are in the course, and I assume people are there.

I mean, there might even be someone watching on YouTube right now, and I just have no way of knowing it, unless they're chatting in the YouTube chat, but I'm not seeing that; I'm seeing us here. So let me slip over to YouTube and see if there's anyone in the chat.

Although... I'm still seeing yesterday's. Try reloading the page. That's the... here in the activity center. Yeah, I think I got there out of 'activities'. Yeah, and it's still showing yesterday's. Let me start over. Yeah, that's weird, because it shouldn't be. Okay, so let me reload... and then go to... it's still yesterday's.

Oh, I see where you are. Go back to the homepage, the course homepage.

And then in the upper right, right below 'course newsletter', is 'activity center'. Oh, there I am. Okay, I thought that's how I got here, but, you know, it was already open from before. So, okay. Yeah. So now I can go to YouTube, and nobody's in the chat. No, it says two watching now, but that might be the two of us, because I just went there.

Yeah, probably us. Yeah. But, you know, to tell you my interests, mm-hmm, of course, since I have this unique opportunity. Yeah. So, well, you know, it was three, four years ago, and then everything shut down. You know, it seems like it was just a year ago too.

But anyway, we met at some open government summit or event. And my main... well, I have two main focuses. So I've been going to Victoria for the Digital Humanities Summer Institute, yeah, learning about digital humanities, you know. So that's the umbrella for my digital transformation.

And my main focus is eportfolios for adults, to document their informal learning, to be able to present it as evidence of prior learning for college credit. Sure, right. So I've kind of followed the Scottish model, with their further education, you know, catering to what I call taxpayers; some people call them citizens.

Yeah, you know, since we pay for all this, we ought to have access to it. So, back to access. And I made a presentation to AAEEBL (I never remember what all that stands for, but they're the eportfolio people) several years ago, saying that it would be helpful if eportfolio people had an ontology, or whatever, a controlled vocabulary, for their public eportfolios, so they could find each other. And I got a big shrug.

You know, whatever. But I kept at it, and joined their group about technology, and kept presenting. And they were busy adding eportfolios as the 11th high-impact practice at the AAC&U in the US. And they told me, well, when we get that done, we'll definitely do this, okay. And then COVID, right?

So I really haven't circled back. Yeah. So that's why... So that's all to say, well, that's one part... three, two parts. Third part: I'm hoping to be able to use something like Omeka, you know, free and open source software that libraries and museums use. How's that spelled? OMEKA.

Oh, I have seen that. Yeah. Yeah, and it's by a reputable group on the East Coast that does several free and open source software projects around journalism and libraries and things. That's fine. And then, based on my digital humanities work, I'm thinking of a flat website of just pages. But then there's a big struggle with Hugo or... but that's whether to have this eportfolio be just flat pages, right?

So it's transportable and archivable, and all of that based on a graph, graph storage, right? And I've taken some seminars from Neo4j and now TigerGraph; I just did a webinar with them. But I'm not a programmer, you know, I'm not a computer scientist, so I just want to use this stuff, right?

Yeah. But so, putting all that together, what I would hope to do is have my own graph of my own learning, and then be able, using a controlled vocabulary, to share that with other eportfolio people. Yeah, whether we have a graph database or not, whatever. And then I'm piloting this at a university in Minneapolis, Minnesota, right?

But they're going to give me credit for the stuff I post on the other website. So it's a long, complicated story, but there it is. And that's why I'm here: just to learn more about open, and then hopefully solve some problems like that and increase my learning.

Sure. And the other thing is finding other people who know more about this stuff. There are other people: I see now from the course activity center that there are some people watching this video, so it's not just us. But what you say is really interesting to me, because part of what I have in mind doing is creating a graph of a lot of the concepts in this course using gRSShopper.

For example, we're moving, after this module, into the module on applications of learning analytics. And, you know, I've been collecting examples of different applications for several years now. I've got a big list of examples, and I've kind of categorized them, but I'm sort of curious to see how other people would categorize them.

You know, I'm not necessarily fixed on the idea of a controlled vocabulary for these, but I am kind of curious, you know: how would we group them? So basically what I want to do, and I don't know if I can, is create the list of applications in one column, and create a list of categories in the other column.

And then have people just draw lines between the applications and the categories, if they match. Now, there's a practical problem right there: there are dozens and dozens and dozens of applications, and potentially even dozens and dozens of categories. So I'd have to show subsets of these to make it manageable, but that's okay.

Yeah. You'd just go through several screens of this. It could be a fun thing. That's one of the things that I've been thinking about doing during one of the live sessions, one of the Friday sessions: you know, have people do that, and I look at the results. Of course, we'd need people.
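An exercise like the one described here can be sketched with a couple of lists and a tally of the lines participants draw. Everything below (the application names, category names, and the two sample participants) is invented for illustration; it is not the course's actual data or gRSShopper code.

```python
# Sketch of the "draw lines between two columns" exercise: applications in
# one column, categories in the other, participants submit matching pairs.
# All names here are made-up placeholders.
from collections import Counter
from itertools import islice

applications = ["adaptive testing", "dropout prediction", "essay grading"]
categories = ["assessment", "student support", "content generation"]

# Each participant submits a set of (application, category) pairs,
# i.e. the "lines" they drew.
participant_links = [
    {("adaptive testing", "assessment"), ("dropout prediction", "student support")},
    {("adaptive testing", "assessment"), ("essay grading", "assessment")},
]

# Aggregate: how many participants drew each line?
tally = Counter(pair for links in participant_links for pair in links)
for (app, cat), count in tally.most_common():
    print(f"{app} -> {cat}: {count}")

# With dozens of items, show manageable subsets one screenful at a time.
def screens(items, size):
    it = iter(items)
    while chunk := list(islice(it, size)):
        yield chunk
```

Aggregating the pairs rather than forcing a single "correct" mapping matches the intent above: the interesting output is how many people drew each line, not a fixed taxonomy.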

But all of that aside, you know, we may well get more people as this course continues. And then the week following is the ethical issues module. And now things get pretty interesting, because it doesn't depend so much on how I organize things. Right? You know, it's an application

if it exists; it's an issue if somebody's raised it as an issue. And I don't need to sort them out; I can just have a row of issues and a row of applications, and have people draw a line between the application and the issue, and see what that turns up. And the idea here, and this has always been my thinking about using graphs to understand any of this stuff, is that it's more interesting to create and look at the graphs that people create, as opposed to creating

and looking at the graphs that machines create, which is what most artificial intelligence does. Now, you know, I was working on a personal learning environment project years ago, and I had graph people and even AI people, and they all just wanted to take raw data and analyze it; they didn't want to look at the graphs that were out there in the world.

And I want to look at the graphs that are out there in the world. I've got, again, hundreds and hundreds of resources, which I'm just going to load into the system as links. And then these links all map to applications, or they map to issues, or whatever. And I'm trying to think of a way to make that work. Again,

not an automated system, you know, because I could do it by keyword or whatever, but that's kind of artificial, and it's kind of me picking winners, if you will: this keyword matters, that keyword doesn't; this issue matters, that issue doesn't. But if I can find cases where people, or where I, actually refer to a specific resource in the context of a specific discussion, or where other people do that with their own blog posts...

So somebody writes a blog post about an issue. If they're referring to a particular resource, there's a line that can be drawn. And ultimately, you know, there's this big, huge graph of resources, people, organizations, issues, applications. And then I can just use a simple, say, JSON format. Or, also, there's a graph markup language that I've been using.

I think that's Matthias Melcher's language, but I'm not sure whether he got it from somewhere else. I once knew that, but it's now months in the past. But it's there, it's defined in the system, and so getting this all together would make gRSShopper a system for doing that kind of thing,

and then sharing the result with other people, and building giga-graphs, if you will, or comparing graphs, whatever. You know, I don't know. I don't know how that would work out, but I think it would be interesting. So that's what I have in mind. And that seems to mesh pretty well with what you have in mind.

Yeah. It sounds like it, you know, but again, you have a lot more technical knowledge; I come at it as a student, as the user. Yeah, so that's extremely limiting on my end. So yeah. So I'm searching for a solution, and, you know, ideally I guess a container, yeah, with all the appropriate software, that could be easily copied.

Yeah, that would be the solution, apparently. Well, and that's my thinking: you shouldn't need all this technical knowledge to be able to work through material and create these graphs and be able to look at the output and draw your conclusions. It should be a lot easier. Right now, it's totally not.

Well, we're on the leading edge. Yeah, you know, so it will get easier. You know, and then I think that the whole graphing thing implies a controlled vocabulary, right? Because, you know, the connection has to be rationalized, both for us and for the machine. But it could be...

Yeah. You know, it could be a harvested ontology, where people gather... whatever you want to call it. Yeah. It could be, you know, organic and growing, but as long as there was a place where people could go and say, oh, I should be using that keyword, you know, because then it makes it easier for everybody to find it, you know.

So it's not top-down. I'm not thinking of it as top-down; I'm thinking of this as bottom-up. Yeah, but yet, you know, helping with discovery. And machines are going to be in there. I mean, you know, we can't get rid of them.

So I'm with you that it should be organic, it should be human-created, but then use the machines to help, you know. That seems like the right hybrid approach. Yeah. See, for me, take the meaning of any given term, and, you know, a controlled vocabulary will limit it. I'll just represent that with a circle.

That is not a symbol of anything, it's just a circle; I have to be careful, because this hand gesture has been appropriated by not-so-nice groups, and this is not that. And what I think of it as is, it's really like overlapping circles, multiple overlapping circles. Wittgenstein used the expression 'family resemblance' when he was talking about defining words in this way. Like the word 'game': there's no set of necessary and sufficient conditions to define a game. For any definition

you offer, there will always be something that is included in the definition but is not a game, or something that is excluded from the definition but is a game. So really the concept of a game is a bunch of overlapping concepts, and you can't really nail it down, and it varies from context to context, from person to person, use to use.

And to me, that's okay, because what it says to me is that a word, any word, but a word like 'game', isn't a simple concept. It's a complex concept, and the meaning of that concept is much more precise than we can express using words. And this meaning is captured by its placement within the graph.
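The idea that a word's meaning is its placement in the graph can be illustrated with a toy co-occurrence graph: record what 'game' appears alongside, and read its meaning off its strongest neighbours. The sentences below are made up for the sketch; this is the general technique, not any particular system's implementation.

```python
# Sketch of "meaning as placement in the graph": rather than defining a
# word, record what it co-occurs with. Toy sentences, not course data.
from collections import defaultdict
from itertools import combinations

contexts = [
    "a game has players and rules",
    "we play a game for fun",
    "chess is a game of strategy",
]

# Build an undirected co-occurrence graph: an edge for each pair of words
# appearing in the same context, weighted by how often that happens.
graph = defaultdict(lambda: defaultdict(int))
for sentence in contexts:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        graph[a][b] += 1
        graph[b][a] += 1

# The "meaning" of a term, empirically: its strongest neighbours.
neighbours = sorted(graph["game"].items(), key=lambda kv: -kv[1])
print(neighbours[:3])
```

At scale you would do real analytics on the graph, but even this tiny version shows the shape of the two-step process: map the connections first, then interrogate the graph around a particular term.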

So we don't need to worry about defining the word, or even using the word. We just take note of when the word is used, and what that use occurs in association with. You know, and to me it's interesting, because when somebody uses the word 'game', the first question people ask is, well, what did you mean by the word 'game'?

And they may know, or they may not know; they may know more or less precisely. But even they themselves, probably, to my mind, couldn't articulate exactly what it means. So we have to determine what it means empirically: where was it used? How was it used? What was the context in which

it was used? You know, what seems to follow from something being a game, what seems to lead to it? You know, you draw these connections. Basically, I think it's probably a two-step process, where you map out all the connections to create a graph, and then you try to do analytics on that graph with respect to a particular term, in order to form hypotheses

about what that term means. It's similar to the way we might try to explain why an artificial intelligence made a decision about something. Why did the AI say that this action was illegal, or why did the AI fail this paper, right? There's not going to be a simple, precise definition of pass and fail. But if we analyze the graph that produced the 'fail' result, maybe we can come up with a story that we can understand about what failing means, at least as far as that AI is concerned.

So I don't worry about a restricted vocabulary. And actually, I've designed gRSShopper that way, because I think everybody will use these terms differently. So each person uses their own instance of gRSShopper. We don't have to share our vocabularies; each person defines their own list of entities, their own properties of entities, whatever.

And then shares that. And they just share it in this unstructured JSON format, which basically says: this is a thing, this is a property of a thing, this is the value of that property. Keep it as basic as possible. And then allow people to import other people's graphs, and apply whatever sort of processing to someone else's graph that they want to do to it, to integrate it with their own.
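A minimal sketch of that thing/property/value interchange, with a rule-based import of someone else's graph like the one described. The field names ('thing', 'property', 'value') and the filtering rule are illustrative assumptions, not gRSShopper's actual format.

```python
# Sketch of an unstructured "thing / property / value" JSON interchange.
# The schema and the merge rule are illustrative guesses.
import json

# Someone else's shared graph: a flat list of thing/property/value records.
shared = json.loads("""
[
  {"thing": "game", "property": "associated_with", "value": "play"},
  {"thing": "game", "property": "associated_with", "value": "rules"},
  {"thing": "play", "property": "type", "value": "activity"}
]
""")

# My own graph, kept in the same triple structure.
mine = [{"thing": "game", "property": "associated_with", "value": "competition"}]

# Import with a simple predefined rule: keep only properties I recognize,
# and drop exact duplicates of triples I already have.
allowed = {"associated_with"}

def merge(own, imported, allowed_props):
    seen = {(t["thing"], t["property"], t["value"]) for t in own}
    merged = list(own)
    for t in imported:
        key = (t["thing"], t["property"], t["value"])
        if t["property"] in allowed_props and key not in seen:
            merged.append(t)
            seen.add(key)
    return merged

combined = merge(mine, shared, allowed)
```

Because every record is the same flat shape, two people's graphs always "blend" mechanically; the rules only decide which imported triples to keep.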

Right now, I would use a set of rules to do that. In my RSS harvester, there's a section in the processing with some predefined rules that allows that to happen, and I want to apply that to JSON imports as well. Or they could just take it and blend it in,

however it blends, with their own system, because it will blend, and it may produce confusing but interesting results. Yeah. All right. Yeah. So that's a higher order than I was imagining. And you think the AI can do this without a controlled vocabulary, without obvious interventions, just using textual analysis? Well, using maybe graph analysis.

But, you know, I don't know. And I'll be really honest about this: I don't know what the AI can do with the sort of data structures that I'm interested in creating, so experiments are fine. Yeah, yeah. To me, the data structures themselves are inherently interesting, and we don't really have any practice, you know, as a culture, as a society, as a species, in creating and working with these data structures. You know, the best we've managed, really, up until very recently, is relational databases, and even that is a pretty big step forward from where we were, say, 30 or 40 years ago.

So, you know, this is all new. Yeah. And then I just have to mention... you know Chris Gilliard's work? No. Doctor Gilliard, I should say; I always try to be sure to say 'doctor'. He teaches at a community college around Detroit, and he works on what he calls digital redlining.

Oh, okay, I'm familiar with digital redlining, for sure. Yeah, well, I think that's his term, but... he's out in front. Yeah. In fact, he's a Harvard fellow right now. So, okay, yeah. So there's just that, you know, in the back of my mind: this distrust of the algorithms. But then it's just a matter of who controls it, and

is it open? Is it editable by the user? But then again, how many users have the capacity to review or analyze it? You know. Yeah, no, exactly. I'm just typing a link for Chris Gilliard into the chat and the activity center; I just looked it up quickly while you were talking. Sure.

Yeah, it's helpful, as long as the chat is saved. There's always that. Yeah, yeah. Well, it does: this is me, right? Everything gets saved. Yeah. You know, I wish I could believe that, but I keep losing stuff, so I don't know. Yeah, I mentioned it in my blog post, so I guess it worked.

I've hardly settled in; I've been traveling, so I just got in a couple of days ago. Yeah, but I mentioned that, back to open, you know, Mia, Doctor Mia Zamora in New Jersey. She works with Alan Levine. Her writing classes have been open to the public for, I think, three years now.

They're a standard state university course, but then she allows the overlay of open participation. Yeah, I like that. Yeah, I do too. So I think she's on the cutting edge. And then she did a course with the Digital Pedagogy Lab; that's at a university, right? You know. Yes, and all that.

Yeah, this summer she did a course, sort of a feminist duty-of-care course, right, with Dr. Maha Bali from Cairo, right? Who I know as well. So that's, you know, that's the pathway that I've been on; that was just a couple of months ago, and now here I am, open to continuing on,

learning more about the ethics of care, and learning more about the algorithm, and how we can make sure that it's open to ethical adjustments, corrections, and what have you. Yeah. And those sorts of considerations are what led to this course, you know. And, you know, we're going to go back and look at the duty of care and where that came from.

And, but, you know, we have this mix, or this blend, of graph or network, and open, and care, and all of that together, and it's not at all obvious how it shakes out, at least not to me. So this is definitely an experiment. Yeah, that's why I'm so interested to see how this

goes. Yeah. Now that I'm home, I can reach out to Chris and Mia and just give them a heads-up that this is going on, see if they want to drop in or whatever. Yeah, it'd be great if they did. Yeah, wouldn't it?

Yeah. And so it's Tuesdays and Fridays, these open sessions? Generally it's Monday and Friday; it was Tuesday this week because Monday was a holiday in Canada. Yeah. So, Mondays and Fridays. I'll alert them to that. Yeah. And ask them to drop in, and maybe a couple of other people I can think of, you know.

Yeah. And so, as a student, a disciplined person, you know, I signed up for this when I saw the announcement, and then it sort of pops up. It seems to me that you need to run these regularly, you know, once or twice a year, right?

Yeah. And the whole promotion... yeah, I know, I know. So let's say that I would be willing to volunteer some time in the four to six weeks before you run the next one, okay? Just to try to... right? Because it would be a lot more fun

with three or four or five other people. Yeah. And again, if it was regular, whatever that means, yeah, it would build an audience. Yeah. Also, if it was regular, you wouldn't have to spend so much time at the beginning of the course making sure all the software works, right? It's tricky to do something,

to get the attention of the community. Yeah. Both the technology and the networking. Yeah, yeah. So, I mean, I just, you know... because I got frustrated, you know. Again, I thought it was, like, last year, but it was 2018. Yeah, I know. So it's been three years. Yes, it's been three years.

You know, when I got frustrated I couldn't get gRSShopper installed. Yeah. And something else came up, and, you know... I totally know. Yeah, you know, I have a series of videos called 'Stephen Follows Instructions' where I try to follow somebody else's instructions to set up a project, and nine times out of ten, they fail.

I know. And I'm one of the few people I know that will stay up late at night with this stuff and hammer my way out of it. Yeah. And even then, sometimes... Yeah, yeah. Usually you can kind of get it to work. Yeah. Anyway, it shouldn't be that hard. And again, back to it: I really want working-class people,

you know, people with families, you know, yep, to be able to sit down at eleven o'clock or twelve o'clock at night, yeah, you know, and do 45 minutes of work documenting something that eventually accumulates to enough to get college credit. Yeah, that would be wonderful. Nobody's going to go through all this

who has a job. No. It's got to be easy, it's got to be accessible, you know. All of these things are part of, you know, 'open', if you will; it's not just making it available. I think we've learned that, I hope we've learned that, you know. And we do throw up a lot of barriers in front of what we think is open, and they're not just subscription barriers or fee barriers: the ease of the software, the accessibility of the content, all of that plays a role.

Yeah. I'm working on an opportunity to rent an office space below market, and I want to set up a little digital center. I just spent a couple of weeks in New Mexico. You know Joe Lambert, the digital storytelling center? Yeah, in Berkeley. He just got an opportunity to move to Santa Fe, New Mexico,

and he took it. Yeah. So he just ran his first in-person workshop in almost two years. And my mom was from New Mexico, and I got to go down there; I was one of the facilitators. Okay. Yeah. So, you know, I've been working online with them

for probably the last couple of years, just hanging around helping people with technology, and, you know... That's really cool. Yeah, I like that. Yeah. So I just did that, and so I have this idea that I'll set up a little local center for digital learning.

I don't know yet; I can't figure out what to call it, but I think there's a market for a little local center. Yeah. Well, actually, I have a URL: 'a right spot'. Really? I like that, 'a right spot'. Yeah, that's pretty good; that's short. Yeah. There's a little private college, Whittier College, and then I'm in this little village, a town area.

Yeah, all small businesses. And, you know, every small business I walk into, I just stand there in amazement at their 15-year-old computer and the employee that can't even... Oh, I know. Yeah. And so I'm imagining a little space that says, hey, you know, drop in, let's talk about it.

And, you know, I promote Google. I mean, you have to pick one evil empire. So not Microsoft, because at least Google's kind of free. I mean, you're the product, but you don't have to pay them. Yeah. So along with that, I work with some educators.

And so I figure if I have this little center, it's kind of a hybrid thing, right? I can run online courses, so I do that. You know, the idea is just to help people use the crap they have. I mean, it's just that simple, you know, completely user-based.

Yeah, workshops. And they're online. Eventually I'll get organized and they'll be on a website, and they'll be free, mostly, you know? Yeah. And then do something to pay the rent. So that's what I'm kind of working on: just have a little center. And then, you know, I know some writers, I know teachers, and there are a bunch of students in this area, so it'd be one place where they could drop in.

Yeah, so that's what I'm working towards this winter. I hope to get that better organized if I find the below-market space. Absolutely. That's right. Yeah, well, the way things are looking, I think office space will be pretty available now. Yeah, maybe not storefront.

Well, even storefront, because people are using online purchasing more and more. Yeah. Oh, there are plenty of available storefronts. This is actually just off the main shopping street, literally around the corner. So yeah, but that's fine for something like what you're describing. You don't need a main street presence.

Yeah. And then to do digital storytelling workshops, you know, it's a cute little town. There are actually trails just a quarter mile away, so that's nice for photo adventures, and, you know, it's nice and kind of busy, and it's only 12 miles from downtown LA. Yeah, it's this little Quaker village.

How can you have a little town 12 miles from LA? I know, it's weird. Is it in the hills? So, there's this little band of hills that separates what's called the Inland Empire, yeah, which used to be where they made steel. Basically Riverside and San Bernardino, all that out there.

Yeah, there's a little chain of hills that separates that from Orange County, right? It's right where those hills end, 12 miles from downtown LA. That's where a bunch of Quakers planted a bunch of walnut trees. So it's this sunny hillside that faces both south and west. Perfect.

Yeah, perfect climate. Yeah. And then, you know, it used to be a 10-mile horseback ride, and then it just got overwhelmed. I mean, when I was a kid, Orange County was still orange. Yeah. There was something called the Irvine Ranch; it was like 300,000 acres owned by one guy, and then that became Irvine, and now it's a city, basically.

Yeah. And then all the gated communities went up, yeah. But I can remember when we had to drive across those orange groves to get to San Diego, and now that's all gone. And I remember when it used to be open country all the way across to Palm Springs. Yeah. Not now.

Yeah, 50 miles of condos. I took a bus once from Anaheim as far south as I could go. And I got to the end of the line, where I could get off that bus and get on another bus, I didn't, but I could, and continue on to San Diego. And these are just city buses going through urban areas.

So it's basically urbanized all the way to the border, and then beyond. Yeah. And in the 70s, I used to tell visitors that Los Angeles was 100 miles by 100 miles. Yeah. Now Los Angeles is 200 miles across. Yeah, thousands of square miles. Yeah, it's amazing.

I was in the Bay Area for thirty-some years, but it's just gone crazy up there, the prices. Yeah. And the gentrification is almost militarized; they're getting right to the point of militarizing it. I saw my first gated communities in California.

I believe that's where they were invented. Well, where they were popularized. Yeah, it's disgusting. And my little town is right on the LA and Orange County line; you drive out past Nixon's old house. So, hmm. Yeah. Well, we've wasted another hour. Yes, we have. And I hope the people watching this found it entertaining, and we did talk about subject matter related to the course.

So, bonus. And for those watching the video, if you made it this far: the next live session of this course, I should say, is Monday, October 18, 12 noon Eastern time. It's still Eastern Daylight Time, which I believe is GMT minus five, but I can't swear to that.

So you need to check. It's probably about five p.m. in Britain, six p.m. in Europe, seven p.m. in Eastern Europe, and so on, and ridiculous o'clock in Australia, New Zealand and China. But of course everything is being recorded. This video is recorded; it's live-streamed on YouTube.

The captions are being produced by artificial intelligence, although poorly. And those will all be available. I've got another recording of the audio, which Google will be transcribing, probably better, but we'll see. And all that will be available on the course website. And there are, of course, more videos coming.

There are videos coming in this course pretty much every day, because I've got a lot of stuff to put out there. But also there are the activities, and those should produce, like I say, hopefully unexpected results. So I guess we'll wrap up. Thanks a lot for joining me. I really appreciate it.

Can I ask one more question? Oh, absolutely. Can you tell me about your little village? From your address that I see everywhere, it sounds like you're in a small village. And I am. I'm in a village called Casselman. It was founded by a guy called Casselman who was given logging rights to the area.

And so he did, in the mid-1800s. Casselman is roughly halfway between Ottawa and Montreal, in rural eastern Ontario. It's got a population of maybe 2,000, give or take. So not tiny; it's actually one of the larger villages I've lived in. I grew up in a village about a quarter of the size, and I've lived in even smaller ones. But it's very rural, surrounded by cornfields, soybeans and some other crops, but mostly corn and soybeans, and they're used to feed cows. Well, maybe not the soybeans, but the corn is definitely used to feed cows.

The cows around here are very well fed, and they're used to produce dairy. And just down the road from us is the even smaller village of St. Albert, which has a world-famous cheese factory. So I bike down there on a regular basis for their cheese and their other goodies.

And I picked it because it's in the country. I grew up in the country; I wanted to be in the country. But, you know, it's on the railway between Ottawa and Montreal. So there's a train station here, which serves us poorly, but still serves us. That turned out to be prescient, because Greyhound ceased its bus operations in Canada, and people have been scrambling ever since.

It's also an exit on the main highway, exit number 66. And you know, if you ever drive down those highways, driving from city to city on an interstate, and you see people leaving the highway in the middle of nowhere, you wonder: what's it like to live there?

I've always wondered that sort of thing, and now I know. Yeah. So, you know, it's not beautiful. There's a river, the South Nation River; it's a decent size, not huge, but a decent size. And there was, first of all, an old sawmill, and then more recently an old power plant.

Now it's just a dam used to control flooding. So we have the river and a little tiny riverside park area. But mostly we've got a big forest, the Larose Forest, north of town, and then the river just winding through the area, and fields south of town.

As you go west toward Ottawa, it gets more urbanized. East, there's more farmland until you get to the Quebec border, and then from the Quebec border all the way through to Montreal it's urbanized. It's subtly urbanized: you look at a map, you think, oh yeah, that's country, but you look at the satellite photo, and it's full of houses.

So, it's really quite interesting. Little two-acre ranches, that sort of thing. Yeah, just throughout that little piece of Quebec that's between the Ottawa and Saint Lawrence rivers. Very much so. And that's also happening around Ottawa as well; we're sort of in between those two areas.

But, you know, they're sort of beginning to squeeze us in, so I might be looking for more rural again. Oh yeah, I like it here. You know, and the funniest thing, and it isn't funny actually, it's actually a pretty serious observation, because I cycle around this area a lot, and around here the important people are the farmers, and everybody else exists to serve the farmers.

You know, all the stores, all the shops, the dealerships, the roads, even the railways. And that puts me in my place, you know? I mean, because I work for a federal research agency; I do, as you said, leading-edge work on technology. I sometimes think I'm pretty important, but then I ride my bike around here, and I'm not in the feed store.

No, no. And I'm sure it's not true, but I almost imagine them looking at me and thinking, well, there's a person who does nothing productive in society. And that's what I like about this area: it puts me in my place. It's, you know, it's not beautiful, it's not touristy. Although it is beautiful.

You know, the other side of it is: yeah, I'm living in paradise. You get out into the country and it's just gorgeous out here, especially in the fall with the leaves turning. But also even in the winter and the spring: we've got forest trails, rivers, fields, swamps.

You know, just on the other side of the Ottawa River we have Canadian Shield territory, the Laurentian mountains, even ski hills and such. They're not huge like the Rockies, but there's a lot of up and down, for sure. So yeah, I like it. And so I was wondering.

So, you're a hundred percent remote right now? Yeah, I'm at home, and I've been at home for the last two years, right. So, but before that, you would go to some office? I would go to an office in Ottawa, and I had ongoing arguments with my supervisors, in fact for as long as I've worked with NRC, about whether I could work from home, because I've always known I could do the job from home, and it always seemed silly for me to drive for almost an hour into Ottawa to sit in a room

that looks functionally exactly like the room I'm sitting in now. Well, I guess the argument got solved. But, you know, this whole online thing was new to a lot of people, and I was already pretty comfortable with it. So for me, not a whole lot changed with the pandemic. But that's also what told me that a lot of the stuff people said you can't do online, I knew you could, because I had been doing it, and it was just a matter of people changing their habits. It was the same for me.

The last two years have been no big deal. Yeah, really. I was already at the same desk. Yeah, exactly. So, yeah. And now I have a couple of clients that I work with, and in the broader community, so those examples have been broader.

Yes, now a lot of small businesses are realizing: do we really need to rent that office? Yeah, thousands of dollars a year, and maybe not, maybe not. Yeah. So things are just beginning to change. I mean, we're not at the end of anything; we're at the beginning. Yeah, I agree. On that note.

Yep. Well, thanks for joining me. Thanks to the people watching on YouTube, thanks to the people watching later on YouTube or listening to the audio, because this is also a podcast. Until next time: I'm Stephen Downes, you're Mark Corbett Wilson, and thanks for joining me. Let's do it again.

Module 2 - Introduction

Transcript of Module 2 - Introduction

Auto transcribed by Google, three different speakers (who are not distinguished in the text below).

There you go. So once again, we have one person in the live discussion, but it's a different person each time. You're taking turns; it's almost like it's organized. So we might have more people join us, who knows? I only just put out the tweet. I mean, I mentioned it earlier, and of course it's in the course description and all, and you found it.

But a lot of times people wait and don't join until they see something at the last minute that suggests they could join. So anyhow, welcome to the course, and welcome to this particular part of the discussion. This is the module two introductory discussion, but this is your first time joining us.

So why don't I start by seeing if you have any reactions to the course so far?

I do have reactions, and the most important reaction for me is that I keep chewing on the word ethics, not necessarily relating to learning analytics, maybe in a broader sense of analytics. But I keep chewing on the word ethics, and I think you either said this or alluded to this: that it's not a thing, right?

It's almost like a process, and it changes depending on context and time. My background, my first career, was health care, so I will always go back to "do no harm." My second career was as an academic, and I did do research, and I found that ethics always changed depending on the type of research that I did.

If I were doing research with community groups, or especially marginalized groups, I really felt that research ethics was a negotiation, that it was a conversation that you had at the beginning but that kept going all the way through. And an interesting part of it was to look at the data that you collected and who it belonged to, which really left an impression on me in terms of ethics.

How could I guarantee, let's say, that I would do no harm? Mm-hmm. When, ultimately, I don't know what the result of the research is when I start it. So how can I guarantee that, right? I can't. So when I have something, I have to share it with the people that I do research with. And what if they say that, you know, they will be harmed?

If, let's say, I publish this, do I not publish? It's a good question. It's this conversation that happens over and over and over, you know, and that's what I think about ethics. And I think it's the same thing in many ways in everyday life.

It's a moving target; it's something that emerges. Mm-hmm. So one of the things you said that quite interested me was to look at it from, let's say, the lens or the view of a mesh, right? And I'm trying to grapple with ethics and mesh, and those, and what I just said, are my beginning thoughts on it, and then trying to look at it in terms of learning analytics, because learning analytics,

I mean learning analytics and digital analytics in general, add a whole new dimension to this. Certainly in terms of the scale: for example, how do you negotiate with a hundred million people? Exactly, exactly. And I did research, you know, some time ago, and I did observations on Usenet.

I'm sure you remember Usenet. Oh yeah. And how do you get permission, etc., in a public open online discussion forum? How do you get it on Twitter? Good example. Yes. So anyway, that's where I'm at, and I look forward to, you know, the discussion, blogs, etc., because I think it'll probably get me closer to this amorphous mesh that I'm trying to put together.

Yeah, it's interesting, and I'm glad I have the live transcription running so that I can capture and steal your thoughts, because I have no shame. But you know, a lot of what you said about the process and the conversation actually anticipates where this course is going to be by the time we get to module eight.

So I find that quite interesting. But to me, the huge question that digital analytics brings up is, as Carl Sagan once said, who speaks for us? And, you know, in a number of ethical codes, and in preparing for this I read dozens and dozens and dozens of them; we have a whole section on them later, a whole module on them.

A number of them say, well, there's no practical way to get permission from people, so we just assume we have it.

Which struck me as pretty convenient and maybe a little bit self-serving. And it's interesting, you also raised the question of ownership of data, and we see again companies out there saying, yeah, well, there's no practical way that people could own their own data, so we own it. And again, that's pretty convenient. Or at the

very least, companies are saying, yeah, we have a perpetual, no-limit, non-exclusive, or in some cases exclusive, right to use this data. It's yours, but we can do anything we want with it, which ultimately ends up including transferring ownership of it. Yeah. And I guess the presumption is that's unethical, but on what basis? You know, because we've never had this question come up before in society. It just hasn't come up; it's new, and that's what's really interesting to me.

So one of the things I wanted to do at the beginning of module 2, which is now, was to see if you're using artificial intelligence in your daily life now, in any way. So I wonder about that. Can you think of any uses that you're currently making of artificial intelligence? Yeah.

That I'm using, or that is being used on me? I'm thinking more specifically of you using it. I mean, we can imagine the other case pretty easily, but yeah.

Not a lot. Not at the moment. Yeah, no. Certainly in the past; I've taught online. Right. You know, various kinds of dashboards where you could see when students participated or didn't participate. Yeah. Or how many likes they gave something or not.

So that would probably be the most recent. Yeah, I'd classify all of that under the heading of descriptive analytics. Yeah. Indications of how many visits you've had, how many tweets there have been, perhaps even scores people got on their tests. Yeah, which you can see in a dashboard or a presentation or a nice pie chart or whatever.

That's pretty common. I want to say that I did not necessarily use those. Yeah, that's interesting. I probably did. Yeah. And that creates a case of them using it on the students and it being used on them. Yeah. One area where I've used this quite a bit is the area of physical activity, and as you can see, I need physical activity.

I used to use an application called Runkeeper, and now I use an application called Strava. Not because I run, but I do other activities; I do hiking and I do quite a bit of cycling. And Strava in particular, which is why I use it now, shows me my routes, it shows me my time,

it shows how much elevation I gained, and, you know, a bunch of related statistics. And I find that a really interesting use of analytics. It's not artificial intelligence per se, but it's definitely in the realm of analytics. I would think so. Actually, I do use things like that.

So, Fitbit. Yeah, yeah. So here's something; what would you think, I'm just thinking out loud here, this isn't part of a plan, if one of these applications chirped or whatever and said, okay, go out and exercise now? Actually, it does chirp at me when I sit for too long.

Okay, but I can ignore it really well. Yeah. It's probably a good thing having that choice and not having to. Yeah. And my power bill: I'm based in Ontario here, and they've got analytics. I don't get this anymore because I told them to stop, but they've got analytics that actually break down how much power I'm using on heating,

how much power I'm using for the refrigerator, how much power I'm using for what they called "always-on" applications. Not sure what they meant, but probably like the computer. And that can be useful, although I thought it was kind of invasive. Mm-hmm. Yes. Yeah. Where are you based? You're in Ontario?

Okay. So yeah. I get my power from a local power company, so obviously you're getting yours from a different company, probably Ontario Hydro or something. Yes. Yeah, it doesn't break it down like that; it's more a comparison back to last year. Okay. Well, it's a minimal kind of analytics, I suppose; it is a comparison.

Then you can draw your own conclusions: I bet I can beat that next year. The power company is trying to game me. Yeah, well, I'm sure they are, but I just don't know which direction they're trying to game you in, right? More consumption or less. Yeah. So if they're one of these companies that loses money with each unit sold, then they probably want you to use less, so they lose less money.

Yeah. So we're using analytics right here, actually, in this session, because I have the live transcription turned on, which means there's analytics interpreting my voice and turning it into text, which I still think of as a miracle. And I've been testing different types of this over the last number of weeks and months.

The best one I found was something called Otter.ai, which did a really nice conversion of speech to text, but it's a private company and you have to pay them money. So I got to try it three times, and now it wants money from me, and I'm too cheap to give money for the mere convenience of turning audio into text, even though it's a miracle.

That's funny. Okay, I just saw the transcription here, and it wasn't very good. Yeah. And these Zoom meeting transcriptions, generally, they haven't been bad. It's interesting to watch it correct itself as I'm speaking. I don't know if you're seeing the transcription right in front of you.

No, I'm not able to see it. Maybe. Do you see the... do you have a CC button?

That is showing up; oh, wait a second, I do. Try clicking that. Okay, "beautiful transcription." I see it. There we go. But it said "beautiful transcription" rather than "view full transcription." Yeah, so maybe it has an attitude. Yeah, and I use it on my phone. That's my Oh Henry! bar

stuck to my phone. This was lunch; I foolishly scheduled these at noon. I don't know what I was thinking. So, what have we got going here? Whoops, up higher. Okay. Yeah, this is on my Google Pixel 4 phone, and it does a live transcription as well. I don't know if you're seeing it in real time or... yeah.

I can't really read it; it's just blurred. Yeah, it just blurs. That's too bad. Yeah, too bad I don't have a better camera, but that's the trade-off: if I had a better camera then I'd be using more bandwidth, and then I might get stuttering images. So it's recording the actual audio

and in real time transcribing it to text, and it's pretty good. It's not bad; it's not quite as good as Otter, but it's better than the other types. Oh, we've got a person; we have Tim Topper coming and joining us. So it's better than the other types

I've tried. I've also tried the transcription in Microsoft Teams and also Microsoft Word. Word is nice because you can just click a button on the ribbon bar that says Dictate. But you have to be using the online version; I don't think it works on the... yeah.

It doesn't work on the desktop. No. So anyhow, it's also in PowerPoint, and I assume it's in the online version but not on the desktop. Oh, and we've got someone else coming in, and this looks like Mark. Yep; maybe, well, I don't know,

because he's just an image now, so there's no name. And I think we've lost Tim; I guess I should have been more welcoming. I didn't want to interrupt what we were doing to welcome him, and I just thought, well, we'd bring him in on the fly. And we lost him connecting to audio.

Okay, so we're getting there. I'm in. He's in. Okay, he's in with his LA hat, but I don't know that he's wearing it right now; he's probably not feeling great at the moment. But anyhow, the nice thing about the Windows one, or sorry, the Microsoft Word

one is that I can just import an audio file, and it takes a few moments but it'll convert that. It's not bad; not quite as good as Google, but pretty good. And I've used that as a transcription source as well. So I think those are the main ones that I've used.

I don't do anything with Apple; I have no idea whether Apple transcribes audio to text or not. I'm not sure either; I don't use Apple very much. Yeah, actually, I don't even know how to use it myself. Yeah, I swore off all Apple products a while ago now; I forget how long it was.

Just because they were so concerned about locking you into this single product ecosystem, and that's basically why I have not really engaged with them. Yeah. And plus, you know, they lock everything down. Yeah, and that irritates me, because they take away my choice.

I don't like that. Exactly. Then we come back to choice again. Mark, we were asking, or I was asking, whether you're using artificial intelligence in your daily life. So I wonder if you can think of any ways you might be; not just artificial intelligence, but even analytics in general.

Other than being in the Google ecosystem, not consciously using it now. Though, you mentioned the term artificial intelligence; I wish they'd picked a different term. But anyway. It's probably not the best term. Well, you know, it comes down to: do you believe there's one universe or multiple universes? And I'm one of those who thinks it's all one big thing, and so intelligence is everywhere, and none of it is artificial.

Okay, yeah. So analytics, I guess, would be the term I'd reach for. I want to learn to use analytics consciously and non-discriminatorily, I'm going to use another word, with a plan. So that's where I'm at: I'd like to use it, but not yet. I'm watching the automated transcription happening here in Zoom, and it interpreted what you said as "not from Laura Lee."

So you should be able to see the transcription on the screen. There's a live transcript, CC or closed caption, button in Zoom. It should be on the bottom, a little bit to the right. Actually, mine's loading up on the top, huh? Okay. Or maybe that's just another thing entirely.

Yeah, maybe it's just telling you that. It's right at the bottom where there's chat. Yeah, share screen. Yeah. Okay. Yeah, that was another notification floating on the top. Ah, right. So there you go; now you're using artificial intelligence, or analytics. It's funny; when you said you didn't like the term artificial intelligence,

I thought your complaint would be with the term intelligence and not with the term artificial. It's quite interesting. I could have reached for that too. Yeah, the search for intelligence continues. Yeah. But it's an interesting point, and I guess it is a theoretical perspective that you might or might not take, as to whether intelligence is something that is limited to humans.

Or, at the very least, life forms as opposed to machines, or whether any system, properly constructed, could have intelligence. And the same sort of question gets raised a lot with respect to other things. For example, consciousness: could there be such a thing as machine consciousness? Perception? Feeling? Sensation? There's a whole list of attributes of thought that humans have, that we've categorized over time, and that we all have experience having.

And some people think that they're unique to humans, and others not so much. And actually, I fall in the "not so much" camp myself. Except, you know, perhaps in the sense of perceptual feel, like what Thomas Nagel once wrote about; he wrote an article called "What Is It Like to Be a Bat?"

And he asks the question: what does it feel like to perceive the world the way a bat does? And the answer, essentially, is: well, we can't know. We're not bats, and you actually have to be a bat to know what it's like to be a bat. And so perhaps "what is it like to be a robot" is also something that we can't know, and "what is it like to be a human" is

something a machine can't know. But that's not what intelligence is, and it's not even what analytics is. And yeah, the term intelligence is too broad for what we're doing now. In fact, in the synopsis I broke it down into six categories. I stole from Gartner, again because I have no shame.

There's a Gartner categorization, which is, let's see if I can remember it because it's not right in front of me: descriptive analytics, which is where we started, looking at, you know, the systems that produce pretty dashboards for us. There's diagnostic, or diagnostic, analytics; I see that Zoom spelled both pronunciations the same way, that's pretty good.

That's where we're doing some sort of interpretation or inference, for example, maybe clustering, regression, classification. Then there's predictive analytics, which is, as the name suggests, predicting what's going to happen, and then prescriptive analytics, which Gartner says is something like "how could we make something happen." And that makes sense.

But I didn't think it was sufficient coverage, because I went through and, you know, I'm one of these completionists, so I just tried to read everything. You can't read everything, but that doesn't stop me from trying. And I ended up with two more categories. One I called generative analytics, which is, you know, the use of AI systems, and especially neural nets, to create new content.

You may have seen, you know, "this is not a person," images of people that are artificially generated. Or there's an application, GPT-3, which finishes poems for you, writes songs. Somewhere out there there's a 24/7 all-automated death metal generator, which actually isn't bad, but you know, you get tired of it after a little while.
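GPT-3 itself is a large neural network, but the underlying idea, predicting the next word from the words so far, can be shown with a much simpler toy. The following sketch is mine, not anything from the course: a tiny Markov chain built from one made-up sentence, using only the Python standard library.

```python
# Toy "generative analytics" sketch: build a next-word table from a
# sample sentence, then generate new text by repeatedly picking a word
# that was observed to follow the current one. GPT-3 does something
# analogous with a neural net trained on billions of words.
import random

text = ("the cat sat on the mat and the cat saw the dog "
        "and the dog sat on the cat").split()

# word -> list of words observed to follow it
table = {}
for a, b in zip(text, text[1:]):
    table.setdefault(a, []).append(b)

random.seed(1)  # fixed seed so repeated runs match
word = "the"
out = [word]
for _ in range(8):
    # fall back to any word if the current one has no known successor
    word = random.choice(table.get(word, text))
    out.append(word)
print(" ".join(out))
```

The output looks locally fluent but carries no communicative intent, which is part of why, as he says, one tires of the all-automated generator after a while.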

So I added that category, and then I also added a category that I called deontic analytics, and that's analytics that tell us what things should be. For example, analytics that tell us what's fair, analytics that tell us what's ethical, and analytics that tell us what principles we should use in order to divide resources, etc. And there's a bunch of different applications that I found.

So that's the categorization I used; it's purely arbitrary in a certain sense. I was thinking about it today, and I was thinking, what is the basis for that? And it's actually, well, linguistically based, right? There's present tense or future tense; there are different modalities, like what's possible, what's probable. And then you go into what should be.
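To make the first two categories concrete, here is a toy sketch, my own illustration, not course code, and the weekly quiz scores are invented, of descriptive analytics (summarizing what happened) versus predictive analytics (extrapolating what comes next), using only the Python standard library:

```python
# Descriptive vs. predictive analytics on a tiny invented dataset.
from statistics import mean

scores = [62, 68, 71, 75, 79]           # one learner's scores, weeks 1-5
weeks = list(range(1, len(scores) + 1))

# Descriptive: report what the data already shows.
print(f"mean score: {mean(scores):.1f}")            # 71.0
print(f"best week:  {scores.index(max(scores)) + 1}")

# Predictive: fit a least-squares line and extrapolate to week 6.
n = len(scores)
sx, sy = sum(weeks), sum(scores)
sxy = sum(x * y for x, y in zip(weeks, scores))
sxx = sum(x * x for x in weeks)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print(f"predicted week-6 score: {slope * 6 + intercept:.1f}")   # 83.3
```

Diagnostic analytics would then ask why the scores rise (clustering or regression over more variables), and prescriptive analytics would suggest an intervention to push that week-6 number higher.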

So, maybe, I don't know. What do you think of that? How would you categorize it, if you had the option, or what's missing?

Hard question? Yes.

Well, you know, this certainly isn't a field I'm that familiar with. These cover from the past into the future, as you pointed out. So, I guess, by defining these

categories, what it points to for me is: what's the human piece in all this, then, in relation to me? So, given that these are different ways of looking, you know, non-human, machine-based, or if you want to say it, silicon-based ways of looking at things,

then to me it points back at me to say, okay, how do I operate differently? Because, so, I have a friend, a well-known regional poet here in Los Angeles, and I've watched him, you know, off and on, basically his whole life, struggle to string words together. And now we have these machines.

Yeah. And so I'm okay. So and not being languished or professional poet. I could see where they look very similar. The results look very similar. Yeah, one is from a lifetime of trying to communicate between humans and one is a new and shiny random generator not random. Yes. Yeah.

I'm not random. Yeah, machine generator. Yeah. And to me, they look very similar. And yet, I have a feeling a suspicious that they are a very different product, but I've been wrong before.

That's an interesting question. So I'll pose this to Sherida: can you imagine a machine doing the work that you've done over the years?

Some of the work that I've done, yes.

Certainly, you know, obviously, work that's repetitive.

I've engaged in some forms of therapy in my first career, and certainly I've done counselling with students in my second career, and I'm wondering if

something with some form of AI could have done those things. At least the, you know, when you're doing therapy, the "well, tell me more about that", right? Or, you know, that kind of thing. I remember playing around with that in the past. Mm-hmm. Right.

So I think some of the stuff that I did, yes, it could be. But then I go back to: where I come from as well is not just language, and not just inserting the right word here and there. I also operate on the experience I've accumulated over, I don't know, 60 years.

Sure, etc. And how does a machine do that? Because I will use that experience contextually, in different ways, depending on how I'm feeling, how I perceive a student is interacting with me. So those serendipitous types of things, I wonder if a machine can do that. Maybe they can, I don't know. Maybe not now.

But in the future, we can imagine it can. I mean, because you're talking about experiences that you've had that lead to your ability to do this. Well, we can give machines experiences. Yeah. And we can actually give them experiences a lot faster. Yeah. That's the thing.

Yeah. They don't have to take 60 years to do it. Yeah.

So, let's wonder: can we create artificial empathy?

Does empathy depend on the person who is giving it, or on the person receiving it, who interprets what you say as empathy? That's a good question. Yes. So, you know, I may be saying something that I feel, you know, has a lot of empathy, but the other person may say, God, she's full of it.

Yeah. So, although, it would be sort of weird to be in a counselling session with a machine, an artificial intelligence, and you say something, you know, "I miss my cat", and the machine says, "yeah, I can relate". I wouldn't feel right with it. No, not to us.

Yeah. Yeah, but let's say you're a child, a young child. Yeah. And you have your little pet gadget, and you say something to your pet gadget, and the gadget says it back to you, or, you know, says something to you. Yeah. You might feel completely comfortable with that.

Yeah. Now, that may be unfortunate, but I think, with some kids, like, you know, they have their companions. But it brings to mind the cartoon strip Calvin and Hobbes, and Calvin, of course, is completely comfortable conversing with his stuffed tiger. Yes. And I was probably completely comfortable communicating with my teddy bear

when I was a child, you know. But that, again, that's a certain type of thinking. Yeah, developmental. Yeah. So, machines that do these kinds of things: do we put them through developmental phases, so they respond appropriately within context?

Are you familiar with the book by William Gibson, where the cybernaut, or whatever term you want to call a fully machine-integrated human, falls in love with an artificial intelligence and marries the artificial intelligence? No, I haven't read that book. And it's, you know, it's 20 years old.

Yes, maybe older. But I think you're right, Sherida, that if you grow up in a world full of talking tablets, which is what's happening, right? You see them, you know, at two, reaching for the iPhone. Our family, yeah, is in Santa Clara, California, completely surrounded by this stuff; down in the Valley it's normal, kind of, that way.

That's where she's growing up, and, yeah, it's completely normal for her to just talk to gadgets. And they're not particularly rich, so her friends are literally talking to the refrigerators, and so on. Yes, you know. So, yeah, it's easy to imagine that world. But then, you know, again, it makes me wonder about the human component, you know.

Is there something unique about humans? Not unique in the sense of, you know, American exceptionalism or anything like that, or Christian exceptionalism, but just unique in a world full of gadgets. Yeah. How do we differentiate? And Jaron Lanier wrote You Are Not a Gadget, and I think the overwhelming question we have to ask here is: how does he know?

Yeah, excuse me, my dog is beside me today. Yeah, just for the record, pets are always welcome in these, yeah, conferences. And as they should be, because maybe pets were our original gadgets. So it's pretty easy to imagine an AI that's smarter than a cat, and especially a dog.

I'm a cat person, so, yes. And, well, yes, I'm agnostic. But that brings up the robo-whatever dogs being promoted as friendly little pets. Yeah. At the same time, they're being sold as weapons platforms that are fully armable. Yes, yeah. I saw that just within the past month. Really?

Yeah, they're selling them at, you know, the international arms bazaars, as a robot that can be armed. And then the question is, can they fire independently? Yeah. And we're there. This isn't speculation. We're there. They can be trained to target and fire, yeah, without you. That's a world

I don't want to live in. No, I don't think that's a world that we want to live in. But think about the process that you've described, and what we've hinted at already in this discussion, where younger people become habituated to these artificial forms of life. Maybe that's too strong a statement.

But these artificial responsive devices that they can develop a rapport with. And then, having been raised under that sort of condition, then living in a world where there are these robot dogs with guns, it's, in a certain sense, not really different to them than police with guns. Humans with guns, robots with guns, it's all the same to them.

And in a certain sense, they're not wrong. You know, especially if you think of the police as, you know, a part of society from which we, the rest of us, have become more and more alienated over time. So they really are an other. And if, in the same sort of way, a machine could be an other, then it's not so outlandish to think:

well, maybe it's not wrong to have robot dogs with guns. It's safer. I mean, we're always talking about the sacrifices that police and the military have to make to protect us. Maybe they wouldn't have to make these sacrifices anymore. And, you know, the issue is, well, they might shoot someone who's innocent, but, you know, that's certainly a problem now.

And maybe they might be better at distinguishing people they should shoot from people they shouldn't shoot. Imagine that was the case. Imagine the track record of robot dogs with guns was better than existing police forces. And we have an analogy to draw from, and that's the automated marking of tests, you know, not just multiple choice, but essay-style tests, where research shows that the AI marks more consistently and more fairly.

Yes, but we come from a tradition. Hmm, I'm guessing, the three of us, white people who speak English, come from a tradition that delineated the authority of God from the authority of peers. Yeah. And before being executed, we would get to make our case before peers: sort of crowdsourcing the judgment, as opposed to automating the judgment.

And so this is quite a diversion, I would say, from our cultural backgrounds. And I think we should stop calling them dogs. Yeah, the cutifying is part of the propaganda, yeah, presentation. But, you know, there are two- and four-legged machines and others, and there are six- and eight-legged kinds, and flying machines are here, and, you know.

So, anyway, machines. Yeah. So, as a worker, and I wanted to circle back to how my job had been automated, but that's maybe too far in the past. Once the automation of these machines is fully underway, and I think, you know, we're right around the corner from that, you know, two years

or something. Yeah. Where these things are going to be mass-produced. To me, we're across the line into authoritarianism, because for the first time there will be a counter to the mass of people that can't be controlled by the central authority. And, you know, even today there can be mass uprisings that change the course of history by conflicting with what the authorities want, and we are just one step away from that.

And when those with power have four billion robots that will kill on command, then our eight billion, where can we be? Yeah, eight billion. Once they have eight billion robots, then the mass of humanity can be negated. So, to somehow, in a way, bring it a little bit closer to ethics.

So, the people that will build those robots, that will provide the information for the machine learning, etc., they're the ones that will bring their beliefs or biases into the development of those machines. So, do we have to go back to: what are the ethics involved with doing this?

And how would we, as a group of humans, inform those in authority that they can't do that, that they're too biased? And, I'm not sure if I'm making myself clear, but I keep having to, you know, go back to that: I'm programming, or whatever, this machine, in terms of the learning, you know, etc.

So it's my values that are going in there. And, you know, it does raise the general question of responsibility. You know, one of the things that I noticed when I was looking at a lot of these applications of artificial intelligence: I was looking for the benefits, or the value produced, because I thought that's also a way of thinking about categorizing them. And the description of the benefits, or the value

produced, is very often from the perspective of the manager, or the institution, or the funder, as opposed to, at least in our domain, the students, or the society, the end user, the citizen. Yeah. And it's too bad Matthias Melcher isn't here. One of the things that I've said over the years is that we need to relocate the locus of the benefit from the institutional authority to the individual user. And he raises the question:

well, how could you do that? How could you make that possible? But surely you hint at that when you talk about, you know, the people who design them, the people who give them the data, and all of that. So maybe there's a route there to finding ethical uses of these applications,

by looking at how we attribute the benefits of these applications. Thinking out loud here, something like that. Yeah, but then you would also need to look at short-term benefit versus long-term. Many of the applications or platforms, you know, I'm thinking Facebook, etc., that at first were welcomed as, you know, a more democratic way of everybody

communicating, etc. Yeah, we now know that there is harm, and it hasn't been a long period of time. Yeah. Yeah, it's funny. It was like five, six, maybe seven years ago, people were saying, oh yeah, the role of Twitter in promoting something like the Arab Spring. And now, today, the role of Twitter in promoting, you know, radically dangerous movements. Yeah.

Yeah, you know, there's that human tendency to weaponize. Yeah. One of my favorite examples, of course, is abstract art and the Cold War. So here was a group of people, artists, just reaching for new means of expression, with no intention of cultural, yeah, domination, or, you know, anything like that. And yet their art was culturally weaponized.

Yeah. I don't know if you're familiar with that story. Sure, the CIA promoted abstract art during the Cold War. Yeah. So, you know, I use that as an extreme example. But here, I mean, literally paint on canvas can be weaponized culturally. Not literally weaponized, but culturally.

Yeah, yeah. And the same thing, you know, with these digital platforms. Zuboff's Surveillance Capitalism describes how they were weaponized. They were originally a way for college students to find each other across campuses, and then, a decade later, or however long, are now weaponized to support the rise in white nationalism and other kinds of political parties. We're probably getting off the topic, but the brilliance of something like Facebook, Twitter, etc., is based on social network theory, which literally, you know, encourages these bubbles of people reinforcing their own ideas, literally.

That's what happens when you look at it in terms of social networks. But, I mean, you know, we were all happy with it 10 years ago or so. We didn't always see the bad points; now we do. So, how can we tell? How can you tell, now, something that you think, you know, represents your values,

and you think is ethically correct, etc.? How can you predict that 15 years from now it won't be detrimental to what you think of as society? It's really hard. Yeah, here I go with my, you know, my efforts. Yeah. Well, that is the course. So, I was, go ahead.

Okay, just quickly, I see we're nearly out of time. I've always been interested in, if not fascinated by, traditional cultures that seem to hold in their culture this idea that progress, as we say in Canada, is easily weaponized. And so, by holding on to the original culture, and not letting it evolve or change, it avoids

some of these problems of more and more modern weaponization. It makes them look primitive, or backwards, or whatever, from our point of view. And yet they typically don't have the means to commit genocide. They may prey on each other, or war on each other, or whatever, on some very limited level, but they never get to that level that we're at.

And now we're ready to step into this automated genocide future. That's, yeah, I'm just floored.

Okay. Well, I think we'll call it there, on that cheerful note. Not that cheerful. No. Well, I mean, this is the issue, though. And this is why I wanted to begin with looking at the applications of analytics and artificial intelligence, and to look at the benefits, and see, you know, what we're getting from all of this.

We can easily imagine, and next week we will imagine in great detail, all the ways it can turn dystopian, and that'll be fun. But, you know, seeing what we're trying to use this stuff for now, I think, is probably a good place to start, and that's where we're starting. So we'll come back together on Friday at noon for another Zoom chat.

So I hope you'll both be able to join me. And of course, one of the things I'm doing is producing videos, and I've got more videos coming. There'll be a task having to do with identifying uses or applications of analytics and AI, and hopefully, if I can get my software working, a kind of classification task, so it's not all writing blog posts.

I want to increase the variety of tasks, and I think classification would be fun. And it's interesting, because that's one of the things that machines do, and I wonder if humans would do it differently. But that's a separate question. So I'll wrap it up here. Thanks a lot for joining me here, and I'm sure there were people watching on YouTube, because we did stream live on YouTube, or they'll watch this recording in the future.

And I know that they'll benefit from your contributions to this. Whoops, there we go. I don't know what happened. Yeah. Oh, how weird. I'm not sure what happened there, but I think it was the world telling me it's time to end this puppy. So, all right. Thanks a lot, and see you next time.

Predictive Analytics

Transcript of Predictive Analytics

Autotranscribed by Google, unedited.

Hi, I'm Stephen Downes. Welcome back to another session of Ethics, Analytics and the Duty of Care, module two. In this video, we're going to look at the subject of predictive analytics. We've been looking at different types of learning analytics so far through this module; we've already covered descriptive analytics and diagnostic analytics, and this is the next of six different types of analytics that we're looking at.

Predictive analytics essentially involves a two-stage process, and the purpose of it is to answer the question of what will happen in the future, based on an identification of trends and patterns in existing data. So the first stage is to identify the patterns and the trends in the existing data, and that's a lot of what we were doing with descriptive and diagnostic analytics.

The second stage, which is new, is that we take the predictive model that we created in the first stage and we add new data to it, and the outcome of that application is a prediction of some sort. As you may imagine, predictive analytics have a wide range of uses in online learning and learning technology. We'll be sampling a number of them here in this presentation.

One such use is resource planning. Resource planning is important, of course, to educational institutions. They worry about everything from the number of staff to have, to the number of classrooms to have available, to the number of books to put in the bookstore. And, in the example discussed here, predicting traffic on their website, to make sure that there are no server issues, or even doing things to ensure campus health, such as working with Twitter data to predict outbreaks.
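The two-stage process described above can be sketched in code. This is only a toy illustration, not anything from the course: the choice of library (scikit-learn), the features (attendance rate and quiz score), and all the numbers are assumptions made up for the example.

```python
# Stage 1: identify patterns in existing data by fitting a model to it.
# Stage 2: apply the fitted model to new data to produce a prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: [attendance_rate, average_quiz_score]
historical_X = np.array([[0.9, 85], [0.2, 40], [0.8, 70],
                         [0.1, 30], [0.95, 90], [0.3, 45]])
historical_y = np.array([1, 0, 1, 0, 1, 0])  # 1 = passed, 0 = failed

# Stage 1: fit the predictive model on the historical data.
model = LogisticRegression().fit(historical_X, historical_y)

# Stage 2: feed new data into the model; the output is the prediction.
new_students = np.array([[0.85, 80], [0.15, 35]])
predictions = model.predict(new_students)
print(predictions.tolist())  # for this toy data: first predicted to pass, second to fail
```

The point is only the shape of the workflow: everything before `fit` is pattern-finding over existing data, everything after it is prediction over new data.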

A major application of predictive analytics is learning design. In this example, we have a case where some researchers linked together a hundred and fifty-one different modules taught at the Open University, and used the learning design in order to predict whether the learning design had any impact on student behavior, and ultimately on their success in the course.

And as you might expect from this study, there was a relation. So it now becomes a mechanism for creating a learning design and then being able to predict whether that learning design will actually be useful in the context of an online course. Obviously, many other types of analytics are used in learning design as well.

This specific application is used for testing of proposed learning designs. Another example of testing is the use of predictive analytics in user testing. It's a similar sort of approach to the one for learning design. The idea here is that we want to be able to predict whether a person will use a website a certain way; for example, in this case, the researchers predicted whether a person would start or stop watching videos.

And so you do this user testing and you come up with these predictions for different types of videos, different types of materials, or different types of website design. Another type of predictive analytics, and one of the most widely talked about types of predictive analytics in the field, is that used to identify students at risk of failing.

We can imagine this being done based on very simple tests. For example, if a student never attended any classes, then the odds are greater that they will fail. But what about many other criteria that might have an impact on a student's potentially passing or failing the class? Here we have an example where we look at everything from household income, to parents' marital status, to medical conditions, to grade point average, to neighborhood demographics.
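A "very simple test" of the kind just mentioned can be written as a rule. The sketch below is a made-up illustration: the factor names and the thresholds are assumptions for demonstration, not anything from an actual at-risk system.

```python
# Toy at-risk flag built from the kinds of "simple tests" mentioned above.
# All thresholds are illustrative assumptions, not validated criteria.
def at_risk(attendance_rate: float, gpa: float, missed_in_row: int) -> bool:
    """Flag a student as at risk of failing under simple illustrative rules."""
    if attendance_rate == 0.0:   # never attended any classes
        return True
    if gpa < 2.0:                # low grade point average
        return True
    if missed_in_row >= 5:       # long streak of missed classes
        return True
    return False

print(at_risk(0.0, 3.5, 0))  # True: never attended
print(at_risk(0.9, 3.2, 1))  # False: attending, good grades
```

A real system, as the transcript goes on to say, combines many more factors, and typically weights them with a trained model rather than hand-set rules.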

All of these factors taken together can help an institution predict whether a student is at risk of failing or not. This in turn helps institutions with another application: academic advising. There's an opportunity for advisors to incorporate elements of AI into their toolkits, we read, allowing them to free up time to form personal relationships with their students.

The idea here is that the advisor spends less time trying to figure out what approaches will be most successful, what factors are involved in the student being likely to pass or fail, succeed or not, and it allows them to use the analytics in the background to inform and help their personal interactions with the individual student.

There's a field out there, in fact, called precision education, and if you do a search on this you'll find a number of resources. Yang and Ogata write that the goal of precision education is to identify at-risk students as early as possible and provide timely intervention, based on teaching and learning experiences.

We can see why we've grouped all of these into the same category. They're all doing the same sort of thing: looking for or identifying patterns in a student's background, behavior, circumstances, environment, etc., in order to arrive at some sort of prediction as to what they will do; whether they will pass or fail, whether they will use our website a certain way, whether they will watch a video to completion, etc. The same sort of approach can be used outside the classroom, and outside the learning environment entirely, for such purposes

as student recruitment. Here we have an example of a product where the marketing brochure says that it provides market intelligence throughout each phase of the funnel management process, the funnel in question being the funnel of prospective incoming students. It's wide at the top, which is all the possible prospects that you might gain, and narrow at the bottom,

those prospects who are actually most likely to attend your institution: prospect, inquiry, applicant, accepted, deposited, registered, and matriculant. These are all stages of the funnel management process. Marketing and recruiting pivots, we read, can be made based on changes in student responses and success indices. There's a diagram here on the slide that shows a number of the points where analytics and predictions can be used in order to make this process more efficient and more accurate.
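The funnel stages just listed can be made concrete with a small sketch. The stage names come from the transcript; the counts are made-up numbers, used only to show the kind of stage-to-stage conversion rate an analytics product would track.

```python
# Toy enrollment funnel: stage names from the transcript, counts invented.
def conversion_rates(funnel):
    """Return stage-to-stage conversion rates for an ordered funnel."""
    return {
        f"{stage} -> {next_stage}": round(n_next / n, 3)
        for (stage, n), (next_stage, n_next) in zip(funnel, funnel[1:])
    }

funnel = [
    ("prospect", 10000),
    ("inquiry", 4000),
    ("applicant", 1200),
    ("accepted", 800),
    ("deposited", 500),
    ("registered", 450),
    ("matriculant", 430),
]

rates = conversion_rates(funnel)
print(rates["prospect -> inquiry"])  # 0.4 with these invented counts
```

A predictive system would then try to forecast these rates, and flag where a "pivot" in marketing or recruiting might change them.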

That's all we have for applications of predictive analytics at this time. It's a short video, but I think we like it like that. We could probably imagine more applications of predictive analytics; this was just a quick survey of them. As we're continuing through this module, we're adding additional applications, in other words, additional uses of predictive analytics, and coming up with more and more examples of predictive analytics

informing learning and teaching technology. That's it for this video. I'm Stephen Downes. Hope you enjoyed it. See you next time.

Module 3: Introduction

Transcript of Module 3 - Ethical Issues in Learning Analytics: Introduction

Unedited transcript produced from audio by Google.

All right. It's not gonna let me do that. I can turn this off though. There we go.

So, welcome to Ethics, Analytics and the Duty of Care. Just checking live controls here. And I have nobody in the live meeting with me. It was going to happen sooner or later, but that's okay. Not the first time I've given a talk to an empty room, and I'm sure it won't be the

last time I give a talk to an empty room. Let's just check now and make sure things are running. So I'll open up the activity center. And there we go. And it looks good. Just check here. Yep. Okay, so we're broadcasting live on YouTube, and of course I'm recording the audio.

So, I'm the only participant in this Zoom chat session, but like I say, that's fine. We can do this. We will do this. So the topic this week is ethical issues in learning analytics, and I'm going to talk about that in a little bit. But I want to introduce the assignment first, so that people have a good idea of what I'm looking at.

So I'm going to share this screen here. What you should be seeing now is my web browser, and you should be seeing the activity center. And there may be somebody watching live on YouTube. If you are, I don't know if you are; I have no way of seeing whether you're watching.

I suppose I could. But if so, welcome. And if you're watching this recording, welcome. So what I want to do is go to the course outline, and we'll jump into module three, ethical issues in learning analytics. And here's the task: examples of ethical issues in analytics. So I invite you to take a look at the list of all ethical issues. I've created a list here, and you might think, perhaps reasonably, that we're missing an ethical issue.

Oh, and we've got someone. We have a person. Hi, Mark. Oh my goodness. We're going to need a moment to get his audio up and running and so on. Yes, I performed the unpardonable sin today of starting on time. I'm just teasing, Mark. Oh, there. Yes. Welcome. Hmm. I'm hardly hearing you.

I wonder why that is.

No, I doubt that it's you. Let's just check my microphone, or my speaker settings. Ah, that's why. Okay, try now. I hear you now. Oh yeah, that's much better. Okay. Yeah, I had to bounce around to find this one. What was incorrect? The Zoom links.

Oh really? No, how weird. They were linked to the day they were created, not today. Oh, for goodness sakes. We did find it on one of the pages, but not on the activity center. I did update the activity center link, but if you have it cached in your browser, you might need to reload that page.

Yeah, well, let's see. Oh, I see, I've done something really stupid. Yeah. In the activity room, the link simply links to the activity room. So, yeah.

I'll crank up the volume a little bit here too, so that I can hear him. Well, let's fix that, because it would be nice to have people coming into the right location. So, I should have: invite, contacts, email, copy invite link. And now I'll go to the activity center page.

Course activity video? No; course activity slides. And yeah, there it is.

And I was doing so well today. Okay. And publish. All right, I've put the correct link now into the activity center. Might be too little too late, but at least it's there, and now I'll just reload it. Sorry, have you shared it by email? No. Well, maybe in theory, if she's subscribed, but no, I don't.

That's what I said, if she subscribed, but then, yeah. And, yeah. Here we are. Well, okay. So what I was about to do is just talk about the first assignment, or the first task. There are actually going to be two tasks in this module. I've posted one of them.

The second one will be with the graph, and I'm just trying to finish off the code for the graph. If I don't finish off the code, we'll use Matthias's original code, which works fine and is a good exercise, but it's not 100% what I want, and I'm picky that way.

So let's come back, then. So I'm going to share the screen so that people, including you on YouTube, can see it, or if you're in the live chat you can see it along with us. So, oh, somebody is saying Zoom does not appear to be working. Maybe that was you.

It was me. Oh, okay.

Yeah, where

Yeah. Sherida says, yep, still looking for it. Okay, all right. So let me get the page properly, right. Can you drop the link into the Zoom chat? Yeah, I'll do that. In fact, copy, you know. All right, so I have to break out of the share. So: stop share, chat.

There's the link. Now, this is kludging something together, but it just goes to show that we go above and beyond to make sure that our second person is able to join the chat. What other course would do that? Aside from all of them.

And, okay. Now, hopefully. All right, so let me try that. There we go. We've got Sherida; I've just admitted her. Okay, and I guess she's just getting set up on microphone

and video. So I had this thought this morning, and I'll say it now so I don't interrupt you, Sherida: that if we had an analytics of care set up, we would have reached out to Bernie. And if we had an ongoing learning community, yeah, then we'd have relationships, and we'd reach out to Bernie and say, are you doing okay,

and is there anything we can do to help? Yeah. Yeah, exactly. Yeah. It's one of the risks of being so decentralized, I suppose. Yeah. Now, is another person landing? Or, I don't know what that noise was; it was just me making noise. Okay? All right. Now, for, I think, the fourth time, I'm going to share my screen, and we'll go to the first task. I've made it a lot more straightforward than last time. The same task, but it's more straightforward.

So we'll go to the course outline, we'll go to module three. And here's the task. You can also just click on it and see it on its own page, its own glorious page. But the idea here now is that we're looking at, and categorizing, ethical issues in analytics.

And this is actually an exercise other projects have done, and I'll actually even look at one or two in what I have to say a little bit later. But here's our list. Notice, this time the page exists before the task. So these are all the different ethical issues that have come up

that I could find, organized and classified by me. Now, I don't know if I've captured all of them. If you think I've missed one, click on "submit your example" here. Give it a title, categorize it (I'll have to fix that drop-down), and give it a short description. I'm looking here for, you know, types of ethical issues.

So if we look at, for example, surveillance, which is one that everyone brings up, right? I've written a longer description than I would expect people to. Or, if we look at lack of appeal, this is a slightly shorter description; again, I'm just providing an overview of it.

So, that's the first part, right? Have a look at this list, see if I missed any, and if I have missed any, add them. Then the second part is to pick one of these. It could be one that you've added, or it could be one that already exists. Pick it, and then read about it.

And then see if you can think of, or see if you can find, a paper or a website, whatever, that talks about this issue in particular, and click on the link to suggest it. And here you're submitting a link, so we need the link title, whatever that might happen to be, and the link URL.

And then you provide a short outline of whatever's on that page at that URL. So, if it's an article, just write a quick summary of the article. If it's a product, it might be a product that raises the issue; write about that product. That's the task.

And what we're doing, basically, is we're filling out this table with issues and examples of these issues, so that we have references to show that, yes, people really have raised such-and-such as an issue. The second part, and I don't know if I have the link open at all,

I'm just looking to see if it's in one of my too many tabs. I don't appear to have it. I know where I can find it; it's in my newsletter. Oops. Go home. OLDaily, here we go. So the second task, which I haven't posted yet, will use this page,

the link for which doesn't appear to be working. That's kind of weird. Yeah, I'm having a day today, aren't I? That's the post. Wow. Oh, we've got a third person in the waiting room: Jim. We're going for a new record. Yeah. Okay. So, I've put the link in properly in that post.

I'm having a day, I tell you. But I do know I can find it.

Okay, copy

This is what happens when I think I'm overworked. Is that I make silly mistakes. Here we go. So this is from Matthias Melcher. So, Jim, I'm just describing the tasks right now. I'm assuming you're in the in the chat room. Let's just see if you're there. Oh yes. There you are.

Hi Jim. I had it on my list to get here at 10 today, but my supervisor ran the meeting 15 minutes over. Well, yeah, that's what happens. I'm just delighted to actually be in here now. Welcome. And yeah, this is your first live session in the course since you've joined.

Although, of course, all the videos are available, including the first ten minutes of this one. So I've been watching the videos; actually, just before our meeting with the supervisor, I was watching your discussion from Friday at high speed so I could get through it. That's what people who know me in person wish they had: a high-speed button, so that it doesn't take as long to listen to me.

Okay, so where I was, I was talking about the second task. The first task is very similar to last week's, where I ask people to add an issue, if I haven't covered all of the issues, or examples of the issues. Now, for the second task, we're going to use something like Matthias Melcher's project. I've put the demo of it up here, and I'll just open it up.

So now you can... how do you make it bigger or smaller? I forgot. Let's check the help. "Move", "connect"; it doesn't say how to make it bigger or smaller. There's got to be a way. Oh my gosh. Control-minus, maybe? Well, I know there's a way, I just don't know what it is, and that's really weird.

And of course, you know, well,

Let's try those. I can zoom the browser window, but the problem is this canvas is way larger than the window I have for it. Well, anyhow, hopefully it'll render better in yours. I'm sure there's a way to make it bigger or smaller; dragging like this only moves it.

Oh well, anyhow. On the left-hand side we have what he called the motivations for learning analytics, as opposed to applications or uses; that's pretty good too. These are all the things that we looked at last week. Over here on the right are the ethical issues, and that's the list of issues that I just showed you, the list that I've compiled for this week.

As you can see, the applications are categorized into groups like predictive analytics, prescriptive analytics, etc., and similarly the issues are categorized. And we'll be linking each of these to, you know, links or references as examples. The question is: what issues are raised by different applications?

Let's take dashboards, for example. Does a learning analytics dashboard of your class raise issues of privacy? Oops. So hold down Alt and then drag; I always forget how to do that. Hold down Alt and drag, and it'll draw a little line. Similarly, does a dashboard raise the issue of content manipulation? Does plagiarism detection? I click on it, holding Alt down. Does that raise an issue of privacy? Does it raise an issue of fear, and so on?

Now, the way Matthias has done it here, he's got all of the applications and all of the issues in one grand big chart. I'm going to try to break it down so that it's a bit more manageable, so that you're not looking at everything mapping to everything, but looking at subsets. And then I'll just have you page through different subsets for as long as you want.

But that's basically the idea. Whoops, I always press Shift instead of Alt. But that's the idea: just draw these links. So it's a pretty simple task. Then, and this is something I'd like to fix, but this is the way it works now: right-click on the diagram and then click on Export. And what does that do?

You can either open it with Firefox or save it. If you open it, it looks like this: it's an XML file. There are all the issues, and, what's really interesting to us, all the links down at the bottom, and this will include the links that you've added.

So this is of interest to me, because what I'd like to be able to do is add these links to the graph that we're building in the online course that we're all looking at. If I can figure out a way to have this save directly into the course without you having to export and download it, I'll do that; otherwise, you can send it to me.
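To give a concrete sense of what harvesting the links out of such an export could look like, here's a minimal Python sketch. The element and attribute names used here (`graph`, `link`, `source`, `target`) are assumptions for illustration only; the mapping tool's real export format will differ.

```python
# A sketch of pulling the links out of an exported XML graph file.
# The element and attribute names (graph, link, "source", "target")
# are assumptions; the tool's real export will differ.
import xml.etree.ElementTree as ET

def extract_links(xml_text):
    """Return (source, target) pairs for every link element found."""
    root = ET.fromstring(xml_text)
    return [(link.get("source"), link.get("target"))
            for link in root.iter("link")]

sample = """<graph>
  <node id="dashboards"/>
  <node id="privacy"/>
  <link source="dashboards" target="privacy"/>
</graph>"""

print(extract_links(sample))  # [('dashboards', 'privacy')]
```

Once the links are pulled out this way, they can be loaded into whatever structure the course graph uses.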

You know what I'd like? A submit button or something, or even right-click and then "send to course". But if I can't do that in the time available, then I'll give you a place in the course where you can just click a web form and upload it.

And then I can take it from there and integrate it into the course graph. So the idea here is that you're helping build these relations between the applications of analytics and the issues of analytics. And, you know, this would be really interesting if we had a hundred or a thousand people doing this.

Then I could take all of these graphs that people made, amalgamate them together, and give the links different weights based on how popular they were. We don't have a thousand people, but such is life; it'll still be interesting to see what all of you come up with and compare that to some of the things that we're looking at.
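The amalgamation step described above could be sketched very simply: count how many participants drew each link, and use that count as the link's weight. The representation below (per-participant lists of (application, issue) pairs) is an assumption for illustration, not the course's actual data format.

```python
# A minimal sketch of amalgamating everyone's graphs: count how many
# participants drew each link, and use that count as the link's weight.
# The (application, issue) pair representation is assumed for illustration.
from collections import Counter

def amalgamate(graphs):
    """Merge per-participant edge lists into one weighted edge map."""
    weights = Counter()
    for edges in graphs:
        weights.update(set(edges))  # count each participant's link at most once
    return dict(weights)

participant_graphs = [
    [("dashboards", "privacy"), ("plagiarism detection", "surveillance")],
    [("dashboards", "privacy")],
]
print(sorted(amalgamate(participant_graphs).items()))
# [(('dashboards', 'privacy'), 2), (('plagiarism detection', 'surveillance'), 1)]
```

With more participants, the counts would directly give the "popularity" weighting mentioned here.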

That includes some of the discussions we're having with regard to ethics and analytics. So that's the exercise; that's the second task. And I'll turn it back over to you: first of all, do the tasks make sense, and second, any questions? So this time, instead of submitting an example, we're actually going to be drawing those connections, for the second task?

Right, submitting the example is task one for this module, and I've set it up so that it's a bit easier this time. If you go to the module screen for module three and then click on the page that says All Ethical Issues,

you can submit a new ethical issue by clicking there, or you can click on any one of these ethical issues and click on the link at the bottom to add a link, which could be an example, a resource URL, a piece of software, anything like that.

Yeah. Okay, but it works better this time; that is to say, it works this time. And yeah, it runs until the weekend, so that's fine, not a problem. So that's task one: to add a link, you know, maybe more. And then there's task two, which I haven't put on the module page yet, because I'm just trying to write some software to make it work.

But task two is to draw those links. And I don't know why it's not... oh, I'm in the wrong module, that's why. There we go. So when task two shows up, it'll show up right underneath here. Okay, so it's not posted yet? No, it's not posted yet.

Okay, so you have the first and second tasks, and then there'll be another one, with a link to where we can draw those connections? That's it, exactly. I'll provide the link with that task, and it'll show up in the newsletter, whether by RSS, by email, or on the website, whichever you're using.

Yep. I'm hoping it'll be fun, too.

Of course, I put some links in today and added multiple entries, but I'm realizing that I was sort of redefining them. Yeah, right. So I don't know if I should ask you to delete all that so I can go back and add proper extensions of the categories, or just let it ride.

Um, are these categories of ethical issues? Yes. Okay, I'll probably just leave it for now. You see, part of what I'd like to do in the discussion at the end, and we didn't really do that last time because we got involved in other things, and that's fine, is to talk about the sort of thinking that we need to undertake when we're talking about ethical issues:

categories of ethical issues, how to name them, how to describe them, etc. So having examples of what you put in, even if you now repudiate those examples, they're still examples of how someone thought at some particular point in time. If not you, it could have been anyone, right?

So we still have something that we can talk about. And as I think about this, and this thinking is sort of forming for me over time as we go through this course, we're sort of emulating a lot of the stuff that AI people actually need to think about and go through, as we go through these exercises.

So we've had a labeling exercise, we've had a classification exercise, and now we have a linking exercise. One of the modules later on in the course is going to be structured along the lines of the decisions we make when we design AI and analytics systems, and we'll see at that time a whole bunch of these decisions that we've already gone through in our work through the course.

And I think it'll give us a bit better insight; it's already giving me a bit better insight into the sorts of decisions that need to be made, and how they impact our understanding of concepts, as well as the ethics of these concepts. At least, that's how it seems to me to be shaping up right now.

Anyways, that's already an unexpected but interesting thing to happen through offering this course. So our errors and misunderstandings provide a record of learning? Kind of like when we tell students: don't erase your answer, just draw a line through it, write your new understanding there, and you'll have a record of what you've learned.

Exactly. And that's why I'm not going to remove your mistakes when you submit them, because we're going to go through them for that very reason. Another issue: is it possible to go back and add new categories onto a post already made? I don't see how I can do it.

No, what you've got access to is a submit function but not an edit function. Yeah, because the article was good for dropping something like this in; you had, yeah, bias, discrimination, but they weren't listed exactly as you'd said, and I probably could have invented a couple. Okay. So, feel free to add more stuff, you know.

Yeah, I think that's the way to go here. Just as an aside, about the way I'm designing this: you're probably sitting there thinking, why didn't you put an edit capacity into that? Well, aside from the horrible technical problems that it creates, there's another reason.

Although, you know, I did have an edit capacity before, and I have a whole permission system in grasshopper to make that possible, where I want to go with this approach is that individual people taking the course have their own instance of grasshopper, and so have their own way of creating these, editing these, drawing the links, whatever, by themselves.

And then it's the result of what you've done in your environment that gets harvested or aggregated by the course and brought together with everything else. You can't do that in WordPress or Blogger. They just don't have the capacity, right? All they have the capacity to do is let you write some posts. You know, I was thinking about that.

You know, I wanted people to suggest, for example, additional applications or additional issues. And what do I have you do? Write a post. Well, you could, but then how does the aggregator recognize that the post is intended to be an example of an application, and put it properly into the database? Maybe if I had AI, it could do that.

But I don't have anything like an AI that could possibly do that, and in any case that introduces the possibility of error in reading and categorizing your post. So it's better if you do it yourself, better if you do it on your own system, but no such system gives you that capacity.

There is no system that you can use where that's possible, at least none that I've found. Yeah, there are some concept-mapping tools and things like that, and maybe there are ways to explore that. But, you know, consider the whole overall workflow: work with these examples, create these graphs, add these links, send it all to a central course.

Nothing like that exists. Maybe by this time next year; I'm still working on grasshopper the PLE edition as well as grasshopper the course edition. When I offered the course in 2018, I tried it with the two together and it failed miserably, but I'll try it again. I still have two and a half years before I retire.

So there's plenty of time to write a large and complex software application. So here's an ignorant question: can we go up a level and think of using the internet, now that APIs are so prevalent, with a group of people who have been trained in making those connections?

Mm-hmm. Possibly just build this on the web, not using grasshopper, just using APIs? Is that theoretically possible? It's not only theoretically possible, I imagine it'll actually be done at some point. grasshopper uses APIs to communicate between instances. So your personal learning environment would use an API to connect with my course, so that sort of exists, and you could connect with each other's instances with APIs as well.

So that's what creates that kind of community, and that's how we would share, for example, things like the link to the live discussion. But on our own unique platforms? Yeah, all of this should be open. So in theory you could use any platform, you know, as long as it abided more or less by the API.
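As a purely hypothetical sketch of what such an instance-to-instance API call might look like: the endpoint path, payload shape, and bearer token below are all invented for illustration, and grasshopper's real API will certainly differ.

```python
# A hypothetical sketch of pushing a participant's link graph to a course
# instance over an open API. The endpoint path, payload shape, and bearer
# token are invented for illustration; grasshopper's real API differs.
import json
import urllib.request

def build_submission(course_url, graph, token):
    """Build the HTTP request that would submit a link graph."""
    body = json.dumps({"links": graph}).encode("utf-8")
    return urllib.request.Request(
        course_url + "/api/graphs",  # assumed endpoint
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
        method="POST",
    )

req = build_submission("https://course.example.org",
                       [["dashboards", "privacy"]], "my-token")
# Actually sending it would then just be: urllib.request.urlopen(req)
print(req.get_full_url())  # https://course.example.org/api/graphs
```

The point is only that any platform speaking an agreed-upon, open request shape like this could participate in the aggregation.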

And what's interesting to me is that the graph on your version would be different from the graph on my version, different from the graph on Sherita's version. Everybody has their own representation of the discipline or domain. And I think that's interesting. I don't know what the application for that is, but I think that's interesting.

Jim, you had a comment or question earlier. Yeah, I'm looking at the All Ethical Issues page, and it says "submit an example". When I go to that submit-an-example form, the title is fine, but the categories still have your descriptive, diagnostic, predictive, rather than the categories that I'm seeing on the ethical issues page. Quite right,

and that's a coding error that I'll be correcting as soon as we're done this session. I wasn't sure if I was misunderstanding something or skipping something, but it's okay, it's something you're still working on, that's fine. Yeah. So I was writing a blog post last night and commenting on how I had been watching you struggle through on Tuesday, but I'd already looked at the page and knew that you had succeeded.

So it was kind of a weird time warp. Yeah, it's kind of odd that way, isn't it? Well, I mentioned it in my blog post. Not that I need a badge or anything, but the logic of asynchronous learning: I love it. Okay, thanks. That was interesting. So now, with your indulgence.

All right. Well, do I want to do that? Well, I don't know. I prepared some slides to introduce this module. Are you interested in that, or do you want to keep discussing? Sherita, what are your thoughts?

I'm very indecisive, because I'm trying to make a little thing work, so I only have half my attention here. And I'm going to have to leave very shortly, so I'm not, you know, the person you should be asking; probably Mark or Jim. All right. So if you check the recording of this later, it'll be at about the 42-minute mark, and you can pick up on that.

All right, let me just... it's not a long thing, but it would be nice to have it, and I can do it now instead of doing it separately. There we go, set up. So,

sorry, it takes a little bit of time, because Microsoft always wants to default to showing a slideshow on my entire screen, and I never want that, not even when I'm presenting. I never want a slide presentation on the entire screen. All right, so let's come back here now. Let's see how we go.

There it is. Voila. So now you should be seeing my beautiful slides. Not yet, but it takes a bit. There we go. Yeah, it needs to load some large and beautiful images. I just love the colors on this image. Okay, so this is to introduce the third module of the course, and it's hard to believe that we're already into module three.

But that's where we're at. Now, we found a ton of applications of AI and analytics in learning, far more than just the typical applications that people think of, like predicting how students will do or recommending content to learners; a much wider range of applications. And I hope that you're convinced of that part now: that there are many, many applications of AI in learning, and many benefits that result.

So it's not really an option just to say, no, we just won't use this technology; there's just too much benefit. But that's where the ethical issues come from. Taking all of these applications into account is important, because it's precisely in these wider accounts of analytics that the relatively narrow statements about ethical principles are seen to be lacking. People talk about bias, and they talk about surveillance, to name a couple, or perhaps privacy.

And yeah, these are concerns, but they are by no means the only concerns, nor is addressing any of these concerns a simple matter when you're looking at dozens and dozens of different uses of analytics and artificial intelligence. And what's really interesting about this technology is that it's possible to use it correctly.

It's possible to use it precisely as designed, and still reach a conclusion or an outcome that would violate our moral sense; maybe not everybody's moral sense, but our own personal moral sense. And it's possible to use analytics and AI correctly and still do significant cultural and social harm as well.

Look at Facebook, right? We don't need to look any further than Facebook. We've got the Facebook Papers being released this week, the great revelations that happened last week, and it's just the latest in a long series of scandals about Facebook. But what exactly are the ethical issues here, and how do we address them?

You know, I read a lot of writing that seems to suggest that we can address the issues with Facebook simply by blocking offensive accounts. But for one thing, that's kind of a whack-a-mole strategy, right? They keep popping up; you keep blocking them, but they never go away. They're like spammers. But also, it might be that not all of the problems are simply problems about offensive content.

A lot of people have suggested that there are fundamental issues with Facebook's algorithm. Just yesterday I read an article about Twitter's algorithm and revelations that it favors right-wing politicians and right-wing issues. Others have suggested that maybe it's Facebook's incentives that are wrong: the incentive to engage drags us deeper and deeper, as they say, into the rabbit hole, deeper and deeper into more and more radical content.

And again, that's not something that plagues just Facebook; this sort of accusation was leveled at YouTube not too long ago, and that's where it first surfaced. Or the problem might be just who Facebook serves and what they want. The purpose of Facebook is to make money. It's a public company; they have a fiduciary duty to focus on making money. But making money as an objective, when running AI analytics, may by itself cause cultural and social harm.

So I think all of these considerations together speak against a simple and superficial understanding of the ethical issues in AI. There is a widespread demand that something be done, right? You know, it's a rising market. I'm doing this course, and I started my own personal investigation, because two years ago the president of NRC made a promise to the Government of Canada.

The promise was that NRC would look at the ethics of AI. And there wasn't a whole lot done about it, because it's kind of hard to do. So I took it on myself to address the issue. So I'm hoping for an attaboy from the president; no, I'm just kidding. And there are other things being done today at NRC.

Now, around ethics in AI, a number of papers have been written. But at the same time, we at NRC, and elsewhere in our field generally, are working in an environment of misinformation about the development, complexity, and riskiness of AI, of all the different applications, all the different issues. We do need a proper debate about its development.

But again, you know, this document from the World Economic Forum that I'm referring to also says something along the lines of, and I'm paraphrasing here, "it's a fallacy to believe that these issues cannot be addressed through regulation". Well, I don't know how they know that, because I don't know that.

Well, I'm not going to say I'm smarter than them, because I'm probably not. But I mean, if I don't know that, probably it's not proven to be true; I don't want to give myself too much credit there. UNESCO talks about managing the discussion and points to the two sides of it. The dangers, or the consequences of misuse, can be devastating, they say, and I think that's quite true.

That's particularly true in countries where, if the wrong information reaches the wrong people, it can have very serious personal consequences. On the other hand, hate, division, and lies are good for business. This is where the legal imperative to make money works against our social and cultural needs. Also, global inequality is mirrored online. I was just thinking this morning about all the stuff that I've read on ethics, analytics, and so on: how much of it is coming out of private, elite universities and associated academics, and the countries that it's coming out of, including our own here in Canada.

You know: Britain, the United States, Canada, Australia. Are these the societies and cultures we want determining what counts as ethical in AI? I don't think we have a good history here. I mean, there are many things that these societies and cultures have done that are good, but there's no question that they've played a major role in producing the global inequality that is now mirrored in the online world and mirrored in analytics.

On the other hand, the potential benefits are enormous. AI and analytics could end the need for work; that would be pretty significant. But, says UNESCO, we need to agree on international AI regulation. Again, is regulation going to be what solves the problem? I think we need to dig deeper, hence the purpose of this course.

I think we need to dig deeper, not just into the issues, but into the nature of the sorts of issues and the environment around these issues. I only just found this diagram today and I haven't had a chance to study it in detail, but it brought out a concept to me that really helped a lot.

It's actually looking at the quality of a clinical environment, but it seems to characterize the discussion of ethics and morality a bit more generally. When I looked at the diagram, I realized that it maps point for point to this course. The different things that we have to consider in ethics are not just ethical rules.

Not just ethical principles, what's right and wrong, but rather a complex set of interrelated factors: moral climate, moral community, moral identity, distress, sensitivity, agency, and integrity. And without a whole lot of work, as you can see with the light orange text, I was able to map each of those to one of the modules in this course.

It wasn't a big effort to do that. Now, how well that mapping stands up to closer scrutiny, I don't know. But nonetheless, I think it still gets at the idea that our understanding of ethics isn't simply an understanding of what's right and what's wrong, that there's a lot more nuance than that.

And that's why, when I encounter statements that there is a consensus on ethics in analytics, I really wonder about that. There's a study, for example, from Fjeld and others suggesting that we have reached this consensus; the links to it are on the slide, and I spent a lot of time studying this document in particular.

These researchers looked at a set of ethical principles and guidelines; we'll be doing much the same exercise in the next module. And they mapped those to the ethical issues that arise. When you look at the chart that they present, it looks like, oh yeah, everybody has reached this consensus on ethics and analytics.

We all have this common sense of what the ethics are. But again, look at who produced all of those documents on AI guidelines and ethics: they're coming from basically the same demographic. So that should be one flag. But more to the point, when you look at any of these ethical issues in any sort of detail, the consensus falls apart.

Yes, it's true, three, four, five, eight, ten, twenty different codes of ethics say something like "privacy is important". But when you ask them what they mean by privacy, they're working with three, four, five, eighteen, twenty, forty different definitions of privacy, where the importance of privacy plays out in different places. And even more, when you go outside the narrow domain of studies addressing ethics and artificial intelligence, and look at wider discussions of ethics and privacy generally, that consensus really falls apart. A couple of examples here on this slide, one from the Public Library of Science.

The headline for that article is "sharing is caring, but is privacy theft?", and I think that's a pretty good question. You know, how do we do open science with privacy? What are the ethical decisions there? It's not simply "protect privacy, end of discussion". Another example: the document on the left is a representation of the Panama Papers.

You know, 11.5 million documents, etc., describing how people are avoiding taxes by secretly stashing money in anonymous bank accounts in Panama. Now, is that the privacy we want preserved in AI, analytics, and learning generally? It's not clear to me that it is. Those are just two examples out of many that I could draw into the discussion here on the issue of privacy. And privacy is just one of the issues about which there will be many different points of view, or perspectives, different definitions, different statements of what's important and what's not, and, crucially, different statements about what's ethical and what's not.

So this is the stage we're at. This is the last slide; that's why I'm not moving forward. This is the stage we're at now in this course: the stage where we look at what all of the ethical issues are that people have talked about, and try to find them all.

And it's not just finding where they've used some words in common, and we can't avoid doing that because we have to work in words; it's trying to find all of these examples of where these issues are discussed, to see the different definitions, the different perspectives, the different understandings of the issue.

And when we understand the ethical issues of analytics, I think we'll be in a position to say more about whether there is a global consensus on what the ethics of analytics should be. So, like I say, that's the last slide; I really should have a nice closing slide. But that's what I'm up to, at least, in this module. And we still have a little time for comments, questions, reactions.

I don't want us talking over each other, and I went first last time, so I'm stepping back. But any thoughts? You mentioned a global consensus and how access is privileged. You and I are in Canada, and I think, Mark, you're in the US. How do we bring in, what are some mechanisms we can use to bring in, marginalized voices?

That's a good question. I'm not sure I know how to do that, especially when most of this course is in English. Yeah, you know, how do we bring those other voices in? And let's be clear, and we should be clear:

they're probably not going to be in this course, at least in this offering of the course, because there are only so many people in it, and we all seem to share the same demographic, and we can't avoid being what we are. But I think it's very clear, and I think you make the point, that these other voices should be heard, and that any discussion of the need for the ethics, what the ethical issues are, and what the resolution of them would be, requires this global consensus. That's why I have that section on the duty of care sitting there in the second half of the course.

That's because it's a philosophy that, overall, brings in many of those aspects. And, you know, again, there are ongoing discussions of AI and ethics that I'm watching and listening to, and sometimes offering comments to, where this need is simply not considered. In fact, contributions from people outside the field of technology simply aren't considered as needed.

You know: "it's a technical issue, we'll solve it internally, we will present you with the solution". That seems to be the attitude, and that's no way to go, right? I don't think so, and I don't think either of you think so either, based on what I've heard from you in the past, Jim, and based on what I've heard from you in the last five minutes. AI translation?

I don't think it's there yet, but that might help bring some diverse voices together. Yeah, it does help, and it helps quite a bit; it helps me quite a bit, right? I said good morning to a friend in Chinese not too long ago, and got "good morning" and his name in Chinese.

It was all correct, and he was quite surprised at this, you know. He was like, how did you do that? Because, you know, he doesn't even present his Chinese name when he's communicating with us in English. But it was possible. It's not easy yet, though. You know, I have a publish-on-Twitter capacity.

I have a publish-to-WordPress capacity, etc., that I'm building in grasshopper, right? I'd like to have publish-in-French, publish-in-Spanish, or publish-in-all-languages; or, even better, my browser just converts whatever I'm reading into the language of my choice. We're not there yet. We're getting there, but we're really close.

We're really close. I was reading a blog post, right, and I'm actually wondering whether the original post was written in English and then translated with Chrome, because there were no grammatical errors that I saw. Or was it not a native English speaker? There, you see, I don't know.

Yes, you know, I don't know the person; it's somebody I stumbled onto, working in open access, indie web, and stuff. So maybe it was a native English speaker who posted it in another language. It would be worth knowing how good that second translation was compared to other experiences.

It would be interesting to hear someone tell us whether the Dutch was any good. Yeah, right. You have no way of knowing. But, you know, even the translation is just the first step. I've had quite a few correspondences with people in Arabic over the years, and it's not simply that they speak a different language; the whole character of how they express themselves is different.

There's the obvious religious overlay on everything, but then also the way they relate to other people, the way they relate to each other; it's just a fundamentally different way of expressing oneself. And the translation system translates it literally, but I don't think the literal translation is really getting at what they're actually saying to me.

And we get that even in French, where there are a lot of idiomatic expressions. Google is getting better at translating, but still some of them creep through. Yeah, I'm taking a course in intercultural communication right now, and that's what it focuses on. There are these cultural ways of speaking that we're not even aware of, and at some point we're going to come to where it's just a little too complex.

Yeah, too complex to get the full meaning. You know, even with a layer of human conversation happening on top of it, it would still take, you know, a United Nations lifetime-expert type person to actually communicate most of the meaning, and then there are those things that just can't be done.

Yeah. So that's where we can't put aside relationships, because having a relationship with someone will prevent me from making a certain amount of wrong assumptions based on just hearing certain words. Yeah, that's where trust comes in, assuming good intention, and all these other parts of a relationship. Is that a bias we can code into AI? That's a good question.

Yeah, keep that question in mind for, I think, unit six, or module six or seven, when we go back and talk about how we're building our AI systems and the decisions we make. Can we build trust and relationships into it? You know, I would argue that different topologies will produce different results.

Along those lines, I tend to prefer something that might be called a community of communities model, where we're not trying to connect everyone to everyone all at the same time. But even in the community of communities model you need to form communities from different cultures, and people tend to associate with their own culture.

Well, that's our time, and I'm sensitive not just to your needs but also to the needs of the people watching on YouTube, either live now or later on. Through the rest of the week I'll be adding more discussion of these issues. I'll be aggregating and reading your blog posts for sure.

I'm putting them in the newsletter; yeah, count on it. It's not just me that freezes. And I'll be adding other tasks and trying to make that work. I'll also be catching up on some of the stuff that I didn't cover from last week, because I had horrible technical problems that ate two full days out of my life.

I'm having a blast doing this; I don't care that it ate two days out of my life. But I have fallen behind in some of the stuff that I wanted to distribute, so I'll be getting all of that into the newsletter through this week. And then, of course, we'll get back together on Friday and talk about it.

Yeah, I have a strategy planning session I can't get out of on Friday. I put it in my calendar just in case it gets dropped. By the way, I like that you have a calendar linked to it; that's really helpful. Good. Too bad, yeah; mostly it's like, by the weekend I have time to catch up and dabble.

Oh, that's fine. That's why these sessions are recorded; sounds good. So that you can benefit from them. You know, this whole MOOC is an experiment in how we make a MOOC accessible and useful and open to people, and so I'm trying to do that.

There's more stuff I'd like to do. I wish we had a live audio broadcast, but we don't have that; maybe in the future. All right, then, I'm going to wave goodbye and tell you I'll see y'all next time. You never know what happens even after you start recording, huh?

Yeah.

Stopped live stream. Okay, we're no longer recording. There you go. So this has brought up an issue. This course has brought up an issue that is one of my hobby…

Prescriptive Analytics

Transcript of Prescriptive Analytics

Unedited Google Recorder transcription from audio.

So, this is the presentation on prescriptive analytics, module two of Ethics, Analytics and the Duty of Care, and I'm, of course, Stephen Downes. Thank you for joining me. If you're watching, you're probably not watching on YouTube right now (I don't see any viewers), or you're watching the recording that comes later.

So I'm just going to set up these slides, and they will be up and running here in just a second. Again, I have to set up the slide show because Microsoft always resets it to present full screen, but I've done that now. So I'm going to start, and there we are.

Prescriptive analytics. So the topic, prescriptive analytics, is essentially a branch of artificial intelligence and analytics that makes recommendations for you. It'll say, as in the example here of marketing analytics: work this deal, not that one; sell this product, not that product; do this task, not that task; etc.

And we can see how this is going to have some obvious applications in teaching and learning, both from the perspective of a teacher, making suggestions about what to do next in the classroom, for example, and from the perspective of a learner: what to study, how to study, where to study, when to study.

So, as with previous discussions of applications or uses of analytics, I'm breaking this down into a number of different topic areas, or categories as I'm calling them. This breakdown is purely arbitrary, and of course we could think of many other ways of dividing up the different applications of learning analytics, but that's what I've chosen for this purpose.

So the first and foremost of these, and of course you had to be expecting it, is learning recommendations, and this breaks down into two major categories of prescriptive analytics. On one hand, we have content recommendation systems. Now, content recommendation goes back several decades, and there have been a variety of different approaches to content recommendation over the years.

These days, it's based on collaborative filtering and other AI methods that identify what people like you (people in your program, with similar aptitudes, maybe similar learning styles, any range of properties) have done well with; then the system recommends a similar sort of thing for you, or a more difficult challenge.
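To make the idea of collaborative filtering concrete, here is a minimal sketch of the user-based version described above. Everything in it (the learner names, the resources, the ratings) is invented for illustration; production recommenders work on vastly larger data with more robust similarity measures.

```python
# A minimal sketch of user-based collaborative filtering.
# All names and ratings below are hypothetical illustration data.
from math import sqrt

# learner -> {resource: rating} (e.g. a score after using the resource)
ratings = {
    "ana":  {"video1": 5, "quiz1": 3, "reading1": 4},
    "ben":  {"video1": 4, "quiz1": 2, "reading1": 4, "sim1": 5},
    "cara": {"video1": 1, "quiz1": 5, "sim1": 2},
}

def similarity(a, b):
    """Cosine similarity over the resources both learners have rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][r] * ratings[b][r] for r in shared)
    na = sqrt(sum(ratings[a][r] ** 2 for r in shared))
    nb = sqrt(sum(ratings[b][r] ** 2 for r in shared))
    return dot / (na * nb)

def recommend(learner):
    """Suggest resources the most similar learner used but this one hasn't."""
    others = [u for u in ratings if u != learner]
    best = max(others, key=lambda u: similarity(learner, u))
    seen = set(ratings[learner])
    return sorted(r for r in ratings[best] if r not in seen)

print(recommend("ana"))  # → ['sim1']
```

Here "ana" gets "sim1" recommended because "ben", whose ratings most resemble hers, did well with it; that is the whole "people like you did well with this" pattern in miniature.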

The other category, one that has been of interest in the AI and analytics community for some years, is learning path recommendation. Here we're not just recommending learning resources on a case-by-case, item-by-item basis; rather, we're recommending a path through a variety of different alternative resources, or even topics for learning, in order to reach a learning outcome or a learning goal.

So these kinds of recommendations may even select courses or lessons or topics. They may simply outline a learning path, or they may offer personalization parameters: again, maybe based on learning style, on mastery learning, maybe even on things like time limitations or knowledge background. They're used by course generators and course or learning object sequencing systems, and they can be used for both online and offline education.

The second major category, and this is related to the first, is adaptive learning. These are AI systems that look at how a person is performing in the context of a course or a learning environment and adapt the system to that. We see this more in games, actually, than we do in educational applications.

A gaming engine will track a person's progress closely and adapt the presentation of challenges, enemies, whatever, to the person in order to challenge them, but not too much. Similarly, with adaptive learning you're trying to challenge, but not too much, the individual learner. You want them to push themselves in order to succeed, but you don't want to push them so hard that they give up.
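The challenge-but-don't-overwhelm loop described above can be sketched very simply. This is a toy illustration, not how any real adaptive engine works: the target success rate, the step size, and the margins are arbitrary choices made for the example.

```python
# A toy sketch of an adapt-but-don't-overwhelm difficulty loop.
# Target rate, step size, and the struggle margin are invented parameters.
def adjust_difficulty(level, recent_correct, target=0.7, step=0.1):
    """Raise difficulty when the learner succeeds above the target rate,
    lower it when they fall well below, and keep it inside [0, 1]."""
    rate = sum(recent_correct) / len(recent_correct)
    if rate > target:
        level += step            # learner is coasting: push a little harder
    elif rate < target - 0.2:
        level -= step            # learner is struggling: ease off
    return min(1.0, max(0.0, round(level, 2)))

level = 0.5  # start at medium difficulty
for answers in [[1, 1, 1, 1], [1, 1, 0, 1], [0, 0, 1, 0]]:
    level = adjust_difficulty(level, answers)
print(level)  # → 0.6
```

The difficulty drifts up while the learner keeps succeeding and backs off as soon as they start failing badly, which is the same feedback idea a game engine applies to its enemies and challenges.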

So you have a sort of artificial intelligence that monitors what the student is doing and then adjusts the difficulty of the content accordingly. Another class of prescriptive analytics, especially useful in large open online courses, is adaptive group formation. In many educational scenarios, the use of small groups is encouraged in order to enable each person to take part in the discussion and to engage in small group dynamics and conversation, feeling more like they're participating in and contributing to the course rather than simply receiving content.

This is not something you can do with a single group of a thousand people or more, so you need to break down your course into small groups. This, however, can be difficult. People who know each other may cluster into groups, but, especially online, you're going to have a lot of outliers as well.

People will be participating to different degrees. You'll have people who are very engaged and active, and other people who are more lurkers, or, as we call them, legitimate peripheral participants. Indeed, you may have people dropping out of the course partway through, and so there's the risk of a scenario where a person is the only active member left in their group.

So there's a need to have adaptive group formation that takes into account all of these and additional variables, and that manages group formation as the course continues, in order to ensure that people have a useful and productive small group experience.
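One very simple heuristic for the problem just described is to sort members by an activity score and deal them out round-robin, so every group gets a mix of active and quiet participants. This is only a sketch under that one assumption; the names and scores are invented, and a real system would weigh many more variables and re-form groups as people drop out.

```python
# A sketch of one group-formation heuristic: sort by activity score and
# deal members round-robin so active and quiet participants are mixed.
# The member names and scores are invented illustration data.
def form_groups(activity, n_groups):
    """activity: {member: activity score}. Returns a list of groups."""
    ranked = sorted(activity, key=activity.get, reverse=True)
    groups = [[] for _ in range(n_groups)]
    for i, member in enumerate(ranked):
        groups[i % n_groups].append(member)  # round-robin deal
    return groups

scores = {"a": 9, "b": 1, "c": 7, "d": 3, "e": 5, "f": 8}
print(form_groups(scores, 2))  # → [['a', 'c', 'd'], ['f', 'e', 'b']]
```

Re-running this periodically with updated scores, dropouts removed, is one crude way to "manage group formation as the course continues."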

Related to this, and similar in many respects, is placement matching. This would be used by systems or applications that are looking to provide real world experience. For example, an educational program looking at co-op placement may use an AI system in order to match potential students, potential co-op employees in other words, with potential employers. There's another case in the government of Canada, where the Treasury Board of Canada piloted a project called Micro-Missions, where government employees would be considered for short-term assignments in other departments, just to broaden their experience and help them learn new skills.

And an AI was used to match these potential short-term placement people with the short-term placement opportunities. Any place where you need to match a person with a resource, or a person with an opportunity, you're going to have the opportunity to use artificial intelligence and analytics to make this more effective, and especially a lot more efficient.
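Placement matching can be framed as a scoring-and-assignment problem. The sketch below uses a deliberately naive score (skill overlap) and a greedy assignment; the people, skills, and placements are invented, and real systems use richer candidate models and often optimal assignment algorithms such as the Hungarian method rather than a greedy pass.

```python
# A sketch of placement matching as scoring plus greedy assignment.
# Skills and placements below are hypothetical illustration data.
def match_score(person_skills, placement_needs):
    """Fraction of the placement's needed skills the person already has."""
    return len(person_skills & placement_needs) / len(placement_needs)

def assign(people, placements):
    """Greedily pair each placement with the best remaining candidate."""
    pairs, free = {}, set(people)
    for name, needs in placements.items():
        best = max(free, key=lambda p: match_score(people[p], needs))
        pairs[name] = best
        free.remove(best)
    return pairs

people = {"pat": {"python", "stats"}, "sam": {"writing", "policy"}}
placements = {"data-team": {"python", "stats", "sql"},
              "comms-team": {"writing", "policy"}}
print(assign(people, placements))
# → {'data-team': 'pat', 'comms-team': 'sam'}
```

Greedy assignment can produce globally poor matches when candidates are contested, which is one reason practical matching systems solve the assignment problem properly instead.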

And that brings us naturally to the topic of hiring. I've been involved in a number of AI-for-hiring or AI-for-job-interview projects over the last few years. Right now we're seeing corporations already using artificial intelligence and automation in order to support the hiring process. I haven't seen a case where an AI has outright hired a person yet; typically what happens is that these systems work hand in hand with human recruiters, making suggestions, creating shortlists, matching a pool of candidates to a specific job profile or opportunity.

That said, these systems can do everything from phone interviews, posting ads, and screening resumes to prescreening candidates, so that when the human actually gets into the loop, they're spending much less time and fewer resources in order to accomplish a hiring goal. Similarly, artificial intelligence or analytics can be used to make pricing decisions.

We're seeing that a lot already in other industries. For example, in airlines, AI-supported applications help with what is called differential pricing, where the system calculates how much a person would be willing to pay for an airline ticket based on their background, the time they're purchasing the ticket, and other factors: how frequently they fly, whether they've purchased business class tickets in the past, etc.

And then it makes the offer accordingly. Companies like Uber adjust their pricing according to demand; we've all heard of Uber's famous, or perhaps infamous, surge pricing, where as demand rises, so does the price. And this is calculated not by a person sitting in a room, but by a fairly sophisticated analytics engine. The same sort of thing can be done in a learning environment.
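The demand-rises-so-does-the-price idea reduces, in its simplest form, to a multiplier on a base price. This toy sketch is not Uber's actual algorithm; the base price, capacity, and cap are invented parameters chosen to make the behavior visible.

```python
# A toy version of demand-responsive ("surge") pricing: the price scales
# with the ratio of demand to capacity, capped so it can't run away.
# All numbers here are invented for illustration.
def surge_price(base_price, demand, capacity, cap=3.0):
    """Multiply the base price by demand pressure, capped at `cap` times."""
    multiplier = max(1.0, demand / capacity)  # never discount below base here
    return round(base_price * min(multiplier, cap), 2)

print(surge_price(10.0, 50, 100))   # → 10.0  (demand below capacity)
print(surge_price(10.0, 250, 100))  # → 25.0  (demand 2.5x capacity)
print(surge_price(10.0, 900, 100))  # → 30.0  (extreme demand, capped at 3x)
```

A differential-pricing system for tuition or learning resources would swap the demand ratio for a per-person willingness-to-pay estimate, but the structure (a model output driving a price multiplier) is the same.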

It could apply both to the pricing of tuition or other fees for access to learning, and to the pricing of resources such as books, applications, notes, and other sorts of support. Probably (I don't have direct evidence, but probably) this is already happening in the publishing industry. There have been examples where companies that publish online newspapers or magazines or journals have been using AI to determine whether or not to put up a paywall blocking access to a resource, based on whether the AI thinks a person would be willing to pay for it, as opposed to, say, somebody who's just casually browsing. As a result, I get a lot of these paywalls, but I'm still not willing to pay. This feeds into a general set of applications around decision making.

Generally, AIs can support decision making at pretty much any step of the process. Typically, AI will be used to handle the comprehension of big data, which is what we saw in things like descriptive analytics and even diagnostic analytics, basically setting the stage for a person to make a decision.

The AI can also map out the set of possible actions for the actual decision maker to consider. This person will now take into account other information that the AI might not have access to, and then actually make the decision. Although we can imagine a scenario where there is no other information available, and so the AI that is making the suggestions about possible actions may actually be in the best position to make the decision. So those are, overall, some applications that constitute the area of prescriptive analytics. Perhaps you can think of more cases, or more types of cases, where an artificial intelligence engine or an analytics engine can make a recommendation or a suggestion to you.

Certainly there are many cases in learning and development where these tools can be applied. If you do find such a case, then I recommend you go to the list of all applications in module two of the course and submit it as your own suggestion. That way we can get as comprehensive a list as possible of the different prescriptive applications of AI and analytics.

So that's it for this video. The next video in this series will be a look at some applications that are generative in nature, that is, applications that we use in order to create things. If you're watching live, I'm going to be doing that right away, so give me about two minutes and then reload the activity center website.

Otherwise, if you're not watching me live, look for the next video in the Ethics, Analytics and the Duty of Care playlist. Or, if you're listening to this as a podcast, this will be the next item in the podcast. So that's it for now. I'm Stephen Downes. See you shortly.

Generative Analytics

Transcript of Generative Analytics

Unedited Google transcription from audio.

So, my audio recorder is on. Once again, this is a session on generative analytics for the course Ethics, Analytics, and the Duty of Care, module two; the module is applications of analytics. And the idea of generative analytics here is that an artificial intelligence system doesn't just diagnose or even recommend things. In recent years especially, we've seen a greater and greater capacity, led by tools like GPT-3, for artificial intelligence to actually create new artifacts or new learning content.

And so that's a significant change from what we've seen of artificial intelligence in the past, and so it's necessary to look at this new category of applications or uses of AI and analytics: generative analytics. As with the previous videos of the series, the method is going to be that I look at a series of types of generative analytics and discuss each of them briefly. Again, the purpose of this isn't to give us a deep and definitive knowledge of any of these technologies in particular; any one of these slides could be an entire course on its own.

The purpose of going through an overview like this is to give us, as part of the overall framing of the subject of ethics in analytics, some understanding of what the applications are, and in particular some understanding of what the benefits of analytics are, so we can understand the motivation for wanting to use these systems, even in contexts that some people might call unethical. So, to our list.

Generating original content based on properties or parameters of the data, combined with predictions or requirements for future data, is what gives artificial intelligence the capacity to create new artifacts. Neither humans nor machines work from a proverbial blank slate when they're creating something new. They create from something that already exists, and then they anticipate, or project, or associate new properties with the old thing, maybe combining and reshaping.

There's a variety of techniques that can be used in order to create new content. I think that understanding generative analytics will help us understand not only how computers can be creative, but over time will also help us understand how humans can be creative.

So, the first and probably the most famous example of generative content, computer-generated content, in an online environment is chatbots. You may recall Julia, the chatbot from the 1990s, which was pretty bad. There was once a chatbot that, quote-unquote, ran for president of the United States; it was called Jackie. And today, chatbots have become sophisticated customer service agents that are able to carry out, I won't say an intelligent conversation with you (I've had some pretty unintelligent conversations with automated chatbots), but conversations that are smart enough to detect what it is that you're trying to understand, and at least make the effort to throw some resources your way. So there are several aspects to a chatbot. One aspect is the understanding of what you're saying.
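That understanding step, mapping what a person says to a known type of request, can be sketched in miniature. The intents, keywords, and canned replies below are all invented; real chatbots use trained language models rather than keyword overlap, but the shape of the task is the same.

```python
# A bare-bones sketch of chatbot intent detection: map the user's words to
# the known intent with the most keyword overlap. All data here is invented.
INTENTS = {
    "reset_password": {"password", "reset", "login", "forgot"},
    "course_schedule": {"schedule", "when", "class", "time"},
}
REPLIES = {
    "reset_password": "Try the password reset link.",
    "course_schedule": "Classes run Tuesdays and Fridays.",
}

def detect_intent(utterance):
    """Return the intent whose keyword set best overlaps the utterance,
    or None when nothing overlaps at all."""
    words = set(utterance.lower().split())
    best = max(INTENTS, key=lambda i: len(INTENTS[i] & words))
    return best if INTENTS[best] & words else None

msg = "i forgot my password"
intent = detect_intent(msg)
print(REPLIES.get(intent, "Sorry, I don't understand."))
# → Try the password reset link.
```

Everything that makes modern chatbots feel smart lives in a much better `detect_intent`; the dispatch-to-a-response part stays this simple.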

And this is significantly difficult. It's not simply a case of audio-to-text transcription, although it is that; it's also a case of being able to recognize what are the significant or salient concepts in what you're saying, and to be able to associate those concepts with the typical sorts of requests or questions that a person might ask. In a sense, the AI that played against Ken Jennings in Jeopardy, Watson, was really nothing more than a sophisticated chatbot, and a pretty smart one too. Of course, not all content is going to be generated in real-time conversations. There's a wide range of applications already today that automatically generate content that appears in leading newspapers. For example, the Washington Post has for a number of years used artificial intelligence to write sports stories.
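To show just how formulaic such generated sports stories can be, here is a toy template-based generator: structured game data in, templated prose out. The teams, thresholds, and phrasing are all invented, and real newsroom systems are far more sophisticated; this only illustrates the pick-a-template-and-fill-it pattern.

```python
# A sketch of formulaic story generation: pick a sentence template based on
# the score line and fill in the details. All data here is invented.
def game_story(home, away, home_score, away_score):
    """Turn a structured game result into one templated sentence."""
    if home_score == away_score:
        return f"{home} and {away} played to a {home_score}-{away_score} draw."
    winner, loser = (home, away) if home_score > away_score else (away, home)
    hi, lo = max(home_score, away_score), min(home_score, away_score)
    verb = "crushed" if hi - lo >= 3 else "edged"  # margin picks the verb
    return f"{winner} {verb} {loser} {hi}-{lo} on Saturday."

print(game_story("Rovers", "United", 4, 1))
# → Rovers crushed United 4-1 on Saturday.
print(game_story("Rovers", "United", 2, 2))
# → Rovers and United played to a 2-2 draw.
```

Scale this up with many templates, richer statistics, and some variation logic and you get a perfectly acceptable routine game recap with no writer involved.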

These are fairly formulaic stories that don't need a lot of extra work and customization in order to produce a perfectly acceptable product. But as this capacity advances, the types of content that can be produced become more complex and also more compelling. I reference here an application called calf chai, a machine-writing algorithm that can write articles from scratch. And the question, as you see in the diagram, is: can a machine learn to write for the New Yorker? I would also include in this category AI-generated software. There are computer algorithms that support, or actually write, software for you.

I use a product called Visual Studio Code, by Microsoft, and over time I've seen more and more useful software authoring assistants appear as plugins for that environment. I've grouped a number of things under the heading of auto-generated animation (maybe I should change the title of this), but here I'm thinking not just of cartoons that are created by AI or analytics, but also images or video such as are produced by deepfakes: any sort of animated content. What's significant here is that the AI is able to produce videos, people, sounds, whatever, that are sufficiently new as to be considered unique. And there's a series of posts and examples out there: you know, 'this person does not exist', 'this sign does not exist', 'this poem does not exist'.

There's even an artificial intelligence that produces death metal on a 24-hour, seven-day-a-week basis. It might not be the greatest music in the world, but this is certainly beyond the capacity, currently, of any human.

Intelligent and insightful conversation and content production may help artificial intelligence produce coaching applications. Now, what's interesting about coaching is that it's something that hasn't been available to most of us, most of the time. Sure, athletes get coaching, but it's expensive and time consuming, which is why the best coaching is reserved for professional athletes.

Executives get coaching; the highly paid executive has an expensive personal coach or mentor to help them through those difficult business meetings or, as they say, tough decisions. But the rest of us are kind of on our own. With AI- and analytics-backed coaching, we can access the same sort of resource that athletes and executives can access.

We're already beginning to see some of this in analytics tools that are diagnostic in nature, that give us feedback on our performance. But when these tools begin to offer comments, suggestions, recommendations, training programs, encouragement, motivation and more, then you have something that is very much producing the same output, and hopefully the same result, as an actual coach.

It may be argued that you have to have a human to have a coach. But, you know, for most of us the choice isn't between having a human and having an AI; the choice is between having an AI and having nothing, and having the AI coach is probably much better than having nothing. Learning analytics will not just provide coaching in the moment, but also coaching that helps us over the long term.

For example, on the slide here I have the suggestion that they may help students develop self-regulated learning. Maybe they'll also help with things like pattern recognition, critical thinking, negotiation, conflict resolution, other things like that: what they call the soft skills that help people get by in an increasingly complex world.

Related to all of these is content curation. Now, it might be said, and not unreasonably, that perhaps content curation should be classified as a type of prescriptive analytics, focusing on the role of the curator as someone who recommends content. But arguably, and I would argue, curation is about much more than selecting and presenting content.

There's also, very often, an act of interpretation and presentation that happens, and this does involve the production of new content: everything from the creation of metadata, to the writing of those short synopses that show up on the little cards beside artwork in a museum, to the creation of a collection that may mix different things.

All of these are grist for the mill of an AI content curation engine, and so we can imagine seeing, in the future, new ways of seeing content presented that give us an almost non-human perspective on different artifacts, from different cultures, with different backgrounds.

And I, for one, am looking forward to seeing that. And of course everyone's bugbear is the whole idea of artificial teachers or artificial tutors. Right now we're not predicting robots standing in front of a classroom teaching us instead of a human; that wouldn't make a whole lot of sense for a variety of reasons. But we can easily imagine, even today, artificial teaching assistants.

And in fact there was a product called Jill Watson that was used as an artificial tutor in a university class and actually fooled the students into thinking that she was human. Should I call a robot teacher 'she'? It's hard to say, isn't it? These teaching assistants are already being deployed in fields like law, medicine, and banking, and we will begin to see them in other, less profitable, courses and programs in the future.

And it's one of those things, again: we're not going to go from nothing to artificial teachers. We're going to progress slowly, as the capacity of these AI-supported tutors and teachers is slowly increased, changing and in some ways eliminating many of the traditional duties of human teachers. So those are some of the generative analytics, and you may well think of many more ways in which an artificial intelligence can be creative. If you do, then I recommend that you go to the all applications page in module two and make your suggestion for a new type of analytics under the category of generative analytics.

So, that's it for this video. In the next video I'll be talking about deontic applications of analytics. By 'deontic' we mean they go beyond making recommendations, beyond creating new content, and into the realm of telling us what should be done, what is right, and what is wrong. And I think that's where some of the most interesting applications of analytics are coming up. So that's it for now.

Thank you for joining me. I'm Stephen Downes.

Deontic Analytics

Transcript of Deontic Analytics

Unedited Google transcription from audio.

Hi, I'm Stephen Downes. Welcome to this episode of Ethics, Analytics and the Duty of Care, module two. This particular video is on the topic of deontic analytics. I'm just going to put the URL into the activity center in case anyone is watching live. Again, if anyone knows how to get the URL from the YouTube live streaming application before I actually go live with my live stream, please tell me, because I have to begin every one of my videos going through this little exercise, and then after that I have to trim the video so that it starts properly.

I've also got my audio going, so we're just about set to get going here, and I'm going to officially start. So, I'm Stephen Downes. This is the module subsection, I guess, called deontic analytics, and that's a bit of a mouthful. It's the last of the types of analytics applications in learning and development, and it's probably the most contentious of the set of applications that we're looking at when we're looking at AI in education.

Let me just get my head together here. So, there we go. Deontic analytics, in a sense, deals with right and wrong. It's basically the idea that the computer, the artificial intelligence, is telling us what's right and wrong. But of course the subject of right and wrong is more subtle than that, and deontic analytics are more subtle than that, and they bring in areas from various domains.

For example, we have enterprise transformation and human impact. This would be in some sort of production or office environment, and the sorts of recommendations that come out of the system are recommendations about education, regulation, adaptation, social policy. You can see how these all go beyond mere prescriptive analytics, and they even go beyond generative analytics, in that they're not telling us what is, and they're not telling us what can be; really, they're telling us what should be. Given everything it knows, here's what we should do; here's what would be the right thing to do. And that's why I call it deontic analytics: it comes from the idea of deontic logic, which is the logic of ought and should.

So these analytics look at things like sentiments, needs, desires, and other such factors, the range of human emotions, and the range of economic, environmental and other circumstances, to tell us what we really ought to be doing, or saying, or making policies about, etc. You'll see we've got a number of examples of this, and we'll go through them just as we have in all of the other videos of this series.

So one place where we see deontic analytics already being applied is in the definition of community standards. Now, this is a tough one, because we think of community standards as something that is defined by the community, something that we detect rather than define. But when you have a system doing the detection of community standards, for example a content moderation algorithm that in some way measures what a community deems acceptable and then moderates for that, you're very likely to set up a feedback loop where the community standards become self-fulfilling prophecies. In other words, what people think should be the standard gets interpreted, perhaps slightly differently, by the AI, and what the AI interprets the standard as being now becomes the new standard.

Any deviation by the AI from what is actually the standard becomes the new standard, and it's going to be pretty much impossible for the AI not to deviate. Because if you ask members of the community what the community standard is, first of all you may get many different answers, but more significantly, you're going to get imprecise answers, answers using only the vocabulary that's available to the members of the community. The AI isn't under any such restrictions, so the AI's understanding, if you will, of a community standard is going to be much more nuanced, taking into account many more factors than a human would.

And so it is going to change the standard, for better or for worse, and it may take into account things that actual members of the community probably wouldn't bring up. In the wider world, for example, climate change or global warming is a significant factor affecting our lives, and the algorithm may find that this is something that matters elsewhere in these people's lives. It might not be something that's discussed particularly in the community, but the influence of concerns about climate change may come to define what the new standard is.

So, for example, climate change denialism becomes not part of the community standard, even though there was never any rule, or even a statement, about it. It would just be considered, you know, not right to be doing climate change denialism in this community.

The AI can, through actions like that, actually influence human behavior. This is especially the case the more an AI learns about an individual person. It can learn about their habits, their behaviors, their wants, needs, and desires as exhibited in what they look at and what they write about, and then take on social roles that influence their behavior.

The roles might be a role model, or an advisor (as in the robot teachers that we talked about earlier), a partner (say, an artificial mentor), or a delegate or agent for the person. And we can see how acting in any of these four roles would influence the behavior of the person. Minimally, the person would have to respond to what the AI is doing on their behalf, or to its efforts to help, advise, or teach them in some way.

But as well, especially with something like the role model, a person may begin to mirror or imitate what the AI is doing, if the AI is able to present itself as seeming sufficiently human. Humans copy each other, and humans in this case might start copying the AI.

There have been cases where human behavior has been influenced by artificial intelligence. We'll talk a little bit about that in some of the next slides.

When looking at these wider environmental and community contexts, an artificial intelligence can learn to identify what is bad and wrong. And by that I don't mean, you know, spot crimes or things like that, but look for patterns of behavior that in themselves are not wrong, but are suggestive that a person might have bad intentions.

We see this already in airport security systems and similar kinds of security systems, where perfectly innocent behavior, by most of us, when taken in context and assembled with other behaviors, triggers an alarm, and your gate agent pulls you aside for some extra screening, that sort of thing.

Now, we will talk in the future about how this can be misapplied and misdesigned, resulting in discrimination and other such problems. Nonetheless, this kind of activity is certainly within the realm of possibility for artificial intelligence. A similar sort of logic exists in an AI-powered lie detector. Once again, the AI doesn't know you're lying; it's not spotting the lie. What it's doing is assembling a range of information, a range of data about you: everything from how you look, to what sort of emotion you seem to be projecting, whether you appear nervous, whether your heart is racing, whether your temperature is elevated. All of these symptoms, none of which constitutes a lie, may suggest to the AI that you're lying, and may as a result cause the AI to conclude that you're lying.

Now, how good is this technology? Horrible. It's terrible, but that's now, right? What about when it gets good? Yeah, polygraphs never did get good, although it's interesting, because we hear them referred to as, you know, "take a lie detector test, if you want to prove your innocence."

You know, we may see people say, well, subject yourself to questioning by the AI if you think you're innocent, unless you accept what the AI says. Certainly within the realm of possibility. Conversely, an artificial intelligence might spot what we want in society and begin to amplify it. Max Tegmark has an interesting perspective on this, saying: "Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before." And it's interesting, because the presumption being made here is that intelligence is good. But it would not be surprising to see an artificial intelligence reach this conclusion on its own, by looking at environmental factors, looking at behaviors, looking at the results of behaviors. It would probably conclude, much like Max Tegmark has, that intelligent behavior results in good results, results in everything we love about civilization. And such an AI would then, through, you know, tutoring, or advising, or even just putting its artificial thumb on the scale for various evaluation metrics, begin to promote products of intelligence.

Now, how do we know this would happen? Because we already see it in reverse, right? We see that artificial intelligence has the capacity to amplify the bad when the metric in question is to increase engagement and interaction on a platform. It turns out that the way to increase engagement and interaction on a platform is to get people riled up and shouting at each other, and the way to do that is to amplify extreme statements or controversial opinions.

So that's what it does. That's what it produces. So we can see how an AI could amplify the good in the same way that it is currently amplifying the bad. And these two, you know, identifying the bad, amplifying the good, these are actually, you know, a pair of applications that go together.

You can identify and amplify the bad; you can identify and amplify the good. The problem comes up when you identify the bad, interpret it as good, and amplify that, and that's when you run into problems. And that's why Max Tegmark says that this is all good as long as we manage to keep the technology beneficial. We've also seen artificial intelligence implicated in the project of defining what's fair. Now, fairness is a property that we would like for our artificial intelligence; we see that in many, many documents. But what is fair? What counts as fair? There are a variety of factors that can go into a definition of fairness, arguably thousands, hundreds of thousands of factors. That's beyond the capacity of a person.

And typically, what we do is we define fairness by some sort of rule or principle. But rules or principles always have edge cases. They always have exceptions, and they always have people trying to gerrymander the system to create their own particular kind of fairness, which is unfair for everyone else. An artificial intelligence cuts through that.

At least that's the theory. Here we have an example of an AI that's defining fairness in US elections. One of the features of US elections is that the electoral districts are, as they call it, gerrymandered: they're altered by committees in order to, you know, increase the probability of one party or another party, or more usually just the incumbent, being elected or reelected. Now, it's possible, easily possible, to draw these districts more fairly, but what counts as fair? And that's what the artificial intelligence determines here.

It determines what fair district boundaries would be in order to, well, there's the question, right? In order to, shall we say, best represent voter intentions, or best represent the populations, the demographics, the balance between rural and urban, the balance between different ethnic groups. All of these things are factored in by the AI and should be weighed according to a wide range of needs and interests. Hard for us to do, hard for an AI to do, but arguably they already do it better than humans.
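One concrete metric from the redistricting debate that such a system might compute is the "efficiency gap", a count of wasted votes. This is an illustrative sketch with invented numbers, not the actual system described on the slide:

```python
# Illustrative sketch: the "efficiency gap", one simple fairness metric a
# redistricting AI might optimize. Votes are "wasted" if cast for a loser,
# or cast for a winner beyond the majority needed to win. The gap is the
# difference in wasted votes between the two parties, as a share of all votes.

def efficiency_gap(districts):
    """districts: list of (votes_a, votes_b) tuples, one per district."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        total += a + b
        threshold = (a + b) // 2 + 1      # votes needed to win the district
        if a > b:
            wasted_a += a - threshold     # winner's surplus votes
            wasted_b += b                 # all of the loser's votes
        else:
            wasted_b += b - threshold
            wasted_a += a
    # negative values mean the map favors party A (A wastes fewer votes)
    return (wasted_a - wasted_b) / total

# A map where party A narrowly wins three districts and B is "packed" into one:
packed = [(55, 45), (55, 45), (55, 45), (10, 90)]
print(round(efficiency_gap(packed), 3))   # -0.38: strongly favors A
```

The design choice here is typical of fairness metrics: it reduces a contested political question ("what counts as fair?") to a single number, which is exactly why handing the definition to an AI is both attractive and controversial.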

We can see this sort of approach used in other areas. I saw recently, for example, a system that is redesigning the tax system along similar lines, using an artificial intelligence to determine what sort of tax system would be fair for people. The AI indeed could play a role, eventually, in changing the law itself.

There are two major ways that can happen. First of all, the very existence of AI can cause the creation of new law. A simple example is copyright law for things that are created by analytics engines, such as deep fakes. Who owns the copyright to that? Does the machine own the copyright? Does the person who built the machine own the copyright?

So if you're using, say, a Microsoft analytics engine, does Microsoft own the copyright? Is it the person who wrote the software, or is it the person who flipped the on switch to actually make the new content? There needs to be a decision made, a decision that did not need to be made until artificial intelligence came along.

More significantly, though, the way AI performs could actually inform the content of that law. And to get a sense of how this works, we go back to Lawrence Lessig, who back in the year 2000 was writing things like "code is law". And the idea here is that the capacities, the demands, the dictates, the actual implementation of computer software is much more detailed than any law could be.

And so what happens is, the way the program is written becomes the de facto law in a particular environment. So if you write a system that prevents you from being able to upload PDFs, then the de facto law is: you can't upload PDFs. And it doesn't matter whether it's legal or illegal.
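A toy sketch can make Lessig's point concrete. The upload handler below is entirely hypothetical, not from any real platform, but whatever it rejects is, in effect, prohibited for its users:

```python
# Toy illustration of "code is law" (a hypothetical upload handler, not a
# real API): the policy isn't written in any statute; it is whatever this
# function happens to allow.
ALLOWED_EXTENSIONS = {"txt", "png", "jpg"}   # note: no "pdf"

def can_upload(filename):
    ext = filename.rsplit(".", 1)[-1].lower()
    return ext in ALLOWED_EXTENSIONS

# The de facto "law" of this platform: PDFs cannot be uploaded,
# regardless of what any actual law says about them.
print(can_upload("essay.txt"))   # True
print(can_upload("essay.pdf"))   # False
```

Nobody legislated the rule; the set literal on the first line is the entire legal code of this little environment.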

But that sort of question goes by the wayside. What matters is whether you can or can't do it. So now we have two ways in which AI could change law. First of all, AI can give us new capacities we didn't have before, such that the law does not prevent them, and they go beyond the intention and the scope of the law.

A simple example was the scraping and use of faces on the internet for the purpose of facial analytics and emotion detection. There was no law against collecting all these faces and analyzing them all, and so this became, quote-unquote, "legal" under the idea of code as law. On the other hand, there are things that you can't do. For example,

you can't really hide your identity by not sharing all of your information in one place. Artificial intelligence makes it possible for a company to gather information about you from many different sources. Something like that is called an identity graph, and so they can create a representation or a portrait of you, a user model, we might say. So you no longer have the right to not have that information be publicly known, because AI makes it possible for that information to be publicly known. And it's one of these things: once it's possible, it's really hard to put back in the box and make it illegal.

So these are just some of the ways that analytics can change law. If we look at all of the different applications, everything from OpenText, to Legal Robot, to LexPredict, which would predict, you know, what a judge's verdict would be, to Casetext, and so on, all of these applications, we can see that artificial intelligence is going to have a significant impact on law. Finally, easing distress.

And this is sort of an application of the AI as tutor, or AI as coach, but here the application is going beyond just helping you do whatever you want to do; it's actually interpreting what state of mind you should be in, and works towards promoting that state of mind. I don't know if you can hear the train going by, but it's a big one, of course. So, similar systems can interact with students, can monitor sentiments and emotions that may be interfering with their learning or socialization,

almost acting, as I say here, as a Fitbit for the mind. We know that this can happen because it has happened, and there have been privacy group complaints, and papers published, etc., about Facebook experiments to use the social network, and the way data is presented, to manipulate users' emotions.

So if emotions can be manipulated, then this kind of system can be used in order to ease distress. Of course, the flip side is that this sort of system can be used to increase distress, to agitate people, to foment conflict and disputes and the like. I've put "easing distress" as the title of this slide because that's the application I would like to focus on. I know, I'm biased. So that wraps up the list of applications. You may think of more; there are other applications. And if so, go to the all applications page in module two, where you have the opportunity to add your own, and to add links, which are examples of discussion or software which instantiate these applications.

And that's the last bit of content that I'm producing for module two. Overall, I hope that you've seen that there are many different types of application of artificial intelligence. It's not all content recommendations, it's not all learning paths, it's not all predicting whether a student will pass or fail.

There's a wide range of possibilities and affordances of learning analytics and AI in educational technology, in learning and development. And that's what makes this whole subject so pressing. We're not going to not use this stuff. There are just too many things we can do with it; it won't make sense to anybody to just turn it off.

So it creates a pressing need for an understanding of what the ethics of the application of AI are. But at the same time, when we're talking about the ethics of artificial intelligence, we can't be talking about the ethics of a narrow range of applications. We can't be using, if you will, stereotypes in our thinking about AI applications. We have to begin with a sense of the broad scope of the field before us, the capacity of AI to do everything from describing what things are out there, to telling us the way things should be, and everything in between. And we need to decide, you know, not just what we want and what we don't want to follow from this, but how, even, we're going to decide what we want from what we don't want. So the next session, the next module, as you know, will be the issues in learning analytics. And we're going to take a similar approach to the one we took in this module.

Instead of going through lists of applications, I'm going to go through lists of issues. And the purpose is the same. It's not to study these in any depth. We're not going to be able to study them in depth; there just isn't time, we don't have the capacity, and there are other people doing that.

The purpose is to get the broad scope of issues that have been raised in the field. And at the end of that module, we'll look back and see what sort of associations we can draw between what we know about what exists and what we know about what sorts of issues there are. But all of that comes in future videos, for now.

That's it for this video. I'm Stephen Downes. Thanks for joining me.

When Analytics Does Not Work

Transcript of When Analytics Does Not Work

Unedited Google recorder transcription from audio

Hi everyone. I'm Stephen Downes. Welcome to another episode of Ethics, Analytics, and the Duty of Care. We're in module three, looking at ethical issues in analytics, and today's talk is on when analytics does not work. Just a preliminary before we get into this: I know that producing a whole bunch of videos and calling it the course isn't ideal pedagogy.

And it's important to understand that these videos are not the course. These videos are things that I'm doing to create some content that we can work around. And as well, I'm recording these videos in order to create an audio track, which I can convert into text, which gives me a body of textual material to work from, to eventually create a book out of this.

But the course itself is the graph that we've produced in the website that you've been working on. And it's important to understand that what makes this course isn't this content. What makes this course are your activities around this content. I will say, you know, I recognize as well,

I'm the first to recognize, this could be the world's most boring course, because, you know, in module one and module two, we went through a whole bunch of applications of AI and analytics; in module three, we're going through a whole bunch of issues; and in module four,

we're going to do ethical code after ethical code. I might change up how I deliver the videos, but still, it's pretty dry content. And the idea here is to give ourselves the necessary layers of, I won't say knowledge, but layers of data or information that we can draw inferences from. One of the problems that I've found in the work that's being done on ethics and analytics is that this basic work isn't being done.

People are beginning from the point where their intuitions leave off about what ethical issues there are, what applications there are. And as a result, they're getting a very narrow intersection of applications, issues, and therefore potential solutions. We're drawing this map. Drawing maps is boring, sorry, but if we don't do this work, we're not able to do the really interesting work that takes place in the second half of the course. Anyhow.

Having said that, let's continue with when analytics does not work. So let's face it: learning analytics is complex. We're working with complex technologies, new technologies that we don't have a whole lot of experience working with. It's difficult to master. It would be difficult to master even without the technologies, because it's based on the aggregation of big data and statistics and all the rest of it. If you're going to build these systems from scratch, you need advanced mathematics.

Who knew that that would be useful? And not necessarily the mathematics that they taught you in school, either. There's an awful lot of work done with matrices in analytics, and I don't know about you, but they never taught us about matrices at all in 13 years of mathematics education,

when I went to school. There are many sources of error, many sources of omission, and any of these are places where analytics can go wrong and create an ethical issue. If we take a quick overall look at some of the things that can happen: there are difficulties with the training data, difficulties with the model, or the collection of links,

the structure of the network that's produced as a result, attacks on real-world data, stealing of models, etc. There are all kinds of places that analytics can fail. So let's look at some. Error, just simple plain error, is probably the least talked about, but most significant, source of failure of learning analytics.

There are all kinds of ways you can make errors. The data might simply be wrong. There are many ways data can be wrong: it can be mistranscribed, people might put in 2004 instead of 2014, they might spell people's names wrong, dates wrong, addresses wrong. All kinds of ways it can be mistranscribed. In this course, I'm using automated transcription to create text.
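Many of these transcription errors are catchable with simple validation rules before the data ever reaches an analytics engine. This is a minimal sketch with hypothetical field names and ranges, not a real pipeline (real systems would use a schema validation library):

```python
# Sketch of simple validation checks that catch common transcription errors.
# The field names and the plausible year range are invented for illustration.
def validate_record(record, year_min=2005, year_max=2025):
    problems = []
    year = record.get("enrolment_year")
    # a year outside the plausible range is flagged, e.g. 2004 typed for 2014
    if not isinstance(year, int) or not (year_min <= year <= year_max):
        problems.append(f"suspect year: {year!r}")
    if not record.get("name", "").strip():
        problems.append("missing name")
    return problems

print(validate_record({"name": "Bob", "enrolment_year": 2004}))
```

Checks like these don't prove the data is right, of course; they only catch the values that can't be right, which is why error remains such a persistent source of failure downstream.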

Automated transcription is a source of error, and if I use that text as input to an analytics engine, I'm creating error. Predictions may be incorrect. Most analytics, or most predictive analytics, is based on regression, which means drawing a line through the data. We should know that these lines are not always predictive.
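The "line through the data" can be made concrete with a minimal least-squares fit, using nothing beyond plain Python. The sketch below shows how a regression line fitted to data that isn't actually linear extrapolates badly outside the observed range:

```python
# Minimal least-squares line fit ("drawing a line through the data"),
# illustrating how a regression line can mispredict: the data here follows
# y = x**2, but the fitted line extrapolates badly beyond the observed range.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx           # (slope, intercept)

xs = [1, 2, 3, 4]
ys = [x * x for x in xs]                    # 1, 4, 9, 16: not linear at all
m, b = fit_line(xs, ys)
print(m * 10 + b, "vs actual", 10 * 10)     # line predicts 45.0; truth is 100
```

The fit looks reasonable on the observed points, which is exactly what makes this failure mode dangerous: the error only appears when the model is asked about cases unlike its training data.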

There may be poor implementation: the wrong algorithms are used, the wrong kinds of algorithms are used, or they're organized in the wrong way. Even more, there are different meanings of "correct". What is the correct interpretation of a line on a graph? What is the correct data to use in order to make the sort of inference that you want? When errors are made, who's accountable? Is it the person who collected the data, the person who wrote the software, the person who implements it in practice? And, you know, this will come up again: how do you correct errors in an analytics system? There's often no way to retract or change errors that have been made, because you don't know where they were made.

That's just one kind of ethical issue. All of these can raise ethical issues, because if you take any of these errors and then you apply them to a real-world situation, you may well be causing much more harm than good. The second aspect in which analytics can fail is with respect to reliability. Again, we can look at the data that goes into a learning analytics system, and right off the bat, you want reliable data, as opposed to, for example, suspicion, rumor, gossip, and unreliable evidence.

Imagine we took the collected speeches of Donald Trump as input for our analytics system. What kind of result would that produce? If you have reliability, you're protecting your system from accidental inconsistency, some data saying the temperature is five degrees and some data saying the temperature is eight degrees, and you're also protecting it from deliberate manipulation.

There are different kinds of reliability, and I've illustrated them here on the slide. One type is inter-rater reliability, such that when two people do the same thing, you get the same data. Test-retest reliability: if one person does something at one point and does the same thing at a later point, you get the same data.

That's a type of reliability. Parallel forms reliability is very similar: if you do something once and you do something twice, the results of version A and version B are the same. And then internal consistency reliability: the overall approach is such that if you have one attribute, and you have another attribute, and you have another attribute, and they're all produced in the same way, then they're all the same attribute.
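The simplest of these measures, raw agreement between two raters, can be sketched in a few lines. Real reliability studies would usually correct for chance agreement with a statistic like Cohen's kappa; this sketch just shows the underlying intuition:

```python
# Sketch: the simplest inter-rater reliability measure, percent agreement.
# Two raters code the same items, and we ask how often their codes match.
# (Chance-corrected statistics like Cohen's kappa refine this idea.)
def percent_agreement(rater_a, rater_b):
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

rater_a = ["pass", "pass", "fail", "pass", "fail"]
rater_b = ["pass", "fail", "fail", "pass", "fail"]
print(percent_agreement(rater_a, rater_b))   # they agree on 4 of 5 items
```

If two coders score the same student work and only agree 80% of the time, any analytics built on those codes inherits that 20% disagreement as noise.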

So you can see, there are many different ways that analytics can go wrong, you know? And it just takes a simple coding error. And believe me, I know: you have two forms for input data, because you're getting input data from different sources, and you code one slightly differently than you code the other, and that creates a reliability issue.

Because now, even if it's the same data, it's coming in in different forms, it might look different in the database, and you've built in a source of error. Consistency failure: consistency, basically, well, there are different ways to look at it. The definition here is when the state recorded by one part of the network is different from the state recorded by other parts of the network.

A lot of analytics systems are based on distributed systems. It's not one big central database; you have a variety of different databases in a distributed system. One of the key challenges is ensuring that all of these different databases end up reporting the same thing: you don't want one database saying Bob is 40 years old and another database saying Bob is 42 years old.
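A minimal consistency check along these lines might look like the following sketch (the record layout is hypothetical): before analytics runs, flag every field on which the replicas disagree.

```python
# Sketch of a consistency check across replicas of a distributed store.
# The record layout is invented for illustration. Before analytics runs,
# we flag any field where the replicas disagree, e.g. one replica saying
# Bob is 40 and another saying Bob is 42.
def find_inconsistencies(replicas):
    keys = set().union(*(r.keys() for r in replicas))
    return {k for k in keys
            if len({r.get(k) for r in replicas}) > 1}

db1 = {"name": "Bob", "age": 40}
db2 = {"name": "Bob", "age": 42}
print(find_inconsistencies([db1, db2]))   # {'age'}
```

Detecting the disagreement is the easy part; deciding which value is true, the reconciliation problem, is the genuinely hard part for distributed systems.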

Those two differences have to in some way be reconciled. And that's a hard problem, like a really hard problem, for distributed systems. It's a hard problem even in databases generally, which is why an entire theory called database normalization was created, and the concept of a single source of truth was created.

But sometimes you can't normalize a database. Sometimes you can't have a single source of truth, and in that case, you have to be checking for consistency. And if you're not, well, there's your source of error. Bias is probably the most talked about source of error in analytics. The problem of bias pervades AI and analytics.

It can be in the data itself, it can be in the collection of the data, the management of the data, in the analysis or interpretation of the data, or in the application of that interpretation. We all know the story, and it'll come up again, of Tay, the Microsoft bot that became very racist because the input was racist.

We've heard of cases where bias in the sampling resulted in bias in the enforcement of the law. For example, a sample saying that black people in a certain district are more likely to commit crimes, when it turns out that black people in that district were more likely to be policed, and more likely to be accused of crimes, because of racism on the part of the police force.
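The loop in that example can be sketched as a toy simulation with invented numbers: two districts with identical true crime rates, where recorded crime scales with how heavily a district is policed.

```python
# Toy simulation (invented numbers) of the policing feedback loop described
# above: two districts with the SAME true crime rate, but district A starts
# out with twice the patrols, so twice as many crimes are *recorded* there.
# Reallocating police according to recorded crime then locks the bias in:
# the data appears to confirm that A is "more criminal", indefinitely.
TRUE_RATE = 0.05                       # identical in both districts

patrols = {"A": 200, "B": 100}         # initial bias: A is over-policed
for _ in range(3):
    # recorded crime scales with how hard you look, not with true crime:
    recorded = {d: p * TRUE_RATE for d, p in patrols.items()}
    total = sum(recorded.values())
    # reallocate the 300 patrols proportionally to *recorded* crime:
    patrols = {d: 300 * c / total for d, c in recorded.items()}

print(round(patrols["A"]))             # the 2:1 bias persists round after round
```

Nothing in the loop ever measures the true rate, so the system has no way to discover that its allocation is unjustified: the biased sample is self-confirming.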

But once this gets taken as data into the system and then applied, your system is going to tell you that these black people in this place are criminals. Well, they're not. They're the victims of racism. And that's why bias is such a persistent and pervasive issue in analytics. We'll talk about it more through the course. Here, we're just flagging it as one of many sources of error in such systems. Misinterpretation: I mentioned this a bit earlier. Analytics engines don't know what they're watching, and so when they draw conclusions, for example, when they identify entities in the data, there is a persistent possibility that they will misinterpret

what that data is. I've put down here a famous illustration called the duck-rabbit. And for those of you who can't see the image, it's a drawing, and when you look at it, you can see, on the one hand, a duck, and you have the two long parts of its bill,

and then its eye, and so on. Or, if you shift your perspective ever so slightly, it looks like a rabbit, and the long things are actually its ears, and it's actually looking the other way from the duck. Now, we can shift back and forth between these perspectives fairly easily as humans, and AI can't.

And in tests, we found that the AI thought it was definitely, absolutely a duck, or maybe a goose, waterfowl of some sort, and the possibility that it's a rabbit just never occurred to it. So that's an example of misinterpretation, and it's a persistent problem with artificial intelligence. Distortion is another source of error

in AI. We see it often in its effects. It's well known that people can be gradually led into supporting more and more extreme views. This is a well-known side effect of recommendation engines, which we talked about in a previous section. And it's well known that when people have taken a position on an issue, they will, when questioned, entrench their views, interpret evidence in favor of their views, see the world from the perspective of their views. And this leads to a hardening of position, and sometimes a radicalization of position. Now, the same thing that can happen to a human in this way can happen to an artificial intelligence, where once it decides that it's going to lean a certain way, it just keeps going that way.

That's what happened with the duck-rabbit thing, right? But it doesn't just lean that way. It creates a feedback loop, where now it begins to interpret everything as evidence for leaning that way, even if it's not good evidence for leaning that way. And so it goes further and further into that position,

thereby not just misrepresenting the phenomena but actually distorting the phenomena. So this is distortion, and again, here we have the instance of Tay, which became a racist bot very quickly, simply by taking this input and then expanding on it and building on it. Bad pedagogy: this one is specific to learning analytics, but if you think about it, all the ways that an artificial intelligence can support pedagogy are also ways AIs

can support bad pedagogy, if it's poorly applied. So, the good kinds of things, and I've got a list of them here: personalized learning, adaptive group formation, 24/7 response, virtual reality learning for all, new methods of teaching, assistance for teachers, or increasing tech experience for students. These are all good things, but these can be misapplied. Personalized and customized learning can descend into stereotypes.

There is, for example, right now, a huge debate about whether learning styles are a thing. Well, you can use analytics to detect learning styles and then use that to shape the pedagogy. According to the people who say learning styles are a myth, this would create bad pedagogy.

Same with group formation. If you do it badly, you'll end up with groups with one person. If you do it badly, you'll end up with groups where the people all come from the same place, or all have the same background, etc., instead of the desired diversity in groups that you might like. New methods of teaching: again, garbage in, garbage out, right?

If the new methods of teaching that you're using the AI for are not good methods of teaching, the AI is simply going to amplify and implement these bad methods. And so on; we could go through this entire list. Irrelevance: this is kind of an interesting one. Imagine the scenario in which AI produces no positive learning outcome,

or perhaps outcomes of minimal value. Now the question becomes: is it ethical to spend all of this time and money developing AI solutions and applying AI solutions when you're not really getting any benefit? And this isn't just idle thinking. There's a reference here that shows that there are significant negative impacts of AI.

And if we look at this as measured against the UNESCO sustainable development goals, and if you look at education specifically, the ratio is roughly four to three in favor, which means for every four positive benefits, there are three negative impacts. And that's pretty much a saw-off. And the use of AI could have significant other negative impacts, you know, cause considerable other harms, I won't say "negative benefits",

that's a terrible way of putting it. You know, we've talked about the use of electricity and so on, and other people have talked about the use of electricity, the environmental impact, the dehumanizing impact, all of these counting against the positive pedagogical output. So those are places where artificial intelligence can be in error. You may think of more, and if you do think of more, feel free to submit them. Just go to the all issues page in module three, and you'll find that there's a form where you can submit your own issue, and you can categorize it as an instance where artificial intelligence does not work. But I think it should be clear.

You know, again, everybody talks about bias, and yes, bias is a source of error for artificial intelligence, a significant one, and the source of a lot of grief and heartache, but it's only one of many. There are many ways that analytics and artificial intelligence can introduce error into our application of analytics, and therefore cause harm, by misrepresenting the environment that they've been entrusted not just with representing, but with understanding, and with identifying best practices and good applications in.

So that's it for this video. I'm Stephen Downes. We've still got a few more videos in this module, but I hope you enjoyed this one as much as you can enjoy a list of errors, and I'll see you soon.

Module 3 - Discussion

Transcript of Module 3 - Discussion

Unedited Google recorder transcript from audio.

Now, is everything running? Okay, so I'll keep an eye on it in case we get more people joining us; you never know, it could happen. We've had a high of three participants now in this course during the online discussions, so it's getting better and better as we go along. So, for those of you who are watching the video or listening on audio, this is the discussion session for module three of Ethics, Analytics, and the Duty of Care.

I'm Stephen Downes. I'm joined by Sherita and Mark, who have been stalwart attendees throughout so far, and quite frankly have as much to do with authoring this course as I do, because without their contributions, this would be a very different looking course.

So, not really sure where to start. There were two tasks in this week's, this week's edition, I guess, of the course. The first, and I saw a bunch of things that were added, was to add examples or links to the ethical issues. And actually, I found myself having some ethical issues before even this particular discussion, because, you know, I've been doing the slides with the audio, which is taking me longer than I thought it would. But that's okay, because I'm still getting good content out of it, at least to my mind. But I was doing the section on bad actors.

And so part of what happens when I do these slides is I look up more resources, because I don't have enough already. I was looking at implementation, and I hit on a paper. And what was it called? I forgot what it was called. I've got a good memory, but it doesn't last very long.

Oh, hang on, it's in this list here, just a moment. So this is what happens when you're not ready. It was called "AI-enabled future crime". It's from the journal Crime Science, and it's pretty comprehensive. So I'm putting a link to that article in today's newsletter, so you'll be able to see the link. And actually, so you don't have to leave,

I'll put it right in the chat here. I quite liked the article. Now, a lot of the crime goes beyond the scope of this course, although, you know, pretty much any crime could fit into the scope of this course, so I wasn't really sure where to draw the line. Anyhow, there it is,

in the chat, so have a look if you like. And so I thought I'd share it. And I'll be doing that throughout the course, adding more posts to the newsletter that I find interesting. And if you guys find anything that you think I should share, send me a note, because, you know, there are good resources, and resources that aren't necessarily apparent, in the newsletter.

So, did you find anything of interest in adding the examples to the issues? Well, I did, I found it quite interesting. I found I was going down a rabbit hole, because there are so many things that can pop up, you know, even on page one, if you're scrolling. And the last thing that I put in, which was just a few minutes ago, was about the Tor browser, which, you know, I've used before, you know, for whatever.

Yeah. But what I found interesting was, when I thought about it, I thought about good possibilities for Tor. Yeah. And then I thought about bad possibilities, and, you know, the gray area in between, such as, you know, pirating a movie, right, where you can protect yourself using a number of ways.

Not that I've ever done that, you know. But I found that really, really interesting. So I thought, okay, I'm going to go down all of these things, and get lost down a rabbit hole. It's fun. Yeah, that keeps happening to me. I couldn't stop reading and studying this crime science thing, really.

That sounds fascinating. Yeah, it was pretty good. How about you, Mark? I'm good. Yeah, I have that same rabbit hole problem. Yeah, but this week I spent most of my time in a different rabbit hole. I did add some resources, but I haven't gotten to the linking part. And that's fine, because I'm sort of laying back, and I read a couple of the items first.

I find it hard to pronounce it, again. Yeah, and I'm just trying to wrap my head around, I mean, I get the concept of the way people do it, but the actual doing of it, and the purpose of the knowing of it, and doing it purposefully. So that's been my hole there.

I'm in another standard online college class. Yeah, and it was something at that price, which is unusual, but it was a good one, one of those kinds of openings to kind of do what you want. And, of course, that's where I'm at.

Is that so? Yeah. So, how did this course compare with that course? Just, what was your experience? And I know this is a much smaller course, but, I mean, just generally. Yeah. So this is actually one of my areas of study, online instruction, and particularly learning management systems.

I'm actually certified in three of them, and this college — a state university in Minneapolis, Minnesota — uses a fourth one, Brightspace D2L, which is not the best system, and then they don't support their teachers. And my instructor — you can tell she's not been properly trained, and she never — well, you see this a lot.

And, you know, so to compare: navigating your course is no more difficult than navigating hers — and that doesn't mean yours is easy — but she's in probably the most expensive learning management system. And so, in my mind, as a student, I should be able to click forward and just work through the materials.

She's posted videos and explanations — you know, you might need to jump out for something — but I should be able to just click through each week. Yeah. And so it's actually structured quite a bit like yours, in that you have to keep returning to the beginning and then branch from there, which is fine.

I mean, that's standard. But from a user experience — and I always look at it from the student user's side — yeah, I've actually had that experience a few times, where you just click straight through the course. Some courses sort of break into pieces where you can get on a roll and click through the material, but never through 14 weeks, and that, to me, is, you know, the ideal.

It's probably unachievable, but to me, that's how a course should potentially be. And then, what your course has is — well, one thing it has is, first of all, your course has more agency, and everyone is an equal agent. And that's an ideal in higher education, as far as I'm concerned.

Yeah, I should say — I mean, there are certain things — but, as you well know, you know exactly who I'm talking about; we won't name them by name. Yeah. We're talking about a North American thing here. And then, of course — I lost the word there. Oh, and your course does not hide where the course is going.

And this is my biggest complaint with most online courses: releasing each unit at a certain time. And that is just not necessary, because sometimes we get behind, sometimes we have an opportunity to work ahead, and, you know, life does not run on the designer's timetable. And then, in the case of this course —

If it's not up yet, it's because it doesn't exist yet. Yeah. But, I mean, anybody can click in and see where we're going. Yeah, yeah — and the times, yeah. That's, you know — yeah, but at least the map is there. Yeah. And in that regard, even the most highly rated online courses —

They refuse to show you. Now, I don't know — strange. Yeah, especially since they're accredited. So, I mean, you know, there's the course description in the catalog that the faculty has to work with — and how many students ever see that? Not many, because of how it's buried. Yeah.

Yeah, you know, particularly what I would respond to is, you're saying that the person who's teaching the course, you know, is tenured. Yet that person may not have had very much training on how to put a course together, or how to teach online, and this is very prevalent in universities.

I mean, even back when we taught in a university — who taught you to be a teacher? Yeah, no one, no one. Yeah. So, a really short story. One of the revelations of my college career: teaching after the great recession of 2008, I was once in a shared governance meeting where the chancellor and the faculty were going at it.

It was a four-campus community college district, so we had a chancellor, and college presidents, and faculty. And the chancellor, who had taught K-12 in previous decades and then got a law degree, was in this confrontation with a particularly radical faculty member. And the chancellor said to this member, "Unlike you all, I have a teaching certificate." And I was just like, are you kidding me? Kindergarten teachers have a teaching certificate. But, you know, that was a revelation.

And it's an issue that's been raised before — by, for example, Tony Bates, who has argued, at length, that all college and university professors should have, you know, required courses in how to teach, ideally as part of their PhD. I have mixed feelings about that, because somebody becomes a professor of physics not so much because they want to teach physics, but because they want to do physics research, and so, you know, it sort of goes against good pedagogy to force somebody to learn something —

— that they don't want to learn, at least to my mind. Even though we know it's a requirement of the job that they actually know how to do this sort of stuff. So I have mixed feelings on that. On the other hand — yeah, I would wonder why there wouldn't be a practicum included before setting a researcher loose in a classroom.

We don't let doctors come out of their classroom and straight into the operating room, you know. Yeah, or airline pilots out of the simulator. Yeah. And, on that account, I would argue that teaching is a deeply serious professional craft. I'm thinking, though, that a professor, a researcher in some subject, probably ought to be viewed as a resource, rather than as the designer of learning, because they're not going to pick up how to do it in just a few courses.

Anyway, you know, I think the analogy might be: we don't want aircraft engineers to pilot aircraft, not because they haven't been trained in how to pilot aircraft, but because, really, that's not what they do. They design aircraft; they don't pilot them. We have separate people to pilot them, and these separate people need to know something about the airplane, but they don't need to know how to build one.

How's that for an analogy? Does that work? Yeah, well, it reminds me of — it's probably apocryphal, but apparently you used to have to be able to build a car to get a driver's licence in Russia. Again, I still don't believe that was true, but that was the story about the Soviet Union: that in the Soviet Union, you had to demonstrate mechanical capability before you could drive.

I can believe part of it — the mechanical capabilities part — because they didn't want you abandoning the car in the middle of the road just because you couldn't fix it. I think what you just said, though, points to my — I've abandoned the word "favorite," you know, you can only use it so much — the importance of mediation.

Yeah. And this refers to our conversation at a previous meeting: that if you have a learning technologist or learning designer working in tandem with a subject matter expert — yeah — then you're going to get a good course. Yeah. The students are going to get better learning —

— outcomes, the institution is going to have a better reputation — you know, you have a better product. Yeah. And a better reputation means enrollment. So I think we're headed in that direction, except for the cost. Yeah, that's probably true. Yeah. And yet — now that we've hired, you know, ten times the administrators we used to have decades ago —

Those people should be looking at cutting costs in other ways than reducing faculty to find their resources. Yeah — cutting themselves out, and putting those resources into increased support for that. So the holy grail, to bring us back to our topic, of all of this is supposed to be artificial intelligence — but it's not obvious that this is easily possible.

Interestingly, one of the bad actor things that I was writing up just before this session was misrepresentation, where you have a snake oil salesman basically coming along and saying, "AI does all this, you should invest in that." And, you know, I can think of some of these LMS systems that bought into some of these AI systems and made their inventors rich, but didn't really add very much capacity to their LMS.

That bothers me. So — but it's interesting. That characterization makes me think: by adding AI to a learning designer and a subject matter expert, you would have your trinity. Yes, in this education setting. So, particularly — this is, yeah, where people are not gathering in the world anymore around the fire, but around media — perhaps those three elements would be the best.

I would be interested in the debate between those three elements, because each of those elements, in a way, has a different purpose, and different lenses on how they see the task. Here I am referring to artificial intelligence as an entity — you know, it's okay, it's 2021, you can do that.

Yeah, you know, if that entity has a perspective and a lens — the, you know, instructional designer has a perspective and a lens; they may never have taught anybody anything, and that comes out when they try to teach the subject expert how to use the technology. So I'd love to see the debate.

I think it would be a great triumvirate for us. Yeah, the debate would be really interesting. You know, the learning designer may say, "We don't need that, we don't need that, we can make this happen like this"; and the AI may say, "But I can predict that X will happen"; and, you know, the teacher — or the prof — might say, "You have no idea where I could go with this."

No. But if it was networked, interactive, with feedback loops, I think that trinity could really create a new way of doing learning. I think it would be extremely interesting. And I'm an advocate for another piece: putting in the student's agency. Oh yes, yeah. So then, yes — a quadrumvirate, then? Oh yes. It's no longer the holy trinity.

Yeah, exactly. Yeah. And then, you know, to be able to — and that's where the institution scores, right? So maybe the institution's primary focus isn't mowing the lawns, or even the pool cleaning; maybe the institution's primary role should be to hold the space where something like this can assemble, reflect, and improve, and grow, you know.

So do you want the AI to be a participant in the discussion or a moderator? How about we have two? We have two: one's a moderator, okay? And one is a Siri — "What do you think about this?" Yeah, like, just have a flash of irrelevant responses coming from Siri: "The temperature today is 14."

Yeah. "What's the weather?" — whatever it is — any ideas to improve its responses. Yeah. Not that; this one. Yeah, the other thing — and this is part of the reason why I'm doing everything in this course myself, even though it's stupid — is actually making the decisions. When you have multiple people, you have multiple people having a say, and I've worked with teams, both of developers and instructors and all of that.

And here I am putting on my professor hat, even though I'm not one — you know, I've got a pretty good idea of where I want this to go. And most of my experience working with these teams is failing to convince them that it should go in that direction. And now I can imagine having to argue with an AI and being overruled.

Yeah, but there might have to be some rules and routines. Yeah, there are plenty of existing rules in education. So, again, back to accreditation — and, of course, as professors, we play along with them. And, you know, I can't think of the name of that document — the document an instructor is supposed to conform to, that was submitted for accreditation of —

— that course. It has a name I can't remember. All accredited courses have to claim that they conform to the intentions of the outcomes and the accreditation paperwork for that course. And I never did that — I never did that. Because, well, yes, because by the time a faculty member is assigned the course, it's forgotten that the document even exists.

It's in that space between the institution and — yeah, it's in that space — and professors, yeah, never hear about that part of the system, and of course they would view with suspicion the suggestion that they conform to it. Yeah. But the person doing the accreditation has to be able to attend to questions about the presentation.

I've been in some of those accreditation presentations, right here. My experience is that I developed four courses from scratch, and I just had to put in certain things, like: you can't have, you know, all the grades happening at the end. Okay. That's sensible, 100%. You know, it's really little bits and pieces.

You know, you provide a syllabus the first time you do it — you'll always change that, unit by unit — and you put all of this stuff in front of, you know, an advisory committee, and they rubber-stamp it. I never — I mean, correct. Yeah, I know. See, I sat on a curriculum committee once, as a student representative.

Yeah, you know, and at the time, you know — and most of us still — yeah. But it occurs to me also: I'm American; you're both Canadians, right? And so I'm speaking about an American experience, in California, right? So that's a particular accreditation agency. Yeah. So, I'm guessing that it's similar for you, because your institutions are also accredited, but I'm not aware of how your accreditation agencies are structured, whether they're different or the same as ours.

We have regional accreditations. Yeah — we do too, I think; we do it, I imagine, at the provincial level, you know? Yes. And there are also a couple now, separately, since the federal government has gotten involved — for the longest time the federal government was completely out of this business.

But they have become involved in it. So there are a couple now of national ones, which are particularly handy for online. Again, I think it's going to be a good thing, and I think it can be a bad thing too, because once you get the accreditation rules too tight, then your ability to change your course, or to do something totally radical — like, "I'm not going to grade; you know what I'm going to do? I'm going to offer you all an A, and the people who really move forward —"

"— will get the A-plus." You know, you're not allowed to do that anymore. Now, I did do that. So, you know, an accreditor would basically tell me, "No, you're not allowed." Yeah. And yet that is a better pedagogy than having, you know, flash tests — it's so much better than clickers.

Oh, those — they were everywhere. And so that's my experience. So, again, is that a bad thing or a good thing? Yeah, right. Well, imagine — one of the tensions here is between what we're doing, which has been open, and the typical over-structured situation, which is basically how it's operating these days, especially with these top-down administrations and all the tasks and the course data.

And so, what we're searching for, I think, is a minimal structure that allows more agency, more community, and yet, you know, offers some structure — because when I move from this class to a different, completely structured class, the overlap is going to be very small, right? Right.

And so what we're searching for here is some sort of open framework for a discipline — whether there's a subject matter expert or not — that anticipates the way the next course will go, based on this loose thing around the existing structure. And in my ideal world, we wouldn't actually divide all of this by courses.

Anyway — the idea is that professors would be participating in this overall giant graph, and then people who are studying would just go from place to place on that graph. And there might be events happening at different times, and so on. But, you know, you wouldn't be locked into a series of events or anything like that.

I don't know if that would be better, but that's kind of what I would prefer. But then, I really fall on the no-rules side of things. So, the other thing that I would say — again, this has been off topic — is that teaching young children, and then high school students, and then adults is really, really different, in terms of how the people approach what they learn, how they learn, etc. Once you get into adults — who are doing this because they have an interest, because they need a specific skill, because they need an answer for something that they're interested in —

— you have a totally different student. And one where, I think, the more open way of doing things is really better. Because, you know, adults don't learn what they don't want to learn, right? I mean, they don't learn what they don't want to — no, they can be really stubborn about it.

Yeah, and I still have some mandatory courses that the institution wants me to take, and they're only half-hour courses, but they've been sitting in the hopper for several months now. Yeah — proving my point. Yeah. So, I use three different — there are three different types of learners, and I promote the last one.

I'll get to it in a second, but it hasn't caught on yet. But anyway — so, yeah, the first is developmental learning, and that's a perfect word for it. And whether it takes six years or eight years or twelve years, there's some discussion about that. I've argued that — you know, I'm old, my parents were old when they had me, and their parents were old when —

— they had them. So, going back, my grandparents had very little formal schooling. They weren't stupid, they weren't ignorant, they weren't uneducated. So, you know, the case can be made that we have almost too much. Anyway, I've got to let that go. Then there's the standard adult learning market, but it's more task-oriented, more careerist.

Yeah, you know — I mean, there's a long literature on that; it's almost all, you know, about that. But also — and I don't have a word for it — not everybody's interested in a skill; not everybody is interested in learning something for a career. And what happens with adults is, they're often interested in learning something just because they're interested in learning something, or they wouldn't be there.

And, yeah, you have that word — what is it? It comes from Australia, and it's only probably 20 years old, so it's a brand new word: heutagogy. Yeah — yes, I've seen that. Yes. And that is self-directed, that's about it. Yeah, that's different from andragogy, and from pedagogy — and those are the adult learning classes and second-language classes, and, you say, the high schools — and it's a perfectly good category.

In fact, I think that the modern university has somehow slipped out of andragogy into pedagogy. But heutagogy is where I'm hoping we go, and I think it's a great word, because self-directedness is the first part of it, and that's the part that can be open and creative and —

So how would AI fit in?

So, yeah, that's the question I've been thinking of as this conversation has continued: how would it fit into that? So let's go all the way back. Way back, when we were early in this discussion, talking about accreditation and all of that — imagine we had AIs that monitored the courses and decided whether they should be accredited or not accredited, depending on how they were being conducted.

What are your thoughts on that?

Who programmed the AI? That's my first question there. And who does it report to? I want to — another example I love bringing up in these conversations is the student union at Edinburgh University. Yeah. It's an actual corporation, okay? And they own their own building on campus; they have their own funding, their own industry, their own rules; and if the university crosses them, things happen.

Yeah — real, real consequences happen. They also started, I believe — what, the Fringe? Of all the things. Okay. Yeah, that came out of the student union. So if they owned it — if they programmed the AI, there we go, very high trust. Versus one purchased from a corporation by the university.

Yeah, yeah. And — exactly. And then, missing from that conversation, unfortunately — the way things are in the United States — is the faculty. Yeah. They used to run the universities; they no longer do. No, and that's true, I think, increasingly around the world. Yeah, yeah. And you see a lot of these applications of AI in learning directed toward leading people to specific outcomes — usually outcomes related to employment objectives that are determined by —

— well, basically, a mixture of the people in the community who hire people, and university administrations, and politicians. And it seems to me that that raises the issue of whether that really is the appropriate way to use an AI in learning. I don't know if I have it down as one of my issues yet, but, you know, just the idea of using the AI to direct people toward a certain end, in a system where really we'd like to promote autonomy of some sort, especially at the higher levels.

Yeah, it seems like we're evolving, more and more, toward the German model of sorting people into careers and trades — which used to exist in California, and it's almost like we might be back there now. Yeah — it comes back down to a discussion, or a debate, or —

Would there even be a discussion or a debate as to what the benefit would be to any of those entities? Yeah. And there would be some kind of tension there; there would have to be some kind of thing there. I always say that it comes down to governance — that's the question inside it — and it would be an ongoing debate.

Hopefully with annual reviews figured in. Oh — and does the AI do annual reviews? They get them; they implement theirs, too. Yeah. Theirs would be, you know, more detailed and very long. And the students, I would hope, would be the most critical, and the faculty would hopefully make the most salient points.

You'd see what's needed, what improvements. And then the administrators, you know, they'd have to be above it all, not misusing it — so that's part of it. So it would be — hmm. So we're getting interested in it, and I guess you'd have the community and the school group in there too. So, we'll see.

Okay, I don't know — we're going to find out; people will eventually find out how this goes. But there's a risk here, and it actually shows up in one of the issues that I was writing about, talking about, this week: when you let the AI make decisions — now, of course, there's always the requirement to have, as they say, humans in the loop, to, you know, affirm the judgment that the AI makes —

— but what happens is that it becomes kind of a rubber stamp thing. Sure, you have the meeting, but you generally tend to defer to the AI, because, you know, it's done all that work and you probably can't outsmart it. So you just rubber stamp it, and it ends up being the AI making all the decisions —

— anyway. Yeah, that's the bad black box problem. Yeah, yeah, that's the black box: of course this machine is smarter than me — hey, you know, ever try to argue with an ATM? So perhaps it's as pointless to argue with the monitoring AI. Yeah. I keep coming back to what you said just a few moments ago, and that had to do with power — who has the power, right?

And that — that could be a real slippery slope. Yeah, when you're dealing with, you know, an AI, that could be a real slippery slope. And would AI — would AI have more of an affinity with dictators, right? People who want total control? Okay, I don't know, maybe not, but whatever.

So, would AI have an affinity for that? Good question — we'll see. Yeah, but again, the AI that we've interacted with was all built by corporations. Yeah. Right — there's our primary source of, certainly, power. Yeah. Primarily — one could even say that certain of those corporations have the power, because they can also very easily influence legislatures.

Imagine if the only tool we had in our governance — imagine the only tool we had — sorry, Mark — was the Facebook algorithm, for choosing what we hear and what we can say. Yes. I think that points to multiple algorithms. Mm-hmm. And then, back to power: multiple centers of power — and here I go, being American — but checks, across three or four.

Exactly — not the best way to govern, but perhaps one way to make sure power is not centralized. Of course, the final end result of that is stalemate. Yeah, we're looking at it, yeah. But you're not the first to live that, either. In ancient Rome they had clubs that were designated by color: red —

— green; I think the other one was yellow. They eventually broke down into two factions — I believe they were blue and green — and they would have pitched battles in the streets of Rome. And, you know, the thing is, they weren't political agencies, particularly; they were sort of like glorified fan clubs. But the thing is, over time, what happened is that all of the balanced competition ended up being, like, perfectly even, and it all boiled down to just two sides — it didn't matter how many you started with, you ended up at two, and they were perfectly balanced, and you could never get a resolution. Or, you know, think of the stalemate in World War One, before new technology emerged.

We had two sides that were perfectly balanced, you know, and you couldn't get a resolution. And that's what happens. You know, it's good to have checks and balances, but when they're equal, over time they progress to a point where you can't get past disagreement. The US had three — has three.

So, in principle, two could always outvote the third. That's not really how it's worked out, because these three checks and balances have now become irrelevant. The major political agencies in the US are the two political parties, and the three branches have just become the arena in which those two political parties fight against each other.

And, again, never reach a resolution. That's my observation, anyway. And it's a problem in AI, too. You have competition — like, there are neural networks which have competing pools, right? You get your input into the system, and then you allocate that input into different pools of neurons, and they each do their own thing, and then they come back together and fight it out.

Right? And in one case — there's, I have the example of a duck-rabbit in one of the slides — in one case, one competitive pool just simply swamps out all other opposition. We might think about that as, you know, being similar to how the US reacted to communism, right?

"No, no, you can't have that, period; we'll move this way." Or, alternatively, you get a case where the AI can't reach a resolution — can't decide between duck or rabbit — and can't do anything. So it's a design issue. And, you know, it's a design issue in society, and it's also a design issue —

— I think, in AI. And it's not clear to me that the methods of resolving it are good methods of resolving it. I mean, they might be, but — what would be a method of resolving it? In terms of the AI, or us? I think — that's a good question.
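For what it's worth, the competing-pools behaviour described here can be sketched in a few lines. This is a toy mutual-inhibition network of my own, not code from the course slides; the pool names, parameters, and dynamics are all illustrative assumptions:

```python
# Toy winner-take-all network: two pools ("duck" and "rabbit") receive
# evidence, inhibit each other, and decay. Hypothetical illustration only.

def compete(duck_in, rabbit_in, inhibition=2.0, decay=1.0,
            dt=0.01, steps=5000):
    """Evolve two mutually inhibiting activations toward equilibrium."""
    duck = rabbit = 0.0
    for _ in range(steps):
        d = duck + dt * (duck_in - inhibition * rabbit - decay * duck)
        r = rabbit + dt * (rabbit_in - inhibition * duck - decay * rabbit)
        duck, rabbit = max(d, 0.0), max(r, 0.0)  # activations stay non-negative
    return duck, rabbit

# Slightly stronger "duck" evidence: the duck pool swamps the rabbit pool
# completely (the rabbit activation is driven to zero).
print(compete(1.1, 1.0))

# Perfectly balanced evidence: neither pool wins; the network sits at a
# stalemate in between and never commits to either interpretation.
print(compete(1.0, 1.0))
```

The two failure modes from the conversation fall straight out of the arithmetic: a small advantage plus strong inhibition yields total suppression of the loser, while exact symmetry yields the duck-rabbit deadlock where the network "can't do anything."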

I would almost say: not trying to resolve it, if I had to answer that. I mean, I think the problem happens when you try to force the AI into rule-like behavior — pick one alternative or the other, pick Republican or Democrat, duck or rabbit — and all of these problems are so complex that one or the other is —

— never going to be the correct answer. And sometimes one — sorry, but what if it's some other species of duck, or rabbit? Yeah, that's just called getting it wrong. Yeah, we're getting into dangerous territory there. Yeah, I was reading this morning about early AI not being able to distinguish between men and women.

Yeah — and on skin color; and apparently that's been solved, more or less. But then this brings me back to human agency. Yeah. Because I really believe in agency and creativity — because now we have people whom facial recognition will not be able to sort. Yeah — they have figured out a way to present themselves in a way that is not identifiable. And I don't know about you, but I'm online all the time.

Yeah. And so there's the creative response to the demand for binary conformity that AI might require — the human saying no. So — I love the term "wicked problems." Yeah. And that's where this course sits: inside of wicked problems. Yeah — there will be no resolution, and there will be no solution.

There will just be exploration — building this graph, seeing what it suggests, and a new iteration next year. And, yeah, that's why we're building that graph. I've got two things, just to bring us to a close. The first thing came up in my thinking as we talked about this: it's almost like contemporary applications of AI in learning are attempting to replicate —

— what Plato proposes in the Republic, where all of your attributes are identified by — in Plato's case, the philosopher king; in our case, an AI — and then you are placed into the right role in society, and that is your role, that is your lot in life. There have been a lot of objections to the Republic, as you can imagine, and I think similar objections come up when we try to use AI in that way — in order to sort, and catalog, and categorize, and allocate people to their places.

The second thing: I just wanted to open up, briefly, the graph exercise. And this is more for the people watching the video than for you guys — but I think it'll still be useful to you as well. So I just want to show how it works. That's what I've been waiting for.

Okay, good. So — sorry, one minute while I set it all up. Don't worry, this isn't something you can break, so, yeah, right. All right, so, okay, so let's go. You should be looking at — sorry, I'm bouncing around here, because the stupid sharing thing covers over my tabs.

There we go, right? Hey — you should be seeing the Ethics, Analytics and the Duty of Care course screen right now. Okay, good. So, I'm gonna go to "Ethical Issues," and I'm going to go to the appropriate task, "Add to the Graph." And we'll access — I shouldn't have had to make that extra click — but to access the graph tool, click here.

And so, here it is. Ignore that; ignore these — I need to take them off. But this is the way I load the graph right now; that's how it works. So this is more than I wanted to put in a single instance of the graph, but I'm just working with a default file for now — I'll change up this display a bit. The idea here is that we're looking at these applications, on the left, and then these ethical issues, on the right.

And the idea is to ask yourself: what applications raise what ethical issues? And so, suppose, for example, you think that dashboards raise the issue of content manipulation — they probably don't, but let's say you did, right? So just click on "dashboards," and the description comes up on the right, in case you need to refresh yourself; and then press the alt key, click your mouse, and drag; and then, once you've reached the other point, unclick, and you've drawn a line. That's tricky, because I always want to just drag, and then that's what happens to me.

I will probably change that once I figure out how, so you can just drag the line. But right now, if you just click and drag, you end up moving the things — so I'll probably flip those, so you have to alt-click to move the box, and just click and drag to draw the line.

But right now, that's how it works. So just draw your lines. So we've got: dashboards to content manipulation; plagiarism detection to fear and anxiety — I think that is a real one; plagiarism detection to privacy, generally — yeah, I think that's one. So, in the future, I'll just have a few on the left-hand side and a few on the right-hand side, so you're not facing this overwhelming, you know, graph of a hundred different elements on each side.

So, once you're done — once you're happy with the graph as you've drawn it — right-click, and that pops up this menu. Now, there is this "add a new item" — it will save the new item in the information that it sends to the server, but I don't do anything with that yet.

You can also wipe the graph clean — but then all you've done is wipe it clean. The real thing you want to do is click "export." Click export, and the system will take care of the rest for you. It'll tell you your graph was submitted successfully, and it'll list the things that you've added. Click okay, and you're done; it'll take you back to the task.

So that's how it works. Now, the way this is going to work — right now, what it does is it saves what you've done and adds it to the end of a file of links. So that's just building up. But this file of links will be fed back into the system —

So that when you click on, say (let's go back to applications of learning analytics), we can look at each of these applications. So, all applications. Let's pick one: plagiarism detection. I think this is the one we were using. Oh, we don't even have any articles; I thought I had an article in there, but I guess not.

But, in addition to articles and links, there would be another section underneath that would list the ethical issues that were raised. And it's not just me defining what issues are raised by plagiarism detection; it's all the participants in the course who are defining that, right? And so that builds this interrelationship, so that you can explore this issue and the other factors around it.

And as an aside, that's really why, in this course, you can't just go from the beginning all the way through to the end. Theoretically that's doable, and I've done that with some of these next/previous things, but because we're building this as a graph, it's the sort of thing where you can just wander through the graph.

There is no one path through the graph. Although what I'd like to do is create a range of different paths so you can experience this differently, or, you know, maybe do something like a click capture or whatever, so that the way you went through the course becomes a path for someone else.

Something like that. As well, a link that is created in this case between an application and an ethical issue might be selected once, or it might be selected a hundred times by different people, and that gives us the possibility of each connection, each link between things, having a weight. And we can use that in order to highlight the issues that are being selected most often, and, you know, to highlight parts of the course, or even just to highlight the display here.

We'd say, you know, well, pick maybe the five ethical issues that got the most links here, and that allows us to focus in on those. So that's how it works, and that's what it's intended to do. It's intended to be very easy for you but to do very powerful work.
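As a sketch of how that weighting could work, here's a minimal illustration. It assumes submitted graphs arrive as simple lists of application-issue pairs, which is a hypothetical format, not necessarily how the course actually stores them:

```python
from collections import Counter

# Each submitted graph is a list of (application, issue) links. The names
# and the data format here are hypothetical examples.
submissions = [
    [("dashboards", "content manipulation"),
     ("plagiarism detection", "fear and anxiety")],
    [("plagiarism detection", "fear and anxiety"),
     ("plagiarism detection", "privacy")],
    [("plagiarism detection", "privacy")],
]

# Weight each link by how many participants drew it.
weights = Counter(link for graph in submissions for link in graph)

def top_links(weights, n=5):
    """Return the n most-selected links, e.g. to emphasize in the display."""
    return weights.most_common(n)

for (app, issue), count in top_links(weights):
    print(f"{app} -> {issue}: selected {count} times")
```

The same counter could be rebuilt each time the file of links grows, so the display always reflects the current aggregate.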

Yeah, so just for my own clarification: the reason you said you can't break it is because you're aggregating everyone's separate graphs, right? Yes. So I just wanted to be clear about that. And then the wipe clear: does that just clear the connections you made, or does it clear the entire graph?

It clears the whole graph, and then you'd have to reload it. But nothing permanent happens when you wipe clear. Now, let me just check this. We'll go back to the activity. Oops.

So the way we're doing this is we're pulling a copy of the module from wherever it is, modifying it, and submitting it to the server? That's right. Yeah, this is just a copy; it doesn't matter what you do to this. So let's say I wipe it clean.

So that's all you've done. But now if I reload the page (so I'm reloading it here), there it is again. Also, I think, let me just check: I'll draw a line here and click on the line. All you can do there is change the color. I'd love to add a delete to that, but I don't think there is a delete at the moment.

No, there isn't, just change color, and even that doesn't work very well. Okay. So that's too bad, but it'd be nice to have an edit or delete on this. It's not a huge thing, and it's certainly doable; it's just figuring out how to write the code.

Did you ever figure out a way to zoom in and out? Right now it's all at one size. No, I did not. Yeah, that's a really good point; I talked about that somewhere, I think it was on Monday. Yeah, message us if anyone knows, because it seems to me there should be a way.

Yeah, that would be a lot better, for sure. And, you know, is this "help", maybe? No, all it does is load this intro again. Well, there's pan, drop a point, single items, wipe, export, color, drag. Yeah, I don't see anything that says change the size.

Yes, zoom. What we're looking for is a zoom. There's connect, drag, alt plus drag, color, wipe, adding items... no. Okay, yeah, so I don't see anything for scale, and knowing how this works on the back end, that could be a bit difficult, because it uses the web canvas element.

And so each one of these boxes is defined with an x-y location, and so to zoom in and out you'd have to change all of those x-y locations and resize the boxes. Now, the computer could do that, but if software to do that doesn't already exist, writing it would be a bit tricky.
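In principle the rescaling itself is simple. A minimal sketch (in Python rather than the JavaScript the canvas page presumably uses, and with a hypothetical box format):

```python
# A sketch of what zooming would require: every box on the canvas has an
# x-y location and a size, so a zoom is just a rescale of all of them
# around a fixed origin. The box record format here is hypothetical.
def zoom(boxes, factor, origin=(0, 0)):
    """Return boxes rescaled by `factor` around `origin`."""
    ox, oy = origin
    return [
        {
            "label": b["label"],
            "x": ox + (b["x"] - ox) * factor,
            "y": oy + (b["y"] - oy) * factor,
            "w": b["w"] * factor,
            "h": b["h"] * factor,
        }
        for b in boxes
    ]

boxes = [{"label": "dashboards", "x": 100, "y": 50, "w": 80, "h": 40}]
print(zoom(boxes, 0.5))  # halves positions and sizes
```

The hard part in practice isn't this arithmetic but hooking it into the canvas library's redraw and hit-testing, which is why a built-in zoom would be preferable.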

Well, how tricky? I can easily figure out how to make the boxes bigger and smaller. There may be a way; if not, I'll just make sure all of our graphs fit inside the box, and then you can change the size of the web page by holding control and using your mouse scroll wheel, which works pretty well. You're still only seeing that one part, though. Yeah. And then if you zoom it too big, you're sort of in trouble; you can't see the whole graph if the graph is bigger than the box. That's the problem.

Yeah. So I hope that has helped, and I hope that this is an easy but interesting exercise. That was the intent anyway, and something different that you don't normally do in a course.

So, any final words before we wrap up the module on ethical issues?

Okay, I'll just go back down to wherever. So there are going to be lots of rabbit holes in the next module. We're looking at ethical codes, and you'll be able to benefit from me having gone down a rabbit hole, dug up dozens and dozens and dozens of ethical codes, and run them through an analysis.

And yeah, so that'll be a bit of fun, I think. But we'll talk about that starting on Monday. All right, so I'm going to wrap up, only nine minutes late; that's not too bad. So see you all next week and have a good weekend.

Bad Actors

Transcript of Bad Actors

Unedited Google Recorder transcript from audio.

Hello and welcome to another edition of ethics analytics, and the duty of care. I'm Stephen Downes. I'm happy to welcome you to this session on bad actors. Part of module, three of the course, ethical issues in analytics. Now in this section on bad actors.

We're looking at people who use analytics and artificial intelligence for, well, as the name suggests, bad purposes. These may be people who use them for illegal purposes or immoral purposes, and thereby we see the intersection with ethics and analytics. It might seem a bit unusual to include bad actors in the discussion of ethical issues in analytics, because of course ethics are addressed to people who are not bad actors.

They're addressed to people who are trying to do good, or at least avoid doing bad, in their use of analytics and AI. However, as we'll see, many of the actions undertaken by bad actors using these technologies have implications for ethical individuals and wider society. And so, as a result, the actions of bad actors create issues that need to be addressed under the heading of ethical issues.

Generally, I also want to note that the definition of a bad actor can vary from culture to culture and from country to country; different cultures may think that different kinds of actions are bad. As well, sometimes an action that is bad for one country is good for another country, and of course in that case each of these two countries will view the bad or not-bad status of that person differently.

So we need to be careful when we talk about bad actors and understand that the phrasing "bad" represents an interpretation of their actions, and certainly not a statement about their ontological status as people. So we'll be focusing on bad actors in AI and analytics specifically; we're not looking at bad actors generally.

And in fact we'll be looking even more specifically at bad actors in AI and analytics in the area of learning and learning technology. Now, again, we'll see that there are cases where there can be bad actors and bad actions that might not seem directly related to education and educational technology, but pretty much anything bad that you can do with AI can be done in the context of learning and learning technology.

So what I'm going to do in this video, just like in many of the previous videos set up for this course, is go through a number of examples of the things that bad actors can do using this technology, and talk about some of the ethical implications, specifically with respect to learning and teaching.

So the first instance of bad acting is misrepresentation. What we have here is an unfortunately common case in learning technology, where the promoter or proprietor of some system misrepresents the capabilities of that system, typically by saying that it can do more than it does, although there's a subclass of instances where the system does something that they don't tell people it does, like, say, collect data for advertisers.

The more usual case, pretending that the system is able to do something that it cannot, happens especially in the field of learning analytics, where a vendor might say that the system is capable of accurately predicting whether someone will drop out of a course, or that the system is capable of effectively recommending learning resources for a person. This may or may not be the case. Very often these claims are presented without suitable evidence, or, in another sub-case, presented with fabricated or insufficient evidence. There's also a sub-case of misrepresentation where the proprietors of these AI and analytics systems use them in competitions in order to attempt to validate their claims, and they cheat at those competitions.

So there have been a number of cases where the proprietor has, for example, embedded data into the hardware or into the algorithm where it can't be detected, and then used that data in order to create more accurate than usual predictions; or perhaps the models have been pre-trained in some way, again in a way that can't be detected, with the result of producing more accurate results than expected. In fact, on the course web page there's an instance where Baidu was found to be cheating in some of its prediction tasks in these competitions and was banned from the competitions.

Another widespread use of AI is to promote conspiracy theories and we think of this as an instance of bad acting because of the damage that conspiracy theories can cause in society in general and in educational institutions in particular. Now, we need to be careful about how we define this.

And so I am drawing from this article published in Nature here: a conspiracy theorist is a person or group who promotes an alternative narrative alleging a coordinated campaign of disinformation, usually on the part of recognized authorities or institutions. In other words, what they're doing is attempting to get us to believe that the system is trying to fool us, trying to pull one over on us.

That the system, say, is rigged. Now, one thing to note about conspiracy theorists: they might not be wrong; there might actually be a conspiracy. So we need to be careful when we say that a conspiracy theorist is necessarily a bad actor. Certainly, from the perspective of these recognized authorities or institutions, a conspiracy theorist will be thought of as a bad actor.

But, you know, whether they are a bad actor depends on the context. Nonetheless, we know that conspiracy theorists can replicate and imitate analytical methods and dissemination, making it look like they're using analytics and AI in order to reach a conclusion, but perhaps not actually using it, perhaps improperly using it, perhaps appropriating somebody else's analytics and misrepresenting it in order to promote their conspiracy.

So these are all ways in which analytics can be used by conspiracy theorists to promote the idea that the authorities are lying to you.

Stalking is a prevalent concern in the world generally, and in the online world in particular, and has been the subject of fairly detailed analysis, although the use of AI and analytics to assist in stalking has not been nearly as well covered; at least, that was my result when I went looking for it. In any case, what happens in stalking is that an offender, for whatever motive, uses online or social media or other technology, including analytics or AI, to interact in some inappropriate way with a victim: first by following the victim and finding out about the victim, and then, secondly, by promoting or creating unwanted interaction or discourse with that victim. The victim may be identified by personality, by their attitude, or by their socialization, and it creates for them, you know, a barrier to their internet and social media use.

So the cyberstalking itself takes place in a social media or technological environment, and this environment can include a learning environment: cyberstalking does happen in LMSs, and it does happen in social networks that are used to support learning. For the victim there may be psychological, physiological, or social costs. Certainly their ability to freely use technology is impaired; they may incur financial costs; they may have to take legal recourse in order to block the offender. Meanwhile, the offender is creating fear, is continuing this behavior, often adapting to whatever measure the victim takes in order to prevent the stalking, and this creates the need for moderators or mediators.

First of all, they need a mechanism for detecting and tracking cyberstalking, and then, secondly, some capacity to intervene. The sorts of people who can intervene are perhaps civic stakeholders, legal authorities, or online platforms. This is a serious problem, and it's made worse by AI and analytics because they give advanced investigative power to everybody at low cost.

And of course some of these people will use it for this purpose.

Another example of an unethical use of AI and analytics is collusion. Now, usually when we think of collusion we're thinking of price fixing, and there is certainly evidence that the use of AI and analytics can result in price fixing. Indeed, the paper referenced here shows that algorithms consistently learn to charge supra-competitive prices without even communicating with each other. By supra-competitive prices, what we mean is prices that are higher than would otherwise be the case in a normal competitive environment.

And basically what's happening is that if a competitor lowers a price, their algorithm learns that they're punished for lowering that price, through a reduction in profits, or perhaps through the reaction of their competitor, who also lowers the price. So they're not getting any greater share of the marketplace, and they're receiving less money.

So the algorithm learns: don't lower the price. And indeed, not in interactions with the other algorithms, but in the environment in which these prices are lowered and raised, it learns to raise prices higher, because that will provoke the reactions on the part of the other AIs that are most beneficial to itself.
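The incentive at work can be sketched with a toy calculation. The demand numbers and the discount factor here are illustrative assumptions, not the referenced paper's model: undercutting a rival earns a one-round gain, but once the rival matches the lower price, both earn less from then on.

```python
# Toy payoffs: undercutting wins the whole market at the lower price;
# matching splits the market at the common price. The multipliers are
# illustrative assumptions.
def split_profit(price):
    return 0.6 * price      # per-round profit when both charge `price`

def undercut_profit(price):
    return 1.0 * price      # one-round profit from undercutting to `price`

def value_of_holding(high, delta=0.95):
    # Stay at the high price forever: split profit every round,
    # discounted at rate delta.
    return split_profit(high) / (1 - delta)

def value_of_cutting(low, delta=0.95):
    # One round of undercut gains, then the rival matches and both
    # split the market at the low price forever after.
    return undercut_profit(low) + delta / (1 - delta) * split_profit(low)

# A patient algorithm (high delta) finds that cutting doesn't pay:
print(value_of_holding(3))   # roughly 36
print(value_of_cutting(2))   # roughly 24.8
```

With a high discount factor the long-run value of holding the high price dominates, which is the "punishment" logic the algorithms pick up from experience; an impatient learner (low delta) would undercut instead.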

So they're not actually talking to each other; they're not colluding in the traditional sense, but they are learning from each other that they can both benefit if prices are higher. Now, collusion isn't limited to price fixing: there can be collusion over contract negotiations, there can be collusion over requests for proposals and other mechanisms for purchasing.

There can be collusion about political influence, policy development, and a range of other cases. Generally, when collusion happens, the AIs are learning to create greater benefit for the owner of the AI at the expense of either their clients specifically or wider society generally.

Another use of AI or analytics that can be considered an instance of bad acting is AI-enabled cheating.

Now, it's interesting: when I went looking for information on this, I found mostly information on using AI and analytics to prevent cheating, tons of resources on that, and we've looked at that application earlier in the course. I found very few cases where AI and analytics are actually used to promote cheating. But that said, I did find examples, and I found two specific types of examples.

In one case there's a system that used an AI to match students to an academic ghostwriter; the academic ghostwriter would write their assignments for them, and they, of course, would pay the ghostwriter. This is really hard for an institution to track down because, unlike with other kinds of (as they say) contract cheating, there isn't an example or an instance of the writing out there on the internet to compare with what the student has handed in.

So a system like Turnitin is simply not going to work. Additionally, if the same ghostwriter is used throughout the course, systems won't be able to detect any change in the way a person presents their written work. So this can make it very difficult to track down these instances of cheating.

The second case is using an AI to actually write the essay itself. On the slide here I actually have an advertisement for a company, Research AI, that will, quote, "start generating results and help you improve your essays," unquote. And it's a step-by-step guide to how to use an AI to write your essays for you.

So, obviously, the systems aren't great yet, but that said, they could even currently fool an instructor or a marker who wasn't looking closely at the content. I know that never happens, but if they didn't look closely at the content, it could fool them. And of course the development of AI is going in only one direction at the moment: it's getting better and better.

And so we can easily imagine much more sophisticated products coming out of these systems in the future.

Another type of bad acting with AI and analytics is audio and video impersonation. This is the famous "deep fakes" kind of model, where the AI is used to generate fake images or fake video. In the case of impersonations, the AI is actually using other data in order to impersonate a person, to make it seem, for example, that a person has said something that they didn't actually say, or done something that they didn't actually do. Of course, there's a wide range of purposes to which this can be put, including defeating authentication (you know, logins, things like that), cheating, misattribution of sources, and much more. Again, it can be hard to detect: there are anti-deep-fake AI systems that look at things like, for example, the way the eyes look in order to detect the impersonation, but as this technology gets more sophisticated, it becomes more difficult to identify the fake and distinguish it from the original. And this has a wider impact on the use of video for learning generally, because it undermines the trust that we have in photographs and visual imagery, and it makes it harder for us to accept what we see on video at face value, to accept that when they say that somebody said such and such, that somebody actually said such and such. And therefore it undermines our trust in digital media generally.

Now, this next application might not seem like an ethical issue for learning and teaching technology; nonetheless, I'm including it here because it demonstrates how some of these wider applications can apply in our more narrow circumstances. The category here is driverless vehicles as weapons. And sure, it's not an academic issue, but academic institutions, schools, colleges, universities, very often have a physical presence. And in the past this physical presence has been the subject of attacks. I think, you know, most naturally of the case of school shootings in the United States, but that's just the most recent kind of violence. We can think, for example, of the terrorist attack on a school in Russia, where many people were killed.

We can think of authoritarian regimes attacking and shutting down colleges and universities. And so we can picture, at least in our minds, the idea of an autonomous vehicle being used as a weapon at an institute of higher education. It might be quite simple, like, say, somebody using an autonomous car to drive into a crowd, or the autonomous vehicle might be equipped with bombs or weapons or whatever.

This is not hypothetical. We've already seen, not cars, but drones used as autonomous weapons. There's at least one documented case, in a conflict in Libya, where a drone was sent in on a, as they say, "fire and forget" mission. Now, there are two major ways that this can happen. The first major way is that the owner of the vehicle uses the vehicle as a weapon, but that's probably not going to be what ethical people and institutions do. The other way is for the vehicle to be hacked or otherwise misappropriated, and then used as an autonomous weapon. This is something that does impact ethical institutions and people, because the fact that their autonomous vehicle, whatever it is, could be used as a weapon creates an ethical concern around their ownership and use of that vehicle. Minimally, for example, people might say that they have an ethical responsibility to secure that vehicle and ensure that it's not taken over by hacking. This is the same sort of thinking that applies when people have computer systems: there's an ethical obligation to protect your own computer system from hacking, because your system might be used as the basis for a botnet, where the botnet sends spam messages or denial of service attacks to other people.

So there is a concern here, and this ethical concern affects ethical people.

Finally, tailored phishing. Now, a phishing message is a message, usually by email (though there are examples of text messages being used for phishing), that contains a link or attachment or something, and they're trying to get you to click on that link or attachment in order to induce you, perhaps, to give some information or to grant access to your computer system. And then this information or access will be used for unethical purposes, like maybe stealing your money or representing you as, maybe, a cosigner to someone's loan, whatever; there's a range of possibilities here. Spear phishing is a type of phishing that is personalized.

That is, the attack is sent to a specific individual: they're usually named, and the message may contain information about that person as part of its content. And because it's personalized, the person receiving the message is much more likely to believe that it's real, and therefore much more likely to respond.

Spear phishing, in other words, is more effective than plain ordinary phishing. Now, what researchers have found is that deep learning models, for example GPT-3 and other AI services, can be used to lower the barrier to spear phishing. Using these tools, people even without any coding skills can mount spear phishing attacks on a large number of individuals, greatly increasing the chances that they'll be successful.

So again, this is the sort of instance of a bad actor using an AI where there's an ethical implication for the non-bad actor, because it requires their cooperation in order to work: it requires access to their data, and it requires a mechanism whereby they can be fooled into clicking on these bad links, either because they're not aware, or they're not paying attention, whatever.

And this is something that can happen to almost anybody. I think it also creates ethical implications for organizations that run email services and messaging services. For example, I look at my email services: I have one from Google and one from my organization, and I find two very different types of content get through to me.

Google is pretty good at preventing spear phishing attacks. My own organization, rather less so, and I've had to report dozens and dozens of attempted attacks to our centralized computing services. So it does raise a question: how much responsibility do I have as an individual to report these? How much do I, or does my organization, have to prevent these?

And if one of these things happens, what are the ethical implications of that? So that wraps up our list of bad actors. I could probably have come up with more, and one of the references for this module is a reference to the use of AI in digital crime.

It's a fascinating read, and I do recommend that you read it. Bad actors themselves are not necessarily subject to ethical principles, or, more accurately, are not concerned about ethical principles, but the actions of bad actors have ripple effects, and these ripple effects do create ethical issues even for the most ethically minded user of learning and teaching technology.

So we've got two more sets of ethical issues to go, and I'll be getting to those within the next day or so. Then, after that, we've got the section on ethical codes, where the format of our videos is going to change a little bit and we'll be narrowing in more and more on the ethical content of this course.

So, that's it for now. Thanks for listening. For those of you who stuck with me through two previous attempts to record this video without sound, I thank you, and I'll see you next time. I'm Stephen Downes.

When It Is Fundamentally Dubious

Transcript of When It Is Fundamentally Dubious

This presentation looks at cases where the use of AI is fundamentally dubious. This includes cases where the consequences of misuse are very high, cases where there is the potential for feedback effects, cases where classification is used to infer agency, and cases where we don't know what the consequences may be.

 

Unedited Google Recorder transcription from audio.

First five minutes clipped...

 

There's a principle of justice here that suggests that people need to have actually committed the act in order to be held responsible for it. Now, it's not 100% true; you know, there are criminal sanctions for things like conspiracy and the like. Nonetheless, suggesting that somebody is going to be liable for punishment simply because of who they are or where they live is inherently problematic, and it's all the more problematic because of the possibility of feedback loops existing within the system. You begin predicting criminality by a certain group of people, and that results, not surprisingly, in focusing more policing resources on those people. But the very fact that you're focusing more police resources on those people means that you are more likely to catch them doing something wrong.
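A toy simulation can make the loop concrete. Here two districts have identical underlying offence rates, but patrol capacity is shifted each year toward the district that recorded more offences; all the numbers are illustrative assumptions:

```python
# Two districts with the SAME true offence rate. Recorded offences are
# proportional to patrol presence, and each year some patrol capacity is
# shifted toward the district that recorded more offences.
def simulate(years=10, true_rate=0.1, patrols=(51.0, 49.0), shift=5.0):
    patrols = list(patrols)
    for _ in range(years):
        recorded = [p * true_rate for p in patrols]
        hi = 0 if recorded[0] >= recorded[1] else 1
        lo = 1 - hi
        moved = min(shift, patrols[lo])  # reallocate toward the "hot spot"
        patrols[hi] += moved
        patrols[lo] -= moved
    return patrols, [p * true_rate for p in patrols]

patrols, recorded = simulate()
# A tiny initial imbalance (51 vs 49 patrol units) snowballs: district 0
# ends up with all the patrols and all the recorded offences, even though
# the true offence rates never differed.
print(patrols, recorded)
```

The point of the sketch is that the data, not the behaviour, diverges: the analytics then report a "high-crime" district that the allocation policy itself created.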

And voilà, you have "increased" criminality. The people themselves haven't done anything different from the people who live elsewhere, but they're being put under greater scrutiny. It's like if you put speed cameras in one part of the city and did not put them in the other part of the city: well, you're going to discover that all the speeding takes place in one part of the city, and when you take this data and put it into your analytics, obviously you're creating a false impression, and you're putting people at risk of being unfairly targeted and unfairly charged, when really they're no different from everyone else. So that's what we mean by fundamentally dubious.

Similarly, racial profiling. It is arguable, indeed I would argue, that there is no ethical application of analytics to identify specific races for special treatment. Now, the argument could be made to the contrary that, in order to achieve equity, it's necessary to identify systematically disadvantaged groups in order to provide the support and the protection that they need.

So, you know, this argument isn't going to be straightforwardly wrong, in the sense that it's not obvious that all cases of racial profiling are going to result in fundamentally unethical or dubious practices. Nonetheless, if the purpose of the racial profiling is to do anything other than benefit the people being profiled, then I think the application of AI in this case really is fundamentally dubious.

And again, it's similar to the predictive policing issue, where your predictions about a certain racial profile might create a feedback effect: you apply more scrutiny to them based on what they look like, and this greater scrutiny results in more frequent observations of the behavior that you're attempting to target.

Taking this same approach, but now applying big data and increasingly powerful analytics, results in something called identity graphs. The idea of an identity graph is that you use multiple sources of information in order to construct profiles of specific individuals. Here, for example, on the slide we see an illustration of such a profile. The person is Mary Smith. On the left hand side, we have Mary Smith at home: her full name including middle initial, her age, date of birth, home address, family, email addresses, cell phone, whether she's registered to vote (and presumably how she's registered in systems that require registration), her interests, and many more things could be brought together: Facebook accounts, Facebook post information, Twitter accounts, shopping habits, credit card purchases, etc. You also have Mary Smith at work: the company she works for, what her gender is, her business identification number, the business name, the address, when it's open, what its website is, the business's social media, sales volume, and possibly even things like her salary, her specialization, her business interests, her business contacts, and so on.

All of this information is assembled to create a profile that is then fed into an artificial intelligence system or an analytics system, perhaps in order to sell her things, perhaps to predict when she's looking for a change of career, or perhaps to sell her a house, to tell whether she's looking for certain services, to identify how she will vote, to target information and propaganda to her, etc.
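Mechanically, the assembly step is just record linkage: separate data sources are joined on shared identifiers and merged into one profile. A minimal sketch, with entirely hypothetical names and fields:

```python
# A sketch of how an identity graph assembles a profile: records from
# different sources are linked on shared identifiers (here email or
# phone) and merged. All names, fields, and values are hypothetical.
home = {"name": "Mary Smith", "email": "msmith@example.com",
        "phone": "555-0101", "registered_to_vote": True}
work = {"email": "msmith@example.com", "employer": "Acme Corp",
        "job_title": "Analyst"}
shopping = {"phone": "555-0101", "recent_purchases": ["stroller", "crib"]}

def same_person(a, b):
    # Two records are linked if they share any identifying key.
    keys = ("email", "phone")
    return any(a.get(k) and a.get(k) == b.get(k) for k in keys)

def build_profile(seed, records):
    profile = dict(seed)
    for r in records:
        if same_person(profile, r):
            profile.update(r)
    return profile

profile = build_profile(home, [work, shopping])
# The merged profile now spans home, work, and shopping data; inferences
# drawn from it reach beyond what any single source was collected for.
print(profile)
```

Note how each newly linked record adds identifiers that let further records attach, which is exactly why these graphs grow to cover far more than any one source disclosed.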

Again, this creates a case where we're assigning agency to a person who is not necessarily exercising that agency, based on commonalities with other people. An identity graph is useful for analytics only if it is combined with other identity graphs in order to generate these predictions. A secondary effect that comes up with this is that the information about Mary Smith isn't just about Mary Smith.

It includes her family. It includes her friends. And so by collecting data on Mary Smith, you're actually casting a fairly wide net of data and therefore drawing conclusions about people who may not have given their consent for you to use their data. And of course, Mary Smith in this case may not have given her consent for you to use all of this data either.

So this sort of practice... it's hard to say that it's fundamentally dubious, because it's so widely used by marketers and political organizations and the like. At the same time, just presented this way, it does seem to be fundamentally dubious, and AI and analytics based on this practice seem to be doubly so.

The discussion of autonomous weapons on robots is something that has already occurred in our ethics course, and it is arguable, and I would argue, that the arming of autonomous robots is fundamentally dubious. Nonetheless, just as in the case of identity graphs, it has already begun to happen.

We do have reported cases of autonomous drones actually being used in armed conflict, specifically in the Libyan civil war. A second example is also pictured: we have these armed robot dogs being used as security guards, and one person in the course commented on how the use of the word 'dog' makes it seem like this potentially lethal weapon isn't so scary after all. Because, you know, we all like dogs.

So as we'll see in the next section, this sort of use of AI raises all sorts of questions. If you're shot by an autonomous dog, who is responsible for shooting you? Who do you sue? Who has the authority to use an autonomous dog to shoot you?

How does that authority come into place? There are all kinds of questions that haven't been answered by society, and yet governments and private agencies are already beginning the process of arming autonomous robots. Fundamentally dubious. Finally, there's a general class of applications of analytics that can be covered under the heading of 'when we don't know what the consequences will be.'

For example, there's a report of a suggestion that colleges should put smart speakers in student dormitories. Now, a smart speaker doesn't just speak; it also listens to what's happening in the room, so that it's able to respond to commands and to suggestions, and presumably also to pick up information that will be used by advertisers in order to market to the people who use smart speakers.

And the question is, we don't know what will happen when we put these into student dormitories. Or, as the BioMed Central article says, we simply have no idea what long-term effects having conversations recorded and kept by Amazon might have on their futures. So there are different factors influencing the consequences; there are anticipated consequences, but also, significantly, unanticipated consequences.

Some of these will be beneficial, and used on a post-hoc basis in order to justify the use of the AI, but some of them will not be beneficial, and we don't know how many of each there will be. Also, when we don't know what the consequences will be, we're not prepared to mitigate against the potential of those consequences; we're not prepared to comprehend the impacts, not just on the person in question but on the overall social system.

Imagine, for example, that the conversations of students in the dormitory of an elite university are accidentally leaked. Well, we can have no doubt that some of these conversations are 'politically incorrect,' to use the currently in-vogue euphemism.

The students will say things in private that would probably render them unemployable in the future. Maybe not all of them; I wouldn't think I was among those, though of course I would say that. But some of them. And they might not know, they probably would not know, that they're being recorded.

There's a fundamentally dubious application of AI here. It's arguable, and I would argue, that this simply shouldn't be done, not just because it's inherently wrong, but because we don't know what the outcome of this use of analytics and AI will be. Even if there are no bad consequences, even if it turns out after the fact to have been fine, the argument here is that, before the fact, we did not know that it would be fine. And we created this unnecessary risk.

So that's the end of this short presentation. Again, it's probably possible to add to the list of fundamentally dubious applications of analytics and AI, but I think I've covered some of the major ones, and you get the sense here of the sorts of things that come into play:

When there's a high risk of bad consequences, when accountability and mitigation aren't clear, and when the actual use of the AI creates effects that are magnified beyond what they would otherwise be: all of these create cases where AI and analytics are fundamentally dubious. The next presentation in this series will look at the final set of issues, the social considerations of AI, and we'll have that to you shortly.

So for now, I'm Stephen Downes. This is the course, Ethics, Analytics and the Duty of Care, and we'll see you again.

Social and Cultural Issues

Transcript of Social and Cultural Issues

Unedited audio transcription from Google Recorder.

Hi, welcome to another edition of Ethics, Analytics and the Duty of Care. I'm Stephen Downes. We're in module three, looking at social and cultural issues of analytics and AI, with an eye on learning analytics and the use of AI in education, training and development generally. This is the last of the sessions that we'll be doing on issues in AI; we'll be looking more at the ethics later on.

We've already looked at some of the issues that arise when analytics works, and when it doesn't; we've looked at the influence of bad actors; and we've also looked at some of the uses of AI and analytics that are fundamentally dubious.

Today, we're looking at a wider category: the social and cultural issues that analytics may raise. This is a class of issues that addresses the social and cultural infrastructure that builds up around analytics. So we're not looking at the direct impact of analytics, or the immediate ethical harm or good that may be caused by analytics, but rather the wider ways in which it changes our society, changes our culture, and changes the way we learn and think and work and interact with each other.

There are quite a few of these issues. There's going to be some overlap with some of the topics that we talked about under previous issues, but our focus is always going to be on the wider sorts of issues that arise.

To begin with, let's consider issues of opacity and transparency. We can see from the diagram here that there are different degrees of opacity and transparency to different types of AI and analytics. For example, in neural networks, as with fuzzy logic, the inputs are fairly clear, but the operations, and especially the modification or evaluation of the network structure, are less clear, as is the decision or output. In things like machine learning and meta-heuristic AI and analytics, even the input is to some degree opaque: we don't know exactly what data is going in, or perhaps more accurately, how the analytics engine is considering the data that is going in.

Now, this raises a wider range of issues, because the decisions that are based on analytics and AI will have to be justified: due to ethical concerns, and due to our need to be able to trust in the systems that we're running and the institutions that deploy them. But because of the way analytics is structured, and we'll talk about that a lot more in the modules ahead,

It becomes a lot more difficult because of the black-box nature of AI. Or, more accurately: technically, we could examine every single node, every single connection, but the complexity and the lack of labeling of these nodes makes it very difficult, if not impossible, to have a straightforward description, understandable to people, of what's going on inside an AI engine.

So there needs to be, for the wider social use of AI, a better understanding of how to make the decisions, and the way AI works, less opaque and more transparent.

A related phenomenon, and it's not talked about a lot in the literature, is the phenomenon of alienation.

We can depict that in several ways. One way of talking about it is the way that AI, and digital technologies generally, impose themselves as a barrier between one person and another. And we can think of the very social situations in which that comes up: between, for example, a decision-maker and the person affected by the decision, or between an educator and a student. When AI, and digital technology generally, imposes itself in this way.

It creates this distance between the two humans in the process, and has the potential effect of alienating one from the other. The person, especially at the output end, the student or the client, doesn't feel connected to the human that is providing the service or making the decision. So the capacity of someone to access jobs, services, and other social, economic, and cultural needs feels more distant and more impersonal, and the person feels less and less a part of society, and more something separate or apart from society.

And this can lead to much more widespread and long-term social issues.

Related to both of these is the phenomenon of explainability. The idea here is that if an AI has an impact on someone's life, then we need to be able to provide a full and satisfactory explanation for its decisions.

This is tricky, not simply because of the complexity, and not simply because of the opacity, which I discussed earlier, but also just because of the nature of explainability. An explanation is an answer to a why-question. You know: why are there roses in the yard? Why was I found guilty?

Why was my job application rejected? And typically, a reason is given in terms of straightforward causes and effects: there are roses in the garden because you planted them; you were found guilty because the evidence pointed to your guilt; you were refused the job because you don't speak the language.

You know, we can understand that. But in real life, causes are a lot more complex, and one of the advantages of AI is that it takes into account multitudinous factors that humans don't take into account when they're simply deciding what to attribute as the cause of a certain effect.

Now, that makes artificial intelligence predictions more accurate, sometimes even uncannily accurate, but it makes them hard to explain, because now we don't have access to this simple causal story. So we don't get around this simply by coming up with a story that's a nice, simple causal story.

Explainability in terms of artificial intelligence is going to have to be done using a variety of methods that, on the one hand, educate the person being explained to about the nature of cause and effect, and on the other hand, take advantage of logic like counterfactuals in order to create a story that does not necessarily depend on a stretched cause and effect.
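One way to generate such a counterfactual mechanically is to perturb one input at a time and re-run the model, reporting the changes that would have flipped the decision. A minimal sketch, where the 'hiring' rule is a made-up stand-in for a real model:

```python
# Sketch of a counterfactual explanation: flip one input feature and re-run
# the model to see whether the decision changes. The "hiring" rule below is
# an invented stand-in for a real model, not any actual system.

def model(applicant):
    """Toy decision rule: hire only applicants who speak the language and have experience."""
    return applicant["speaks_language"] and applicant["years_experience"] >= 2

def counterfactuals(applicant, candidate_changes):
    """Return the single-feature changes that would have flipped the decision."""
    original = model(applicant)
    found = []
    for feature, new_value in candidate_changes:
        altered = dict(applicant, **{feature: new_value})
        if model(altered) != original:
            found.append(f"if {feature} had been {new_value!r}, the decision would have changed")
    return found

applicant = {"speaks_language": False, "years_experience": 5}
for reason in counterfactuals(applicant, [("speaks_language", True), ("years_experience", 10)]):
    print(reason)
```

Notice that the output names the feature that mattered (language) without telling any causal story about the model's internals.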

For example: why did you not get the job? Well, if you had presented information about your ability to speak the language, then you would have been successful. That's a counterfactual. It's not specifically a cause or an effect, but it does give a good enough story to the person about the AI's decision. Even this, though, is hard to do, and certainly it's not the case that everybody involved in AI and analytics does anything of the sort.

Another factor related to all of these is accountability. This is going to come up on a number of different occasions; I mean, you can see already the relationship between explainability and opacity. Who is accountable for the actions of an AI?

Here we have a person who has been denied the job, and even if they know that they're being denied the job because they don't know the language, who is responsible for that decision? Is it the person who programmed the AI? Is it the person or organization that provided the data on which it based its decision?

Is it the owners of the AI? Or is it the end user, the person who actually pulled the switch, turned the AI on, and applied it to this particular situation? Now again, as with causation, we could say that there are multiple people accountable all down the chain, but our traditional perspective of accountability doesn't really work that way.

And ultimately, socially, we expect there to be one person in charge, and it raises the question of whether this social expectation can persist in a world where there are multiple agencies responsible for the actions of an individual artificial entity.

One of the interesting impacts of not just artificial intelligence, but digital technology generally has been the clustering of people into what have been called filter bubbles. What we see pictured here is a representation of the books, read by people on the left and people on the right in American society.

And as you can see, there's barely any overlap between them. Now, this is a function of how these books are recommended to these individuals and how they're described to each other. A lot of that right now doesn't take place through the work of analytics and AI specifically, but it is the result of network processes, especially things like social networks and data networks, and so it's reasonable to assume that if such decisions are automated, the results will be very much the same.

And there's a long-term social risk at play here. As we read in this Spectrum article: eventually, people tend to forget that points of view, systems of values, and ways of life other than their own exist. Such a situation corrodes the functioning of society and leads to polarization and conflict. Now, there are many factors in digital technology at work here, including the motivations and the incentives behind digital technology.

But all of these also inform how we design and apply our artificial intelligence systems. And so it's reasonable to worry about what happens if we are not careful with these motivations and incentives, if we're not careful with how we design the input to these networks and the functioning of these networks within AI, to prevent damaging social cohesion too much and creating filter bubbles.

Part of this is the result of feedback effects, and we're going to see feedback effects a few times.

We already saw it in the case of an application of AI that is fundamentally dubious, that is, the use of predictive policing, and that's a classic feedback effect. The idea here is that the AI predicts that a certain region of the city will produce more crime, so the police do more policing in that region, and because of this increased scrutiny, the result is that there's more crime, at least, more crime detected by the police.

And that's fed back into the data system, thus reinforcing the conclusion that it drew in the first place. And the problem here is that this conclusion may well have been wrong in the first place. So there will have to be, as Ross Dawson writes, careful consideration of the social dynamics of predictive information.
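This feedback loop is easy to demonstrate with a toy simulation. All the numbers here are invented; the point is only the dynamic: patrols follow detected crime, detection follows patrols, so an initial skew in the data is never corrected, even though the underlying crime rates are identical:

```python
# Sketch of the predictive-policing feedback loop. Two districts have
# identical true crime; an early skew in *detected* crime drives patrol
# allocation, and detection rises with patrols, so the skew persists.
# All numbers are invented for illustration.

TRUE_INCIDENTS = [100, 100]   # the same real crime rate in both districts
detected = [60, 40]           # an initial skew, e.g. from uneven reporting
TOTAL_PATROLS = 10

for period in range(10):
    # The "prediction": allocate patrols in proportion to detected crime.
    patrols = [TOTAL_PATROLS * d / sum(detected) for d in detected]
    # Each patrol uncovers 10% of a district's true incidents (capped at all of them).
    detected = [min(t, t * 0.10 * p) for t, p in zip(TRUE_INCIDENTS, patrols)]

print([round(d) for d in detected])  # [60, 40]: the data keeps "confirming" the skew
```

After ten periods the system still reports district one as 50% more criminal than district two, purely because that is what the first, possibly wrong, data said.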

In some cases, it's arguable that it just should not be used. In other cases, where the stakes aren't so high, it's not obviously a dubious use of AI, but it could lead you astray. It could lead to decisions about the allocation of resources, the organization of labor, the recommendation of content, etc., being, you know, increasingly incorrect.

The classic example of that latter phenomenon is the YouTube algorithm, which recommended more and more extreme videos on a particular subject. And we've also seen that happen with the Facebook algorithm; in the case of the Facebook algorithm, the thumb was kind of on the scale to actually increase and promote this feedback effect.

So arguably, in that situation we have the impact both of feedback effects and bad actors.

New types of artificial intelligence also lead to new types of interaction. And in such cases, it's going to be of increasing importance to look at the impact on traditionally disadvantaged groups. These impacts will often come in a shape that we don't expect.

That's why we have to be particularly vigilant. One example that was given was that an automated vehicle parked in such a way that it blocked access for a person in a wheelchair. Now, we don't typically think of that as, you know, a type of exclusion, and yet to the person who's being inconvenienced in this way, it is very much a type of exclusion.

So when we're teaching or training an artificial intelligence to operate, there needs to be a requirement that we somehow include this context, so that we don't get undesirable side effects such as the lack of inclusion. There's another factor with inclusion as well, and that's with the creation of algorithms and the creation of data sets that are used in artificial intelligence. For one thing, these data sets need to be inclusive.

They can't consist of only one ethnic group or only one nationality. As well, it is preferable that the teams who are building these AI solutions are diverse, so that it actually occurs to them to think about cases where an AI might result in a solution or situation that is not particularly inclusive, like the one with the person in a wheelchair being blocked by an automated vehicle. It might just not come up to a person who is not disabled. So generally, there's this wider social, political and economic impact that AI may have, from the perspective of creating more or less inclusion in society.

It's not clear how this issue is addressed. It's not clear how you can add an awareness of context to your typical AI, and so it certainly is a longer-term ethical issue to be considered.

Artificial intelligence and analytics also raise numerous issues of consent, and in some senses may even redefine what we mean by consent. Certainly, in society and culture, they've increased awareness of the need for consent. And of course, that's a lot of the thinking behind the European General Data Protection Regulation (GDPR), but it's something that also applies across the board in ethics as a whole.

Not even thinking about artificial intelligence, ethics as a whole, and especially research ethics, definitely talks about consent, and conditions of consent, and mechanisms of consent. The concept of consent isn't just simply clicking 'okay' on a box; arguably, that satisfies neither the condition of knowledge nor the condition of permission.

There's the concept here of informed consent. A person needs to know what they're agreeing to, and not just what it is that they're agreeing to, but what the potential consequences are, what the potential risks are. And the permission granted needs to be explicit. Consent is a concept that applies to both the provider and the recipients of services.

And there have been discussions about cases where providers may refuse consent, and questions about whether that is unethical. This happens most frequently in the area of medical procedures, where for one reason or another the application of the procedure violates a person's ethical code, but it may apply in other areas as well.

There are cases where people working on analytics at Google, for example, refused to participate in certain projects. And so we have to ask, first of all: was this refusal of consent ethical, or was the refusal of consent in some sense unethical? This would be asked if, for example, the refusal of consent could be argued to produce a wider harm to society.

Consent also includes rights over access to data, use of data, erasure of data, even the repurposing of data, and all of that is wrapped up in questions like: How are the harms identified? How are the harms, if they occur, remedied? What meaningful alternatives to consent are provided? If the only way you can use a service is to consent, and if the service is in some way required, then it's arguable that there are no alternatives to consent.

More broadly, the use of analytics and artificial intelligence is leading to what many call a surveillance culture.

And there are different ways in which this comes up, different sorts of ethical issues that arise as a consequence; there's a whole discipline now being created called surveillance studies. What surveillance studies is about is not simply the ethical implications of being surveilled: some people argue that they have a right not to be watched, and other people argue that they have a right, and indeed an obligation, to watch.

But beyond that, we can ask: how does the awareness that we're being watched change society? How does it change our behavior? How does it change the way that we interact with each other? In the period of the pandemic, there were questions raised about how people behaved once they were being observed by other people directly, in the face, by a Zoom camera. They didn't interact in the same way as they did when they were speaking face to face; they felt more on stage, more sensitive to their physical presence. You'll notice me looking to the side here; that's where I see the video of my image being projected. And indeed, as I do these videos, I'm very conscious of my hair and, you know, whether I'm smiling enough, things like that.

These are things that people might not normally think about, or perhaps they're thinking about them in different ways. It's not clear that these create ethical rights or ethical wrongs, but we don't know that unless we study it.

Another aspect of surveillance is that the people doing the surveilling have a much better understanding of you and your environment and your context than even you do.

And so they have what's called algorithmic certainty: they can tell how you are going to behave, what products you are going to buy, who you're going to vote for. And this has a long-term impact on market economics, democratic processes, and cultural and social values generally, and we have to ask: how do these long-term impacts play out?

What is the ethics of surveillance with respect to these long-term outcomes? Maybe algorithmic certainty is good; maybe we finally get a society that actually responds to what we believe and what we want. But on the other hand, maybe algorithmic certainty is replacing our capacity to change our minds and make new decisions in the light of new information.

It's certainly a broad area for study.

Related to this are issues of power and control, and these issues come up again and again when we're talking about the use of AI and analytics. AI has the potential to alter social structures of power and control, but it also has the potential to entrench existing structures of power and control.

So that those who are disadvantaged or disenfranchised remain forever disadvantaged and disenfranchised. Surveillance, says Edward Snowden, is not about safety, although it's often argued to be for the purposes of safety; it's about power, and it's about control. What's happening over time is that, as we have more and more data and more and more processing power, the way we take decisions changes. Now, this might be a good thing, and we need to keep in mind both sides of this. In the diagram on this slide, we have, first of all, not enough data to take good decisions, and so the people in charge simply made a decision. Over time, you have more data volume and more processing power.

This allows for what they call today evidence-based decision-making. Now, there's a lot of discussion we can have about that: about the viability of the evidence, about who decides what the outcomes are, who decides what the benefits are. But on this chart, and probably currently, this is represented as an intermediate state.

Ultimately, once you have sufficient systemic complexity, collective intelligence, however that's defined, replaces top-down control. This is actually a scenario that I'm anticipating, working toward, and trying to understand. But it doesn't follow from the fact that this is what happens that this is a good thing. It might be the case that collective intelligence is the worst thing that we could be depending on in order to govern ourselves; it might be that collective intelligence removes individual autonomy and freedom.

You know, again, it's about power and control; it's about algorithmic certainty. Or it might be that collective intelligence allows the many voices who do not today have power some mechanism for projecting their power and creating systems that work toward their benefit and their long-term gain. There's no easy answer to any of these questions.

We're only beginning to comprehend how algorithmic decision-making creates collective intelligence, the conditions under which it would create collective intelligence, and the sorts of structures that we need to put into place in order to make sure that we get ethical collective intelligence. I think this is an important point to make here, and it's not one of the long-term ethical issues in general.

But the question of who does what is important. We're being sold right now, and I'd say 'sold' is the right word, a picture of a future with artificial intelligence where it provides the calculations, the computational power, the instant pattern recognition, and we humans provide the creativity and the empathy, and so we go hand in hand, living happily ever after, humans in charge, AI doing what we want. But especially with recent advances in capability.

And we looked at a number of those in module two of this course. There's no reason to believe that in the future, AI will not be able to outperform humans, both on the computational side and on the creative side. And that gives us a very different picture of a future with artificial intelligence and analytics. I don't want to say it gives us one with no role for humans, because I think that's probably inaccurate.

It might give us one with a different kind of hybrid role than the one that we're being sold right now. But I think we need to be aware that the future of AI won't, excuse me, won't be the way it's being depicted in this picture. Here's the problem doing these videos live.

I love doing them live. I think I do better when they're live, and they're certainly faster. But I get things like my throat turning into a frog.

So, related to all of this is the possibility, and in fact perhaps the likelihood, of an oppressive capitalist economy developing out of all of this. Audrey Watters looks at this. She writes that scholarship, both the content and the structure, is reduced to data, to a raw material that's used to produce a product sold back to the very institutions where scholars teach and learn. I would argue that it's not just scholarship that's being reduced to data; pretty much all forms of creativity and interactivity are being reduced to forms of data.

Zoom, which today announced that it's going to be selling advertising on its free version, also took pains to say that it would not collect the data of the contents of Zoom interactions in order to inform that advertising. Now, maybe you believe Zoom, maybe you don't.

But the point here is that your conversation with another person using an interactive video product produces data, and this data can be gathered and commodified in order to produce new products. And it gives the people who produce these new products an advantage far beyond anything that individuals could produce. It's equivalent to the advantage that a manufacturer who owns a clothing factory has over a person who sews shirts by hand; it's at that kind of scale. Now historically, when imbalances at that sort of scale have occurred, there has been a concentration of wealth and power, like the one we're now seeing today, and an increasingly oppressive economy. In the past, that has not resulted in good things for the economy.

Because ultimately, the people are either worn down or they revolt, and both are possible. I mean, if we take a Marxist perspective, they'll probably revolt, but, you know, the Marxist perspective isn't always right. And if we look at some of the more oppressed countries around the world today, they're not in revolt; they're just in brutal, repressive conditions where the mass of people lead wretched lives.

So you can see the ethical issues that arise when AI and analytics are able to take everything that we produce and turn it into raw material for the production of the materials that we currently produce, and currently depend on for our own livelihoods.

AI is also increasingly becoming an authority, and one way of talking about that is to talk about our sense of right and wrong. Again, this is that picture that we're sold, right? Where AI will do the calculating and humans will do the deciding. Well, the deciding is based on the sense of right and wrong.

Now, there's this picture, this sense, that analytics and AI cannot reason, cannot understand, and therefore cannot know the weight of their decisions. But we can imagine an AI developing a conscience. We can imagine an AI developing a sense of right and wrong, if for no other reason than that people are trying to teach AI what counts as right and wrong.

And once the machine starts making these pronouncements, it's going to be very easy to allow it to keep making these pronouncements. On the one hand, it'll be really hard to argue against the AI, because it has all of that data and all of that knowledge, and you have just your sense of right and wrong.

It's like somebody trying to argue against the entire medical establishment using intuition. I mean, there's really no point; there's really no equivalency between the two points of view. It'll also be convenient to allow AI to make the decisions of right and wrong. We won't need to worry about it; we'll just ask the AI and we can act accordingly. It takes a lot of the stress and the pressure out of life. And even for people who are looking for, you know, the gaps in the sense of right and wrong, looking for the loopholes, having an AI state clearly what's right and what's wrong allows people to walk as close as possible to the edge of what's determined to be wrong without going over.

And if you think about it, it's a lot like speeding. We don't independently determine the right speed or the wrong speed to drive on the highway; we're told, and we're told in two ways.

Number one, we're told by the signs that are on the side of the highway, and the second way is we're told by the police, who will pull us over and give us a ticket if we drive too fast. Now, the signs are a guideline; the police are the actual enforcement. And everyone knows, at least in this society, that you can drive faster than the posted speed limit, to a point, and most experienced drivers in a given region know down to the exact kilometer per hour how fast that is. On the 417 out here, it's 120, or it might be 125, depending on how you feel. Before the speed limit was raised to 110, it was 119; you didn't want to go 20 kilometers over the limit. And if AI is allowed to determine the rightness and the wrongness of all acts the way the speed limit and the police determine the rightness and the wrongness of the speed that you drive.

We will very likely move as far over to the edge as we can, so that we're still right, but as close to being wrong as possible. And it's arguable that in that sort of environment, or even one where we just blindly follow the instructions of the AI, we actually lose our sense of right and wrong, much in the way a person who uses only a calculator to perform mathematics might lose the sense of proportionality when they do multiplication or division. Similarly, humans might lose the sense of proportionality with respect to right and wrong if we allocate the decision-making to an AI. That's a long-term ethical consequence. It's not one that's discussed a lot in the literature, but it's probably one that's going to have more impact over the years, I think, than many of the issues, like bias, for example, that we talk about today.

Ownership. The rise of creative AI, and there is a rise of creative AI, don't think that only humans can create, raises many issues with respect to ownership, and I've listed a few of them here.

Should AI algorithms be patented? Can intellectual property restrictions restrict uses of the data being used to train an AI? Who are the creators of AI-generated art? What if an AI is used to create all possible works of art? That is not an impossibility: there's one person who created all possible combinations of notes in a certain scale of a certain size, and then granted them to the public domain so that none of these melodies can be copyrighted.
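Enumerating 'all possible melodies' in this sense is just a Cartesian product over a scale. A minimal sketch, assuming for illustration 12-note melodies over an 8-pitch scale (the counting is exact; only a tiny case is actually materialized, since storing all 8^12 sequences would be impractical):

```python
# Sketch of "creating all possible melodies": every fixed-length sequence
# over a fixed scale. The 8-pitch scale and 12-note length are assumptions
# chosen for illustration, not a claim about any particular project.

from itertools import product

scale = ["C", "D", "E", "F", "G", "A", "B", "C'"]   # one octave, 8 pitches

total = len(scale) ** 12
print(total)  # 68719476736 possible 12-note melodies

tiny = list(product(scale[:2], repeat=3))  # all 3-note melodies over 2 pitches
print(len(tiny))  # 8
```

The exhaustiveness is the whole point: nothing a human composes within those parameters falls outside the enumerated set.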

But what if that was done by a company that simply took an AI, created all possible songs, and copyrighted them? Could humans consequently be blocked out of content creation entirely? Can humans even compete with AI-generated content? I know very few people who go to the store and buy handmade shirts.

You know, we all get our shirts made by machines, by people in Hong Kong. My grandfather used to own a tailor shop where all of the shirts were made by hand; even the cloth was made in the region and then sewn into shirts. Those industries no longer exist. If that happens to all of the creative industries in the future, does that happen to things like this video, which is being lovingly handcrafted using the best technology I can buy? Does that mean this is replaced by an AI sometime in the future, one with a nice musical track in the background and better video, that costs less and is more quickly produced? And then, over and above all of that, there's the question of regulation itself.

What impact might regulation on the creative capacity of AI have? Right now in Canadian media we have what are called Canadian content regulations: a certain percentage of the television shows and musical content broadcast in Canada have to be produced in Canada. Maybe in the future we'll have human content requirements, so that automated radio stations, which already exist, must play a certain amount of human-created content. That's certainly a conceivable regulation. And it's the sort of thing we should be thinking about now, because the people who produce automated content are probably also thinking about the sort of regulatory regime they would like to work under.

And it's probably not one that includes protection for humans.

Responsibility: if you can get credit for something, you can also take the blame for something. And again the question comes up: who's responsible for a harm caused by an AI? I was involved in some discussions on this subject where one proponent was arguing for the concept of AI autonomy, such that the responsibility for what the AI did could be detached from any human and actually assigned to the AI itself. Now, that's an inherently problematic concept, at least to me it is. Other people that might be implicated are the developer of the AI, particularly if they're a black hat developer, as pictured on the slide here, or the owner of the AI, much in the way that the owner of a dog is responsible for the actions of a dog.

Another thing: AI technologies can place further distance between the result of an action and the actor who caused it. It's a remote causation problem. There are questions about who should be held liable and under what circumstances. It also allows for the creation,

as I commented a bit earlier, of an environment where complex causation is the norm. There's no one person responsible for the act; multiple people and multiple systems are responsible for the act, and it becomes hard to place blame on anyone. This creates large, intractable social and cultural problems. Global warming is an example of this.

There's no one person, and no one agency, responsible for the economic system that functions, basically, by producing global warming. It's clear that we want it to stop, but it's not clear that there's any person, or even group of people, that we can talk to and have change their behavior in order to make it stop. We're told that we should each undertake personal actions, taking individual responsibility for global warming.

And so we do things like use paper straws and drive electric cars. And yet the engines of our society, and the basic makeup of our society, depend on being able to produce greenhouse gases. Look at the supply chain, for example: we've already seen the instability that happens when our supply chain falters, and yet the supply chain is a major contributor to global warming.

So how do you assign responsibility in that case? It's not simply the person that bought a shirt from Hong Kong instead of one that was tailored at home. It's a collective kind of responsibility, and in AI and analytics generally, pretty much all attributions of responsibility are going to be of that sort.

We need to figure out how to handle responsibility in such a case. We also have a condition known as winner-takes-all. Some people will say, oh yeah, well, you mean capitalism; but it's not simply capitalism. I've put a number of images on the slide here because I want to identify that there are multiple causes of a winner-takes-all kind of environment.

So first, the ethical question in broad strokes is: how can the dominance of some large corporations, and the winner-takes-all economies associated with them, be addressed? How can the data be managed and safeguarded to ensure it contributes to the public good and a well-functioning economy? These are good questions.

The problem is, we can't simply answer those questions, because we have multiple mechanisms that produce winner-takes-all phenomena. Here's one summary of some of these. First, the focus on relative performance instead of absolute performance. A good example of this is a sports economy, where you're not trying to reach an absolute pinnacle of performance.

You just need to be better than the next person in order to win, and just being better is enough to create a huge imbalance between your salary and their salary. There's also the concentration of rewards, such that you reward only the winner and allocate very little to the rest of the people, who are the losers. Lotteries work that way, right?

The lottery will concentrate the reward on just two or three people who win the large pot, and the vast majority of people win nothing. This kind of thing can happen in an environment that is competitive and overcrowded, where many people are trying to attain the result that the person who eventually wins does.

Think of, for example, music. There are many people who play music and would like to be successful in music, and because there are so many people, it creates much more interest and popularity. And so the people who are successful are able to be very successful. Meanwhile, there are so many other people.

The relative rewards that are allocated to these other people are very small, because there are so many of them. Another source of winner-takes-all phenomena is the mass market. The mass market allows one individual to reach many people in society, indeed all the people in society. And so a person who can appeal to the masses is able to... excuse me. No, I just thought I'd sneeze there because, you know, I had a frog in my throat earlier. Live video, don't you love it. The mass market allows someone to become very wealthy by extracting a very small amount of resources from very many people. This is how the commodification of AI works.

The AI company takes such a small percentage of the value of, say, somebody's conversation on a video conferencing system, such a small percentage of the value; but by reaching a billion people on that video conferencing system, they can create enormous wealth for themselves. This is amplified by network effects and feedback effects.

The network effect is something along the lines of the following: the value of a network increases at a much greater rate than the size of the network. A network of two people is not worth very much; a network of 10 people is worth quite a bit more; a network of a hundred people is worth much more than 10 times a network of 10 people; and so on. I would say it's exponential, but I'm not sure the actual mathematics is exactly exponential. So the idea here is that whoever can be the one who has access to that network becomes the winner, and competing networks, even if they're just a little bit smaller, are so far behind in the benefit that they produce that they fall further and further behind.
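The growth rate being described here is usually modelled by Metcalfe's law, which values a network by its number of possible pairwise connections, n(n-1)/2. That's quadratic rather than exponential, but it's more than enough to produce the runaway gap between a leading network and a slightly smaller rival. A minimal sketch, with made-up user counts purely for illustration:

```python
def metcalfe_value(n: int) -> int:
    """Metcalfe's law: network value scales with the number of
    possible pairwise connections, n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Even a slightly smaller competitor falls well behind in value.
big, small = 1000, 900  # the competitor has 10% fewer users...
print(metcalfe_value(big))    # 499500
print(metcalfe_value(small))  # 404550

# ...but roughly 19% less value, and the gap widens as both grow.
print(metcalfe_value(small) / metcalfe_value(big))
```

Whether real networks follow the law exactly is debated, but any superlinear value curve produces the same winner-takes-all dynamic described above.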

So ultimately you get just one network. That's why, really, we have just one telephone network. That's why we have just one road network; I mean, can you imagine an alternate road network? It would make no sense. That's why Facebook can become almost completely dominant: because, again, an alternative to Facebook starts so far behind in utility that, even if it's close to catching up, it's not nearly as valuable as Facebook is. Companies and organizations take advantage of these to create winner-takes-all scenarios. They also put their thumb on the scale a bit by creating lock-in and barriers to exit. That is to say, they make it hard to leave their network or their product. Have you tried getting your data from Google?

Google says it's possible. It is not easy. Have you tried getting your data from Facebook? You can't get your data from Facebook. Have you tried switching from a Microsoft product to an OpenOffice product? Again, there's significant lock-in here, because there's a lot of learning and adaptation required to move from Microsoft to the competition.

Finally, on top of all of this, we have the feedback loops that I talked about, where the prediction of success ultimately becomes a self-fulfilling prophecy. All of these lead to winner-takes-all phenomena, and winner-takes-all phenomena are arguably not good for society. Now, there's going to be the set of people who say, no, it's good that we have billionaires, because they're able to amass the resources that we need for really high-profile projects, like sending William Shatner to space. And to a degree that is true.

On the other hand, the billionaire was able to do this only decades after all of us, as a society, were able to do it. So I'm not convinced by that argument personally, but that argument exists. Certainly the winner-takes-all phenomenon produces a lot of losers, and this has ethical consequences, assuming you believe that the situations the losers find themselves in are ethically problematic. If you believe that having a large mass of people in the country having economic difficulties is ethically wrong, then you may be obligated to say that a winner-takes-all scenario is also ethically wrong. I think there are a lot of arguments that go back and forth here, especially on the economic side of the debate, but I think that technologists and educators also have to become involved in that debate and start to talk about what is the ethical distribution of the rewards from analytics and artificial intelligence.

And I don't think there are any easy answers here. Moving toward the end of this presentation: there have been concerns raised about the environmental impact of AI-based systems. We talked a little earlier about how responsibility for environmental impact is very difficult to allocate, and we have a similar case here. We have, for example, the training of an AI model that, according to this study anyway, produces far more CO2 emissions than, say, traveling from New York City to San Francisco.

Now that, of course, depends on where the AI model is being run. If it's right here in Ontario, the emissions are almost zero, because something like more than 95% of Ontario's electricity is produced from non-CO2-emitting sources. Of course, a large amount of that is produced by nuclear energy, and people may have different ethical objections to that.
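The dependence on location can be made concrete with a little arithmetic: emissions are roughly the energy consumed times the carbon intensity of the local grid. A minimal sketch; the energy figure and the grid intensities below are hypothetical numbers, chosen only to illustrate the shape of the calculation, not measurements of any real training run:

```python
def training_emissions_kg(energy_kwh: float, grid_kg_co2_per_kwh: float) -> float:
    """Emissions = energy consumed x carbon intensity of the grid."""
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical energy budget for a training run, in kWh.
energy = 100_000

# Hypothetical grid intensities (kg CO2 per kWh): a mostly fossil-fuel
# grid versus a mostly nuclear/hydro grid like Ontario's.
fossil_grid = 0.7
low_carbon_grid = 0.03

print(training_emissions_kg(energy, fossil_grid))      # roughly 70,000 kg
print(training_emissions_kg(energy, low_carbon_grid))  # roughly 3,000 kg
```

Same model, same energy use, more than an order of magnitude difference in emissions, which is why the "where" matters as much as the "how much".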

Nothing is easy. Nonetheless, the environmental impact of something like AI can be raised: who is responsible for that? One would think the people who benefit from AI are responsible for any damage that it causes. However, this has not been the case for previous systems, such as the oil industry, such as railways, such as cars, and so on.

So arguably it would take a significant change in society for this to be the case with AI as well, and the Forbes article cited here mentions this. There may be environmental benefits to the use of AI generally. Perhaps having automated systems will reduce our impact on the planet. Certainly, having AI-managed temperature control in a house could maximize efficiency in that house.

Although, if the person in the house is like me, it'll maximize heat in the house and end up with a worse system. There are no simple answers here, but with respect to the environment, again, we're weighing the benefits against the costs. We're weighing what we count as an ethics-bearing cost, as opposed to, say, simply an economic cost, or a personal cost, or a convenience cost. The ethics of a particular strategy is always something that overlays that strategy. And even in cases like the destruction of the environment of the planet, there ultimately needs to be an argument with respect to why it's bad that we destroy the environment of the planet, because it's not immediately obvious, from the point of view of the planet, that this is a bad thing.

Finally, the issue of safety. The impact of AI on safety could be very direct, as for example in the Uber self-driving car case pictured. For those of you who don't have video: you see a car, and you see a human figure being flung through the air, having just been struck by the car.

But again, with respect to cause, there could be any number of causes related to AI and analytics that result in poor safety. They could range from an inadequate safety culture, both on the part of the designers of the AI and the users of the AI, to misdiagnosis and errors. They could be the result of blind spots in the AI model: it just never predicted that a person could ever pull that switch, for example. AI and analytics could also lead to unsafe patterns of behavior. If we come to always trust in the predictions of the AI and lose our sense of caution, this pattern of behavior might ultimately be harmful.

It's kind of like the person who depends on a calculator for math. This creates a pattern of behavior such that, when they're presented with an obviously wrong mathematical result, they don't have the education and the background to understand that this result can't be right, and then they're led into a mistake.

There's the possibility of vulnerability to attacks on the part of AI; I mentioned this a bit earlier, with the risk of hacking and cyber intrusion. And finally, there's the impact of compliance and regulation. Here we have the wider social issue of how to enforce compliance in the AI industry.

What mechanisms do we use to regulate AI? Do we dare let the AI industry regulate itself? If not, who should do it? What should the penalties be? How would they be enforced? And the like. These are all ethical issues, because they speak to what constitutes right use of AI and what constitutes wrong use of AI. Most of what I've read on the ethics of artificial intelligence barely touches on any of these social and cultural issues. It's far more concerned with what happens when AI goes wrong, with things like bias and misrepresentation, stuff like that.

And, speaking of blind spots, these discussions are in a certain way blind to the possibility that the use of analytics and AI could significantly change our culture. From the perspective of learning and development, they could change what we need to learn. They could render what we have learned no longer useful.

Somebody who takes 10 years to learn how to create high-quality content finds themselves replaced by a $5.95 AI engine; a person who trains to become a photographer is replaced by a Google self-driving vehicle touring around taking the best pictures and then using an AI to curate them and present them in Flickr albums or whatever.

You know, these issues go well beyond, in my observation, the current discussion of the ethics of artificial intelligence and analytics. And we'll see that as we look, in the units to come, at the ethical codes and the values underlying those ethical codes first of all, and then later on at the ethical principles and the sorts of decisions that we make when we're applying these systems. But all of that is in the future.

For now, these were the social and cultural issues related to artificial intelligence. I know it was a long presentation. I hope you found it interesting; if not, just listen to the audio on high speed, though I guess it's a bit late to say that, or read the transcript. I'm sure there'll be some people who do both.

Thanks all for listening to me and I'll see you next time. I'm Stephen Downes.

Ethical Codes and Ethical Issues

Transcript of Ethical Codes and Ethical Issues

Unedited transcript from audio by Google Recorder

Hi, and welcome to another edition of Ethics, Analytics and the Duty of Care. I'm Stephen Downes, and I'm just going to copy the video URL into the activity center. Again, I can never get this URL until I have actually started the video, something that is very annoying to me and always causes a little delay at the start of these live presentations. But I've done that, and I'm saving the page now.

And so anyone who reloads or accesses the page right after this moment will be able to see this presentation starting up on time. This talk is part of module four of Ethics, Analytics, and the Duty of Care, and in this module we're talking about ethical codes generally.

And we've looked at some of the overall properties of the ethical codes. I am not going to do a video on all of the ethical codes; so far I've looked at 73 of them, and there will probably be more added over time, so that sort of presentation wouldn't be particularly useful.

So what I am presenting in this module, after the original overview which I already gave a couple of days ago, is a look at some of the features that these codes have in common. In this particular video, we'll be looking at some of the ethical issues that underlie these codes.

And in future videos we'll be looking at some of the values that underlie these codes, and the duties or obligations they impose, and we'll also be looking at who these codes are intended to be applied to, and who these codes consider as their clients or as their subjects: the people we have ethical duties or responsibilities to.

So, as I say, for this one we're going to be looking at ethical codes and ethical issues. Specifically, I'm going to run through a number of these ethical issues. Again, we're thinking of these from the perspective of teaching and learning, and learning analytics and AI in particular, but of course there are broader implications for all of these.

So let's start our rundown. And I want to distinguish what we're talking about here from what we talked about a few days ago (well, last week), although there is obviously an overlap. Previously, we looked at a range of ethical issues: surveillance, tracking, anonymity; it's a very detailed list.

And one of the activities in this course is to have people try to look at these ethical codes with respect to these issues. So here's a page representing one of the ethical codes, the Association for Computing Machinery code of ethics. And there's a link right here that says "graph issue".

And once we're into this (I'll just make this bigger for the purposes of our video), here is the box representing the ethical code in question, and here are boxes representing all of these ethical issues. The activity here is to draw a line from one to the other. So, if you believe that this ethical code addresses the ethical issue of surveillance, you would draw the line.

Similarly with tracking, similarly with anonymity. If you do not believe this code addresses the issue of, say, facial recognition, then you would skip that line, and you would move on to the next line where you think this code addresses these issues. Once you're done that, right-click, and then click on the export option, and this will be saved.

You'll see here the actual associations that you drew; just click OK, and we'll come back to the task in question. So that's the assignment. The sorts of issues that we're looking at today might be thought of as a subset of these issues, or might be thought of as a superset, in other words, categories of these issues. They were arrived at by a different process: by looking at the actual codes of ethics and trying to extract, by inference, what ethical issues were being addressed. So it's not exactly a match, and that's going to be the nature of this discipline.

Perhaps we can come to an overall understanding of what ethical issues these ethical codes address, but it may take more work with that graphing application in order to do so. So let's look at the first of these, and you'll notice that this ethical issue doesn't even appear in our long list of ethical issues.

That long list of ethical issues is derived from a fairly comprehensive reading of articles and papers on ethics in analytics generally, and the principle of doing good isn't necessarily explicit there. But many of the ethical codes that we studied here for module four do make reference to the specific good that can be done by the discipline in question.

Now, the disciplines that these codes cover include things like journalism, health care, psychology, business, accounting, etc., not just artificial intelligence and analytics. That was very deliberate, because the other analyses of codes of ethics that I've looked at focus specifically on this domain, but different domains look at ethics differently, and I want to raise the question of whether there are questions of ethics in other domains that ought to be raised in this domain. In any case, the good that can be done is something that shows up in many of these codes of ethics. The UK data ethics code, for example, expresses an intention to maximize the value of data.

The Sorbonne declaration points to the benefit to society and economic development that accrues as a result of data research. The Open University asserts that the purpose of collecting data should be to identify ways of effectively supporting students to achieve their declared study goals. So you can see here that there's a clear sense of benefit that is required, but the sense of benefit that is required is not always interpreted in the same way by different people.

Another ethical issue that comes up a lot, especially in statements of academic ethics but also of professional ethics, is academic or professional freedom. In some cases it is not merely considered to be a good, but actually expresses itself as an obligation on the part of academics or professionals: it is necessary for them to promote the concept of academic or professional freedom, and to refrain from actions or agreements that would infringe on academic or professional freedom. Notably, this sort of freedom is not limited to academics; it includes people like doctors and journalists and psychologists. How it's defined varies a little bit, but essentially it boils down to the idea that the professional needs a certain scope of freedom without consequence in order to instantiate the values of that profession. For example, a medical practitioner needs to be able to base their decisions on treatment on medical considerations, and to not be infringed upon by external, say political, considerations.

In the case of academic freedom, the principle is that the academic should be able to research and express points of view without having to worry about losing their position as a consequence of those views. Now, like any freedom, none of this is absolute. We've seen many cases over the last few years, and indeed probably through history, of academics being removed from their positions because of some of the positions that they take.

But overall, if you look at the diagram here (this comes from a research study on academic freedom over the last hundred and twenty years), it has increased quite a bit. It began to increase significantly with the end of the Cold War in 1989. Looking at this globally, academic freedom really declined during the Second World War, and also went into general decline from the 1970s through to the end of the Cold War.

Today, academic freedom is fairly high around the world. There have been concerns about it recently being infringed upon again, though this isn't universally true around the world; in some places it's being infringed upon more than in others, and that's what the little map there shows. Another fundamental ethical issue being addressed by these codes is the question of conflict of interest. Conflict of interest is the idea that a person would use their position to personally benefit from that position, whether directly through the offer of gifts or through other means. It's expressly prohibited by many, but not all, codes of ethics. Conflict of interest can involve things like the sale of sensitive information, external employment, insider trading, biased supervision, close relationships and nepotism, the personal use of corporate or company assets, gifts, bribes, commissions, etc. I think an interesting question, and one we should ask, is what counts as a benefit from the perspective of conflict of interest. Other codes, when they address conflict of interest, are less focused on the benefit being received and more on the integrity of the profession.

And we see this in professions like journalism where, as one code of ethics states, professional integrity is the cornerstone of a journalist's credibility. Here, conflict of interest extends even to the idea of maintaining independence, being above the fray. For example, many journalists make a point of not being a member of any political party, and not being a member of any particular point-of-view organization.

Even sometimes to the point of not voting. Similar restrictions (we'll call them that) don't seem to apply to other professions, but there is this sense in which the professions are expected to maintain a neutrality over and above day-to-day issues, politics, world events and the like. Scientists, for example, assert that research and development is a global enterprise, and not something that is a characteristic of one or another nation. On the other hand, nationalism is something that certainly thrives in science as well, so it's not 100% here. Another principle, and one that many people are familiar with, is the question of harm. Many codes explicitly state that professionals covered by the code should do no harm. The origin of this, of course, goes back to the Hippocratic oath, although interestingly many codes of ethics trace the origin of this back to the Nuremberg declaration, where harm was created in unethical experiments on humans.

And so the principle of ethics derived from that is that this harm should be avoided. Often in these principles, though, the nature of harm is very loosely defined. The question of whether harm has happened might be applied directly to clients or subjects, but some codes consider the effect of downstream harm.

For example, if you're doing data collection, there's the question of whether the data being collected immediately harms the person in question; but then subsequent uses of that data, or subsequent uses of that research over time, might harm other people as well. Harm is not necessarily limited to physical harm; things like discrimination and human rights violations are often cited as sources of harm. Some codes describe what will not be considered as harm, and you can see the need for this in, for example, medical research, where harm might sometimes be caused. We'll talk a little bit about that in terms of some of the core values underlying ethical codes.

Another aspect of the question of harm as an issue is the consideration of risk versus benefit. There are actions that could harm a person or a group of people; however, these actions might benefit a larger group of people, or even society as a whole. People often talk about balancing the risk against the benefit.

And so the issue that arises is: on what basis do you conduct this balancing? How do you weigh the risk to an individual or to a group of people against the benefits to the larger society? I think that many people, admittedly not all, would say that the benefit to society never outweighs the harm caused by killing a person. Other societies will limit the definition of this to, for example, the harm caused by the killing of an innocent person, or the harm caused by the killing of a child. The risk-versus-benefit sort of question really brings out the issues in the discussion of harm, the idea of doing no harm. Quality and standards is something discussed by numerous codes of ethics, and quality and standards are often defined in different ways. The diagram illustrates some of the aspects of quality and standards, and it's kind of ironic, because in a discussion of quality and standards:

if you look closely at the document, it's really not a very good document; or if you look at the diagram, it's really not a very good diagram. There's pixelation around the text, and the resolution isn't that great, so the circle has little bumps. Anyhow, there are aspects of quality and standards ranging from customer focus to evidence-based decision making, continuous improvement, engagement of people, etc. The International Standards Organization provides a number of definitions of quality and standards in different domains, and then of course there are many methodologies, like, say, total quality management or Six Sigma, intended to raise these as values.

Quality and standards, then, are defined in different ways by different people. Sometimes quality and standards are defined in terms of competence, and when that's the case you see the ethical principles talk in terms of stewardship and excellence. In other cases, quality and standards are described in terms of qualifications, and the principle might create a requirement to prevent unauthorized practice of a discipline: for example, preventing the unauthorized practice of medicine, preventing teaching by unqualified teachers, etc. Additionally, quality and standards might be described in terms of exemplary behaviors, such as research integrity, scientific rigor, recognition of sources, etc. So in any profession there is typically a long discussion about quality and standards. It's certainly an issue that comes up a lot.

It's not clear that it's an issue that has been resolved to anyone's satisfaction, although the standards bodies do attempt to reach a consensus on these sorts of issues. Now, finally, we can ask: what are the limits? A lot of the ethical issues that arise, especially in the field of artificial intelligence and analytics, are built around what the limits of the technology should be, and we see some examples of that. For example, IBM said it would cease work on general facial recognition technology; they did that last year, and we'll see if that holds up. And there have been other cases where companies have declined to continue to pursue research in a certain area. OpenAI, when it developed GPT-3, said originally that it was so powerful that it really shouldn't be released to the public.

And then, of course, a few months later they released it to the public. There's the standard stated in the Asilomar principles: to create not undirected intelligence, not general intelligence in other words, but beneficial intelligence. So one of the limits is that whatever is being developed should be developed for the good of, well, the good of someone: of society, of the person who has it. That's often left vague. There's also the case where many individual researchers, and sometimes companies, will refuse to work on military or intelligence applications. This is often cited as a reason for not working in China, when the work has to do with intelligence applications.

But also too, we had researchers at Google saying that they did not want Google to participate in a military intelligence program. Finally, there are limits that are based on things like scientific merit and research needs. The research ethics board that I belong to does have a requirement that the researcher be able to show that there is legitimate scientific merit to the work that they're doing.

We ask: is all of this enough? Does the list of issues described in this presentation constitute all of the issues that come up when thinking about ethics, analytics and AI? Does this list, in other words, comprehend all the issues that were raised in the previous chapter? It's not clear that it is, although it's hard to say where it doesn't cover everything.

If we look at the individual issues, the good that can be done, academic or professional freedom, conflict of interest, harm, quality and standards, and the limits of the research, it's hard to say what other issues fall outside that. I mean, we can think of issues like slavery, for example.

That's certainly an ethical issue. Does it fall under any of these categories? Well, arguably it falls under harm; perhaps it falls under conflict of interest, depending on your views about graduate student employment; and perhaps it also falls under the heading of where the limits are. So, you know, again, it's hard necessarily to pick out an ethical issue and say whether it falls under this categorization, but this categorization was obtained by a study of these ethical codes.

So it can be stated that if it's not covered by these categories of issues, it's not covered by the ethical codes. But it's also important to note, first of all, that no code, not one of all those surveyed, was designed to meet all of these purposes, all of these issues; different codes are intended for different things.

Some codes are intended to prevent harm. Other codes are intended to promote things like professional freedom. Others are intended to promote good. But no code addresses all of them.

So, no code of those surveyed was designed to meet all of these purposes, and none of these individual purposes was specifically addressed by all of the codes. So we don't have an all-or-only situation: we can't point to a code and say, well, this code covers everything, because none of them does, and we can't point to an issue and say this issue is covered by everyone, because none of the issues is.

So right off the bat these codes are talking about different things, and that makes it very difficult to find a sense of unanimity, because you're actually talking about different things. So that's it for the ethical issues, at least for the purposes of this video. Next, we'll be talking about the core values and priorities that underlie the actual recommendations made by these different codes of ethics.

So, in this video, we talked about why people were creating these codes and what sort of things they are seeking to address. The values part, basically how they go about addressing these, will be the subject of the next video. So we'll keep this short. We'll finish this here.

This is Ethics, Analytics and the Duty of Care, and once again, I'm Stephen Downes.

Duties and Obligations

Transcript of Duties and Obligations

Unedited audio transcription from Google Recorder.

Hello, everyone. I'm Stephen Downes. Welcome back to Ethics, Analytics and the Duty of Care. Today we'll be talking about obligations and duties. This is part of module four, which looks at ethical codes. And the idea here today is that we will be looking at the question of who we owe obligations and duties to, with a focus on ethical codes.

And, as always, with a focus on analytics and AI, and specifically in the context of learning. It's sort of getting hard to stay zeroed in on our topic, because, you know, we might start looking at learning and teaching, but we end up looking at the broad scope of obligations and duties as they're instantiated in ethical codes across society.

In this presentation, what we're going to look at specifically are the different entities, or perhaps more accurately the different types of entities, to which, according to the ethical codes, we owe an allegiance, a loyalty, or a duty of some sort or another, often instantiated in an ethical principle or ethical code.

The locus of duty is not clear. For example, if a company is skewing the data in order to sway an AI model toward a particular set of outcomes, does the employee have a duty to disclose this to the media? Does the employee have a duty to disclose it to the clients or funders, or does loyalty to the company prevail in such a case?

And as we widen our consideration beyond simple transactions, the scope of our duties widens as well. Our duty to travel to Africa to support a learning program may conflict with the duty to preserve the environment; or our desire to eat meat may conflict with what activists like Peter Singer like to think of as a duty to the environment, or a duty to animals. So as soon as you have a multiplicity of duties to different entities, you've created a whole new range of cases where ethical principles can come into conflict with each other.

So in this presentation we'll look specifically at the different sorts of entities, and we'll look to some degree at the sorts of duties that we might have toward these entities, and talk about where these can be found, or in some cases not found, in the ethical codes that we've studied so far as part of this course.

So let's begin, then, with duties to self. There are different ways of looking at this. Most ethical codes have a principle of conflict of interest, which is to say that we should not conduct ourselves professionally in a way that serves ourselves or benefits ourselves. And yet, by the same token, many of the ethical principles talk about a way of cultivating a better self, and indeed some of them might be thought of as promoting desirable attributes of self.

So, on the one hand, the Nolan principles, for example, make clear that the ethic of a member of the public service is selflessness; but on the other hand, we have a number of associations promoting things like self-knowledge regarding how their own values, attitudes, experience and social contexts influence their actions,

interpretations, choices and recommendations. As desirable attributes of self, we see things like autonomous self-realization, human agency, the promotion of individual capabilities, or participation in programs of professional growth, like in-service training, seminars, symposia, workshops, conferences, etc. And there's a principle, illustrated in the diagram on the slide here, arete, which is a principle of, if you will, "be all that you can be": rise to your full potential. It's an aspect of character ethics, and we'll talk about that in the next module. But the idea is that the ethics promoted by the ethical code includes an ethics that promotes the idea that a professional, or a person following the code, augments their own capacities to the greatest degree possible.

And so we have codes talking about excellence and integrity and the rest of it.

Another group of people who could be considered the object, or subject, of our ethical codes is a group of people we'll call here simply the less fortunate. There are many ways of describing what counts as more fortunate and less fortunate, but what we find is that they are not very frequently referenced in codes of ethics at all.

They show up in other codes: Hammurabi's code includes the edict that the strong may not oppress the weak, and Peter Singer talks about it in his 2009 book The Life You Can Save. But overall, in the ethical codes that were considered for this study, there was basically no discussion of an obligation or a duty toward the less fortunate.

And I think the resistance to considering such matters is telling. I think that the focus here is on the ethics of the person in the profession, serving, if you will, paying clients; and only in some unusual circumstances do those clients ever include the less fortunate. Now, as it turns out, I did not include in my original sample a code of ethics from, say, social work.

Although, as I speak right now, that has been added as ethical code number 74 in the study. Nonetheless, I think there is something that could be said about the lack of attention to the less fortunate in professional codes of ethics.

In academic codes, or codes for teachers, students are very frequently referenced as the object of specific obligations or duties, although this comes up in different ways. One code assigns a three-fold responsibility to teachers in their duties as educators: they are training the individual, perhaps so that the individual can rise to the best of their capabilities.

But they're also responsible for training the worker and the citizen, and this suggests that the obligation toward the student isn't universal. It isn't specifically and only an obligation to the student, but also an obligation to, shall we say, future employers or the country. The National Education Association also focuses on the obligation toward the individual, and here suggests that teachers strive to help each student realize his or her potential as a worthy and effective member of society.

The Open University code asserts that students should be engaged as active agents in the implementation of learning analytics; in other words, they should actually be part of the discussion. And this includes discussion of informed consent, personal learning paths and other interventions. Depicted here on the slide is an image from the Carleton University student affairs office.

And it's illustrative of the idea that the rights of students, and therefore our obligations to students, are often conditional. They're depicted, especially by administrators, not simply as rights but as rights and responsibilities. And, you know, this is a discussion that happens reasonably often in the discussion of rights, where the suggestion is that you cannot have a right without a corresponding responsibility.

And I find very often, when the discussion of an obligation to a subordinate group is raised by the professional association or organization in question, that these rights are represented as conditional, as concordant with a set of responsibilities. So here we have, at Carleton, the various rights, including participation in student associations, freedom of discussion, assembly, confidentiality, the right to a fair process and natural justice, but also individual responsibility and accountability, and then, finally, the student's right to representation.

Along the same lines, we have the ethical obligation to children. Again, we don't see a lot of discussion in the different codes of ethics specifically with respect to children, although it does come up. The Federal Trade Commission, back when all of this was first being discussed in the 90s, noted the widespread collection of personal information from even very young children without parental involvement or awareness. And it's interesting that at the time this was considered normal and not a problem; today, that's not so much the case. One code, for example, has extensive provisions on safety and security for children, confidentiality and whistleblowing, noting specifically that adults have a responsibility to ensure that this unequal balance of power between themselves

And children is not used for their personal advantage. Also point out here. And this one comes from peel that very often professionals will have a duty to report in cases where a children's welfare or a child's health may be threatened.

We also have duties with respect to parents or guardians, and these show up basically in two major ways. First of all, parents may act as a proxy for children with respect to matters of consent. So if there is an obligation of consent, and if that obligation is being applied to a child, then that obligation is also owed to the parent who is acting as the child's proxy.

It also creates responsibilities for parents to stand in as that proxy, though there we're going a bit beyond the scope of this particular discussion; there's a whole range of responsibilities of parents to children that could be discussed in a different context. There's also the idea that the parents especially, or sometimes the guardians, are in themselves a special interest that needs to be protected.

For example, there's an Indian code of ethics that advises teachers to refrain from doing anything that may undermine students' confidence in their parents or guardians. So the idea here is that the professional, or the person following the ethical code, needs to recognize that parents have specific privileges with respect to their children,

or, we might say, specific rights with respect to their children, one of which, stated explicitly here, is to not be undermined, you know, as authorities or as role models, etc. I also think it's worth noting that duties to one's own parents are not mentioned anywhere in the codes of ethics that we reviewed for this course. Not at all, ever.

Another area which is completely lacking in any of these codes of ethics, and I thought I should put it in because it's so prominent outside codes of ethics, is duty to one's family, or perhaps duty to the concept of family generally. Certainly, there's a large category of ethical principles or ethical values related to that. There is a sense among many people, I think, that an obligation to one's family is an ethical principle.

Certainly such an obligation, especially an obligation to one's parents, surfaces in ethical principles advanced by people like Confucius. And there's a sense in which we have an ethical value along the lines of "family first". And again, although I did not see this expressed in many of the ethical codes,

I think that if we considered what a code of ethics would look like for a workplace, it would include something like an allowance for one's own ethical responsibility for one's own family.

A group that is mentioned a lot is the client, and in fact in many ethical codes the first and only duty is to the client. This is especially the case in service professions, such as finance and accounting or legal representation, where the obligation to the client is expressed as fiduciary duties, which are, to quote, "special obligations between one party often with power or ability to exercise discretion that impacts on the other party who may be vulnerable".

In the case of health care, the needs of the client are often paramount. The Declaration of Helsinki, for example, states that "the health of my patient will be my first consideration", and it cites the International Code of Medical Ethics in saying that it is the duty of the physician to promote and safeguard the health, well-being and rights of patients, including those who are involved in medical research.

In the case where there are multiple duties owed, the client may be assigned priority, with other entities receiving secondary consideration. For example, when research and clinical needs conflict, the instruction is to prioritize the welfare of the client. Now, there is in many cases an ambiguity in the concept of client, because the role of client can be divided into different parts.

On the one hand, the client is the person who's paying the bills; the client, in other words, is the customer. But is the duty to the client a duty simply because the client is the one paying the bills? I don't think that that's the case. In many medical situations, for example, we see a split between the role of the patient, who is the client, and the health insurer, or in Canada the provincial health care programs, who are actually paying the bill.

And so here, although it's not always clear, one would assume that the ethical duty to the client in such a case is an ethical duty to the patient, even if the patient is not the one paying the bills. This is something that doesn't just happen in health care; it happens in things like social work or public services to a large degree.

It happens in education, especially in the lower grades, where children are rarely paying for their own education. Parents may be paying for the education, and may thereby be considered the client in part, but in cases of public education, the government is paying the bills. The child is the client.

And then we see a bit of a split between the responsibility to the client, the student, and the payer, the government. And it's not limited even to the helping professions. We see, for example, in media and journalism, that the client might be thought of as the audience, or the client might be thought of as the advertiser, and it depends on what part of media you're working in which of these prevails. In a news broadcast, for example, the needs of the viewer are generally considered paramount, but in social media, as we well know, the client is the person paying the bills, the advertiser, and the user of the system isn't considered a client at all. The user, in fact, is considered the product, their attention being what is sold to advertisers.

In many research ethics principles, the research subject is described as the object of ethical codes. In other words, researchers have an ethical duty to their subjects. We can talk about, and we have talked about, this in some depth already: for example, the principle of consent, the principle of being fully informed of the consequences, the principle of harm not being caused. The research subject has held this role, arguably, beginning with the Nuremberg principles or perhaps even earlier, and this continues through to other disciplines, such as marketing or journalism, where the research subject again is owed ethical consideration.

We see this especially in journalism, where the research subject, if we want to call them that, is the person about whom the journalist is writing. So if a journalist covers a car accident, for example, the victims of the car accident are the research subjects, while the clients are the readers of the story about the car accident. The clients' interest in this case would certainly conflict with the victims' interest: the victims would like some privacy, perhaps, or at least to be treated with respect, while the clients may want to know all the details of the accident. We see this tension between these two roles especially in paparazzi-style journalism, where very often the research subject, the celebrity being covered by the paparazzi, is not accorded any ethical value, and the interests of the clients, or perhaps the funder, are considered paramount. Similarly, in cases of learning analytics, we might think of the students as clients, but they are also very definitely research subjects. And so again, as the Open University code asserts, students should be engaged as active agents in the implementation of learning analytics.

In other words, students as research subjects have ethical standing vis-à-vis the researcher in such studies, and so would be considered to be the recipients of rights, and possibly responsibilities, depending on how we've worded that. Another class of objects of ethical principles is the employer, and here what we have are cases where ethical codes state that the worker or the professional has an ethical obligation to their employer.

This is often referenced as an employee's duty of loyalty, and it's most clear in public service ethics. Certainly I've seen that expressed in our own declarations of public service ethics in the Canadian government. Sometimes, when new governments are elected, they make a specific point of reminding public service employees that they do have a duty of loyalty to their new employer.

The same thing holds true sometimes in the case of ethical codes for professors or for teachers, where there's presumed to be a duty of loyalty, or at the very least affiliation, to their educational institution or to their school, usually with respect to the standing of the school or university in public perception.

The idea here is, you know, don't make your employer look bad; don't make the school look bad. And we see this in private sector employment as well, especially in sectors like IT and journalism, where that duty extends not just to protecting the employer's reputation, but also to things like protecting their trade secrets or other confidential aspects of the work being undertaken.

I think it's interesting to point out that, again, in the ethical codes listed there was no corresponding duty to the employee that surfaced anywhere that I could find. Now, there are duties to employees specified in labour codes, but it should be pointed out that many professionals are employers.

They do hire and manage people like office assistants, lab technicians, student workers, maintenance staff and the like. And so it seems odd that their obligations to their employees would not be included in these ethical codes. What sort of duties would there be? Well, for one example, it is an employer's duty to manage risk in the workplace.

And so the employer should have managing risk in the workplace as an ethical responsibility, one would assume. And that includes things like identifying hazards, eliminating hazards, mitigating those that cannot be eliminated, making sure adequate personal protective equipment is available, making sure there's adequate supervision, and of course making sure that there's training. It's odd that this isn't in the professional codes.

Although, that said, I only surveyed 74 of them, and it may be that there's a large number of codes out there that actually do specify this sort of obligation on the part of professionals. Sometimes in these codes we can see, at the very least implicitly, recognition that a funder may make a claim on the duties of the researcher,

as Dingwall specifically says. Now, the funder is very often distinct from the employer, and very often distinct from the client or the customer. And when we get cases like, for example, government funding, crowdfunding, philanthropic funding, corporate partners, or say venture capital or other funds, the holders of these funds may expect that the recipients of the funds have an ethical obligation to them. And we see this actually implemented in some places. A good example is the recent round of requirements on the part of government funding agencies, including those here in Canada, to the effect that researchers have an obligation to publish their results in an open access publication.

So, in other words, a requirement of open research. Now, whether this is an ethical obligation is something that can be discussed and debated, but it's certainly the case that this obligation was imposed and implemented by the funders in this case. There are other cases where the obligation to the funder may be less clear, although the idea that it exists can perhaps be adduced through some examples where it was kind of ignored. One of the best examples is the case of Oculus Rift. Oculus Rift started off as a crowd-funded company, but then the founders of Oculus Rift turned around and sold the whole thing to Facebook.

This drew the ire of the crowdfunding community, who did not expect that they were funding something that would simply be sold to a private company. Philanthropic funding also imposes requirements. Sometimes these requirements aren't explicitly stated, but may exist sort of as unwritten or unstated conditions of funding. Typically, philanthropic foundations have an agenda, or at least an idea of the sort of research or development project that they want to fund. Very often these agendas are from a particular political or ethical perspective: perhaps, for example, they're seeking to support enterprises that demonstrate entrepreneurship, or perhaps they're looking for enterprises that involve community participation, etc.

And so these things again may be brought forward as conditions of funding. Now, again, whether these are ethical principles, or whether they're simply structural principles, contractual relations between funders and recipients, is something that may be discussed, and certainly not resolved here.

Many ethical codes speak of an obligation to one's colleagues in one way or another. This shows up in a number of different ways. Very often they will talk about colleagues interacting from a position of mutual respect with each other, and we find this in a lot of employer-employee codes as well.

Certainly that exists in our own employer, the NRC. This obligation also exists between the individual professional and the members of the profession, thought of as an association as a whole. So the idea here is that if the majority of the members of the profession follow the standards, the profession will have a good reputation, and members will generally benefit.

So the idea here is that by being a good professional, you are improving the standing of all of the members of that profession. It should also be noted that this ethical obligation is, in a very real sense, self-imposed. And, as Weil says, if a member freely declares or professes herself to be part of a profession, she is voluntarily implying that she will follow these special moral codes, and that's what produces the benefits for the majority of the members of the profession.

Now, this term "stakeholders" is used a lot, not just in ethical codes but in discussions of ethical principles generally: projects, consultations, management practices, etc. The term stakeholder really expands on the concept of the stockholder, and is intended to represent a wider body of interests to which management or a professional might be obligated.

The idea is that it's not only the stockholder of a company that has a financial or fiduciary interest in the conduct of a corporation; other people, or groups of people, also have an interest in the outcome, and such stakeholders could include customers, employees, investors, suppliers, communities and governments, to name a few. One of the things I think it's important to keep in mind with respect to the concept of stakeholders is the sense that to become a stakeholder typically requires that you have some investment in the outcome of whatever is taking place: an investment in the research, or an investment in the code of conduct of the profession.

And that is usually taken to mean a financial interest, and that can manifest itself in two ways: first of all, an actual financial investment that a person or group of people have made, for example as a purchaser, or perhaps as a funder of an enterprise or a professional; and on the other hand, people who stand to earn money or lose money based on the actions of the organization in question.

And that's why the concept of stakeholders is especially relevant when discussing the ethics and the management of public enterprises, that is to say, enterprises that are run by governments, or perhaps non-government organizations, as opposed to companies. In public enterprises there aren't investors in the sense of people who have bought stock in the project or in the company; the investors are much more amorphous groups,

like, you know, the taxpayers or the government. And when there is no direct financial contribution to the outcome, then you need to look elsewhere to find who is impacted financially by the actions of the professional, or the research project, or whatever. And so that's why you turn to a wider concept like stakeholders.

So we see a lot of references to stakeholders in the ethics of artificial intelligence and analytics. For example, Fjeld says the developers of AI systems should make sure to consult all stakeholders in the system and plan for long-term effects. The Open University policy is based on, and I'm quoting here, "significant consultation with key stakeholders and review of existing practice in other higher education institutions, as detailed in the literature". And even one of the DELICATE principles, and we've talked about that, by Drachsler and Greller, requires that researchers "talk to stakeholders and give assurances about the data distribution and use". So again, a stakeholder might be someone with a financial investment. Freeman says it's any group or individual who can affect, or be affected by, the achievement of the corporation's or organization's purpose or performance. We might think of it as the interconnected relationships between a business and its customers, suppliers and employees, or others who have a stake in the organization. But there's really no firm definition of stakeholder.

It does, as I say, tend to lean more on the idea of people with a financial interest, and, you know, there's no good way to say "you are a stakeholder; you are not a stakeholder". Usually, the consultation with stakeholders benefits those with the means and the interest to organize themselves into a group able to represent themselves to this particular company or project.

We also see reference to publishers and content producers. This varies; librarians especially are subject to special obligations to publishers, according to some codes: things like respecting the rights of publishers, making sure that works are properly paid for, and all the rest.

This responsibility is often expressed as a prohibition against plagiarism, and, as you can see on the slide here, numerous codes of ethics have edicts opposing plagiarism. Certainly there has been a concerted effort, especially on the part of publishers but by content producers generally, to make it the case that respect for the producer of content is included as an ethical principle, and specifically the idea that respect for copyright is an ethical principle.

It's not clear to me that it is. It is clear that it's a principle of governance within existing law, and therefore is part of our legal code. But by the same token, it could be argued, and has been argued by people like Aaron Swartz among others, that the imposition of copyright, especially over works intended for research or education, is in itself unethical. So there's a current debate about this. I think that many codes of ethics, those that are not silent on the subject, widely agree that the people who produce content should be credited for it, but they are in no way unanimous on whether any further obligation exists to publishers or content producers.

I've included a reference here to an ethical obligation to specific cultural groups. I've kept that very general, and part of the reason I kept it very general is because this particular type of group appears nowhere in any ethical code. I'm having trouble with my mass nouns here, right?

Because specific cultural groups themselves are groups of people, and then I'm talking about groups of groups of people; it's a bit hard to keep all my nouns straight. But we can talk about this, for example, specifically with reference to, say, an obligation or a duty to consult with indigenous peoples in the conduct of research, or in the application of analytics or artificial intelligence. Illustrated here is a set of consulting requirements for projects involving Aboriginal people in Australia and New Zealand. This would include things like prioritizing their interests, honouring the evaluation results, prioritizing community interests in the project and evaluation plan, securing and honouring community buy-in, etc. And we have, or rather, I think, are in the process of developing, similar requirements here in Canada.

And there is current debate on that, which is why I have a reference to the Fraser Institute on assessing the duty to consult indigenous peoples here in Canada. But the list of specific cultural groups is much longer than simply indigenous peoples; specific cultural groups could be widely construed. For example, an ethical code could say that professionals or organizations working in analytics and AI have a responsibility specifically to women in particular, as opposed to the same sort of responsibility to women and men generally, or perhaps to visible minorities, or perhaps, as has been raised on a number of occasions, to specific cultural groups like linguistic groups.

I read an article just the other day on Unicode saying, essentially, you know, I can draw a pile of poop on my computer, but I can't type my own name. That creates a representation problem, obviously, in analytics and AI. And so, arguably, ethical codes could conceivably include a reference to an obligation to different linguistic groups as well.

And this has come up in a number of items outside of these ethical codes: reference to specific religions or religious practices, either in terms of respecting the values of the religion or in specific ways of handling data that's collected from religious groups. Should, for example, images be taken of groups that have an objection to their images being collected? Should the names of people who are now deceased be included in databases, for groups which no longer speak of or name the people who are deceased? Etc. Again, a range of things could be discussed here, and it's worth noting again: none of this appears in any of the ethical codes that I studied.

Definitely something to think about. References to a responsibility to society as a whole are scarce in the ethical codes, but they do exist. The British Educational Research Association specifically argues for a responsibility to serve the public interest. The Nolan principles, which apply to public employees, state that holders of public office are accountable to the public. Two of the Computer Ethics Institute's Ten Commandments recommend that computer professionals think about the social consequences of their research and ensure consideration and respect for other humans.

And we can think about a wide variety of types of responsibility and obligations to society as a whole. In other areas it's talked about more explicitly under the heading of corporate social responsibility, and includes things like health and safety, quality, teamwork, integrity, professionalism, etc., all the usual. And I don't have the diagram here, but I thought about including it; it could instead reflect, for example, adherence to the United Nations Sustainable Development Goals, of which education is one: it's SDG 4. And there are a number of other principles talking about human well-being as a whole, environmental protection, and so on.

Again, it's rare to find any reference to social responsibility in these codes of ethics, and again, that's telling. I've looked for these together. In my original draft I had simply 'law and country', and I decided to include 'God, law, king and country' to make it a bit more inclusive. Again, these are rarely mentioned in codes of ethics.

Although I do remember, when I took my Boy Scout oath, promising loyalty to God and the Queen and country, but that was many years ago. Some codes specifically state that people have an ethical obligation of respect for the law. The AITP code, which was cited in EDUCAUSE Review as recently as 2017, used to state: "I shall uphold my nation and shall honor the chosen way of life of my fellow citizens," which sounds like truth, justice, and the American way, although even Superman's slogan has been revised away from reference to a specific country. Otherwise, we don't see that so much.

There is no reference to God or religion in many of the ethical codes, except with respect to, you know, the promotion of diversity and equity, etc., in other words, not favoring one religion over another. Again, though, I only looked at 74 of them, and there are no doubt many organizations or many places around the world where these may explicitly be part of an ethical code of conduct. Similarly with king, or queen, or sultan, and country.

And finally, the environment. Once again, the environment is very rarely mentioned in any ethical codes. The Association for Computing Machinery talks about obligations to society, its members, and the environment surrounding them, and the AI HLEG, I forget what the name stands for, talks about the obligation to social and environmental well-being, including sustainability and environmental friendliness, social impact, society and democracy.

Other than that, it's just not there, which again is a bit surprising, particularly considering the number of environmental issues that are becoming more prominent in today's society. Here I'm thinking not only about things like the pandemic and other direct and immediate threats to human health, but also, of course, the ongoing issue of global climate change, the degradation of the environment, the consumption of resources, desertification, and additional problems: the extinction of species, and here I might cite Peter Singer again, and the general well-being of the environment and all of us who dwell within it. There is virtually no discussion of this in the ethical codes. So I think the ethical codes offer an interesting perspective on who we are obligated to and how we are obligated to them.

I think that it is probably a narrower perspective than we might think. Our ethical obligations apply generally; it could be said that these ethical codes apply to professionals only insofar as they are professionals, but that allows for a separation between professional ethics and personal ethics, such that a person could behave in a manner that person would consider unethical under the auspices or protection of the status of being a professional. That raises questions with respect to accountability and responsibility.

A person might destroy the environment but say, well, it's fine: I'm personally in favor of the environment, but professionally I have no obligation to protect it. And that seems to me to be a bit problematic. So while it certainly is the case that a study of analytics and artificial intelligence ethical codes in particular, and ethical codes in general, provides an insight into the thinking, and especially the ethical thinking, of professionals in the field.

It is also the case that when we study these, we see not only the sorts of things they agree on and disagree on; here, when we look at the objects of ethical obligation, we see fairly significant gaps in their ethical coverage. And with an all-encompassing technology like artificial intelligence and analytics, it seems to me that these gaps are in many cases where some of the most significant ethical implications are going to arise.

Certainly, we cannot depend on what has been done so far when talking about the ethics of learning analytics and artificial intelligence; the coverage is simply not complete enough, the considerations simply not broad enough. That's it for this presentation. I'm Stephen Downes, and I look forward to talking next time. The next presentation, the last for this module, is a look at the bases for the values and principles that are found in the ethical codes.

In other words, what sort of reasoning, insofar as there is any reasoning at all, is applied in the creation and justification of the ethical codes? It should be a good and interesting discussion. I look forward to it. Until then, I'll see you later.

Module 4 - Discussion

Transcript of Module 4 - Discussion

Unedited audio transcription from Google recorder.

There we go. Now I'm recording audio, and of course we're streaming live video as well, so you should be aware of that.

And Mark, welcome again. Hope you're unmuted. I think you're unmuted. Yeah, you're muted. Oh yeah, okay, let's crank up the volume a wee bit so that my recording can pick it up.

Just in case you're wondering, I do use the audio recording like this. The reason why I don't just simply record the Zoom presentations, although I suppose I could, is that I'm live streaming to YouTube, so there's no need; YouTube will capture the video. But I record audio, first of all, to have a backup just in case, but secondly because the Google recording also produces a transcript as I go along, and what that allows me to do is, almost immediately after our session, post the audio instead of having to wait for Google to produce the video.

So I can download the audio and post it right away, and I can post the transcript right away, and that lets me get the full spread of the content out as rapidly as possible. And I'm working on the assumption that there are other people out there who are following along with what we're doing, not taking part in our video conferences, and that's actually borne out by some of the things I see. So yeah, unless you've got your whole family watching, somebody is definitely following this. No? Yeah, yeah. I see some evidence of, yeah, somebody else watching.

Yeah, I think some people are indeed watching, which is good, although, you know, for every live session I always have a presentation in the can just in case nobody shows up to have a conversation, so that I have something to record. But the result, with the focus on video recording for this course, which is a bit unusual for me, well, it's a lot unusual for me, the result is I'm producing a ton of video, probably too much for most people to wade through, as, Jim, you're discovering partway through. But again, my longer term plan isn't to have a course that is just a whole bunch of videos.

You've seen already some of the graphing stuff that I'm trying to do with this, and as well, I'm trying to take the process of recording the videos as a way of reproducing all of my texts for this course, with an eye to taking that text, taking the transcriptions, cleaning them up, and then assembling them into a longer work after the course.

Oh, and there's Sharita. And so we've got a season high: four people. So this is awesome. So that's the other thing in the back of my mind: I'm thinking, well, maybe this will be one of those courses that gets more and more popular as it goes on. That would be pretty funny.

So hi, Sharita. Welcome. So we've got Jim, Mark, Sharita, and, of course, myself; that's for the benefit of the people listening on audio. It's interesting: with all these different recording formats, I always have to make sure that the different formats are supported. So, go ahead. It's an app that's produced by Google, and it's the reason I bought a Google Pixel 4 rather than a Samsung or an iPhone or anything else.

Because I knew that Google was including this app on the phone natively, and I knew that it would probably be pretty good, and it has turned out pretty good. You know, the transcripts? Yeah, it does it all. And what I like is, when you save the recording, it offers the transcript, and then it offers to transfer it wherever you want, like to your laptop or to your desktop.

Yeah, or to a Google note in your Drive, with one click. Yeah. So it's very easy for me to make use of these recordings. The same with the audio as well: it saves the audio, and I can open it up right away in Audacity, and then save it in Audacity as an MP3 with all of the metadata, put the metadata in, for posterity.

And just for the record, we're up to 29 separate recordings. That's a rough count; it might be plus or minus three or four, but my numbering is up to 29. But, yeah. So again, it's funny, because this process has been so easy that I've been forgetting that I'm recording.

And I've had more than a few occasions where I've created three or four hour videos. You know, I'd finish my presentation or whatever, and then the video is me eating, me watching Colbert on YouTube. It's horrible. So I'm trimming those, of course, trimming them down. But in one case yesterday, I deleted a four-hour video because I had accidentally started it up again after finishing a presentation. It was my lunch, and then I was gone for a while, and I came back and it was still recording: a couple hours of an empty room.

So this is all streaming now on YouTube, all streaming out on YouTube Live. It's okay; I just set the stream out late, and everybody can still see them all. But that video, so, yeah, it's gone now, I deleted it. Although, well, actually it's not entirely gone, because I was also recording it locally.

I use something called OBS, Open Broadcaster Software, when I'm doing my presentations. In fact, I'm using it right now, even, so here you all are. So that's what I'm seeing, and I can put myself into what I'm seeing, I can put myself as a picture-in-picture, I can do TV style, like in the news today.

Full PowerPoint, and you see it; like I said, I have a PowerPoint presentation in the can just in case nobody shows up. Here's the normal view that I use when I'm presenting PowerPoint. I even have one for the terminal, although I don't have a terminal open at the moment. And so, yeah, anyhow, I use OBS, and when I'm doing the presentation I turn on not just the streaming but the recording, because that's what I use when I'm not in Zoom: I use OBS to stream to YouTube. So I actually have a recording of that four-hour session on my computer, and a few of these others.

Sorry, it could be an Andy Warhol strategy. I was thinking, yeah, like releasing a video: Stephen Eats Lunch. Stephen's Empty Chair as well. Stephen's Empty Chair, yeah. Stephen Watches Colbert, you know, and watches Warhols there. Yes. That's how they'd do something. Yeah. I have to confess, I do most of this after I go to bed, that's on my phone, and only this morning is the first time I actually had a keyboard to alt-click and drag a connection.

So that's the nursing one, done just before this morning's session. But yeah, I frequently have to rewatch part of it, and then I was listening to an audio, it must have gone on to play something else, and I woke up just a little after midnight to Mark talking, from the conversation before.

So that's my process: trying to catch up to this with all the other things during the day. I want to thank my teaching and learning team, who shifted our staff meeting so I could be in this this morning. Oh, that's nice. So how have you been finding that graphing task?

Not fun. Not fun. Because, for some reason, I maybe physically haven't learned how to connect properly, because I keep on trying to do it and it doesn't quite work. What's happening instead? Well, the thing that happens the most is I can't get a line.

Okay. Yeah. And that's probably, you actually have to, and it took me a long time to figure this out as well, you actually have to click on the round node, or in some way highlight it, then press alt, then drag your line. It would be more intuitive, and if I can figure out how to rewrite the code to alter this, I'll do it.

I'm borrowing, of course, Matthias Melcher's code, because it's a pretty complex piece of work, as I'm sure you can imagine. But if I can fix the code so that it detects when you're hovering over a dot, then if you just click your mouse and drag, that should create a line. Right now, if you do that, it moves the dot, which is so annoying, right? So what I want is, instead of moving, I want it to draw a line. I should be able to just flip those; the instructions are in there somewhere, but I have to find the instructions first. It's on the list of things to make better, because I think it would be useful.
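The change being described, hover-detect the dot, make a plain drag draw the connecting line, and reserve a modifier for moving the node, boils down to a small decision function. Here is an illustrative sketch only: the real tool is Matthias Melcher's JavaScript code, and all the names here are hypothetical, not taken from it.

```python
# Illustrative sketch only -- the actual graphing tool is Matthias Melcher's
# JavaScript code, and its internals may well differ.

def drag_mode(started_on_node: bool, alt_held: bool) -> str:
    """Decide what a mouse drag should do in the graphing exercise.

    Current behaviour: a plain drag on a node moves it, and you need alt
    to draw a line. The proposed fix flips the default: a drag that starts
    on a (hover-detected) node draws a connecting line, and alt is
    reserved for moving the node.
    """
    if not started_on_node:
        return "pan"  # drag on empty canvas: just move the view around
    return "move-node" if alt_held else "draw-edge"
```

One side benefit of making plain drag the edge-drawing gesture is that it no longer depends on a modifier key, which phones don't have.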

It's a useful exercise, too. Oh, I've got no problem with the exercise; I have a problem with the execution. Yeah, right. I thought it really interesting to try and figure out. You're looking at that nursing program code: nothing about facial recognition, say, nothing about surveillance. And really, it sort of mentions some of the other ones, but they're kind of implied.

So yeah, and that's an interesting thing to observe, because it shows that some of the associations we draw here are very much a matter of interpretation, you know; it's us reading the code and seeing that content in the code, and different people might interpret it differently. Evidently. Probably, what? That's why I wish I had like a thousand people doing these graphs. And, you know, my intent is to keep this up over time and see if people do indeed create these graphs, or even try to just push it out into social media like a little micro-exercise, just on Twitter, you know: click here, do the graph, something like that.

I think that might be interesting, except most people do Twitter on their mobile devices, and I don't know if you can do the graph on a mobile device. I don't know either, actually. Yeah, I didn't find any way to do that; I looked at it for a couple minutes actually sitting here today. It's good work, you know. I'm thinking, because you made this course without registration, I can take people from our college, or from our own learning, and look at a particular aspect when we're considering something about ethics. So I have something, and we can go there together. Absolutely. Yeah. And that's one of the advantages of not requiring registration: anyone can jump into any part of the course, including the tasks, and try anything.

I'm just trying to see if it does work here. Too much for your ego, though, Stephen: if you don't know how many people, you can't prove how many people were there. Yes. Actually, I did a presentation on setting this up without registration a couple of weeks ago, and it's one of the things I mentioned: you know, it's kind of a humility check. It keeps you humble. Now, all right, all I need to do is click on a code here. So I'll click on a code, and now the graph shows, okay? So, all right, I'm in the graph, but, yeah, first of all, it's tiny.

There's no alt button on a phone, I guess. Yeah. Can I even drag them around? Whoops. Maybe I can't even do that. I can't even do a shift-enter when I'm typing text in Google. Yeah. Oh, and I can't. Okay, there we go, I'm making it bigger.

Still working on this. It's riveting video, isn't it? I actually have a tool that allows me to share my phone screen with viewers online, so if I was desperate to do that, I could. Now I've just scrolled this, this is what I got. Now I've scrolled it right off, and all I have is a blank window, so I'm gonna call that a fail for now.

But if I could figure out a way to make that graph, first of all, just show up full screen on the phone, that should be doable, and then fix that alt button thing, then you should be able to do it on your phone, and that would be pretty cool.

And then fire these things off on Twitter: say, okay, here's today's ethics exercise, try this out. And maybe collect a few thousand of these. Excuse me, I'll be on mute while I dictate some replies and text to a colleague. Oh, okay. Yeah, like that: dictate some replies. I love artificial intelligence.

I really do. I shouldn't say that, I suppose, with all the ethical issues. But I mean, I'm using it half a dozen ways in this course alone, just to make it work. And, you know, I use it to create the content, I use it for photos, to clean up my photographs.

I guess I'm not using that in this course, but I use auto-translation. I was at one point just going to go through a list of all the different AI applications that I use, but I never did get to that. That's another exercise; maybe I could retroactively add extra slides to the course for previous modules. Do we even know that we're using AI? Sometimes. A lot of times we don't. I'm certain that when we go into Twitter or Facebook or whatever and look at the feed, that's being created for us by an AI. There may be an AI in our car; certainly in my car there is, because I have adaptive cruise control, and I also have that thing that keeps you between the lines, which it does, but it does it like this: it keeps trying to move to the edge, and it just weaves back and forth. That'll get you pulled over. Yeah: it's my AI, it wasn't my fault, don't blame me; the person you want to talk to is in Japan. Which gets to, in one of your videos: who's to blame?

Who is who's responsible, who's accountable? Yeah. Who's responsible who's accountable order? I realized two very different questions and there were, they are two different questions and you know, accountability is a tough one. I was sitting in on a I triple e, special interest group. It was he 1, 0 0, 7 or something like that, but basically for ethics in artificial intelligence and there was a person in that group maybe eight zero zero.

So I can't remember; I still have references to it. There was a person in that group who basically was trying to push the line that, at a certain point, these artificial intelligences are autonomous, and therefore the responsibility for what they do is separated from the person behind them, because, the reasoning goes, you can't be responsible for the actions of something that's acting autonomously.

And so, for this particular committee, there was a push to, you know, define what they mean by autonomous, define the scope of responsibility around autonomous agents, to basically make the AI responsible. And now we're getting too close to the singularity for comfort. I pushed back, as I'm sure you can imagine, because, yeah, I think that certainly for the foreseeable future, responsibility ought to reside in a human and not in the AI or the autonomous agent. Otherwise, that would allow you to, you know, put machine guns on those robot dogs, send them out into the community, they shoot whomever they will, and they are responsible, not you. That seems wrong to me.

And I think it seems wrong to most people, although apparently, based on the discussion in this group, not everyone. So, you know, once again we're running into this barrier against consensus. Some people think that, no, really, if it's an autonomous agent, you shouldn't be responsible for what the autonomous agent does. But there's plenty of parallels, right?

What about your children? Parents are responsible for what their children do, to a point, though. All right, and it's not equally applied in all societies. And, you know, I think the point of view of some people, and to a degree I agree with this, is the edict that children will be children, right?

Parents can't control children all the time, and children are going to do stupid things, and it's unreasonable to hold the parent accountable when a child does a stupid thing that the parent really had no way of controlling for or preventing, particularly if the consequences are, you know, really expensive. You know, a child who's just wandering around the neighborhood, as children do, gets into a bulldozer, turns it on, and plows through a house. It's hard to say that the parent ought to pay for the house. You would almost think of that as more like an act of God than an act of parental irresponsibility. It seems that way to me.

Anyways, I'm not sure there would be unanimity on that. I don't know, what do you think? That's Stephen frozen. It's usually me that freezes. I thought I froze? I froze. Oh man. It says my internet connection is unstable. That's annoying. Am I back? Yeah, yeah. Okay. How could my internet connection be unstable?

Well, it's probably downloading something in the background. I got Windows 11 over the last weekend, and not everything is comfortable the way it should be, but I'm not uploading anything. Thank you for leading the way in that and thinking about this, because I'm ignoring that blue button on my...

Yeah. It's, you know, I mean, it's okay. But I am noticing some things, like the PowerPoint that played audio on me. That was pretty weird. I noticed that yesterday, in yesterday's recording. Yeah, it was, because I watched the recording afterwards, after stopping the recording and looking. Yeah, there's no sign of where the audio is coming from, and you're not expecting it to be embedded in the slide.

Yeah, it's got to be the slide; it came along from another presentation. Yeah, just totally unexpected. I should have expected it, but I didn't. The oddest thing with Windows 11 is the keyboard. It doesn't change the keyboard, but, I think, when you type on a key in a computer system, what happens is your operating system does what's basically called a keyboard scan. It's basically constantly watching for you to press a key, and then it scans what the input from that key was, and then it processes that input. Windows 11 does a lot more processing of that input than Windows 10 did, a lot more, which I find a bit suspicious, because, you know, key logging and key tracking are the sorts of functions that sometimes happen in the whole key scan process. But the effect I've noticed is that it often misses when I capitalize a word, because, you know, I type fairly quickly, and, yes, the caps key is down when I press the letter, but if it takes half a second, or a few milliseconds, to recognize that the caps key or the shift key is down, right?

I might press the shift key and press my letter, and then move on too quickly for it to realize that I pressed the shift key when I pressed the letter key. That seems to be what's happening. Very annoying. Okay, let's circle back to the topic, speaking of which.
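The keyboard complaint above is a timing race: if the system spends longer processing the shift press than the gap before the letter arrives, the letter comes out lowercase. Here's a toy model of that race; it is purely illustrative and is not how the Windows input stack actually works.

```python
# Toy model of the missed-capital problem: a modifier key only "counts"
# once the OS has finished processing it, so a fast typist can outrun it.
# Purely illustrative -- not an implementation of real Windows input handling.

def type_events(events, processing_delay_ms):
    """events: list of (timestamp_ms, key) presses in chronological order.
    A shift press takes processing_delay_ms to register, and here it
    applies only to the next letter typed."""
    shift_at = None  # time the shift key went down, if pending
    out = []
    for t, key in events:
        if key == "shift":
            shift_at = t
        else:
            # shift counts only if the OS registered it before this letter
            shifted = shift_at is not None and t >= shift_at + processing_delay_ms
            out.append(key.upper() if shifted else key)
            shift_at = None
    return "".join(out)

# Shift at t=0, letter 20 ms later, as a fast typist would type.
fast = [(0, "shift"), (20, "h"), (200, "i")]
print(type_events(fast, 5))   # light input processing: "Hi"
print(type_events(fast, 50))  # heavy input processing: "hi" -- capital lost
```

The same keystrokes produce different text depending only on the processing delay, which is exactly the symptom described.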

Another way to look at it, yeah, is corporations: human creations, like artificial intelligence. That raised a big red flag while you were talking, because, at least in the United States, corporations have rigged the legal system to absolve them of almost all responsibility. Yeah, that's true. And it seems to me that would be the first thing: if I had a corporation that was building armed quadrupeds, I would reach for corporate law to absolve myself. That's a really good point, and not very comforting. Yeah. And actually, if you think about it, technically you could create a corporation that actually is your autonomous agent. That's what I was thinking. Yeah, each agent would become a corporation.

Yeah, with all the important parts. Yeah. And really, the only barrier to that is the cost of incorporating; you just go to Delaware. They aren't that expensive. The thought came and went, but yeah, that's not very comforting, and I find the whole thing, but it is, it is so powerful. And, interestingly, let's point it out: at times it can consider so much more data than even a large think tank of humans could, and correlate it accurately, so that there's no way people are going to abandon it just because it might get it wrong. There might be some collateral damage. Stephen, I was gonna ask, in that discussion about who's responsible, and humans not being responsible for AI, was there any talk of consequences? I'm not sure what you mean when you say that. Okay, so if I do something wrong, if I speed, the consequences are I get a speeding ticket.

Yeah, I can't say it was my self-driving car that did it. If, as I wrote down here, AI responsibility is separated from the person behind it, then if the AI has that responsibility, that accountability, was there any talk of what the consequences would be for an AI that caused harm? No, there was none. Presumably the consequence would be that you shut off the AI, so in essence the AI death penalty for every offense that exists. But other than that, we'd just reboot the AI. Because, yeah, well, AI application number seven is a different corporation. Yeah.

I'll be an advocate. I'm in the United States, I mean California, and I've been a Quaker from where I grew up, and since the 70s I have of course been against the death penalty, except in one instance, and that's for corporations. Since corporations are a human creation, I have no problem with the death penalty for corporations: created by law, they can be killed by law. I disagree with killing humans by law. So that's always been my bright line, and that position would be consistent with, yeah, corporate AI. So that was pleasing, that consistency, but it's still terrifying in this application. And I'm just imagining that there are some of those quadrupeds patrolling somewhere in the world right now. I can't remember where, but I'm certain I saw something like them at an arms fair, and they're patrolling somewhere, and like Jim said, there's no going back. And the thought also is that the law is always behind technology, and as the technology accelerates, the law falls further behind.

Yeah. So this looks like a legal problem, then, more than an ethical problem. Interestingly, a concept that comes up not only in ethics but also in law is the concept of intent. That's used to distinguish an act of malice or malfeasance from an accident.

And we can imagine some of these. You know, we'll use robot dogs with guns, because it's a good test case, unlike Sharita's dog, which is unarmed, we hope. We can imagine such an autonomous agent accidentally shooting someone; there's a wide variety of ways that could happen. It could be hacked, in which case the responsibility lies somewhere else.

We don't know where. It could just bump against something and accidentally go off; I mean, that happens to humans all the time. It could be aiming for something, you know, aiming to disarm the opponent, but its aim isn't very good. Or it could have just been deployed carelessly: without being intent on killing anyone, you know, they didn't really take precautions, and it did. How does that affect our considerations?

At the risk of being targeted, I would say: I believe that if you have a corporation that produces weapons, the weapons have a purpose, and it's to kill. And I don't understand why corporations that produce weapons are not responsible for the properties of what they produce.

I don't understand it; that doesn't make sense. Again, that's interesting in the sense that such a principle could be applied to weapons manufacturers today, such as gun manufacturers, and yet I haven't actually heard of a case in which a gun manufacturer has been held accountable for a gun death. I've heard of people suggesting that as a means of addressing the problem of gun deaths.

But I haven't heard of a successful action, nor, to my knowledge, has one even been attempted. Yeah, in the US, again, I'm in the dark there. Yeah, for sure, the NRA is a very powerful lobby that would fight that. And against that, my mind goes to the Winchester mansion, where someone did, yeah, assume some accountability, or responsibility. But there's also a principle in law, and I know it applies in Canada and it probably applies in the US as well, under the heading of what they call man traps.

If you set a man trap, you know, generically a human trap, on your property and it kills someone, you are legally liable for that death. And that's why you can't set up booby traps in your house. Well, you can, but you shouldn't. But what about some of the other ethical values?

Let's see now, because there's more than just accountability that we need to consider. Let's see what we've got here. Pursuit of knowledge: do you think that's an ethical virtue or an ethical value?

Kind of a puzzler; I'm having a hard time connecting it to ethics. Unless, you know, there could be ethical motivations for the pursuit of knowledge and unethical motivations for the pursuit of knowledge. Interesting. Well, I mean, think of it in the sense of an ethical code, and the ethical code is describing what is ethical to, say, a research professional.

And the thing that is ethical, or what makes it ethical, is that it is in the service of the pursuit of knowledge. And if we think of that, and this comes up on research ethics boards sometimes, where there needs to be a purpose to the research that's being undertaken, you know, you're not just asking questions of people or taking samples or whatever just for fun; you're doing it in the name of the pursuit of knowledge, and that's what makes it good.

Do you think there should be a special case for actions that are undertaken in the pursuit of knowledge?

As opposed to being in pursuit of a cure? Well, or as opposed to curiosity. For example, in the early days of the world wide web, there used to be random web page browsers; they were actually on search engines, and you just clicked on the button.

It would send you to a random homepage, and I would just sit there clicking that button over and over again, looking at all of these different homepages. So I'm collecting data, arguably, right? But I'm just doing it for fun, you know, for jollies. Now suppose there were ethical implications to me looking at that data.

Suppose, instead of looking at web pages, I'm looking at individuals' personal health data, and I'm hitting that button and looking at one person's health data, then another person's health data. And I'm not trying to find out anything; I'm just looking at it for fun. Do you think that's something different from looking at individual health data for the purposes of research, or pursuit of knowledge, or discovery of new drugs?

Well, when you put together a research proposal, one of the things that you look at is the reason for doing it: is the reason for doing it, you know, a benefit to somebody? Yeah. Right. So that's one way, you know, that's what happens in terms of research.

Mm-hmm. You know, does research get done just for the hell of it? Sure it does. But you don't put that in the proposal; you don't tell the ethics board that. Yeah, that's pretty interesting, because I agree, a lot of research is curiosity driven. Oh, absolutely. You know, an advisor in my master's program did her PhD research on how researchers end up doing the research they want to do, no matter what they say.

Yeah, they find a way to do the research that they want to do. So curiosity, you know, we talk about curiosity in education as being the great motivator that gets people engaged. And so, curiosity, pursuit of knowledge: what's the difference? Sorry, I think I may have cut you off.

No, no, it's fine. But that brings up ethics on a couple of levels: there's the ethics of the curiosity, then there's the ethics of crafting the proposal to accomplish the goal despite the ethical limitations, and possibly covering that up going forward. I was just looking at a case.

So, I think it's on point here. This morning: there's a doctor in China, an American-educated doctor, He Jiankui, my Chinese pronunciation is terrible, who's in prison in China because he genetically modified embryos at risk of HIV infection and then implanted them in humans. And there are two children in China that were genetically modified before they were born. And he faked his proposal to get it through the review board, and actually, what he's imprisoned for is fraud, faking the signatures on the approval. So that is what he's in prison for.

But what he's done is, he's released these two modified humans into the world and violated all the principles. Yeah. Yeah, it's interesting that they thought the fraud was the thing that they should imprison him for. Well, we get into the law, right? Yeah. There's no law against releasing genetically modified humans.

Yeah, so they got him on what they could. But, you know, the case is certainly curious. Yeah, and I think, you know, the other side of that too is, that's the pursuit of research, and the other side of the pursuit of knowledge is for the purposes of education, for the purpose of teaching people.

And I think that people do draw a line between what is allowed for the purpose of education and, again, what is allowed just because you think it's fun. Copyright infringement is a classic example: it's explicitly stated in US copyright law that educational use is one of the purposes that can qualify your use as an instance of fair use, and a similar provision in Canada applies to fair dealing. But it's not so clear that this applies to all things.

For example, saying something offensive in your class maybe in the past might have been justified for the purposes of education, but has recently resulted in professors or teachers being suspended or fired, because it's no longer viewed as acceptable for the purpose of education. And so, you know, it's interesting here, and we go back to intent, right?

The purpose of your action does seem to play a role in the moral value of your action, and we see that a lot throughout ethics. Although, ethical codes, just thinking about this out loud right now, don't seem to clearly draw out that distinction between the purpose, or your intent, and the result. In fact, at the end of the video on values, you know, I had just done this hour-long video and I'm sitting there reflecting on it live while I'm doing this.

Because, you know, I realized, or came to the thought, that the ethical codes, the way they describe what is ethical and what is not ethical, are very focused on outcome and process, as opposed to, say, intent. And it felt at the time, after spending an hour going through all of these things that the ethical codes found valuable, worthy of value, that it all felt very technical and mechanistic.

And so that approach was kind of technical and mechanistic, as opposed to perhaps a non-technical or non-mechanistic approach that might take into account intent, might take into account feelings, although it's hard to explain how you can take that into account. There are a couple of things: like when we're talking about non-maleficence, what counts as a harmful act isn't just described in the act itself, but in how people react to that act, how people feel about that act. What counts as harmful, you know, isn't just your opinion as to whether something is harmful.

It's also the opinion of whoever you've harmed, and different people feel harmed by different things, and that seems to be more on the human side of it. That's what I felt, anyways. You know, we've been doing this ethical codes thing now for four weeks; do you feel, do you sort of see that distinction coming out?

I was surprised, I have to say, this week, that the ethical codes were restricted, that's not the right word, but that they were from professional organizations and limited, because I take a broader approach. And so what I'm wondering, hoping, in the future actually, is that we can take your graph, and the way you distilled out use cases and applications where they were called for, I'm wondering if we can distill out

more of the values, the mores, that obviously underlie these professional codes, because that's what I found missing in them all. Yeah, they've been very technical, really strict in their organizational definitions. And what I found missing were values. And you can say, well, they're all Western, right, tied to this or that association or whatever, you know?

So you can say, well, they're based on that Western ethos, and I think that's probably true. I do want to make my list of codes more broad; I did try to include international sources as much as I could, but my knowledge of them was obviously limited. But these core values and priorities that I talked about in that video, that's what these codes contain.

That's what they say are the values underlying these codes. So if you were to be looking for these underlying values or mores, this is what an analysis of these codes finds, you know. Now, I have another video upcoming on the bases for these values and principles, right?

So we've got this long list of values, and what sort of reasoning underlies that list of values? And so I look at things like universality, the idea that a principle should be universal; fundamental rights, natural rights; and, for example, fact, which is an interesting thing. You know, it's a fact that if you blow up an atomic bomb in a city, you will kill most of the people in the city.

Simple fact. The question of balancing risks and benefits is something that comes up a lot. Social good or social order comes up, but not nearly as much as you might think; it comes up in the sense that the professions believe that the practice of the profession contributes to the social good and the social order.

So it's sort of a what's-good-for-us-is-good-for-society sort of approach. Fairness comes up a lot, and we're going to talk a lot more about fairness. I actually listed it as one of the values, but it's also a value underlying a lot of the values.

Another factor is epistemology: what we can know, what we can reasonably expect to be the case. You know, you can't be responsible for a bad consequence if you could not have predicted this would be the bad consequence. It's like the butterfly effect, you know; you can't be held responsible for something

that's not reasonably expected. You kill a butterfly and the civilization falls a few years later; you weren't responsible for the fall of that civilization. I think trust comes up a lot, you know, mechanisms for obtaining trust, keeping trust, the need for trust for society to function. And then finally, defensibility:

can you make an argument for this value or that value? So, those are the things that underlie these ethical codes, and the study of these codes in and of themselves doesn't get any deeper than that. And my own considered conclusion, and it only gets reinforced

the more I look at these codes, is that there is no set of values, or even bases for these values, that is common across all of these codes, or for that matter common across society, much less common across global society. I mean, even inside fairly cohesive societies we don't see this commonality, and I know a lot of people say, yes, there is this commonality, but if you actually look at things like ethical codes, it's not there.

That's why I put in that Fjeld study, I don't know if I'm pronouncing the name properly, but I really, really wanted people to see that, because this is one of these things that suggests that, oh yes, there is a commonality. And in fact, I'm gonna share my screen here. Right, screen.

There we go. So this is that analysis, Fjeld, or maybe it's pronounced "yeld," I'm not sure, and some others. So you have all of these ethical codes around the outside, and then the key themes, and they go: human rights, human values, professional responsibility, human control of technology, fairness and non-discrimination, transparency and explainability, safety and security, accountability, privacy.

If you look at this chart, it looks like, oh yeah, everybody agrees with these things. Right? Now, they've only studied codes of ethics or statements of principle for analytics and artificial intelligence, so that's one thing, and that's kind of what prompted me to look at other disciplines. But we scroll down a bit, and these are the ethical codes that they studied.

So, a pretty respectable list, and many of these are in our list as well. But if we come down here and look at one of them, accountability, which is something we've already discussed, the consensus, as expressed by these numbers, doesn't exist. Let me just, can I make that bigger?

Sure, I can. So look at this, right? For accountability: verifiability and replicability, is that part of accountability? 36% say yes. Impact assessments, is that part of accountability? 53%. Environmental responsibility, only 17%. Ability to appeal, which we hear about a lot, 22%. Remedy for automated decision, covered in only 11% of these ethical codes. Liability and legal responsibility, which we talked about, 31%. And even just accountability as accountability:

only 69% of these codes. So there is no consensus. Even in the places where they say there's consensus, there really is no consensus. And those sorts of results, I mean, they go through all of those areas, you know, all of those value areas of interest, and the charts are the same for each one.
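The kind of per-theme tally that chart reports can be sketched in a few lines of Python. The code names and theme assignments below are made up for illustration; they are not the study's actual data:

```python
# Hypothetical data: which accountability sub-themes each ethics code mentions.
# Code names and assignments are illustrative, not taken from the study.
codes = {
    "Code A": {"accountability", "impact assessments"},
    "Code B": {"accountability", "verifiability", "liability"},
    "Code C": {"impact assessments"},
    "Code D": {"accountability", "ability to appeal"},
}

themes = ["accountability", "verifiability", "impact assessments",
          "ability to appeal", "liability"]

# Percentage of codes that mention each sub-theme.
coverage = {
    theme: round(100 * sum(theme in mentioned for mentioned in codes.values())
                 / len(codes))
    for theme in themes
}

for theme, pct in sorted(coverage.items(), key=lambda kv: -kv[1]):
    print(f"{theme}: {pct}%")
```

Even in this toy example no sub-theme reaches 100%, which is the point of the chart: agreement on the word "accountability" masks disagreement about what it includes.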

You never find a consensus. It looks like consensus if you talk about it at a high level, you know, if you just use words like freedom, responsibility, accountability. Yeah, everybody loves accountability. But when you drill down to what it means, there's no consensus. And that's why I structured the course the way I did. All right, we looked at all the applications for AI and analytics in learning,

we looked at all the issues that have been raised from different sources, and now we're looking at all these ethical codes. And it seems just painstakingly and mind-bendingly dull to go through, you know, this thing and this thing and this thing. But if you study it at that level, the conclusions that people have drawn looking at it at a more general level

just turn out to be false. I think that's a really important thing to say, personally.

And it raises a hard question: what do we say about ethics when there's no consensus on ethics?

Okay, how do we solve the problem of unethical AI, of, you know, unethical practices in learning and learning analytics? That's pretty much what I'm trying to address in this course, that question. But I figured it's useful to know what all these issues are and what all these values are anyways, right?

I mean, I think it is for us, certainly; having this list is useful. And I think it's useful enough that I'm setting up the course in such a way that it will produce JSON-formatted data dumps of all of these things that other people can use in other applications or other courses.

So if you want a list of all of the values that come up in ethical codes, or ethical discussions generally, access this JSON document and just feed it right into your application: open data. But I think the questions raised are significant. Thoughts on that? I don't know that there can be any one answer to that question.
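As a sketch of how such an open-data dump might be consumed, here is a minimal example. The JSON structure and field names ("values", "name", "sources") are assumptions for illustration, since the talk doesn't specify the actual export format:

```python
import json

# Hypothetical JSON dump of values, in the spirit of the course's open data.
# Field names are assumptions; the real export may be structured differently.
raw = """
{
  "values": [
    {"name": "accountability", "sources": ["Code A", "Code B"]},
    {"name": "fairness", "sources": ["Code B", "Code C", "Code D"]},
    {"name": "privacy", "sources": ["Code A"]}
  ]
}
"""

data = json.loads(raw)

# Rank the values by how many ethical codes cite them.
ranked = sorted(data["values"], key=lambda v: len(v["sources"]), reverse=True)
for value in ranked:
    print(value["name"], len(value["sources"]))
```

In a real application the string would come from fetching the course's published JSON file rather than being inlined.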

I think the process of looking at all these things is the value; coming up with an absolute answer, I don't think you can do that. Yeah, I think I agree with both parts of that. Yeah, it isn't a yes or no, it isn't either-or, it's something else.

Okay. You mentioned a little bit of this at the beginning, and you mentioned a mesh. So the discussion is the mesh, right? Right. And then,

I don't know if you figure it out individually or contextually, but you come up with some kind of qualitative answer for what you're doing at the moment with what you're using. Which again, you know, I guess is human.

Now, I'm inclined to agree with that, though of course fleshing that out is the hard thing. Yeah, but this is the end of our time. Thanks, Jim, and goodbye. It's not quite the end of our time, though, Mark. Okay. So as a sociologist, then, you've convinced me that there's no way to control the game here, that unethical behavior is inevitable.

Then as a sociologist, now I just want to talk about mitigation, how to reduce the harm, how do you escape the harm. AI is in the world, as are all the tools that alter humans. Yeah, and so now I just want to mitigate. You've convinced me that harm is coming, to, you know, my life and everybody else's, from this point forward.

I'm convinced that there will be an AI, a person, that attacks. Well, I won't use that word, but there will always be a completely unethical and dangerous being there, forever. And so then the discussion for me, the one I want to have with you, is: how do we protect the common mind?

How, I would ask, can this not be regulated? And I think you've convinced me that's impossible at this moment. Okay, I may change my mind later, but I'd say at this moment, between the applications, chaos theory, these ethical codes, and what I know about human behavior, let's say at this moment I'm convinced that extremely unethical things will take place, or whatever.

And so then the discussion is: can it get regulated? Apparently not. And some people don't even think of it as unethical; on the internet there will be bad actors. I don't think that's the same. Yeah. So this is the pivotal turn of the course, I think. All right, I think we've made the argument, and now the question becomes: what do we do about that?

Yeah, right. And starting with next week, the process is a bit different, where now we're gonna work our way carefully, painstakingly, step by step, because that's the level of analysis in this course, toward a solution. You know, toward something we can say is reasonable and addresses these issues in a way that is satisfactory, at least to us.

I think that's possible, though I know the problem as it's been set out is pretty much intractable: we can't agree on ethics, and there's no way to define even the underlying values. Meanwhile, this technology is taking off, and there are robot dogs with guns out there already, and it's only going to get worse.

What do we do? So we'll leave it on that note. You had one more thing: lost in translation, "what must be done," what then must be done here. Yeah. Very good. That's where we are. Yeah. So next week, we begin to find the answers. Okay, we're gonna call it there.

Hopefully, we've left our viewers in suspense here. Yeah, yeah. All right. Bye everyone. Bye.

Bases for Values and Principles

Transcript of Bases for Values and Principles

Unedited audio transcription by Google Recorder

Hello and welcome to another video in the course, Ethics, Analytics and the Duty of Care. We're still in module four, which is the study of ethical codes related to analytics and AI, but also related to other professions and other disciplinary groups. And of course, the objective of studying these ethical codes is to look at what they say and how they arrive at their ethical conclusions.

In this video, we're looking specifically at what the bases are for the values and principles listed in these codes of ethics. That is to say, what we're after here is an understanding of what grounds these codes of ethics, on what basis their authors assert that this code of ethics, rather than some other code of ethics, is the code of ethics to follow. I should point out that in many cases the codes of ethics don't offer any grounds at all, but where they do, they offer one or more of the types of bases that we'll be looking at in this video.

So we're gonna run through, again, like the format for many of the previous videos, a number of the different types. It's interesting: as we look at these bases, you know, we might read an explanation, for example, something like "an individual's professional obligations are derived from the profession and its code, tradition, society's expectations, contracts, laws and rules of ordinary morality."

But when we look at this more closely, we find that this explanation, or this description, raises as many questions as it answers. So we're going to run through these one at a time and see what those questions are. So let's begin with the principle of universality. What we mean here is that the code is justified by the authors' assertion that the principles embodied in the code are universal principles.

That is to say, they are held by everyone. And arguably, if the principle is believed by everyone, then it should be believed here in this particular code of ethics. For example, the Universal Declaration of Ethical Principles for Psychologists asserts that these are, quote, based on shared human values, and later on it asserts that respect for the dignity of persons is the most fundamental and universally found ethical principle across geographical and cultural boundaries

and across professional disciplines. So this is a pretty clear example of a case where universality is being asserted as a foundation for an underlying set of principles. The Asilomar convention also states, for example, that virtually all modern societies have strong traditions for protecting individuals in their interactions with large organizations, and that norms of individual consent, privacy and autonomy,

for example, must be more vigilantly protected as the environments in which their holders reside are transformed by technology. So again, we see a case where universality is a justification for a moral or ethical principle. Now, as I suggested in the previous video, the assertion that there is such a consensus is, I think, a bit misleading.

We looked at what happens when we zero in more specifically and in more detail on what is meant by, say, accountability, and we find that it breaks down. While a large number of people may say accountability is a universal principle, what accountability actually means is something that varies from place to place, from discipline to discipline.

And this isn't just my finding. There are other researchers: Maxwell and Schwimmer, for example, find that analysis did not reveal an overlapping consensus on teachers' ethical obligations. Campbell writes that despite extensive research on the ethical dimensions of teaching, scholars in the field do not appear to be any closer to agreement on, quote, the moral essence of teacher professionalism.

And similarly, others argue that the teaching profession has failed to unite around any agreed set of fundamental values which it might serve, while Newland and Kendall report that the model used for the codes varies greatly from country to country. So I think that although universality may be appealed to as justification for these codes,

it doesn't succeed. Another justification that we see referenced a lot is an appeal to fundamental rights, or, as we might say, an appeal to natural rights, or perhaps natural law. The diagram here is John Finnis's theory of natural law in moral reasoning. And as you can see from the diagram, we begin with a description of reality in some way, for example, the basic goods or the requirements of practical reason.

And from that we derive normative statements, that is to say, statements that instruct in the principles of ethics, or, as the diagram here says, morally valid laws. This is an approach that a number of groups have taken. For example, the High-Level Expert Group on Artificial Intelligence in Europe cites four ethical principles, quote, rooted in fundamental rights, which must be respected in order to ensure that AI systems are developed, deployed and used in a trustworthy manner.

The Toronto Declaration also argues for, or focuses on, the obligation to prevent machine learning systems from discriminating and, in some cases, violating existing human rights law. Access Now specifically adopts a human rights framework: the use of international human rights law and its well-developed standards and institutions to examine artificial intelligence systems

can contribute to the conversations already happening, and provide a universal vocabulary and forums established to address power differentials. We see there is an overlap here between universality and natural rights, and that makes sense, because if we think that rights are natural or fundamental, it stands to reason that they would also be universal.

But there is a bit of a distinction here in the way this is argued. Sometimes natural rights can exist as a result of human activity, for example, the previous conversations that were already happening. Nonetheless, it's not clear what these fundamental rights are; different efforts to list and describe these fundamental rights

describe them differently. We have documents such as the United States Bill of Rights, the Canadian Charter of Rights and Freedoms, and the United Nations Universal Declaration of Human Rights, for example, which are all very different from each other. Is it, for example, a natural right to bear arms? Is the right to an education,

as found in the UN Universal Declaration of Human Rights, a natural right? It seems that further argument would be required; these natural rights don't just reveal themselves to us. Ethical arguments in these codes of ethics often argue from a grounding in fact, and there are two ways in which this can come up.

One is that there is a fact, which might be a law of nature or a description of a state of affairs, from which an ethical principle is derived. Alternatively, sometimes ethical principles are simply asserted to be facts. Either way, the determination of fact is used as a fundamental argument for the ethical principle in question.

Now, there are some arguments that can be made about this argument as well. One is what is sometimes known as the is-ought problem, which has its origins in David Hume. Very roughly stated, it says something like: you cannot derive an "ought" from an "is." That is to say, the state of affairs in the world,

however it may be, does not tell us in and of itself what is right and what is wrong. Now, Hume doesn't say that exactly; he says that facts about the world need to be considered in context. They need to be observed, explained, and supported with reference to goals or requirements.

So there's a lot of argument around that. Nonetheless, it's not clear that you can point to a state of affairs in the world, for example, what is natural for a human, and derive a moral principle out of that. Another consideration is that, if we're looking at facts, the fact remains that facts do not really lead to moral values.

Quote, from a study here, you can see the diagram: while facts are raised a lot of the time, personal experiences bridge moral divides far better than facts. And that's an experience we see not just in questions of ethics, but in questions of the relation of reason and rationality to individual decision making generally. There are many cases in which facts do not convince people, do not sway their opinions.

And this may be true not only in ethics, but in politics, in personal conduct, preferences, and more. So it's a fact that fact does not inform morality. Very frequently in these ethical principles we see reference to something like balancing risks and benefits. The AI4People declaration makes that explicit: quote, an ethical framework must be designed to maximize these opportunities, that is, these opportunities from AI, and minimize the related risks.

That is these opportunities from AI and minimize their related risks. There are many cases like this. The concordat working group just got document on open data. And the need to manage access quote, in order to maintain confidentiality, protect individuals, privacy respect consent terms as well as managing security and other risks.

So here we're balancing between the benefits of openness and all the risks that are involved. The balancing of risks and benefits is a broadly consequentialist approach to ethics, and we'll be talking more about that in the next module. But for here, it's relevant to say that it results in a different calculation for each application.

Each time you're looking at a specific balancing of risks and benefits, these risks and benefits show up in different ways and have different values. If we look at the risk and benefit map illustrated on the slide, we can see immediately that there are two important dimensions that must be considered for each.

First of all, the likelihood of the risk or the benefit: it may be very unlikely or very likely, and that's part of the calculation. Then, as well, we need to take into account the severity of the risk and the significance of the benefit. So a risk that is very likely and very severe is something that is kind of hard to trade off against a benefit

that is not very likely and not very significant. So we need this mapping of what the risks and the benefits actually are. And that means that we need to know what is likely to happen if we implement AI and analytics in this way.
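As a toy model, not a real ethical calculus, the two dimensions just described, likelihood and severity or significance, can be combined into a simple expected-impact score:

```python
def risk_score(likelihood: float, severity: float) -> float:
    """Toy expected-impact score: likelihood (0..1) times severity (0..10)."""
    return likelihood * severity

def benefit_score(likelihood: float, significance: float) -> float:
    """The symmetric score for benefits."""
    return likelihood * significance

# A likely, severe risk versus an unlikely, minor benefit.
severe_risk = risk_score(likelihood=0.8, severity=9)
minor_benefit = benefit_score(likelihood=0.1, significance=2)

# The trade-off described in the text: this risk dominates this benefit.
print(severe_risk > minor_benefit)
```

The hard part, of course, is not the arithmetic but filling in the numbers: in practice the likelihoods are usually unknown.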

And that's not always able to be determined; as Rumsfeld says, there are unknown unknowns. You know, we look at the House of Lords Select Committee on AI, which recommended a consequentialist approach. In its 2018 document, it states there was a need to be realistic about the public's ability to understand in detail how the technology works,

and that it's better to focus on the consequences of AI, rather than the way it works, and to make that the way individuals are able to exercise their rights. But this might be unrealistic: if people don't understand how AI works, it seems hard to understand how they can understand what the consequences will be.

It's probable that the understanding of the consequences will be determined as much by marketing as by any actual projections of risk and benefit that could be obtained. Nonetheless, these factors are important. That's why we began this course with a look at the applications, that is, a detailed drawing out of what the benefits are, and a look at the risks,

a detailed drawing out of what the issues are. Now, we didn't consider what the likelihood of each of these was, because that was far beyond the ability of this course; the standard we used was simply: does the benefit exist? Does the risk exist? Actually performing the calculation

might be humanly impossible, although possibly an artificial intelligence could do it. Finally, perhaps ethics isn't actually a case of balancing competing interests. Economics might be, politics might be, but ethics strikes us as something different from that. You know, what we're after is something that works for everybody. We depict a lot of these ethical issues as competing interests,

but perhaps what we want to do is find what works for both sides. The Information and Privacy Commissioner of Ontario takes this approach, asserting that, quote, a positive-sum approach to designing a regulatory framework governing state surveillance can avoid false economies and unnecessary trade-offs, demonstrating that it is indeed possible to have both public safety and personal privacy.

We can and must have both effective law enforcement and rigorous privacy protections. And that sounds like more of an approach based in the ethics of the situation than a calculation and a weighing of consequences. Another argument that comes up fairly frequently is that a certain stance on ethics exists as a requirement of the profession.

For example, again we come back to the Universal Declaration of Ethical Principles for Psychologists, which states that competent caring for the well-being of persons and peoples involves working for their benefit and, above all, doing no harm. It requires the application of knowledge and skills that are appropriate for the nature of the situation as well as the social and cultural context.

So this is basically a derivation of ethical principles, which are depicted as a requirement for what somebody needs to believe, from an ethical perspective, in order to accomplish a certain objective or goal. The objective or the goal might be healing people, it might be supporting them

when they're on welfare, it might be attending to their psychological needs, it might be teaching them. All of these professions have a certain objective or goal, and in order to achieve that goal, certain attitudes and beliefs may be required. And so the statement of ethics is a listing of these attitudes and beliefs that may be required.

We see, for example, arguments like the one from the IFLA, or the library association, saying that integrity is vital to the advancement of scientific knowledge and to the maintenance of public confidence in the discipline of psychology. We see, for example, integrity itself being based on honest and truthful, open and accurate communication.

So we can sort of work our way back up through the requirements of the profession. If we look at the diagram on the slide, we see that what this does is place a code of ethics, and presumably ethics generally, within the context of a wider model of a profession. Here we have a model of an IT profession from the Computer Society, and we see the standards of ethical practice, and we see the mechanisms for self-governance and consensus, and these define professional advancement in turn.

We also have mechanisms for professional development, studying and applying the knowledge, as well as the preparatory education, which is where we acquire the body of knowledge: curriculum, accreditation, and degrees, certifications, or licensing. All of these together constitute the profession, and they don't all flow from the code of ethics; rather, there's a relation between these elements where the goals, the objectives, and the training flow back into the ethics, and the ethics inform the training.

It's kind of a symbiotic relationship. The principles, in this light, may be expressed in two ways. First of all, a principle might be derived; that is to say, it's a consequence of an already defined ethical principle. For example, competent caring for the well-being of persons and peoples is one of the requirements of the profession,

but it's previously established that working for the benefit of the people you're serving is required. So you see, we have working for their benefit, and from that follows competent caring, and we can trace back similar requirements. We look at the principle of integrity, which is established on the previously established values of honesty, openness and accuracy.

The second way a principle can be established is that it's conditional, and we see this expressed in a number of these codes of ethics. What that means is that the ethical principle, in relation to the profession, is described as a conditional statement, something like this: if you wish to be a member of this profession, then you need to adhere to the following principles.

So, as you can see, it's a conditional statement. And so, for example, if one is engaged in the activity of competent caring for the well-being of people, then this requires working for their benefit. Against such assertions, several objections may be brought forward. First of all, you can say that the requirement doesn't actually follow. For example,

you might argue that in order to be a competent psychologist, you do not need to be honest and open; sometimes deception is required. You could argue, perhaps, that competent caring does not require working for the person's benefit; it might actually require distancing yourself from the idea of the benefit of the person and simply following the appropriate practices and procedures.

You might also say that the antecedent hasn't been established, that it is not actually a property of the profession. For example, we might say that being a psychologist doesn't involve caring at all. In fact, I remember listening to NPR a number of months ago, while I was backpacking last summer, to a discussion of how the best psychologist might be one who is psychopathic, who is actually incapable of caring for the patient, and therefore immune to the bias that might be created by caring for them. A criminal psychologist might take this stance, for example.

Another principle commonly appealed to as a justification for an ethical code is the social good or social order. We see this most clearly in journalistic ethics, which states, for example, that the primary function of journalism is to inform the public and serve the truth, because, as the Society of Professional Journalists says, public enlightenment is the forerunner of justice and the foundation of democracy.

Similarly, we may see additional principles brought forward to the effect that if we perform this profession properly, then society as a whole benefits, or perhaps that society as a whole benefits directly from the practice of these ethical virtues. We might see that, for example, in a teacher code of ethics, where the teacher serves as a model for the student,

and therefore they're not teaching ethics particularly, but the way they conduct themselves ethically is directly reflected in the way society conducts itself ethically. An argument from social good or social order, however, invites relativism. People's judgments are relative; people's support is highly context-driven. People consider acceptability, in order to preserve the social good or the social order, on a case-by-case basis. Drew writes that they're first thinking about overall policy goals and likely intended outcomes, and then weighing up privacy and unintended consequences.

The relativism is clear from statements like this: better that a few innocent people are a bit cross at being stopped than a terrorist incident, because lives are at risk. And often this relativism reflects

the society in question's own interests. Very often, social order can be construed specifically in terms of national interests, therefore not thinking about, say, a global social order, or even the community's social order, at all. We see policy in countries all around the world, like the one from the Office of Management and Budget in the United States,

which seeks as ethical principles to support the US approach on free markets, federalism and good regulatory practices, which it says has led to a robust innovation ecosystem. So here the social order is being defined in a very specific way, but it's not clear whether the social order as defined by Americans, or as defined by the Chinese, or as defined by Brazilians, is the social order that provides the ethical basis necessary for a code of ethics.

We also see fairness appealed to frequently, often with no support or justification. And so the ethics of a profession is based in fairness, full stop. The New York Times, for example, in its own code, says that it wants to treat its readers as fairly and openly as possible, and also that it treats news sources just as fairly and openly as it treats readers.

Now, we could argue about whether it's successful in this, but what seems indisputable is that it is making an appeal to fairness as a justification for an ethical code. The problem is: what is fairness? On the slide here I've listed four possible ways of describing fairness; this is not a complete list, I am quite sure.

One way we can think of it is as objectivity, free from any whiff of bias; arguably, however, fairness might involve advocacy. Fairness to others is also seen as something that is non-arbitrary, citing the original code of Solon, the idea that the same principle or law or rule is applied to all equally, so that

nobody is above the law. Another definition of fairness might be based in rights, where something is fair if and only if it leaves people free from abuse and infringement of their rights. Yet another definition of fairness talks about equitable and non-discriminatory practices. I was going to put in that little diagram that shows the difference between equal and equitable, but it's been so overused. Instead,

I put in a document from the LinkedIn Fairness Toolkit, which I just recently saw, talking about how to measure fairness in large-scale AI applications. And here we see that actually thinking about what constitutes fairness in a complex discipline like analytics and AI is far from straightforward.

What does it mean to be objective, non-arbitrary, rights-respecting, or equitable in the context of AI and analytics? There are ways of defining data classes, there are ways of defining algorithms, computer models, permutations, different principles of regression, customization, etc., that can all have an impact on what we think is fairness.
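To make the difficulty concrete, here is a minimal sketch, with invented toy data (this is not the LinkedIn toolkit's actual API; all names here are hypothetical), of two common group-fairness measures for a binary classifier. It illustrates that a model can satisfy one definition of fairness while failing another:

```python
# Hypothetical illustration: two group-fairness measures for a binary
# classifier. Data and function names are invented for this example.

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between groups 0 and 1."""
    def tpr(g):
        # predictions for members of group g whose true label is positive
        preds = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Toy labels and predictions for eight people, four in each group.
y_true = [1, 1, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(y_pred, group))         # 0.0: equal positive rates
print(equal_opportunity_gap(y_true, y_pred, group))  # ~0.33: unequal true-positive rates
```

Both groups receive positive predictions at the same rate, so demographic parity holds; yet qualified members of group 0 are approved only two times in three, while every qualified member of group 1 is approved, so equal opportunity fails. Which gap matters is exactly the kind of question a bare appeal to "fairness" leaves open.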

So it's not clear that fairness, without further explanation, can serve as the basis for a code of ethics. Epistemology is another principle that is frequently cited. There are two major ways to think of this, where the advancement of knowledge and learning is considered to be, in and of itself, a moral good.

First of all, we might say that a value becomes a value if it supports knowledge and truth-seeking. A good example of this is honesty: one of the reasons why we want people to be honest is because it makes it possible to learn things, to know things, and to find out the truth.

Another way of thinking about it is to say that an ethical decision, which may or may not appeal to one of these moral principles, is ethical if, and sometimes only if, it is informed by knowledge and evidence. So, in other words, we use knowledge and evidence as the basis for our moral reasoning,

if not the basis for our moral principles. Now, it's not clear that this works as a basis for ethical codes either. First of all, we can simply deny that knowledge and learning are moral goods: it's nice that people want to know things and learn things, but these are not in and of themselves ethical values.

We might say with Seneca, for example, that this desire to know more than is sufficient is a sort of intemperance; you can know, or want to know, too much. Or, in slogan form, today we say something like "curiosity killed the cat." Alternatively, we can say that some things are not meant to be known.

It would have been better, arguably, had we not learned how to create atomic weapons; this would have been a piece of knowledge we were better off not knowing. So more often we see the responses based in epistemology couched in very specific terms: not just knowledge in general, but some specific piece of knowledge.

So knowledge related to advanced weapons, or personal confidentiality, or a host of other harms, is wrong, but other kinds of knowledge, like scientific principles, or even what is the good, are inherently good. But now we have not a value of epistemology underlying our moral code; we have a value picking, somehow, between good knowledge and bad knowledge.

Another basis for moral codes that we see fairly frequently is trust, and as a result the elements of trust can themselves be cited as justification for moral principles. Again, we come back to the psychologists, who say integrity is vital to the maintenance of public confidence in the discipline of psychology. For psychology to work, it requires trust,

and so, for psychology to embody trust, it must adhere to a certain set of ethical principles. Well, what are those principles? Here we have a trust model that is frequently used, which combines five major features of trust: credibility, respect, pride, camaraderie, and fairness. So the argument here would be that all five of these, as components of trust, justify treating trust as a virtue. But of course these components of trust are also things that result from trust.

Fairness, for example, arguably requires trust, and so does camaraderie. So it's not the case that one of these things supports the other in a form of inference or moral reasoning, but rather that these things are all bound together into something a bit more amorphous. A lot of the time, it's a direct appeal to the reputation of the discipline that requires trust.

The New York Times asserts that the reputation of The Times rests on such perceptions, and so do the professional reputations of its staff members. Here public confidence is being represented as an aspect of trust, and we see that the authors are appealing to the principle of trust to support the assertion that integrity is a moral principle, although integrity might also be a component of trust.

So how does this work? Well, it could be argued that trust is neither good nor bad in itself. Arguably (and I've seen it argued) it would be better for certain professions to work on a trustless system rather than a trust-based system. Why might this be the case? Well, for one thing, trust is very fragile: it can be broken,

and even if you're not attempting to break it, it can be broken as a result of honest error, misperceptions, bad timing, any number of things. The moral superiority trustless systems have is that they are more reliable and more robust. You might think: well, how can you have a trustless system?

Well, this is the basis for technologies like cryptography and zero-knowledge proofs, and systems like blockchain. These are mechanisms where the relations between entities are completely defined by the technology, such that you don't need to take a leap of faith in order for the interaction to occur. Now, it's not clear that this is going to work in all disciplines.
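As a toy illustration of what it means for relations to be "defined by the technology" (a deliberately minimal sketch, not any particular blockchain's design), here is a hash chain in which each record commits to its predecessor, so that anyone can detect tampering by recomputation alone, without trusting the record-keeper:

```python
import hashlib

def make_block(prev_hash, payload):
    """Create a record whose hash covers both its payload and its predecessor."""
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev": prev_hash, "payload": payload, "hash": digest}

def verify(chain):
    """Recompute every hash; an edit anywhere breaks the chain."""
    prev = "0" * 64  # genesis value
    for block in chain:
        expected = hashlib.sha256((prev + block["payload"]).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
prev = "0" * 64
for payload in ["record one", "record two", "record three"]:
    block = make_block(prev, payload)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))                 # True: the chain checks out
chain[1]["payload"] = "record 2.5"   # tamper with the middle record
print(verify(chain))                 # False: tampering is detected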

It's hard to imagine a trustless approach in psychology, or even a trustless approach in teaching and learning. But it might be the case that a trustless approach is the best approach when it comes to the ethics of artificial intelligence and analytics. One more justification for an ethical code is the defensibility of a practice.

Now, what this means is that the code, the ethical value, or the ethical practice is virtuous if it's the sort of principle that you would be willing to defend, or, even more to the point, that you would be willing to defend if somebody else did it and you were asked to defend that practice.

We see this a lot in professional associations, where one member needs to come forward in defense of another. We also see this in academic environments, where we look to faculty associations, or even university administrations, to come to the defense of their professors and staff for certain actions. There are some actions that these professors might undertake, like for example murder, which are probably not going to be defended by the university or the faculty association. On the other hand, if they exercise their freedom of speech, for example by acting as an expert witness in a trial,

this action is typically one that would be expected to be defended by the administration or the staff association. And this principle makes one think of Frank Ramsey's subjectivist interpretation of probability, where the probability of an event taking place is established by how much one would be willing to bet on the event taking place.
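Ramsey's idea can be put as a simple formula. If an agent regards as fair a bet in which they stake $S$ to win an additional $W$ should event $E$ occur, then (in the standard betting-quotient formulation) their degree of belief in $E$ is:

```latex
p(E) \cdot W - \bigl(1 - p(E)\bigr) \cdot S = 0
\quad\Longrightarrow\quad
p(E) = \frac{S}{S + W}
```

since a fair bet has zero expected value. The parallel being drawn here is that the strength of a commitment to a principle is measured by what one is prepared to stake in its defense.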

This is a similar sort of thing: would you be willing to put your organization's reputation on the line in defense of this principle? So this has several aspects. One is related to the cost of such a defense: there might be a large moral or even financial cost to the defense, and that makes it less likely that someone is going to defend it.

It might relate to the work of one's predecessors: defending something that was a hard-won freedom, won over the years by the association, is probably going to be more likely than defending something that's a relatively recent and less well-established principle. So here we have a type of argument for an ethical code which is almost definitively a relativistic approach.

It is based on the subjective preferences of the members of the profession, given the circumstances. It's also based on what society as a whole thinks of it, because that will have an impact on the cost, or the difficulty, of making such a defense. We see this, for example, when we're looking at the ethics of federal agencies, government agencies.

So, for example, we might see them urged to consider patient, provider and system burden in the evaluation of AI benefits and costs, and to include data accuracy, validity, and reliability. All these things together are brought forward to offer a statement in terms of the defensibility of a practice or of a principle. And that leads us to a final consideration: what do we think we're doing with any of these arguments at all?

Now, off the top of this presentation I said that the ethical principles in question in ethical codes sometimes aren't argued for at all, and that's quite true. Sometimes they're taken as self-evident; sometimes they're just simply stated, and there's no statement at all about how true or not true they are.

On the other hand, there is this idea of moral reasoning, and the idea of moral reasoning is that we want to have a process that allows us to come to correct moral decisions. So here we have, for example, from the United Kingdom's Statistics Authority, an ethics self-assessment for data management and data ethics, and it raises several questions for us.

One is to draw the distinction between ethical values and ethical principles, as between two kinds of checklists: have you considered all of these things in the process, or have you gone through a process of inference? There's another distinction here, between conforming to a standard, which is what a checklist would support, as opposed to creating one, which a checklist doesn't support at all.

There is also the distinction (sorry about that; live presentation) between consideration of ethical issues before drafting your code or conducting your practice, and rationalization, after the fact, of what you've been doing all along. And then finally, in moral reasoning, there are questions about the standards of evidence. What counts as a moral reason, and what forms of argument? Is an inductive argument good enough, or does it have to be deductive? Would the Hegelian method of thesis, antithesis and synthesis work as well?

Well, it's not clear that there's only one method of moral reasoning, and therefore only one way to reach an output of your moral reasoning process. A really good example of that is counterfactual reasoning. A lot of moral reasoning is based on counterfactuals, because it's based on predicting consequences where something hasn't happened yet.

Counterfactual reasoning is notoriously difficult, and it's often based in the logic of modality: what could be the case, as opposed to what must be the case. And we bring in other modalities, like probability (what is likely to happen) and deontology (what should happen). And the question is, well, how do we

say something is most likely to happen, or even, how do we establish the truth of a counterfactual at all? If a train has no brakes, then it will probably crash. Now, that's a counterfactual statement. It's counterfactual because, in fact, all trains have brakes. Why?

Because otherwise they would be dangerous, right? But how do we know that? We could appeal to a natural law or principle, but there are no natural laws or principles about brakeless trains; it's just too specific a case. And we can imagine cases where brakeless trains are not dangerous. If we look at the logic in the movie Snowpiercer, you don't want brakes on those trains, because if they stop, everybody dies.

So, how do you do this? Well, people like Stalnaker and David K. Lewis have developed a semantics of counterfactuals based on possible worlds. What you do is select the nearest possible world to our own and ask yourself what is true in that world. But that just pushes back the question, because what counts as the nearest possible world to our own?
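Lewis's truth condition for the counterfactual "if it were that $A$, it would be that $C$" at a world $w$ can be sketched (in a simplified textbook form) as:

```latex
A \,\square\!\!\rightarrow C \text{ is true at } w \iff
\begin{aligned}
&\text{either no world satisfying } A \text{ exists (vacuous truth),} \\
&\text{or some world satisfying } A \wedge C \text{ is closer to } w \\
&\text{than any world satisfying } A \wedge \neg C,
\end{aligned}
```

where "closer" is given by a similarity ordering of worlds relative to $w$. All the work is done by that ordering, which is precisely the question being pushed back.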

Presumably not the world of Snowpiercer, but maybe it is a world where trains only ever go ten miles an hour, and you jump off and onto them as they pass through the station, and thus they don't need brakes. So moral reasoning, because it involves all of these sorts of considerations, is an area fraught with difficulty.

And we come back to the question of whether we can just create a checklist, or just rationalize our existing process, or whether we can do better by thinking about it. Maybe we're not seeking a universal consensus, because not everybody's going to be swayed by facts, not everybody's going to be swayed by argument.

But perhaps, the thinking goes, we can sit down, we can think about it as rational, reasonably well-informed people, and come up with principles of morality that will support moral reasoning generally, that will allow us to draw the sorts of conclusions that we want to draw, which lead us to our codes of ethics and our ethical practices generally.

So, that's the segue to the next module. The next module is on moral principles, or moral theories generally: what people have thought about ethics through history. And so we'll be looking at some of the different major ethical theories. We'll look at meta-ethics, or what considerations lead us to favor one approach or another for determining ethical theories.

We'll look at some of the discussion around all of these issues. So with that, we'll leave off module four and start getting ready for module five. I'm Stephen Downes. Thanks once again for joining me.

Module 5 - Introduction

Transcript of Module 5 - Introduction

So, I'm not sure if we'll get anyone else; Mondays are a tougher day, and just for fun we have a time change, a time zone change. So we are now on Eastern Standard Time here. Anyways, wherever you are... Mark, where are you? You're in California, aren't you?

Yeah, yeah. So it's only nine where you are. You still changed time as well, didn't you? Yes, and I've been on Ireland hours, Ireland hours. Okay, yeah, it started at 2 am; I didn't join them at 2 am my time, but I did get in before 6 am.

I've had that experience. I was participating in a fair sphere book sprint over a couple of weeks, and it was all on European times, so I just became European time for two weeks. It was actually kind of an interesting experience: waking up super early every morning and going to sleep well before sunset.

Yeah, I'd have a hard time going to sleep, though. So, yeah, well, when you convert over completely, it's no longer early. Yeah, it'll be dark. I've been doing this off and on the last two years: a lot of stuff in Europe, right? A lot of conferences.

Yeah. But now it's gonna get dark by 5:15 local time, so maybe I'll be able to sleep at 7 o'clock. Speaking of conferences in Europe, there's one on AI and education from UNESCO. I'm trying to find the link for it; I don't know if I have it on this computer. You should sign up for that, you know.

Okay, good. That's December 7 and 8, as I recall. I'll put the link in the newsletter, because I think it's worth signing up for, even though, again, the hours are ridiculous. But it's a fairly high-level conference. It's actually, interestingly, organized by the People's Republic of China,

even though the time zone is Europe's. But of course there'll be speakers from all over. And I find that interesting. I think it's good that there is participation from China in the whole topic of AI ethics and learning analytics ethics. And there was a diagram,

I wish I could remember where it was. The title was something like "Landmarks of Ethics and AI from China," and it pointed to a number of really key documents. It's similar to the sort of document that I sent out in last week's newsletter, the field report, with, again, a list of important papers, but they're all different papers.

So I want to find that because, of course, it belongs as part of our inquiry. Yes. And looking to UNESCO now, they have this major initiative. Yeah, here in the US I can't get any traction with further education, as the Scots call it; here, "good" education

is either considered remedial, or they consider you just like the other 18-year-old students. Yeah. There's just no recognition of adults, informal learning, lifetime learning, on-the-job training. There's just no recognition of any of that entirely, which is funny, because it's such a large part of the European approach.

Well, and because it's a large part of the student body. Yeah. And it is here too; for years they've thrown around numbers in meetings: it's like half of us have jobs, we're adults, you know? And like I mentioned before, pedagogy is a perfectly good thing for people in development, but half of us are beyond development.

We're looking for something else, whatever you have. But the system here won't let go of "we're here to help 18-year-olds develop to be cogs in the empire's machine." Basically, it's a very good business model for them. There's the overall ethos, especially in the Western world, that people who are 18 to 22 will take four years, go to a residential university, and pay a lot of tuition for this experience.

And there are some institutions, by no means all, but some, who have over the years made billions of dollars doing this, and I know that because that's the size of their endowments. Yeah. And those few are fine. But, as you know, Brian Alexander is tracking the decline of the

residential institutions. Yeah. Their model is failing on many levels in the United States. First of all, two-thirds of us don't have a degree, and that feeds into this resentment of the educated, which we're seeing play out in politics. So actually, the model is classic; it was good for a long time, when it was specifically targeted at training immigrants to work in factories, and their managers at the same time. It worked.

But that's not the world we live in. So dissatisfaction is rising, and something's got to give here. I would prefer to get institutions to give, and open up to the working class and underclass, or whatever you want to call the vast majority of Americans, and try to help them out somehow.

As you know, that's my perspective as well. And it's interesting, because when you take that sort of perspective with respect to ethics, you ask: what are the ethics for these institutions? You're talking about a practical consideration, and I agree with that.

I mean, if they don't adopt a more open policy, they're gonna collapse. But I think there's also an ethical dimension to what they're doing: is it ethically appropriate for universities to preserve existing power structures, and to fail to support the legitimate aspirations of the majority of the population?

I would argue that it's not, but how do you approach such an argument? Well, I was very unpopular in my institution. I was a student leader in student government, and we got some things done, but then, when we left, they undid most of it.

But anyway, as I liked to point out in those meetings, I'm not just a student, I'm a lifetime taxpayer. And they really do not like that being brought up: I've been paying your salaries for decades, and then I come to you for help getting retrained, and you can't do it; your institution isn't serving us first, or at all.

And personally, you know, these are all people who have transitioned from immigrant and working-class status to bourgeois status; they have managerial positions, and so on. I don't know, but they don't want to recognize the working class and the taxpayers, the people who put them in those chairs. I once took a class in agricultural communication,

and we had an assignment to find peer-reviewed papers on another, different culture. So I went looking for a paper on American working-class culture, right? Which was a bit surprising, actually, because it's certainly a large culture, okay? Yeah, and it's successful when you look at what it builds, you know; it's had better days, you know?

But between the two of us, we could probably sit down and list some of the attributes of that culture. We can think about, for example, the attitude toward work: it's good. The attitude toward helping your neighbor: you should. And even things like community service: you should be part of the fire department

if you have a volunteer fire department, which many of these communities do; you should maybe belong to one of the service clubs, or minimally, certainly support what the service clubs are doing, because they're building parks and helping people in hospitals and things like that. There's a range of these things.

And you should be honest, right? Being humble is a big part of it. Yeah, equality. So, yeah, I cannot find that list; I've been looking. It wasn't until this class assignment that I realized that there is nothing

published about the American working class by the academic class. I'd taken a look at the British and their class system. Let me just check that now; my curiosity is more piqued than it was. A Google search, okay: "class ethics." All right, yes, you might find one or two there.

I was looking specifically at working-class culture as a culture, and its ethics. Let's try "working-class culture." Yeah, there's just no recognition of it being separate. All right, there's a Wikipedia article. Oh yeah, there's plenty of that. Sure. Yeah, a long post.

Here we go: "see also blue-collar, labor history, proletarian novel." Sure; we'd be thinking of people like Faulkner, perhaps. Definitely not people like Jack Kerouac, or whoever wrote The Catcher in the Rye, I forget. Yeah, Salinger. Holden Caulfield is, to me, the opposite of working-class culture; he's an entitled complainer.

Anyhow, okay. So we're searching: "American working-class culture," okay? Googling further, there you go. "Goodbye boys, I die a true American." That's a quote from the terrific film Gangs of New York, but that's American working-class culture in the mid-1800s.

That's not contemporary, is it? Yeah. There were books written about working class culture, you know what? But they were all from the 30s, 40s and 50s; there's just nothing been going on. Yeah, this is 1979. Let's look for, say, since 2017. Ah, this is interesting.

The Pedagogy of Class: Teaching Working-Class Life and Culture in the Academy. Of course it's a book, so we can't read it online; it just came out in 2020.

Which is too bad. I wonder if there are any reviews or anything.

Okay, here we go. JSTOR again, which won't help us, but it gives us a little bit, okay? Yeah: "Stories like mine provided a picture of working class students having, quote, made it, yet still feeling like outsiders, continually immersed within the discursive and ideological conflicts they depict between working class and academic cultures."

I feel that a lot. Yeah. Yeah, everybody that comes from the working class does. Yeah. You know, speaking for myself, I don't think I've ever directly addressed a professor or somebody with a PhD as a class traitor, but, you know, it's not that it hasn't crossed my mind. Yeah. "In this scenario," we read, "working class and academic discourses exist in a dichotomous relationship where one discourse is depicted as in almost complete opposition to the other: working class students succeed only if their class identity is stripped away." Right. Yeah, it's the deficit model right there. This is the culture; you need to transform yourself into the culture, you know. And it's like, no. The working class, you know, has a long tradition predating America of saying no, I'm not stripping myself of my culture.

No. What I just quoted was by Donna LeCourt in her article from 2006, "Performing Working Class Identity in Composition: Toward a Pedagogy of Textual Practice." It's in College English; that's the journal it's in. Yeah, so, it's, you know, the performance of being working class.

Yeah, so the culture, that's what I was looking for. I spent a lot of hours on it. Oh, I believe you, but if you come up with something, let me know. Yeah, I'm not seeing, as you say, a whole lot here. "Working class academic pedagogy."

That's not really working class culture. Yeah. See, that's the thing. I was looking specifically, as an example for intercultural communications; I wanted it to be about the culture, but just something about working class culture, you know, published in the last six or seven years, you know?

I like this. This one's called The Invention of Working Class Culture.

It's about neoliberalism and working class lives. Yeah, yeah. Here's something from 2020 called Is There a Genuine Working Class Culture? So again, it's a book, so we can't read it. Yeah. In fact, Jack Metzger is one of the founders of the Working Class Studies Association. So there's now one group in America that's just starting to publish, and that's ripe for work right there.

So, let's find Jack Metzger. Does he have a homepage? Maybe he does. Working Class Studies Association, right, and they have a conference. I just discovered them a couple of years ago. I was all signed up, I had an Airbnb, and I'd paid my fees and everything, for their conference in Youngstown, Ohio last year.

Yeah. Okay. So, do you think, you know, I mean, this is interesting and it's not really what I intended to talk about, but who cares. Yeah, sorry. No, no, it's fine. Do you think working class culture transcends other cultural boundaries? And here I'm thinking of race and gender and religion.

And language. Do you think it's a common culture across these? I think it's complicated. I know it is. Mmm. So I see this as another intersectional standpoint, right? That's the way I see it. You know, and there's a critique that white people reach for class to avoid white identity. I get that.

But on the other hand, you can't discuss things without your class, and you can find, you know, Black studies departments, women's studies departments, Asian studies departments, and there's no working class studies. You're like, okay, I get the criticism, but I don't have one of my own. So that's another thing that I need to think about.

Okay, let's go for ethnic identity. There are zero Scottish studies programs in the United States of America. All right. Yeah, that seems very interesting. Did that have any influence on the creation of American culture? I'd say so, wouldn't you think? Yeah. I mean, the Scottish Enlightenment is huge.

Adam Smith, David Hume, continuing through to John Stuart Mill, though Mill is more English than Scottish. But still. Oh yeah, huge, yeah. So there are, you know, Celtic studies, right, but I think they do more dancing than class analysis. Yeah. And, you know, Scottish isn't strictly Celtic anyway; I mean, the modern Scottish state.

I mean, you go to Scotland now and, you know, there is a very clear modern Scottish identity which isn't just the traditional. I mean, it includes all the traditional Scottish elements, but they count as one element among many in the culture. But the modern Scottish state, as I see it, is open, it's diverse, it's progressive, you know. And so it's a very distinct identity, for sure. And in fact, I found a YouTuber named Bruce Fummey, F-u-double-m-e-y, who is half Ghanaian and half Scottish, and I found him because he does a whole series of videos.

He's a former teacher, and now he does travelogues across Scotland, history travelogues. So he goes to places and tells the story. And one of his videos is about Black people in Scotland. He started it, oh yeah, because he responded to a comment; you know, somebody watched one of his videos and made this comment doubting that there were Black people who are Scottish.

Yeah. And so he made this whole video. Anyway, he has like four dozen videos and an excellent grasp of Scottish history. You know, I'm descended from there; one of my grandfathers came from Scotland. So I've spent a bunch of time trying to learn about the stuff, never been there, but I've watched, you know, at least a dozen of Fummey's videos.

And he takes it all seriously: the Picts, the Scots, the Angles, the Saxons, the Norse, each one an individual video. He's doing an excellent job. I expect to see him keep doing this; apparently he runs tours on the side as well. So, what's interesting about this discussion?

And there'll probably be people watching this wondering, why is any of this relevant at all? But I think it's actually pretty much on topic, even for this module, because this module is approaches to ethics. And, you know, in Canada we had a book written by a guy called John Ralston Saul.

Who's, interestingly, the husband of a former Canadian Governor General. I didn't know that. Yeah, well, I shouldn't say husband; I think partner might be more accurate, I'm not sure exactly. And who was it? Yeah, I'm a big fan of his lectures that became a book. Yeah, I have to look it up, because I've forgotten who the Governor General was, which is really bad, because that's, you know, the Canadian head of state, and it's like forgetting a president, right?

I have, like, Adrienne Clarkson or something like that in my head, but I'm not really sure. That's right, Adrienne Clarkson. Is that right? Yeah, according to Google. Okay, very good. So I did remember it after all; I feel much better, because I couldn't find it. And they were married in 1999, so it was.

Okay, they were. It might have been after. Well, I'm not sure. Anyhow, it doesn't matter to me whether they were married or not, but I guess it does matter to other people. At any rate, the book is Voltaire's Bastards.

And it is essentially, you know, I'm glossing over a lot of this, but essentially a critique of rationalism. I mean, even more to the point, a critique of the idea of using "reason, properly so-called" to reach for things like moral truths, or how we structure society, etc.

It's, you know, the dictatorship of the reasonable over everyone else. I guess I'm putting words in his mouth here; I'm sure he never said that. It's a great phrase though, isn't it? The dictatorship of the reasonable. But I think that is at least a part of this distinction that can be drawn here between working class culture and academic culture, in the sense that whatever the working class person says, the academic is going to have an answer based in reason, or perhaps rationalization, and then reach for authority when that doesn't work.

Reach for authority, yeah. Although the working class might also reach for authority of a different sort, you know, in work settings, in my experience. Yeah, as a worker. Yeah. Well, you know, "workers unite," you know; the power of labor unions is very much a working class kind of cultural value, one that has waned in recent years, but I would argue not because workers didn't want it.

There were complex causes, yeah. So, you know, again, it was college-educated people. Yeah, it was the college educated who waged the war on unions; it wasn't the workers' idea, but they run the institutions that carried it out. And our understanding of ethics, and especially ethics as an academic discipline, comes from this rationalist perspective to a large degree. I've got slides, which I don't know if I'll present to one person here, but I'll do them later anyway in a video. Yeah, I know, but I think it's better to have a conversation here, and then I'll just present them later and you can watch the video, because that seems a bit nicer.

You know, if there were two of you, that would be an audience, and then I'd want to present; one is not an audience. Which is obvious, yeah. But, you know, I mean, we think of ethics, and we think of, you know, reasons and principles and arguments, maybe explanations, codes of ethics, like we've just done. Ethics is something that you have to be educated in order to understand, to quite a degree, you know?

And, you know, there's a certain sense in which, I mean, the reason why this course exists is because there's a whole lot of people, not just working class people but academics too, who make pronouncements on the ethics of this and that without actually understanding ethics at all. And that bothers me. Actually, it bothered me because they didn't understand ethics at all, they didn't understand AI and analytics at all, and they didn't understand learning at all.

And it made for a bad combination, and you get these horribly naive statements. Now, that sounds like a classic, oh, you know, here's an academic talking about somebody who doesn't know anything. And believe me, I know what that's like, because, you know, if somebody comes to me and says, well, you can't have an opinion on this because you're coming from a naive perspective,

I know how I would respond to that, and it's not well. In fact, I have responded to it in the past, and it hasn't been well. So, I get that. But even the people who present themselves as knowing all of this stuff don't, and that's when it begins to bother me, right?

And that's when we see something like ethics used not as a domain of inquiry but as a club, and that doesn't strike me as the purpose of ethics at all. And so, you know, I mean, if you're going to use it as a weapon, which you shouldn't, but if you're going to, it really should be like a finely tuned sword, not a club.

Yeah, that's just my prejudice speaking there. But, you know, at the very least, it bothers me. And that is why I took the approach to this course that I did. Most approaches to an "ethics in anything" kind of course will map out all the ethical theories first.

And the idea is that you, as a student, are supposed to look at all these ethical theories and pick one, and you will be arguing from that perspective for the rest of the course. And if you think about it, that's kind of how research in education, especially research in ed tech, is done generally, isn't it?

You're given all these frameworks, like instructivism, constructivism, behaviorism, cognitivism, connectivism, all laid out in front of you. And you pick one and say, that is the lens through which I will see the world. And ethics is presented in the same way, and that doesn't feel right to me. What about grounded theory, say, where you go into the situation, document it, and see what emerges? Isn't that needed more?

That's more the approach that I've taken, right? Painstakingly outlining all of the different applications, and painstakingly outlining all the issues that have come up that I have found. I'm still adding applications and issues. I can't believe I missed content summarization in the list of applications, and I'm sort of going, oh yeah.

So I've added that. And you may have seen, I avoid those formal literature reviews. Well, it depends, you know. I mean, I see that it's a big time saver, right? Yeah. If I was being paid to do research, you know, I would have to, but I'm not, and I don't.

Yeah, exactly. But there's a whole, this is a bit of an aside, but it's worth mentioning. There's a whole school of doing research, and I see it a lot at NRC, where the model is: you do a literature search. So you'll go into your publication repository, or your publication library, or your index system, like Scopus or whatever, and you'll put in your keywords, and you'll get a search.

And that reveals, you know, 714 documents or whatever. And then you apply search terms to that, or filtering criteria, and bring it down to a certain number. And then that's the basis for your literature review, which of course you will do every time you do a new study, and that's "scientific method."
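As an aside for readers, the search-then-filter workflow being described can be sketched in a few lines of Python. This is a minimal illustration only, not a real Scopus client; the records, field names, and keywords are hypothetical stand-ins for what an index system would return.

```python
# Hypothetical stand-in for results returned by an index system like Scopus.
records = [
    {"title": "Working-class culture and pedagogy", "year": 2020},
    {"title": "Constructivism in online learning", "year": 2015},
    {"title": "Class identity in composition studies", "year": 2006},
]

def keyword_search(records, keywords):
    """Step 1: broad keyword search over titles."""
    return [r for r in records
            if any(k.lower() in r["title"].lower() for k in keywords)]

def apply_filters(records, min_year):
    """Step 2: narrow the hits with filtering criteria (e.g. recency)."""
    return [r for r in records if r["year"] >= min_year]

hits = keyword_search(records, ["class"])          # broad search
shortlist = apply_filters(hits, min_year=2010)     # basis for the "review"
print([r["title"] for r in shortlist])             # ['Working-class culture and pedagogy']
```

The point of the critique above is that this mechanical narrowing is often treated as the whole of the method.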

I've lost track of why I was on that tangent. But, oh yeah, methodology, and how you are approaching this course. Yeah. So, these literature searches will be guided by that methodology, and they'll look for all of the, you know, all the constructivist literature on the use of the letter "a," or whatever; I'm caricaturing, obviously. And that again still feels wrong.

And it especially feels wrong in a domain like ethics. You know, if we think about ethics, have we advanced our knowledge of ethics if we do a literature search in that way? I mean, doing any reading, sure, that will add to knowledge, but do you actually know more about ethics after such a search than you did before?

And it's not clear to me that that's the case. I'm certainly, you know, struggling through fog here. To ask, being working class and sort of pragmatic by nature: are you then more ethical, do you act more ethically, after that? Arguably, I was more ethical before. But yeah.

So, that's one side. Now, the flip side of that is something called folk theory. In the philosophy of mind we see this, and I'm using the philosophy of mind here because it's a really good example. In the philosophy of mind, we have something called folk psychology.

And folk psychology is the idea of common, ordinary, everyday psychological states: that we have beliefs, knowledge, truths, fears, hopes, desires, etc. And maybe a bit of a story about how they have a causal impact. For example, if you want something, you will go get that something. Or, you know, if you have a desire for something, you're more likely to work for it. You know, there's a whole range of common inferences that come out of that.

And you can construct a fully blown psychological theory based on folk psychology, and people have done that. In fact, it's probably the dominant theory in philosophy of mind. But what if it's wrong? And there's another perspective, which I actually think is closer to being correct, to the effect that there aren't really any such things as beliefs or desires or hopes. Maybe we could say there are emotions, but we really have to revisit how we describe that.

And so on and so forth, right? And folk psychology is just based on a misinterpretation of our own mental states. I mean, we think we understand our own mental states, but really, we're in the worst position in the world to be observing them, which is probably true. And so there are new views now of psychology. One of them, for example, is called eliminativism, which is a ten-dollar word which means we eliminate these folk psychological categories and see what's left. And that's where you get a lot of these philosophies of psychology based on neuroscience and things like that. And so a word like "belief" is just a catch phrase that we use to refer to a wide variety of neural states that don't really have anything in common, and certainly can't be thought of as a cause of anything as a class. But it's just a handy way of talking. And this is where Dennett talks about taking the intentional stance, where we'll keep talking that way, but really what we're talking about is these neural states. And, I think it's Dennett, no, I know it's not Dennett.

So, Davidson maybe, who is the, you know, more scientifically oriented one. Yeah, I'm old, my brain is going, I lose names, I'm not kidding. I often say I have CRS: can't remember stuff. Yeah, can't remember stuff.

You know, yeah, for me, my bugbear has always been names. Names are just so arbitrary, and one thing my brain does not do is take this arbitrary thing and join it with that arbitrary thing, and proper names are just that. And I know there are techniques for, you know, how you can create these associations, but I've never spent the time doing that.

So, yeah. Well, yeah, exactly. That's right. So if we come back to ethics, the same thing might be true. So before we start with that, hold that thought. Yeah, I have this out-of-order coincidence, I guess, right? That is, bumping into Timothy Leary in the 70s and actually looking a little further than his pop image.

Yeah. And if you ever run across his work, what got him into Harvard was his interpersonal theory of personality. Oh, interesting. No, I didn't know that. It's from the 50s, based on work at the Kaiser hospital in Oakland, California. And his theory is: we really don't have personalities.

I think you'd like this: that we don't have one personality, right, that we operate in relation to others. So it's kind of a connectivist theory of personality. And then at the same time, Alan Watts was still alive and had a local radio program. So I got into his stuff, and so there's a lifelong history there.

Yeah. And the more time you spend with this, the more you look into it, there's nothing there. Yeah, it's all made up, you know. But the harder you look, the more you find nothing; it's all just a story. That's my favorite thing. It's also kind of a koan as well.

Yeah, the categories that we impose on the world are in fact categories that we've imposed on the world, and that gets you right back to language and all that. Yeah, that's one of the reasons I went into that kind of work, because I could spend most of my day being productive, you know, and there was no lying or misunderstanding going on.

There was this thing you'd worked on at the end of the day. Well, it works, let's sell it, see you tomorrow. Kind of like why I like computers, and it's an interesting thing, right? I mean, with computers, it either works or it doesn't. Well, that's not strictly true, because it could work but produce random results, but that's probably part of "it doesn't work."

Yeah. So, and you know, there's even a school of thought that depicts ethics as a technical problem. I don't think I'd go that far. But yeah, I mean, I certainly agree with the perspective that, you know, a lot of this, most of this, probably all of this, is artificial.

These are things that we've created and imposed on the world. Though when people use the word artificial, I mean, they're human. Yeah. It's like the chair you sit in; I wouldn't call it artificial. It's a chair; it is what it is, it does what it does. Yeah, and that's a good point. Constructivism is essentially the idea that we deliberately construct these things, right, making meaning. But I don't think that that's the case at all. Because, as you say, it's just a chair. I mean, the fact that we're going to call it a chair, whether or not we're constructivists, doesn't mean that the world is naturally, or really, however you want to put it, divided into things that are chairs and things that are not chairs.

You know, we haven't said anything profound about the state of the world by saying this is a chair. In fact, we haven't really said anything about anything at all, except maybe ourselves. But then, do we as a self even exist, when we have no personality? We're a piece of protoplasm that, you know, walks around, you know.

And that points to the discussion, you know: is language an innate feature of this particular animal species or not? And there's no answer, but there's the whole idea of language, you know, being constructed and culturally constructed and all that; that's all fine.

But maybe it's just, you know, that we like to make noises. We make noises. And, you know, I want to be careful here, right? Because I don't want to say, well, we're humans, so naturally we make noises, because now that's an appeal to some kind of real state of affairs.

Describing what is natural and what isn't. And that's no better either, right? We're just back where we started. We evolved over 200,000 years; well, that's the scientific explanation of what is happening, and it's the best explanation we have to this point. And, you know, pretty much all of science depends on that particular principle, certainly all of biological science. So if we reject that, we reject all of biological science, which isn't practical, and we're not likely to do it. And even if we had evidence, and this is an important realization from just the last 30 or 40, maybe 50, years of the philosophy of science: even if we had evidence to show us that evolution is false, we still wouldn't abandon it. We would question the evidence, because far more of what we do and what we know depends on evolution than could ever depend on that particular evidence. And so, you know, this idea that there's this critical experiment that proves evolution is true or proves that it's false just isn't the case. There's this entire assemblage of theory and practical implementation, and diagnosis, and vaccine research, and the whole works, right through to vaccines, just to be topical. But you know what I mean, right? Now, it doesn't mean you have to accept it all or accept none of it.

There's plenty of room around the edges for disagreements, but evolution is one of these things where, if you disagree with evolution, you're pretty much committed to disagreeing with the entire lot, and that's what makes it hard. I wonder, now: people sort of assume there are central principles like that in ethics, too.

And what would be the theory of evolution, for example, in ethics, or what would be the law of gravity in ethics? But yeah, you see, you're shaking your head, and I'm inclined to agree. That's a lot harder to come by. Because if we look, as we have, at the different principles and the different issues that come up, there just doesn't seem to be a center to all of this.

There's one discussion, what was his name, or their names? I had it here somewhere. Here we go: Moss and Metcalf, saying that "ethics" may be too big a word for what we're after, in the sense that the word ethics seems to include corporate values, moral justice, compliance with ethical codes, the law, this range of things.

I'm not sure I agree with that either. I think I'm perfectly comfortable with the idea of ethics. Well, I was surprised that you limited it to institutional ethical codes. I hadn't thought of it that way before, and I see the practicality of it, but I came in with that larger idea, you know: beliefs, mores, you know, ethical codes, you know, as a larger set.

Yeah, but doing it this way, I see that the larger set brings in religions and, you know, folkways, and, you know, that's an impossible task. So, you know, I looked at the ethical codes too because, again, they're part of this whole ethics-as-a-club thing.

And it's interesting, because we have ethics as a club in the sense of a weapon, and ethics as a club in the sense of the people who belong to one, no? But if you look at, I'm putting a link into today's course newsletter referencing a UNESCO report on the ethics of artificial intelligence in education.

And, sorry, I think I dropped that one in the chat earlier. Yeah, I added it, and I wouldn't be surprised, because the reason why it's in there is that it was sitting on my desktop and I couldn't let go of it.

I keep going back to it and looking at it a little more. So I guess I've got to deal with it. But the way I was thinking of it is: well, now that we've done this scope of applications and ethical issues and ethical codes,

now we're in a position to properly appreciate that particular document. And if you look at the references and the examples, they are virtually all the sorts of things that we've been talking about so far, and especially the ethical codes aspect of it. That is the standard of evidence that they're using, but I agree that that's too small a standard.

It's too narrow in scope. But back before I did that inquiry, I would talk about ethics, and I'm thinking back in particular to remarks made by Jenny Mackness at the time.

Hmm. She basically said that while I'm talking about ethics, really, I should be looking at these ethical codes, the ethics of the profession; that my discussion wasn't addressing the current reality of ethics. And, well, it kind of bothered me, because I had posted, and I'll provide the link again

in this week's newsletter, I had posted basically an overall guide to ethics as it applies to education. And that's when the criticism came out. I said to her back then, this is years ago: well, yeah, I guess I'll have to address all of that in a part two. She said, well, I'm looking forward to part two then. So this is part two.

Yeah. I always make a whole big mess spelling her last name: m-a-c-k-n-e-s-s. There's my Scottish bias. Yeah, it's all lower case letters. But, yeah, well, I mean, she hasn't stopped working with this stuff.

So, yeah. Is he still alive, and does he still hold that view? Yeah. So. Oh, okay. No, not Julian Jaynes, who's the one who came up with it, but someone else, someone else. It's Mc-something, it's another one. Yeah, another one, and I can't find it. The Origin of Consciousness in the Breakdown of the Bicameral Mind, '76, by the psychologist Julian Jaynes.

Yeah, but that's not the person she's been studying; it's someone like that, or something. Oh, Iain McGilchrist, I think. McGilchrist. There we go. That's who she's been studying.

And I haven't been following that project, well, I've been following it, but not nearly as closely as I would need to in order to be able to comment reasonably on what she's been learning about it. But that's where her study has gone since then.

But I still follow what she's doing. So one of the top searches here is a Wikipedia one, but it calls it the bicameral mind, and the psychology. Look at that. But again, you know, it reminds me of something.

The Alphabet Versus the Goddess, another bicameral mind one. Mmm. Remember that? That was from the 70s? No, but I can probably reconstruct it from the title. Yeah. So, you know, it was an interesting theory, but then he kind of lost his way, and I was frustrated. A lot of this stuff goes back to some of the early psychological experiments that involved severing the corpus callosum, which is the bundle of nerves that joins the two hemispheres of the brain.

And those resulted in almost like split personalities, in the sense that it was like we now had two different people in there. And so you think, well, we have two different people, and then we get, you know, all the characterizations of the different character of these two different people, and Drawing on the Right Side of the Brain and all of that.

I'm not really a fan of right-brain left-brain theory. And the reason for that is that it just ascribes way too much to innateness and to, you know, the actual construction of the human brain, more than I think is appropriate. I don't think brains are made with either a left or a right hemisphere that focuses on art or focuses on reasoning.

You know, even if that's how they come out, it's not the biology or neurology, it's a mentality. Okay, and, you know, the recent research on plasticity, you know; now we have all these veterans in the United States, all these veterans with brain injuries.

Yes, and they study them. And so they're watching the brain recover functions, moving them from the damaged area. So the plasticity research, you know, argues more against that. Yeah. But yet there is a certain bicameral mentality, you know. And then handedness comes in, and I deliberately became ambidextrous. And again, it was because of my trade and an accident: I was already interested in working on the dexterity of my left hand.

Then I had an accident and cut some fingers and a nerve in my right hand. And so for a couple of years I worked left-handed. Yeah. And after that, you can do most things with either hand; your hands know what they're doing. And then you think about that, you know, and then there's supposedly the crossed wiring of the nervous system.

And, you know, what's my brain doing? It's firing, my hands are doing two different things at the same time, you know; that's juggling. So, I mean, it's all well connected, it's all interesting. And again, I think it all applies to ethics, all of it.

And that, I think, is the distinction between the approach I'm taking here and the approach that you find in the UN document, and the codes of ethics approach, and for that matter, even the approach that ethics is something that, you know, we go through formal or semi-formal reasoning about: that ethics is something that we discover, as though it were mathematics, or invent, as though it were a categorization system for the world.

I'm looking for another, I think, sense of something that is more basic than that. Something that is like learning to become left-handed, or something that is like the plasticity of mind, as, you know, the basis for our story of ethics, whatever that's going to be.

So, since we're doing this informally, are you going to post this as today's thing, or not? What are you gonna do? Anyway, I'm gonna post this discussion, unless you have some objection to it. No objection at all. It's just that I have this overriding question that I didn't want to start off your presentation with, but here's the perfect opportunity, because, you know, we're here almost an hour.

And so, why is it that the people with the most elaborate codes of ethics, why is it that those professions do the most unethical things? The Catholic Church. American lawyers. You know, why is it the ones with the most elaborate codes that are the people that are, in my view, doing the most damage?

And there's, I think, two answers to that. One part of the answer is found in Voltaire's Bastards, and the other part of the answer is found in the short, simple expression: they can. And I know that sounds terrible, but, you know, I don't think it's about anything but power, you know, those classes.

The classes that have those codes also have the power. Yeah. And that goes back to last week: people will act unethically no matter what you do. So it's the confluence of power and obedience, and the immunity associated with it. So it's power and immunity together. But here's, and this looks forward a bit to where we're going to go at the end of the course,

this is probably a good way of finishing off for today.

We see all of this, and, you know, we see the activities that you've just described, the things that lawyers do, that the church does, etc., and we say that it's unethical. But who are we to say this? They have their own standards. They're ethical by their own standards, right?

Okay, sure, they may be hypocrites, or they may be saying one thing and doing another, which I guess is sort of the same thing. But, you know, we have to also take into account the fact that we just might not understand what ethics really is, and that while we might go along with, or maybe even be convinced by, this charade

that says, for example, killing is wrong, if we really understood ethics, maybe we'd see that, in fact, killing is right, and it's we who are mistaken. Or let's take even a worse approach, maybe worse is the wrong word, a more dispassionate, or more empirical, approach, right?

What if ethics actually is what we all believe and what we all do, ethically? What if, you know, meaning is use, as the Wittgensteinians would say? Then it turns out that the actual ethics that humanity as a whole believes in allows that killing is good. And why do we say that?

Look at what we do. Look at the evidence, right? Now, that's not a position I particularly want to support, but I think it's something that we need to take seriously. And yes, you know, I mean, we talk about taking an evidence-based approach, right?

What if, cynically, ethics just is a tool that, say, the powerful use to control the less powerful? You know, it's not actually in the outline or in any of my work at all, but the whole philosophy of Nietzsche fits in nicely here, you know, with his, what would the Superman's ethics

be? Or there's the philosophy of the transvaluation of values. What if we took all of our values and flipped them upside down, so right is wrong, good is bad, etc., or what we consider bad is actually good? What would our objections to that be? And we find that there really aren't any, none that don't sound like rationalizations.

So that, I think, is something that we need to take seriously as well. And that's pretty much the focus of the second part of the course. But in there is this considerable speed bump, which is the duty of care, and that's what makes things really interesting to me.

Anyway. Yes, because then you have a different set of evidence, a different set of data that, you know, explains the story of it. Yeah, it's just power. Okay. You know, I kind of believe that, I kind of believe the world is upside down, and, say, growing up in America brings you to have that point of view.

Yeah, but then you really problematize it, there's one of my big words I learned in college, problematize, that approach, by adding the care principle, or the principle of non-maleficence. It's now evidence-based. Okay, now that's something, right? Yeah, when you add that, then you can analyze

the acts, you know, the actual everyday reality, and then try to tease that out from the ethical code. Yes. Okay, I think that's a good note to finish on. I'm gonna post this presentation this afternoon; it'll show up in the newsletter, as well as the presentation from last week.

That'll finish off last week, plus the link. And I've got some fun toys planned for this week as well, I just have to code them. But yeah, I need to catch up with that anyway. I haven't done my slides yet. I've done some tweets and, you know, what's inside the document.

Yeah, but yeah, I'll work through the night. So yeah, busy, busy. All right, see you later. Talk to you later.

Approaches to Ethics

Transcript of Approaches to Ethics

Unedited transcript from Google Recorder

Hi everyone, and welcome once again to Ethics, Analytics and the Duty of Care. I'll just get this PowerPoint set up here, which I forgot to do before the start of this. So I'll do that now. Sorry about that. I thought it was all set and ready to go, but of course it wasn't, because that's what happens sometimes when you're doing stuff live.

And when you're doing it by yourself, you don't always think of these things. And of course, we're gonna have a train go by too, just to make life fun. There we go: Approaches to Ethics. I might trim the first few seconds off this video. If so, welcome to Ethics, Analytics and the Duty of Care, the beginning of module five, which is Approaches to Ethics.

I'm Stephen Downes, and I'm leading you through this course today. Now, what we're going to start off talking about, when we talk about approaches to ethics, is the question of the basis for statements about the ethics of learning analytics and AI. We make lots of statements. We say things like, people's rights to privacy should be respected, or, we need to balance the risks and benefits.

Etc., etc. We've gone through a whole number of these already: the different ethical principles, the different ethical values, the issues that have been raised. Excuse me. We've covered a lot of this stuff already. In this module, we're going to look at the deep and rich history of ethics to see if we can draw out

what the basis of these ethical statements is, why we make them, what justifies them. So let's look at some of the language that's actually used in some of the papers that talk about ethics and analytics in AI. Here's one from Kay, Korn and Oppenheim, ten years old but still relevant.

It says: to satisfy expectations of the 'born digital' / 'born social' generations, there is a likely requirement to take on ethical considerations which may run contrary to the sensibilities of previous generations, especially in respect of the trade-off between privacy and service. Excuse me, one moment. Nothing like getting a frog in your throat just as you're doing a live video.

So my take on their take is really that, why we're doing it, to satisfy the expectations of born digital and born social generations, that doesn't really seem right to me. I don't really see that as being a reason to start looking into the ethics, or even to change our positions on the ethics.

It's not about satisfying the expectations of a generation. How about this, from Drew (2016): people think that it would be irresponsible not to use data science in certain cases, and that we should not lag behind other countries. Again, what sort of reasoning is this? Do we draft our principles about AI and the ethics of AI based on what people think, or on whether or not we lag behind other countries? You know, this comparison with other countries is used to justify a lot of things, whether we should develop a certain industry or adopt a certain policy.

The idea that we're lagging behind suggests a moral superiority to the position those countries who are ahead are taking. But what does it mean to be ahead? Well, look at the Brookings Institution, which writes, basically, whoever leads in artificial intelligence in 2030 will rule the world until 2100.

So the idea here is, what is right is ruling the world. Really? I'm not so sure about that. What about this? As Voltaire's Candide might have said, we're faced with the imperative to seek out the best of all possible worlds. This is one of those worlds where we do the trade-offs, the risks, the balances, and all of that.

We have this requirement to ask, is this the best of all possible worlds? And Candide goes through a world where there's all kinds of violence and injury, and just generally bad behavior and horrible conditions, and yet still comes to say, this is the best of all possible worlds. This is as good as it gets.

And the reason why we have to put up with this bad stuff is so we can have this good stuff. Is that a reason for ethics? Is that a reason for artificial intelligence? I think these three approaches are kind of superficial. They're kind of superficial not in the sense that they aren't good arguments.

Though I don't think they are good arguments, they're superficial in the sense that they don't actually address ethical questions at all. They're all fancy ways of changing the subject. You know, meeting expectations, that's a different subject. Not lagging behind the others, that's a different subject.

Best of all possible worlds, that too is a different subject. But what about what is right and what is wrong? These are the questions that ethics gets at, and these are the questions that we want an ethics of learning analytics and artificial intelligence in education to get at:

Not whether it's popular or whether it rules the world, but whether it's right or wrong. But how do we determine that? Well, ethics has been a topic of interest, well, forever. Human interest in ethics goes back at least 3,000 years. I was going to put 3,500 years and it wouldn't have mattered.

We go back to the Epic of Gilgamesh. We go back to the Iliad of Homer, the Icelandic Eddas, and we see an ethics with a set of values that suit strong leaders of small tribes, and we see that code throughout history. We see things like the Sumerian Farmer's Almanac and the Egyptian Instructions, which both advised farmers to leave some grain for poor gleaners.

These are both examples from Wikipedia. So this sense of ethics, this study of what is the right way to live, what is the right way to be, what are the appropriate actions to take in different circumstances and different conditions, probably is as old as human society itself.

And in fact, we might argue that the very possibility of a society requires ethics, or, another way of putting it, the result of ethics is the creation of human society. Once we started thinking about what it means to do right or wrong to each other, that's when we began to get along with each other.

Ethics has a long history and association with religious beliefs. I was going to list a whole bunch of things. I could have looked at the Sharia code from the Islamic tradition, and the law codes that follow from it, or I could have looked at, say, the Ten Commandments, or the guidance of Jesus in the Christian Bible. I could refer to the Analects of Confucius, or the guidance that Lao Tzu gives in the Tao Te Ching.

There is a long history of ethics telling us what the right way to live is. At the same time, this association is waning. I say 'waning' in a loose sense, not as a precise mathematical calculation, but, you know, we see in the chart here on the right-hand side whether belief in God is considered essential to morality, and we see that in some societies, such as, say, the United States, Egypt, Indonesia, El Salvador and Ghana, people believe it is.

Yet in other societies, Canada, France, Israel, Australia, people believe it's not necessary to believe in God to be moral. There is a tradition of writing in this school. I referenced Miguel Goulan's summary of Greg Epstein's book, Good Without God, and it follows in a long tradition of such books.

Kai Nielsen, who taught at the University of Calgary while I studied there, has written Ethics Without God, and J.L. Mackie is well known for his book Ethics: Inventing Right and Wrong. So there is a sense in which ethics is associated with religion, but arguably it's not a necessary connection.

At least, not from the perspective of this inquiry. At this point in time, in Western traditions, and I emphasize Western, ethics as a philosophy begins within the Greek tradition, and histories of ethics, such as the one at encyclopedia.com, which take a very Western perspective, talk about it beginning with Plato, or arguably not with Plato but with Socrates; arguably, there was discussion about it before.

And again, I mentioned in a previous discussion the law codes of Solon, which predate any of these. But Socrates asks, what is justice? What is right? And we get answers like, justice is the rule of the stronger, or, man is the measure of all things. These are the sorts of answers that the Sophists might give, and Socrates, in a basically devastating analysis, shows the difficulty with such simple views.

So it's a lot more difficult than that. The rule of the stronger can be unjust; man might not be the measure of all things. How would we know? Plato comes up with what we might call the ideal form of the Good. It's like triangles: we have the ideal form of the triangle.

We have the ideal form of the Good, and just like the triangle, we can know it by thinking about it, and just like the triangle, it's how we measure all other objects of knowledge. Aristotle is well known as being less ideally minded than Plato.

He comes up with complex approaches to ethics, but we might summarize them, if we had to, and we do, as something like the exercise of natural human faculties in accordance with virtue. These characterizations are taken from that article on the history of ethics at encyclopedia.com, but you'll see similar statements expressed in pretty much every history of ethics derived from the Greek tradition, since the Enlightenment, arguably before.

But certainly not long after, ethics has been associated with reason, the Enlightenment, and humanism. And a lot of the philosophy that we see associated with people like Descartes and Voltaire and Pascal, with his famous wager, moved things like ethics from the domain of the, well, I won't say supernatural, that's the wrong word.

But from the domain of the ethereal, to the domain of humans, to be something that we as humans, fallible as we are, can know and comprehend. And this idea that ethics is associated with reason has been with us since, and is probably the dominant approach to ethics today.

Yes, we still see appeals to character and human nature. Yes, we still see appeals to religion and even ancient values. But all of these, even today, are generally couched within the form of the argument, the form of rational inquiry, perhaps looking for evidence, certainly posing things like thought experiments.

So that the reflective person can somehow think and come to grasp, at least for themselves, an idea of what ethics must be. There's a strong relation in this rationalist tradition between belief and ethics. And in fact, the one feeds into the other: we have an ethics, or a moral obligation, to believe responsibly.

And we have ethics that are founded on the basis of responsible belief. You see how they feed back into each other; it was hard to express that without making it a tight little circle, but the argument is there, right? For example, false beliefs about physical or social facts, like, say, the side effects of vaccination, may lead us to poor habits of action, and poor practices of belief formation, like, say, listening to fake news,

May turn us into careless, credulous believers. And here's something that's especially relevant for today, although it was written in 1877. We have a moral responsibility, says Clifford, not to pollute the well of collective knowledge. Now, that may not be a direct quote, but it does come from the article:

Believing without evidence is always morally wrong. There are different ways of believing in, let me try that again: there are different ways belief and ethics can interoperate, and I've depicted two here. On the left, the way reasoning, including ethical reasoning about, say, beliefs and desires, influences actions. On the right,

However, we have what is normally called something like rationalization, where our actions are actually caused by our instincts, our norms and our habits, but when we think about it, we are able to infer what our beliefs and desires were, or perhaps are. This form of reasoning is known as abduction, or inference to the best explanation.

And you can see that operating here. And although it's typical to say something like, you know, it's only a rationalization after the fact, it doesn't mean anything, it might be that, in fact, rationalization is rational if we're not trying to determine what our ethical beliefs are, but rather trying to explain what they are.

Moss and Metcalf, looking at the question, ask whether ethics itself is too big a word, that maybe the concept of ethics just brings in too many things. I mean, we've already seen that it brings in religion and history and culture and calculation. That's an awful lot. And then today we have a much more complex ethical picture,

Looking at things like compliance, legal risk, corporate values, and moral justice, and more, all sometimes described as ethics, sometimes working in parallel, as they say, and sometimes coming into tension with each other. And they may have a point. Although, in the discussion at noon I thought about this, talking with Mark, and thinking, you know, I don't think ethics is too big a word.

I think there is a concept. It may be a loose concept, it may be a poorly defined folk concept, but there is a concept of ethics. There is a sense, maybe right and wrong aren't exactly the right words, maybe rational is the right word, maybe compliance is the right word, maybe moral is the right word,

Maybe justice is the right word, but something like that, that we can point to and describe and refer to with respect to our conduct in education, teaching, and learning, and with respect to artificial intelligence and analytics. There is something there. I don't think

breaking it apart and hiving off sections that we won't talk about is really the answer. In fact, if we are to come up with an answer, I think that we need a concept of ethics that is going to, at the very least, explain, if not give us insight into how to form, things like compliance, legal risk, corporate values and moral justice,

Among others. Moss and Metcalf look at the adjective 'ethical', and they look at the different senses of what we mean by ethical. For example, we might have ethical outcomes. They write that this might consist of a cloud service company declining a contract with an abusive government or agency, or equalizing error rates across protected classes in an automated hiring system.

The outcome, in other words, is the result of the process. It's the non-discrimination that we see in our hiring; it's the non-compliance with unethical behaviors we see in some governments. It is presumably as well the pursuit of a positively defined good, such as the betterment of society, or the provision of more rights and more capacities for people, however you're going to define it.

Here we have a picture that talks about organizational citizenship behavior versus unethical pro-organizational behavior. That's a pretty good indication of what we mean by moral or ethical outcomes. But of course, ethics is more than outcomes. There are ethical processes, and, you know, we can compare ethics in this sense to something like the scientific method, where what's important isn't what comes out the other end.

Presumably, in science, we want what comes out the other end to be, you know, true, accurate, productive, effective, or whatever the different scientific values are, but what makes it so is the process that went into producing that output, and in science we call that the scientific method.

Now, we could talk for a long time about exactly what we mean by the scientific method, but it is generally agreed that there is one, and it is generally agreed that it's the right one because it produces good outcomes. I mean, after all, we're living in an age of technical and scientific wonders. We were able to develop a vaccine for covid in a year.

I mean, that's virtually miraculous, but it's a consequence of scientific method. Similarly, in the domain of ethics, a rigorously ethical process can look like some kind of messy, even unnecessary process, but the idea here is, you follow the process and something good comes out the other end. What does that process look like?

Well, there are many descriptions of ethical process, and that's part of the problem. This one here talks about knowing the facts, looking for the right people, knowing the applicable laws, being accountable toward stakeholders, noting the core values of the company, and being objective in decision-making. Now, that would seem like a very odd statement of an ethical process to other people.

And so it's not simply having a process that makes something ethical, but having an ethical process, which means something like a process, I guess, that would produce desirable ethical outcomes. Another way to look at this, and we talked quite a bit about this in the previous module, is ethical values.

Now, here we have a description of values that describe states humans, or any other being, desire, such as beauty, justice, wealth, and that are expressed as ethical principles. The little diagram there shows a bunch of them: determination, respect, integrity, quality, culture, morality, trust, discipline, character, energy, industry. Well, there are too many to name.

I came up with a reasonably long list of values, but arguably there are many more. And then you have to take these values, instantiate them as principles capable of concrete action, and then figure out what to do when the principles conflict with each other, as they inevitably will. Another way of looking at ethics is to think in terms of ethical requirements, the check-box system that we seem to have adopted as the norm for so many of our institutional and technological processes.

For example, here we have an article from Nature telling us ethical requirements are requirements for AI systems derived from ethical principles or ethical codes or norms. So we have value, risk, benefit, consent, traceability, privacy, and these feed into an institutional or ethics review board approval process. Tick, click, click, and that's it. Is that what we mean by ethical?

I don't think it is. I don't think any of these four accounts covers what we mean by ethics. Maybe another way of looking at it is to look at the different approaches to ethics. We're going to do a lot of that in this module. This here is a very quick sketch, and I'm not actually going to divide the approaches to ethics in exactly this way.

But I wanted to highlight this from the last quiz, because it does actually represent a fairly contemporary perspective on what the different approaches to ethics are. And the five approaches are as follows. The utilitarian approach, which is a consequentialist approach based on something like creating the most good for the most people. There are different approaches to interpreting a consequentialist position on ethics,

And we'll look at those in some of the next talks. Another approach is the rights-based approach. And we've seen quite a number of the ethical codes and statements on ethical principles adopt an explicitly rights-based approach to ethical issues in analytics and AI in learning and teaching.

And there's a lot to be said for that approach. Certainly we would think that it's unethical to violate somebody's rights. We'll have to look at how we identify what those rights are, what constitutes a violation of a right, and whether the domain of ethics is exhausted by a discussion of rights.

And of course, somebody will always say, well, if you're going to talk about rights, you have to talk about responsibilities. Another approach is the fairness or justice approach. This derives, in current philosophy at least, from John Rawls's book A Theory of Justice, which defines justice as fairness, and many ethical principles that we've looked at so far are based on this concept of fairness. Fairness itself is represented as an ethical value.

And we talked about that. There's a history of this sort of approach, and I'm going to characterize it in this module under the more general heading of a contractarian or social-contract-based approach. And so we'll be looking not just at Rawls but also at people like Hobbes, Locke and Rousseau.

And we'll think about the different bases for, and the different ways we can go about, creating social contracts. Additionally, there's an approach based on the common good. Once again, that makes us think of Rousseau, but it also makes us think of Mill, and it makes us think of communitarian-based ethics, where what's good

And what's right is based as much on what is good and right for everybody as on what is good and right for oneself. And then finally, as mentioned earlier, there's the virtue approach, and I will talk about that. Virtue-based approaches may have begun with Aristotle, although certainly, I think, people talked about them before Aristotle, and it's enjoying a modern renaissance, if you will, and there's quite a bit of discussion of character-based approaches to teaching and learning.

And these character-based approaches to teaching and learning, I think, will inevitably seep into a discussion of the ethics of analytics and AI in learning. So, like I say, I'm going to recast these five approaches. I want to bring in a more historical perspective, and I want to break out a bit from the fairly classic Western approach to ethics that

These five approaches represent. Finally, we're going to look in this module at what is called meta-ethics, or, as my old professor John Baker would say, 'matter-ethics', and I would say there's no R in meta-ethics. And here, you know, we've got all of these ethical theories and ethical approaches.

We're looking for the basis for ethical reasoning. How do we choose among them? How do we make decisions among them? How do we, if we want, balance them off, or how do we assign priority to one approach over another? This gets down to some of the really fundamental questions, in both the historical and contemporary discussion of ethics, questions like: does might make right?

And, you know, our first inclination is to answer, no, it doesn't. But if we look at, say, international diplomacy, it certainly seems that it does, and so we need to take that question seriously. Or how about the question of whether ethics describes duties? Can a fairly abstract,

And it is abstract, concept, such as ethics and moral reasoning, create and impose duties on people? Can I say to you, and be justified in saying, you have a duty to give money to the poor, or you have a duty to rescue a drowning dog, or you have a duty to refrain from murdering people?

Are ethics based on rights? We mentioned that before, in the different ethical approaches, but here we flip the question around. Suppose we have rights: does that give us a theory of ethics? Suppose rights are foundational. On the other hand, suppose rights are not foundational. How then do rights and ethics relate to each other? Do ethics require agency?

There's an old principle: ought implies can. That is to say, if there is a moral obligation to do something, then necessarily you need to be able to do it. And usually this is expressed in the converse: if you are not able to do something, then you do not have a moral obligation to do it.

Well, that makes sense to me, but what do we mean when we say not able to do something? Because we hear this kind of argument a lot in contemporary society: we cannot stop using oil to power our society, therefore we have no ethical obligation to stop using oil or to move toward renewable energy sources, for example.

And then finally, the big one that gets everybody: relativism versus universality. If there is an ethical principle, does it apply to everyone all the time? If there is a system of ethics, maybe a consequentialist system, maybe a rights-based system, does it apply to everyone all the time?

After all, the United Nations didn't just call it the Declaration of Human Rights; they called it the Universal Declaration of Human Rights. Conversely, what if ethics is different from one environment to another? In our discussion so far, and looking at the ethical codes in particular, we've seen that we can think of ethics as being very context-bound, bound to a specific profession, bound to a specific set of objectives or outcomes that we're trying to achieve.

So maybe ethics is relative, and if it's relative by profession, maybe it's relative by culture. We saw that different societies view the role of God in ethics differently. Does that mean, then, that different societies can have consistently different sets of ethics, or does it mean that some societies are basically unethical and other societies are ethical?

And if so, how do you judge which ones? These are tough questions, but these are the sorts of questions that we need to wrestle with. I expressed this kind of badly in the initial discussion that we had at noon today, where I said something like: the reason for this course is that people who have learned about ethics haven't learned about analytics and haven't learned about education.

But what I mean by that isn't that people haven't been schooled by someone like me who will tell them about all these things. What I mean is that people haven't asked the questions. A lot of the discussion of ethics in artificial intelligence and analytics simply assumes, say, that privacy is a right.

It must be respected. But when we push that, and we must push that, what is the basis for such a statement? After all, privacy protects criminals as well as the innocent. And, you know, it seems like, well, of course we should just balance this, but what makes a consequentialist approach, a technical approach of balancing, the right approach?

I mean, you wouldn't balance killing and not killing, would you? Or would you? Sometimes it seems that our society would. So those are the kinds of questions that need to be asked, and we need to think about some of the possible answers before coming up with pronouncements on what the ethics of analytics and AI in learning and teaching looks like. If you haven't asked the question, you're not in a position to provide the answer.

I think that's obvious, but maybe it's not. So our study of ethical approaches in this module isn't about learning the different ethical approaches; I could care less whether people know the different ethical approaches. It's that the different ethical approaches give us possible answers to some of these questions.

And at some point in the process of reasoning about AI, analytics, and ethics, we should consider these possible answers. We don't have to remember them all, that would be silly, but we should consider them the way we might consider, you know, what route to take to New York City, or whether to have chicken or beef for dinner.

We're not gonna memorize all the options, but, you know, if we don't consider the options, we may spend a lifetime eating nothing but chicken and never try beef. And that would be sad, unless you're a vegetarian, in which case that would be a good thing. Yeah, you know what I mean?

So that's what we're up to in this module: looking at the answers to some of these questions, both in terms of the ethical approaches and in terms of the meta-ethics that people have come up with over the years. Where it will lead, I think, is for us to think about how to approach ethical reasoning.

Generally, I think it may change our approach to ethical reasoning; certainly in my case it did. And we'll talk about that toward the end of this module, where we will ask, basically: what is the end of ethics? What are we up to here? And does this discussion ever end? So, with that, I'm Stephen Downes, and I welcome you to module five and the continuation of Ethics, Analytics and the Duty of Care.

Virtue and Character

Transcript of Virtue and Character

Unedited audio transcript from Google Recorder

Hi everyone. I'm Stephen Downes. Welcome to another episode in Ethics, Analytics and the Duty of Care. We're in module five, where we're talking about approaches to ethics. In the previous video, I talked about approaches to ethics in overview. Now, in these next few videos, I'm going to talk about a number of ethical theories in more detail.

And the first one will be our look at virtue and character. Now, I'm sensitive to the fact that this could be an entire course, and instead it's a short, maybe one-hour, segment of this course. I'm also sensitive to the idea that listening to me talk about virtue and character for an hour might not be the greatest way to spend your afternoon.

If that's so, fair enough; wait for the transcript, it will be available in the course. The reason why I'm going at it this way is because I want to combine a bit of the background research, a bit of the preparation for these things, but also a bit of the spontaneity that comes from talking about these things off the top of my head.

Because what I'm doing, as I assemble this course, is not simply pulling together research on various topics. I mean, I am doing that, but I've been working on philosophy and ethics for the last 40 years. So, yeah, there's a sense in which I'm pulling together research, but there's a more important sense,

I think, in which I'm drawing on my own depth of experience and knowledge, not only of ethics but of technology and of learning and even of journalism and other aspects of my life, to bring together an overarching view. And that's going to be characteristic of this presentation as well.

Now, I mean, I could just read an encyclopedia article, and I consulted a number, along with primary sources, in preparation for this video. But it wouldn't be very useful, because it wouldn't really have any context. I want to take this idea and place it into a contemporary context, and also a context based in this course.

So that's why I'm doing it this way. And I think you should take these videos not so much as me offering a lecture on a subject; I think that would be the wrong way to think of them. I want you to think of them as me setting myself up with a set of resources, including a set of slides.

But I'm also working from some text here as well, and some background information, setting myself up and then working my way through these ideas as a first draft, or more likely a second or third draft, of what will eventually become a more comprehensive work. If you view it that way, if you view this really as me thinking about these subjects for myself, rather than me telling you about the subjects so that you can remember them,

I think that might be a more productive way of looking at these videos. Okay, I've probably said things sort of like this already in the past, and I may come back to this message in the future, but I think it's a helpful way of understanding

what I'm up to here. But I'm also trying to make kind of a neat resource too. So, yeah, I'm going to try to do the video production values and all of that. But, you know, I'm not National Geographic. I can't pull off the sound and the moving images, and I'm working on a zero budget.

So, yeah, what do you expect? Okay.

Virtue and character. So ethics, as they say, in this perspective, is in the first instance the study of virtue in a person, perhaps as real, but revealed by a person's actions, or perhaps as revealed by how they conduct themselves in society. But what is virtue? That's one of those horrible questions.

It's a bit like asking, well, what is good, or what is ethically right? Well, the Stanford Encyclopedia of Philosophy says, and I quote, a virtue is an excellent trait of character. It is a disposition, well entrenched in its possessor, something that, as we say, goes all the way down,

unlike a habit such as being a tea-drinker, to notice, expect, value, feel, desire, choose, act, and react in certain characteristic ways. Now, we could pull a lot out of that definition. When we say, for example, it's a disposition, you know, that brings to mind Gilbert Ryle's behaviorist theory of mind, in which people's character amounts to nothing more,

he says, than dispositions. When we say it's well entrenched in its possessor, we sort of wonder whether there's an appeal here being made to the essence of a person, or perhaps we're talking about a person as something that has deep kinds of formations, maybe deep neural patterns or something like that.

When we draw a distinction between a mere habit and a virtue, again, we're sort of trying to distinguish that which is, shall we say, an accidental property of a person and that which is an essential property of a person. But we need to be careful of this, because the core attribute of the essential is that it's unchangeable,

and that creates intractable problems for virtue as a theory of ethics. The nature of virtue is usually characterized by describing the different traits of virtue, and we will do that in a little bit: things like honesty, frugality, piety, humility, caring, courage, etc. But the way virtue theory works is that it's not defined by these traits.

It's the old move Socrates used to use a lot. Somebody would say to him, well, what is the nature of justice? And somebody would answer, well, it's giving people what is their due, it is being fair in adjudication. And Socrates would say, well, no, these are just examples of justice, but they're not

what justice is. And so the same with virtue, right? These are examples of virtue, but they're not what virtue is. And again, so we have this sort of idea that there's a thing at the core, which is virtue, and then these characteristics spring off of that, almost as though they are aspects of some sort of sense of perfection.

So there's the idea of what the perfect person might be, and then these character traits, which are representative of a perfect person but aren't what defines a perfect person.

The achievement of virtue is represented as the highest ethical principle, and it's essentially tied up with the development of character, and that's why I've put virtue and character together in this section. Aristotle might say that the achievement of virtue is a lifetime task: you spend a lifetime building your character.

There's an old Daoist trope, or an old Daoist meme, which is probably not even accurate, right? Watch what you say, because then it's what you believe; watch what you believe, because then it's your character; watch your character, because then it's your destiny. I don't think Lao Tzu ever said any such thing.

I could be wrong, but I don't think so. But that gets at kind of the idea, right? That we can, by saying the right thing, doing the right thing, develop these virtues in ourselves, and it's an act of will. We'll come back to that. It's the opposite of what might be called

the weakness of will; it's the opposite of our succumbing to the temptation to indulge, to be intemperate, dishonest, violent, capricious, all of these things that I guess are not virtues. The achievement of virtue can be thought of as something like self-formation. And here we see this reflected in modern writers such as Michel Foucault, who talks about, and I quote, self-formation as an ethical subject, a process in which the individual delimits that part of himself that will form the object of his

moral practice, defines his position relative to the precept he will follow, and decides on a certain mode of being that will serve as his moral goal. That's a fancy way, quoted from the Internet Encyclopedia of Philosophy, of saying, you know, what kind of person do I want to be?

And how do I go about developing that in myself? I might say, for example, I want to be the kind of person that people trust, and then, as a result, I undertake the sorts of actions and cultivate that aspect of my personality that engenders trust.

But it can be a lot more basic than that. I remember, once, way back when I was working with the graduate students' association, somebody once said of me, within my hearing, oh yes, Stephen is always on time, which was a bit surprising to me, because I had never really thought of that. But then I realized, yeah, I want to be thought of as the sort of person who's always on time, and then I began to cultivate that aspect of my character: to be the sort of person who is there in his chair, ready to start, when the meeting starts or the event starts or whatever. That's not an accidental thing, and it's not even something that's in itself inherently a virtue.

I mean, nobody lists always being on time as one of the lists of virtues, but it's an aspect of character that reveals the deeper character that I was trying to cultivate. I don't know if I was successful, but, you know, I went through that process in my life, and it's something that isn't obviously limited to me.

And it is something that we think about, or have thought about more, perhaps, in recent years. There's the phenomenon of someone, quote, working on myself, and, you know, in popular culture we generally think of this as applying to women, but it certainly does not apply exclusively to women. And we can ask things

like: if you are working on yourself, are you considering these points? When you say "I am working on me," are you working on yourself, or are you maintaining your happiness? Are you somewhere in the middle? Or do you just not know? It doesn't matter where you are in your journey, and that's part of it:

the idea that we're going from where we were to where we're going to be, the idea of going from an older, less good version of myself to a more ideal version of myself. It partly has to do with virtue, and it partly has to do with our relations with each other and whether we're ready to interact with others.

And it partly has to do with our broader standing in society: what kind of person do I want to be in society? So the virtues, in the context of general theories of ethics, aren't so much that sort of thing, but more a normative sort of thing.

And by normative, what I mean here is that the idea of the virtue tells you how to behave, or what to do. From a BBC documentary, I pulled this: virtue ethics teaches the following: an action is only right if it is an action that a virtuous person would carry out in the same circumstance.

A virtuous person is a person who acts virtuously. A person acts virtuously if they possess and live the virtues. A virtue is a moral characteristic that a person needs to live well. And you can sort of see here that it's telling you what you should do, but not specifically what you should do; how you should live, but not specifically how you should live; even what kind of person you should be, but not specifically what kind of person you should be.

So it's normative in the sense that it's telling us, you know, what constitutes a right action, but it's vague in the description of what a right action is. And that's also listed as one of the criticisms of virtue ethics, that it's not prescriptive in that way. There are a couple of things here worth noting.

First of all is the characterization of a right action as something that somebody would do in the same circumstances. And so there's an element of counterfactual reasoning here. There's an element of asking yourself: what would be the state in the closest possible world where I am, in fact, a virtuous person, or where a virtuous person is standing in my place instead of me?

And so that is going to create difficulties. Certainly, it creates a need for something like a possible-worlds semantics, so we can understand what's happening in possible worlds and then apply it to this world. But we can do that; we have evidence of being able to do that. And there's also this sense in which, you know, being a virtuous person isn't like laying down a set of rules or principles, but rather it's more like recognizing what the right sort of thing to do is.

And I have a lot to say about knowledge as recognition in other forums and other formats, and I think that kind of reasoning applies here as well. A virtuous person wouldn't follow a principle; they would just recognize, or just know, what the correct action is. Here I've brought in, on the other side of this slide, a discussion of virtue ethics as it applies to research and research ethics,

again applying virtue ethics as a normative approach, telling people what to do without being a prescriptive approach. So we have basically a collection of factors: science as a social practice, ethical principles in research, empirical studies on researchers and their assessments of virtues, other virtue evidence, ethics studies, and one's personal values, bringing this all together to create something like virtuous research.

And then these are applied against the values and norms of the institution of science and the external, or externalized, ethical guidelines and principles in research. And so it's almost like we have this, I don't want to say "us versus them," because that's not how it's supposed to work,

but this internal sense of what a virtuous researcher would do, measured up against the framework defined by society and ethics boards as to what constitutes ethical research. And so the definitions of what constitutes ethical research by, say, research ethics boards aren't taken as definitive, but rather are taken as a standard to measure against one's own

virtuous sense. At least, that's how I'd like to take it, and I think there's something to it. I don't think we've got the whole story here, but I think we've got perhaps part of the story, and I'll refer back to some of this discussion as we get to the later points,

the later modules of the course. So let's think for a moment, then: what are the virtues? Because this is where almost all discussion of virtue ethics ends up. And just to introduce that, I've brought forward here a representation of Saint Thomas Aquinas on virtues. Thomas Aquinas is a very well-known philosopher from the Catholic tradition, but this is his position on virtues represented as a UML diagram, which is a type of diagram that computer scientists use in order to display the flow of data and information across a system.

And so we have first principles, practical reason, and character feeding into the concept of virtue, with human good and choice mediating. And so we have the core value of virtue, whatever that is. And then we have the subkinds: prudence, justice, courage, temperance. And then the characterization of virtue.

And okay, I think I kind of get that, but it seems odd to me to think that we can represent Thomas Aquinas with a UML diagram. And there does seem to be something missing in that, something to think about. So what are the virtues? Well, we'll begin with what are called the cardinal virtues.

These, as we just read, are the four virtues of mind and character in both classical philosophy and Christian theology. They are prudence, which includes wisdom; justice, which could be thought of as fairness or righteousness; fortitude, which could be thought of as courage, or maybe resilience; and temperance, which could be thought of as restraint, or the practice of self-control.

And we can ask: are these all and only the virtues? And the answer is going to be, pretty obviously, no. These were taken as the core, but in the 2,500 years since these were originally devised, there have been numerous variations on the theme.

Probably the most well known are Aristotle's 12 virtues. And here we have the list: bravery, temperance, generosity, truthfulness, wittiness, friendliness, being spirited, conscientious, indignant, benevolent, and industrious. And what I find interesting about this list is some of the characteristics like wittiness. Again, you wouldn't think of being witty as an ethical principle, but there it is.

And, you know, when you think about it, it's like the theory of being successful as being, you know, the sort of person you want to sit down and have a beer with, or something like that. Maybe having a beer with is the wrong example, but I think you get the idea, right? Somebody who's friendly is somehow ethically better,

so we think. And maybe the way Aristotle sets this up, at least in this depiction, is that there's a range. These are characteristics that, on the one hand, might be completely absent, and that's a vice. And on the other hand, we can take them to excess, and that's a vice. And being virtuous here

really means finding that happy medium, or the happy mean, you know, somewhere in the middle. So being brave, as opposed to being cowardly, which is the absence or the deficiency, but also as opposed to being stupid or rash. You know, it's one thing to be brave; it's another thing to jump off a cliff without a parachute.

That's rash. Temperance, similarly: you know, being addicted to alcohol, or being addicted to food, is too much; it's a deficiency of temperance, if you will. But on the other hand, giving up food, giving up alcohol, taking these things to the extreme, could also be seen as a vice.

This is kind of interesting in the Greek context, because there was no shortage of ascetics in the Greek context, people experimenting with different ways of living noble lives, virtuous lives, or just philosophically consistent lives. Another way of looking at all of this is through the lens of Stoicism, and I don't like using the expression "through the lens of,"

but I'll use it here; it's good enough. And Stoicism, again, could be an entire course, right? But it can basically be characterized by the triangle where, on the one hand, we express the highest version of ourselves from moment to moment. That's the principle of areté: be all that you can be. And on the other hand, we focus on what we control and accept the rest as it happens.

And that reminds us of that old phrase, right? How does it go? I'm trying to remember it. Give me the courage to change what I can change, the acceptance not to change what I can't change, and the wisdom to know the difference. Something like that; you know the phrase.

And so the third part of Stoicism, the part that everybody kind of keys in on, is the idea of taking responsibility and recognizing that it's not our external situation that makes us happy or miserable, but rather our interpretation of that situation. Which is fine for the most part, but if your external situation is one in which there is utterly no food to be had, it takes quite a bit of strength of character to be happy

nonetheless, you know? So the concept of virtue is sometimes to be contrasted with the concept of having ordinary emotions in an ordinary life. These virtues aren't just limited to the Greek and Christian traditions. We find them in other traditions as well, and I'll sample a few here. And again, a whole entire course in one slide, right?

So, for example, within Confucianism, we could identify the following virtues: benevolence, righteousness, propriety, wisdom, and fidelity. And that last one is probably what's most characteristic of Confucianism, because fidelity means honoring your parents, honoring your forefathers, and, to a degree, honoring those who you serve, or, you know, who are in authority over you.

And again, that's a very hand-wavy representation of Confucianism. Taoism has a similar perspective. And, you know, as you read the Tao Te Ching, you read constant references, depending on how it's translated exactly, to the sage. The sage is this, or the sage is that. And by the sage we can mean,

perhaps, the philosopher, or perhaps the enlightened ruler; it depends on your perspective, right? And so here I'm quoting from Britannica: Taoist sagehood is internal, although it can be manifest in an external royalty that brings the world back to the Way by means of quietism, variously called non-intervention (wu wei), inner cultivation,

or art of the heart and mind (hsin shu).

And that one is interesting, because the virtue that it describes is, in a sense, a virtue of selflessness: you're not striving for wealth, you're not striving for power. But it's also a virtue of effectiveness, in that if you live this way, you will acquire wealth, you will acquire power. And it's this sense of self-abnegation, you know; it's not simply doing nothing, but rather doing something so perfectly that there is no trace of yourself in the finished work.

So, like a jade carving, you don't want to see the marks made by the carver. There's a lot more to it than that. But from the perspective of virtue, we can see that there are some principles that are sort of not principles, or some characteristic virtues that are not really characteristic virtues, but it's still very much, at least on my interpretation, a form of virtue ethics: Bushido.

And here I can't make any claims that this is an accurate representation of Bushido, because I simply don't have the background in that theory. But nonetheless, I can say, because it's right here, that these seven virtues can be described as a type of virtue ethics. They would include things like integrity, respect, heroic courage, honor, compassion, honesty and sincerity, and duty and loyalty.

Now, one of the things we should notice here is that, although it almost feels like there's a core, there isn't a core; and although it almost feels like they're the same thing, they aren't the same thing. And there are certainly differences of emphasis in the different approaches to virtue.

And I think that's an important aspect of this approach, and that aspect is just: what are those virtues? Keenan offers a more contemporary list: justice, which is not simply fairness, but a requirement to treat everybody equally and impartially wherever possible; fidelity, which, note well, seems to contrast with justice,

in that we should treat people closer to us with special care; self-care, our unique responsibility to care for ourselves, which speaks again to this achievement of virtue; and then prudence: we must always consider these things. And the way he derives these, he says: as persons, we are relational in three ways, generally, specifically, and uniquely,

and each of these relational ways of being demands a cardinal virtue, which are justice, fidelity, and self-care. And then the fourth cardinal virtue, prudence, determines what constitutes the just, faithful, and self-caring way of life for an individual. It's the moderator between those three, if you will. And that's an interesting characterization of virtue, because, again, none of justice, fidelity, or self-care is always going to carry the day, and we need this prudence, this idea,

perhaps, of wisdom, knowledge, attentiveness, to actually decide how to weigh one of these against the other. So that's one way of looking at the virtues from a contemporary perspective. Here's another way: Moulin Rouge. And we don't normally think of a movie starring people like Nicole Kidman to be an example of virtue ethics, but nonetheless, there it is,

promoting the virtues of truth, beauty, freedom, and love. And just to show how those roll up into virtue ethics, this is something that I created, called the Ching. And it's the idea of creating a sense of virtue reflective of day-to-day life, using the same methodology as the I Ching with coins.

I've never really mastered the method with yarrow stalks. And so the idea is that you toss the coins, and that gives you a position on the grid, which is a combination of two of these four virtues. So, for example, if you throw 1 0 and then 0 1, you get freedom against beauty, and what is that?

That's beautiful freedom, beauty in freedom: sweet liberty, no responsibilities. And that's an interesting way of looking at virtue as well. But the idea here is that we're seeing how the virtues are instantiated, not just in ourselves, but in how we see ourselves living from day to day. And that's kind of what the I Ching does as well.
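(The coin-toss mechanism being described can be sketched in a few lines of code. This is only a minimal illustration: the virtue names are the Moulin Rouge four, but the mapping from coin-toss bits to grid positions is an assumption here, not necessarily the layout of the actual grid on the slide.)

```python
import random

# The four virtues from the Moulin Rouge list. The order, and hence
# which pair of bits maps to which virtue, is assumed for illustration.
VIRTUES = ["truth", "beauty", "freedom", "love"]

def toss_virtue(rng=random):
    """Toss two coins; read the pair of bits as an index into the four virtues."""
    high, low = rng.randint(0, 1), rng.randint(0, 1)
    return VIRTUES[high * 2 + low]

def reading(rng=random):
    """Two tosses give a position on the 4x4 grid: one virtue set against another."""
    return toss_virtue(rng), toss_virtue(rng)

first, second = reading()
print(f"{first} against {second}")
```

So a throw of 1 0 followed by 0 1 would, under this assumed ordering, pair the third virtue with the second: freedom against beauty.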

It's sometimes represented as, you know, a fortune-telling process, but I don't think it is. I think it's a way of seeing and understanding how the different virtues, how the different elements of the world, come together to form different ways of seeing the world and your life and other people, in ways that might be important to you.

So, in contemporary times, and by contemporary times I mean any time in the last 50 years, I suppose, the concept of virtue ethics has blossomed into a way of looking at how to do well and be well and live well, generally. And I've grouped these under the heading of character and mindset, and I think of these as modern adaptations of virtue theory. And it's not hard to see the connection.

Look at this diagram from George Couros's book, The Innovator's Mindset, and look at the values that we see being described here: empathetic, problem finders, risk takers, networked, observant, creators, resilient, and reflective. Now, it's true that this isn't exactly the same as prudence, justice, fortitude, and temperance; nonetheless, there's certainly the sort of argument that could be made that these are modern virtues for a modern time,

although a cynic might say they're capitalist virtues for a capitalist time. So how do we get to these modern virtues? Let's go to Friedrich Nietzsche. Nietzsche looks at the concept of virtue and comes up with a number of really important ideas. And again, this is a full course in just a couple of minutes of explanation, but here are a few of the things that he does.

He looks at the concept of the Übermensch, which we know now as Superman, and asks, basically: what would the ethics of a Superman be? If you had no constraints, what would your ethics be? And I think that's a good question. The answer that we got, at least in the mid-30s in North America, is that if you had superpowers, what you would do is fight crime, which seems like an odd thing to do.

But I think that tells us a lot about the state of American society at the time, and rather less about the nature of Superman. But beyond that, Nietzsche has this idea of what is called the transvaluation of values. What if you took a value, such as, say, honesty, and you reversed it,

so that being dishonest is considered to be the ethical thing, and being honest is, maybe, say, a weakness, or a lack of virtue somehow? And this is a bit of a caricature of Nietzsche's view, but it's close enough for our purposes here. You still have a system of morality; certainly not the one that we have, but one that could be defended pretty much on the same sort of grounds as the ones that we do actually have.

And so there's just this sense in which we need to understand morality as beyond basic definitions of good and evil, beyond this 2,500-year-old philosophy of Zoroaster, which depicted the world in these terms of good and evil, right and wrong, and depicted the world as this endless battle between the two, and perhaps see virtue as something else.

Now, we can depict it as what the Übermensch would do, or we can say that it has to do with what your values are, what your character is, and what your nature is. Now, you can see how this could lead to some fairly bad results if put into the wrong hands. You can see how, for example, people might say that a certain class of people that has a certain nature is more ethical than others, you know, as a type of nationalism, say. And I think that would be a misinterpretation of what Nietzsche had in mind.

I certainly do think it would be a misinterpretation. Nonetheless, that's what people have done. And you can also take this transvaluation of values and just say, you know, whatever the Superman does is good, and apply it to contemporary politics and the world of Donald Trump, in which lying is a virtue,

stealing is a virtue, and murder, if you could get away with it, would be a virtue. Because the idea, in a world like a Donald Trump world, is that you take what you can, and that politics isn't the art of negotiation and compromise; politics is the art of leverage. And viewed from a certain perspective, that can be seen as good,

and these can be seen as virtuous characters. Well, there's a danger in that kind of thinking, obviously, but it's not clear where the source of the danger is, and how you address or resolve that danger. But I think it's reflected in Superman's new motto. His old motto used to be truth, justice,

and the American way. But now it's been depicted as truth, justice, and a better tomorrow, and we're told it's meant to inspire people from around the world. But I think that maybe it's because the American way found itself unable to distinguish between the ethics of a Donald Trump and the ethics of, say, I don't know who. Maybe that's part of the problem.

Another aspect of character and mindset is the idea of role models. So, on Wikipedia, and I admit I edited this sentence so that it would read properly (that's what you can do with Wikipedia): a role model is a person whose behavior, example, or success can be emulated by others, especially by younger people.

The term role model is credited to sociologist Robert K. Merton, who hypothesized that individuals compare themselves with reference groups of people who occupy the social role to which the individual aspires, an example of which is the way young fans may idolize and imitate professional athletes or entertainment artists. And I think there's something to that.

And I've talked about a theory, or maybe a way of looking at the world, that I've had over the years, where the role models from, say, the 1940s, well, we can go back even before that, before the war. Before the war, the role models were G-men, FBI agents,

and we had dramas like Dragnet, where even our anti-hero characters were people like Philip Marlowe.

And then, during and after the war, the role models were, you know, soldiers, GIs, and that held on for a while. And then there was Sputnik, and all of a sudden all of society did a 180, and role models were people like scientists, and we had science fiction depicting

Wash buckling, young heroes with slide rolls on their bells of they stole that quote from somewhere. I don't know where it was from and there was a time maybe in the 1950s, maybe the 1960s, where the role model was the film star. And everybody wanted to be famous or in the 60s, in the 70s where the raw model was a rockstar and everybody wanted to start a band.

And in the 90s and into the 2000s, perhaps a more cynical age, the role models were, well, they've always been athletes, since the war, perhaps before the war as well, but also people like businessmen or tycoons. And so you see people like Bill Gates, or the founders of Google, or Mark Zuckerberg held up as role models.

And we've come to see the problem with that as well. And here, the idea of a role model isn't copying exactly what they are; it's kind of a symbiotic relationship between the person who's in the role model role and the person who's using them as a role model.

And, you know, it brings to mind the plaintive cry of the kid who says "say it ain't so, Joe," when Shoeless Joe Jackson was found to be part of the cheating Chicago Black Sox. Or, even in more contemporary times, we have Aaron Rodgers, who just a couple of days ago was found to be, you know, lying about whether or not he was vaccinated.

He's, you know, the fall of the role model. But it's a thing, and it very much has to do with virtue ethics as an element. It's not about whether a person can throw a touchdown, or whether a person can found a company, or whether a person can land on the moon, or whether a person can capture Berlin; it's about the properties, whatever they may be (and they're not always listed), that enable people to accomplish these great things.

You know, I have a picture, I'll show it to you because it's worth showing, because it's on my wall right there. That's Jose Bautista. Now, we'll leave aside the fact that he's 30 or 35 years younger than me. But I have this picture there because he's a good role model.

And it's not because I want to be able to hit a baseball into outer space. It's because he exemplifies virtues that I think are worth following. In hockey, Doug Gilmour is an example of a role model for me: somebody who shows the heart and grit and willingness to play through pain in order to help the team and achieve success.

Now, that doesn't mean I think I should, you know, show the same sort of heart and grit and willingness to play through pain. I am not going to do anything on a broken leg; it's not going to happen, you know. I don't want to emulate those virtues identically, but the model is something to work from, not as an ideal, but as something to shape the way I see the world. And I think that's how role models kind of work.

We can take that to extremes, you know. There are ways of describing different personalities of people, and there has been no end to the personality type quizzes. And, you know, in education we have the ongoing discussion of learning styles, and that's neither here nor there. What we do have, though, is this identification of a set of qualities of a person: sometimes thought of as innate or unchangeable, essential in other words; or sometimes thought of as something that you can acquire or develop; or even sometimes described as just preferences, just sort of accidental, the way we ended up in life. But these can be depicted as valuable or not valuable.

These can be depicted as virtues. Here we have the DiSC personality types, and you can see basically what we've got here are four types of person who have four sets of virtues that they value. So one is results-oriented, firm, forceful. Another one is outgoing, enthusiastic, optimistic. Another one, and this is more like me, is analytical, reserved, systematic and precise; if you want systematic, look at this course. And another one is even-tempered, accommodating, patient, tactful. That's not me, right? So you can see how these are lists of virtues, but it's almost like a menu that you can choose from. Over on the right-hand side,

we've got the traditional Myers-Briggs personality types, and we can talk about whether those are real or not real, and it doesn't matter. But it's interesting because somebody has taken them and given them virtues and vices that are particular to the personality type. So, for my personality type, which is INTP, a virtue would be attentiveness.

And yes, I can be really attentive. But the vice is apathy. And yes, if I'm ignoring something, or if I don't care about something, I really don't care about it, and that can be seen as a vice. And this sometimes is talked about explicitly in terms of virtue or vice. Here's an article that showed up in Futurity, reporting on a study published in the Journal of Experimental Psychology, which said, and I'm quoting from Futurity here: it showed participants with liberal and conservative political beliefs both shared erroneous news stories to a certain degree, but conservatives who also scored low on conscientiousness engaged in such behavior to a greater extent.

They were more likely than liberals, or than more conscientious conservatives, to share misleading information. So here what we have is the attribution of a vice, or the absence of a vice, to a certain group of people, or the attribution or non-attribution of a virtue to a certain group of people, and then associating it with, in this case, a non-virtuous type of act.

So again, this shows how something like virtue ethics can be a bit misleading. And that takes us to the final slide, which is the discussion of mindsets. And a mindset is kind of like a character trait, and kind of not like a character trait.

It's a set of beliefs that shape how you make sense of the world and yourself. George Lakoff might think of mindsets in terms of frames, right? A frame, again, is how you see the world: what categories there are in the world, how cause and effect work in the world.

What is your own personal nature? What are your own capabilities? And so we have things like the growth mindset, which sees our own abilities as something that can change rather than being fixed; the innovator's mindset, which we discussed earlier; the design mindset, which is represented here in a diagram, where your virtues are that you build to think, you center your work around your users, you selectively pause feasibility (whatever that means), you take on a beginner's mindset, which is a mindset within a mindset, you embrace constraints, which is not what I do. I don't work within the box. I smash the box. I deny the existence of the box. Working with interdisciplinary teams, you see that talked about a lot in other kinds of mindsets, and thinking of everything as a prototype. Well, you know, that's kind of like the founder's or the startup mindset. And so there are all these mindsets, all these accounts of what counts as virtuous, and we can think of the literature out there that talks about grit as virtue, resilience as virtue, entrepreneurship as virtue, etc.

And I don't have a slide here talking about how virtue ethics can fail, but we can talk about that, and there are a number of ways. First of all, if we think of virtue ethics as a normative theory that tells us what's right and what's wrong, then it needs to take a stance on the deontic status of anything.

In other words, it needs to take a stance on whether something is right or wrong, and then identify a set of right-making features; in other words, what makes something right, what makes something wrong. You can't just list all the things that are right in the world and all the things that are wrong.

The idea is, if it's going to be a normative theory, these things have some sort of characteristics, and it's these characteristics that tell us what virtue is. Well, the problem is either it can't actually make this determination, or it will be thought of as impossible, because whatever it brings forward as defining rightness and wrongness would be less obviously right or wrong than the things it's defining as right or wrong.

Let's take honesty, right? Honesty is thought of as a virtue. And so a deontic theory would say you should act honestly, or it might say something like you should be honest, right? A virtue theory would say you should be honest, but it wouldn't come out as "you should act honestly."

Okay, so there should be a reason for making honesty a virtue. But what is more obviously a virtue than honesty, such that honesty would depend on it as a virtue? You know, anything that we could bring to mind as an argument in favor of honesty is less likely to be thought of as a virtue than honesty is.

So calling honesty a virtue hasn't really told us anything, and that's a problem. One of the other problems, and I'm sure you've already seen this just in the way I've presented the subject, is that any number of different things can be listed as virtues, and there's no way to tell them apart. And it's kind of a variation of the first problem, but it's a variation because, you know, in the first problem, okay, nothing tells us why honesty is a virtue; honesty is just a virtue, it's not in virtue of something else that it's a virtue, okay? And that's fine. But now I have my alternative list. How do we determine which one is the correct list? Or maybe there are multiple lists.

As in the case of the multiple personality styles. Or maybe something that is obviously a virtue to someone is obviously not a virtue to someone else. I think of an example: spirituality. Is spirituality a virtue, or not a virtue, or is it even possibly a vice? Ask different people and you will get different results.

And so that's an issue. And then finally, maybe I could go on with criticism after criticism, but I'll leave this as the last one, and we'll end it there: it doesn't really guide us in anything. Okay, I have my set of virtues. I'm honest, let's say. I'm charitable. I'm prudent. I watch out for my own interests.

I watch out for other people's interests, you know, this set of values. And let's say I'm confronted with some issue; let's take Philippa Foot's trolley problem. Philippa Foot describes it as, you know, you've got a trolley, it's going down a track. If you pull the handle, you save the five people.

Those are the five people it was gonna run over; but by putting the trolley on another track, you're gonna hit someone else. So what do you do? Virtue ethics only answers: you do whatever a virtuous person would do. Well, what would a virtuous person do? I don't know, and therein lies the problem, right? The only way to know whether or not you should pull that handle is to put a virtuous person in the position of having to pull that handle; the counterfactual is impossible to decide.

But if you put a virtuous person in that position, doing that in itself is a very unethical act, because you're gonna kill someone. Hey, it's kind of like the Squid Game of philosophical problems. Someone's gonna die. And is it ethical to be one of those people who's gonna die, or to not be one of those people who is going to die, or to decide which person's going to die and which person's not going to die? All of these come up in Squid Game.

And all of these come up in life. And therein lies the problem with virtue ethics: the most virtuous person in the world is still left without a solution for how to answer the world's ethical problems. And that's why virtue as an ethical principle kind of receded after the Renaissance and the Enlightenment.

Alternatives came along that would allow us to use our capacities or faculties of knowledge and reason and experience to make these determinations. And it's interesting that we see a revival of virtue ethics today, in the form of mindsets, in the form of personality types, in the form of role models.

Et cetera. And it's almost like it's a symptom of a society that's losing faith in the capacity of reason and wisdom and experience to tell us what's right and wrong. Now we have to go back to what we were doing before we decided that reason and science would describe the way forward for us.

So that's the first of these ethical principles. And I probably should have mentioned this in my preliminary to this particular video, but I might actually be talking about these longer than I really should, given the overall context of the course. But they're endlessly interesting to me, and I'm going to be constantly going off on tangents, and it might put me behind in presenting the video material for the course. But I'm not going to worry about that, because I'm going to take the effort that it takes in order to talk about each of these ethical issues appropriately.

And if I fall behind, then I fall behind; I mean, the only person setting the schedule here is me. And, you know, it's not like I have a million people following the course, I know, that's terrible, right? But again, it's me thinking and trying to decide between different alternatives. Happily,

nobody's life is depending on my decisions on this. But I hope you enjoyed this discussion, and I hope you found things to agree with in it and disagree with in it, and perhaps challenge my interpretations of different approaches and different theories. All of this is fine. It doesn't matter whether the presentation is the most expert, precise presentation in the world.

What matters is that I've got enough of it in there so that I've given you something to think about, and to consider alternative possibilities, alternative ways of approaching ethical issues in learning analytics and AI in teaching and learning. So, that's it for this video. Thanks a lot.

I'm Stephen Downes and I'll see you next time.

Duty

Transcript of Duty

Unedited audio transcript from Google Recorder

Welcome once again to Ethics, Analytics and the Duty of Care. I'm Stephen Downes. We're in module five, approaches to ethics, and today's talk is going to focus on duty. Now, you might think that this is kind of an appropriate topic to pick, given that today is November 11th, 2021, or Remembrance Day here in Canada.

And the topic of duty often comes up when we talk about our military obligations and our annual day of reflection on the service of people who've given their lives for our freedom and our democracy. I've chosen to go with a slightly different motif for the cover of this presentation, though, and I've picked police.

And it looks like, actually, Australian police, but I couldn't say for sure. But I could have picked any number of professions: doctors, lawyers, you know, even technologists, accountants, even researchers such as myself, perhaps, academics and professors; all of us who feel informed in one way or another by a sense of duty.

And so I didn't want to just go into the standard trope of, yeah, duty as a military thing, and that's the beginning and the end of it. But I'm certainly not going to ignore that aspect of the concept of duty, and around it some of the associated concepts such as honor and courage and sacrifice.

I think these are all interesting aspects of ethics and morality in general, and there's a whole history behind that which I want to talk about today. So the subject of duty is, it's the idea, first and foremost, I suppose, of a requirement to moral action. And I'm kind of a free spirit, I'll admit that right off the bat, and so, you know, being required to perform a moral action is not the sort of thing that's ever appealed to me.

But by the same token, that would certainly give other people grounds to say that, you know, my being a free spirit is rather selfish and an unethical way to live. So we can look at that from both sides. The branch of ethics concerned with duty is called deontic ethics. The word "deontological" comes from the Greek word "deon", which means duty. And so basically, duty-based ethics teaches us that some acts are right and some acts are wrong simply because those are the sorts of things that they are.

We usually think of ethics in terms of what the outcome is. You know, if you do something and you kill a person, then whatever it was was a morally bad act. Deontology doesn't work that way. And there are a few reasons for it, and I think these are actually pretty good reasons.

One thing that deontologists will say is that we can never really know what the outcome of an action is going to be. Here we could be appealing to something like chaos theory or the butterfly effect, to the effect that we don't know what could happen down the line.

We step on a butterfly, we change the weather in China. How could we know? Sometimes it also has to do with the idea that, you know, the outcome of the action is in a certain sense irrelevant, given our intentions to perform or not perform an action. If I shot at somebody, but he collapsed of a heart attack while the bullet was in flight, and the bullet ended up missing the person, there's no bad consequence; the guy's still dead, he would have been dead anyway. But, you know, arguably my action was wrong, because what I had in mind was to kill the person.

And it works the other way around, too. You know, you can intend to do good with your action, and sometimes bad things result, but the act was still morally good because it's that kind of action; it's a morally good action. Well, we could talk about that. The concept has probably been around since forever, right?

You know, we can go all the way back to the Ten Commandments or the law codes or whatever you want. There have always been rules that are basically brought forward as guidelines or edicts of ethical behavior, simply on the basis that these are the rules. And sometimes they're justified theologically, but more often they tend to be justified perhaps on the basis of human nature.

Or more just on the idea of what we might call natural law theory. And it's this idea that we can know about the laws of the world just by thinking about them. A triangle's interior angles add up to 180 degrees; we know that, it's a law of triangles.

We know it just by thinking about it. 2 plus 2 equals 4; you know, we don't need empirical proof, we know it just by thinking about it. And similarly, we can know moral truths in the same way. They're inherent in the concept of morality, if you want to put it that way.

And some of these will seem really intuitively obvious to you. We have Aquinas, who was a proponent of this, and he says, for example: every substance seeks the preservation of its own being according to its nature, and by reason of this inclination, whatever is a means of preserving human life, and of warding off its obstacles, belongs to natural law. Living beings have a natural inclination to seek to continue living, and so an ethical law based on protecting our ability to keep on living seems to be a pretty obvious moral law. Life is good, and that's the foundation. We can embed this in a system; I mean, we can, you know, like the diagram shows, test our moral intuitions by creating predictions, or creating moral principles out of them, applying them to specific cases, and testing them against our intuitions. There's a whole process we can come up with here. But what it really boils down to is this idea that, you know, just the very fact that we are living beings and ethical beings gives us the moral intuition.

That life is good; things that preserve life are good. There are different ways you can set this up, and one way is to distinguish between act intuitionism, that's the idea of knowing whether a particular act is a good act, and rule intuitionism, where the focus isn't on the individual act but rather on the rule that governs the act. And so the intuition is that this particular rule is a good rule; following this rule is morally good. "Don't lie" is a rule, and so the intuition here is that following that rule would be good.

Natural law theory persists to today; it hasn't gone away. And actually, you know, I haven't done an empirical examination, but I would imagine that a good percentage of the population adheres to some form of it or another. John Finnis and Germain Grisez are contemporary writers writing out of Notre Dame, and they've offered us a set of seven basic self-evident values from which moral norms could be derived. These seven are basically life, health and safety.

Our capacity to know about reality and appreciate beauty; our capacity to be excellent in work and in play; our desire to live at peace with each other, neighborliness, friendship; our capacity to have aesthetic experience, as a feeling of harmony and inner peace (and I can personally speak to that one); harmony between our choices, our judgments and our performances, walking the talk, if you will; and then finally religion, the pursuit of ultimate questions of meaning and value, or perhaps of the cosmos and the nature of the universe. A lot of people are going to look at those principles and say, yeah, those are the kind of principles that I agree with.

But, you know, one of the issues of natural law theory, this idea of moral intuitionism, is the plurality of systems of morality. I mean, if you can just intuit it, who's to say that your intuition or my intuition or someone else's intuition is the right one or the wrong one?

There are all kinds of ways of cashing this out in different kinds of intuitionist systems that may be more or less supported by the actual way humans are. Naturally, there's a whole discussion, in fact, about what is natural for a person and what isn't. Is it natural for a human to fly?

Well, clearly not; we can't flap our arms and become airborne. But on the other hand, we could build airplanes. You know, what is natural, on one hand, seems to be what falls into the domain of what's possible, so anything a human can do with their body is natural. Or, on the other hand, natural may have to do with an overall purpose, objective or goal, right?

And if the purpose of a human is to stay alive and reproduce, then you can come up with a narrower definition of natural. Similarly, values, those values that correspond with our moral intuitions, can be described in any number of ways. I've illustrated just one such system here, Schwartz's value theory, which talks about four dimensions of values, including openness to change, self-transcendence, conservation and self-enhancement, with subcategories of things like stimulation, hedonism, achievement, power, security. Now, are these good things? Are these bad things? Or is any combination of these things, as values and non-values, acceptable? Or I could bring in, in this context, Maslow's hierarchy, the five stages of needs that people have. And needs are certainly something that seem to flow from who and what we are.

So we could begin, as Maslow does, with physiological needs, and then, once those are met, look at safety needs, love and belonging, esteem, and then finally, at the pinnacle, self-actualization. Or, as one wag put in there as well, Wi-Fi; got to have Wi-Fi. So that's a weakness of, you know, the natural theory of ethics and value theory.

So we can go back to: what is the human? What is human nature? And a good place to start for this discussion, in this context, in this day and age, is with Jean-Jacques Rousseau, the French philosopher of the 1700s, who observed at the beginning of his book: man is born free, but everywhere he is in chains. And the idea, let me just caricature here rather than strive for precision, the idea is that the human naturally is good and virtuous, but society, and the constraints and the artificial demands and artificial needs and desires that society brings to us, constrain that. You know, it's funny, as I express thoughts about Rousseau in this way,

I'm thinking as well of Kalle Lasn and Adbusters magazine, or Noam Chomsky and Manufacturing Consent, talking about the same way the structures and actions of society create artificial desire, and in so doing impinge on one's dignity and one's freedom. And this is based on the system, the system based on capital and self-interest.

And here I'm quoting from the article, not from Rousseau himself, but from the article: the hope of creating a stable and just political society on the basis of narrow self-interest is a soul-shrinking and self-destructive dogma masquerading as a science of politics. For Rousseau, what was important was the meaning and importance of human dignity.

The primacy of freedom and autonomy, and the intrinsic worth of human beings. And let me be careful here: when we use the word "worth" in this context, we're not talking about numerical worth, or, you know, thinking of it in terms of finances, how much money a human could get, or value as in a person is more or less valuable. You know, we live right now in an environment where virtually every concept that we have, in every discipline that we have, ultimately breaks down to some description in terms of money and finance, but Rousseau didn't live so much in that world.

And he wasn't using words like "worth" in that sense, and I don't think we should either. So, influenced by Rousseau, and influenced by, you know, people like Saint Thomas and rights theorists, we have Immanuel Kant, who lived in what is now called Kaliningrad, and never left the city. And Kaliningrad is on the Baltic Sea, in what is now a Russian enclave, which is kind of interesting; but back then, it was Prussia. So Kant talks about duty and an ethic of right as derived from reason, out of the concept of necessity.

So there are different ways we can get at this, but we'll get at it this way. Kant says nothing in the world, or outside the world, can possibly be conceived that could be called good without qualification, except a good will. Now, by "good will" he isn't meaning charity, or Goodwill stores, or something like that, but more "will" in the sense of maybe Nietzsche's will to power, or Schopenhauer's will to live: the action of a rational being to project oneself, one's ideas, one's thoughts into the world. So he says a good will is good because of how it wills, and how it wills is ethical. So a good will is good in itself.

And he's also saying, and this is where he parts ways with the naturalists, that morality should not depend on human nature, and not be therefore subject to the fortunes of change or the luck of empirical discovery. And here he's responding not only to people who think, well, something is natural, therefore it's good.

But he's also speaking to people like David Hume and others who want to find what counts as good empirically, by the evidence of the senses. But, you know, Kant looks at this and says this reduces morality to accident, to luck, you know. And just like the shirt on this slide is completely irrelevant to anything we're talking about,

so also is human nature or empirical discovery, because morality is something that we know through and by the pure exercise of reason and the will.

Okay, so where does that take us? What Kant came up with is something called the categorical imperative, and even if you're not familiar with this phrase, you're certainly familiar with it in everyday life. It's like when you grab for the cookies on the table and try to take them all for yourself.

And your mother says, "what if everybody did that?" And obviously, you know, everybody can't do that, because there would never be any cookies. Same kind of thinking, right? So we can distinguish between a hypothetical imperative and a categorical imperative, to give you kind of an idea of how this works.

So a hypothetical imperative is something like if you want something, then you must do something. If you want to be a doctor, then you have to go to school and get a medical degree. If you want to get to Regina, you have to go to Saskatchewan, right? You see how that would make sense, right?

And it's an imperative in the sense that if you want to do the one thing, then you have to do the other. And that was the structure of natural ethics: if something is a human, then it needs to live, for example, right? Well, Kant comes up with the categorical imperative, which basically simply drops the "if" part.

So instead of saying "if you want A, then you must do B," the categorical imperative simply says, "do B."

And how do you arrive at a categorical imperative? Well, it's through a process of pure reason. And the pure reason that Kant offers is this: act only according to the maxim by which you can at the same time will that it would become a universal law. So think of a rule: I should take all the cookies for myself.

Could you make that a universal law that governs everyone and everything? Well, no, you couldn't. So the maxim "I should take all the cookies for myself" is not a categorical imperative. That doesn't necessarily make it wrong; I mean, it might be, right, taking all the cookies for yourself seems to be wrong, but not all of our actions are covered under the condition of becoming categorical imperatives.
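To make the cookies example concrete, here is a toy sketch of the universalization test; this is entirely my own illustration (the numbers and the function are made up, not anything from Kant or from the course), modeling the idea that a maxim fails the test when everyone acting on it at once makes the act impossible:

```python
# Toy universalization test for the maxim "take all the cookies for
# yourself": each person acting on the maxim needs every cookie, so
# the maxim can be universally acted on only if at most one person acts.
COOKIES_ON_TABLE = 6

def can_all_act(takers: int, cookies: int = COOKIES_ON_TABLE) -> bool:
    """Return True if every 'taker' could actually take all the cookies."""
    # Each taker claims all the cookies; total claims can't exceed supply.
    return takers * cookies <= cookies

print(can_all_act(1))  # True: as one person's private maxim, it 'works'
print(can_all_act(5))  # False: universalized, it cannot be acted on by all
```

The point of the sketch is just the asymmetry: the maxim succeeds for one agent and becomes impossible when everyone adopts it, which is what the "what if everybody did that?" question is probing.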

We do all kinds of things on a day-to-day basis. You know, I twiddle my pen, right? It doesn't matter whether everybody in the world does that or not; that's not the kind of thing that is meant. Nor even, you know, something like "you should always twiddle your pen when you make a point about the categorical imperative."

Yeah, sure, everybody could do it, but it wouldn't matter, right? So it's a bit deeper than that. The idea is that it's a maxim, it's a principle, it's a rule of conduct, where this rule of conduct is the imposition of an ethical will on the world. If everybody thought that this was an ethical thing, could that happen?

And according to Kant, according to those who follow Kant, all of our specific duties, which may or may not include twiddling pens, can be derived from this one imperative. Kant actually expresses this in more than one way, and there are three major ways in which he says it.

The first way is kind of an ontological way of saying it: act only according to the maxim by which you can at the same time will that it would become a universal law of nature. You see how he's flipped that around, right? Instead of nature imposing itself as a universal law of ethics on us, it's us coming up with this maxim and applying it to nature.

And then we ask: could this be a law of nature? So, you know, could "preserve one's life" be a law of nature, such that everything that lives tries to preserve its life? Well, arguably it could, right? But there's another way: act so as to treat humanity, whether in your own person or in that of any other, in every case as an end and never merely as a means.

And here we go back to Rousseau and the inherent worth, not as a monetary worth, but the inherent worth of every person. Here we think of every person as an end, and we can cash that out in different ways as well, but I think a good way of thinking of it is this.

Every person is valuable in and of themselves, as an end, in the sense that every person is able to have this will, this capacity of reason, to create their own moral reality, their own understanding of ethics. And the idea is that they would all see right, you know; it's just like every person is valuable because every person can see mathematics in the same way. And then there's a third way of putting it: act such that your will can regard itself at the same time as making universal law through its maxims. Here we're not just talking about a universal law of nature; we're saying universal law, and we can think of law in the terms of, say, the laws of God and man. Could the edict "don't steal" become the law of the land? Again, arguably it could, if it wouldn't result in the collapse of society. And we can see the appeal of this. I've sort of applied it to machine learning in the diagram on the left.

I stole the diagram from an article in Towards Data Science, but it's still the same sort of principle. So take a human or machine action, run it through the deep learning network, and ask yourself: is the action within the ethical AI intuition scale? And it gives us feedback.

And here's where our own intuitions come in: yes, it is, or no, it isn't. If no, then the action is prohibited; if yes, then the action is allowed, the machine executes, or the human action is validated. And that feels a little odd, doesn't it?
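The decision loop just described, run an action past an intuition check and then either allow or prohibit it, can be sketched in a few lines. This is a hypothetical stand-in for the diagram from the Towards Data Science article, not an actual implementation; every name here is made up for illustration.

```python
def ethics_gate(action, intuition_check):
    """Run a proposed action past an 'ethical intuition' check.

    intuition_check stands in for the deep learning network plus the
    human yes/no feedback from the diagram; here it's just a predicate.
    """
    if intuition_check(action):
        return "allowed"      # machine executes / human action is validated
    return "prohibited"       # action is blocked

# Toy intuition (hypothetical): prohibit anything involving deception.
toy_intuition = lambda action: "deceive" not in action

print(ethics_gate("recommend a course", toy_intuition))  # allowed
print(ethics_gate("deceive the user", toy_intuition))    # prohibited
```

The oddness the talk points to lives entirely inside `intuition_check`: the gate itself is trivial, and everything interesting is pushed into that one unexamined predicate.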

And I think that that oddness is an intuition that we need to respect here. So let's look at what Kant says: act as though something could become a universal law. And let's ask it the other way around: what would prevent something from becoming a universal law? Because, as I said, with respect to naturalism, anything the human body can do is natural, which leaves pretty slim grounds for objecting to something on the basis that it's not natural. And a similar thing happens

here: pretty much anything that you can do is universalizable, even in some trivial senses, right? Like: all beings who are sitting in Stephen's office at this moment should twiddle their pen. Well, there's one and only one, and that's me. It's universal, and there's no logical contradiction in that.

Well, maybe it should be more generalized. But, you know, I mean, that becomes, what is it? Is it all people at this time? Any person twiddling any pen? It doesn't make sense. Logical contradiction is too weak a restriction here. Anything we could all do as humans falls under this; anything and everything falls under this,

because nothing we do, nothing that anyone does in the world, is a logical contradiction, for the simple fact that you can't do a logical contradiction. So that's why we need a different definition of preventing something from becoming a universal law, and we can invoke, say, the concept of the teleological contradiction, which is something like: contrary to a purposeful and organized system of nature.

I mentioned that with respect to natural law; now we're bringing it up here with respect to contradiction. So we could say it's a contradiction to a purposeful and organized system of nature to act in a random and capricious fashion. Well, that seems to be pretty much the case by definition, right?

And we can work from there. What would be random? What would be capricious? What would not lead to purpose? What would not lead to an organized system of nature? Et cetera. And so we could say, you know, a maxim like "love anyone you love," or "love anyone

you want to love," could raise a teleological contradiction, because now love is no longer purposeful; it's not directed towards some end, it just is what it is. And people make that argument, and a lot of people oppose that argument, and it doesn't seem that that sort of contradiction is going to be sufficient on which to base moral law.

Or you can adapt it even to a practical contradiction, along the lines of: it would be ineffective for achieving my purpose if everybody did it. So that's the cookie principle, right? If my purpose is to get as many cookies as I want, and my maxim is "take all the cookies you want," then I am not going to get as many cookies as I want; I might not get any cookies at all, and so my maxim contradicts my practical purpose. And we see this actually, in practice, quite a bit, even during these days of the pandemic, which is why these images are here on the slide.

Sure, people would like to not wear a mask. But what if nobody wore a mask? Well, then we would have a case of a widely spreading pandemic. And there's a whole ethos around that kind of thinking, and it extends far beyond ethics. We have this concept in economics of what we call the free rider; you've probably heard of that, right?

And it plays out in things like the tragedy of the commons. And the idea here is that in an economy where everybody is contributing, one person might decide not to contribute but only to take. We'll call that person Donald Trump, just hypothetically. What if everybody behaved that way?

Well, then nobody would produce anything, people would only be taking, and society would collapse. And so even the person who wants to take and take and take can't continue to keep on taking. And we come back to that quote about Rousseau's philosophy: basing a society on self-interest is nonsensical, and it's nonsensical particularly for this reason: if everybody acts only in their own self-interest, we don't get to have a society.

Similarly with the tragedy of the commons. If somebody goes into the commons, let's say the commons is an apple orchard, and they pick all the apples for themselves and they take them away and sell them on the open market, just like John Locke says they should, then nobody else gets any apples. And over time the commons becomes overused; there are no apples, even for seeds, and of course nobody's tending to the trees because this one person's taking all the apples, and the commons eventually collapses.

And even the person who was taking the apples doesn't get any more apples. It's that cookie jar kind of logic all over again. That's the tragedy of the commons, and it results from a failure to recognize the contradiction in that sort of selfish act, which in turn acts as justification for division of the commons as private property among all the interested people, which ends up all in the hands of the person who took all the apples.

But that's a different issue.

Well, how does that line up with real life? Well, you know, if you look at the actual practice of the actual professions, so-called, for example, writing on ethics in the professional context, we read: religion, financial gain, reputation, personal character, social context, geographical location, severity and the nature of disease,

the climate of fear: these are all influential factors in doctors' decisions to treat, perhaps more so than in any other period. So basically it's an argument that says doctors, based on all these factors, can decide whether or not to treat a person. And the question we ask in this context is: is this a universalizable principle? Would it work if all doctors were like that?

Well, we've seen environments where all doctors are like that, where all of these things actually do play a role in whether a doctor treats a person or not, especially money, but also all the rest: people refusing to treat people because of religion, people unwilling to treat people because of the severity of the illness,

etc. And the result is many people are left untreated. And so, arguably, this creates a practical contradiction. It's a weakness, a wound in society, that continues to fester and fester until you really can't fix it. And it's sort of like the doctor equivalent of choosing not to wear a mask.

And at a certain point, you know, just in order to be consistent, just in order for the whole profession of doctoring to make sense, you have to take away some of these conditional and arbitrary and luck-based factors and go back to the principle: doctors treat everybody,

regardless. And that's where you get things like the Hippocratic oath, and that's where you get organizations like Médecins Sans Frontières, Doctors Without Borders, who, as I speak, are treating people in Syria and other places other doctors won't go. So there's something to be said there.

Kant comes up with a number of examples. You know, what if I make a false promise so I can get myself out of difficulty? Maybe a person's on their deathbed and they say, you know, honor my last wish, give all my money to my kids, I don't care what my will says, and you say, yeah, I'll do that.

But what if everybody did that? Then nobody's last wish would ever be respected, and nobody could trust that when they died their wishes would be respected, and so people wouldn't leave anybody in charge of their wealth when they died. They'd do something else with it: maybe just waste it, maybe just burn it,

maybe just take it with them, like the tombs of ancient Egypt. Committing suicide: again, what if everybody committed suicide? Well, there goes the concept of a continuing society. And that's a principle that's actually been instantiated. You know, we had Jim Jones' Jonestown, with the drinking of the Kool-Aid, and that cult ended.

We had David Koresh and the Branch Davidians, who went down, you know, all in flames; it wasn't quite suicide, but it wasn't not suicide. Et cetera. There's any number of suicide cults, and one of the main results of suicide cults is that the cult ends with the suicide. Neglecting one's talent: what if everybody said,

yeah, I don't really need to develop my own talent? Well, nobody would get anything done, would they? And so on. You get the idea; Kant comes up with more examples like this, and based on these examples, and then generalizing over them, you can come up with something like a system of morality that we're able to think about as though it were the interior angles of a triangle:

just something that's based on logic and rationality, and that's it. Well, there are of course criticisms of Kant's approach, and I'll mention four of them here that have been brought up in various publications. Mandating trivial actions: I covered that earlier, and I don't really think that's an objection. Endorsing cheating:

I've actually illustrated that with a completely unrelated publication, but I thought it was pretty good, because it looks at the factors that go into whether or not, you know, very well educated, very reasonable people actually decide cheating is okay. You know, social responsibility, or mastery and approach of their goals, isn't enough to get them not to cheat.

You have to actually get them to agree to some kind of self-transcendence values, some idea that society is worth more than just whatever is good for me. But even so, you know, we can put this into a different context. Let's put it in the context of sports, and the similar sort of argument would be: for the good of the game, you shouldn't cheat, because if you cheat, it just breaks down any trust in the game. And we think of baseball: baseball almost ended when the members of the Chicago White Sox, who were then known as the Black Sox, were caught betting on games, or caught not betting on games

but throwing the game to assist people who were betting on games. But that doesn't eliminate cheating from baseball. We've had in recent years examples where the Houston Astros and the Boston Red Sox cheated: they used electronic devices to figure out what pitch a pitcher was going to throw and then signal that to the batter.

It probably still happens even in the game today. And Major League Baseball is kind of "meh, if you do it and get away with it, fine." And we sort of wonder, you know: here we have a case where even "for the good of the game" doesn't give us an argument against cheating, and that seems like a pretty fundamental value to be indifferent about. Certainly in the academic world,

if cheating becomes a value, as it certainly seems to be in places like Harvard Business School, then somehow academic value is undermined. Other criticisms: prohibiting permissible actions. The example I read is: I flushed the toilet at precisely 3:14 this afternoon. What if everybody did that?

Well, especially in the small town where I live, but even in a large city, if everybody flushed the toilet at 3:14, the water pressure would drop to zero and we would have a bad impact on the water system. Well, you have the same sort of thing with these rules about the use of electricity during peak periods. Flushing the toilet at 3:14 is not wrong, even though, if everybody did it, it would be a problem. And so the permissibility of an act comes precisely because there's no reason to expect that everybody's going to do it, even if you can hypothesize a scenario in which that happens. Then the worst of these is mandating genocide.

And again, the same article I read suggested: what if the principle was "kill Americans"? So what if everybody lived by the maxim "kill Americans"? Okay, well, this would be bad for Americans, but from the point of view of the rest of the world this can actually be viewed as a good thing, particularly if you're of the belief that Americans are overall a bad influence on society.

Well, you might say, well, Americans aren't really a bad influence on society. But what if you really believed they were? Or pick your other ethnic group and say, well, suppose these people are really a bad influence on society. The principle allows you not only to allow genocide but to require it, and intuitively that would be a bad consequence.

And these are the sorts of things you have to think about when you're coming up with a principle like the categorical imperative, where, you know, you're just using your mental processes to come up with ethical rules, particularly if you don't care about the results. Because at a certain point you find yourself endorsing rules that are, sure, universalizable, but seem somehow to be wrong.

And that's one of the major criticisms of an ethic of duty: that it is basically inhumane. In Les Misérables, we could ask: was Jean Valjean wrong to steal bread to feed his starving sister's children? He got, I don't know what it was, 20, 24 years of hard labor for doing it. But was he wrong to do it?

Would it have been wrong to lie to the Gestapo if you were hiding Jews from them? Well, if your moral principle is "never lie," then, well, I guess you just have to tell them the Jews are there, and they take them out and have them executed. A lot of people wouldn't be comfortable with that, including myself, and that not being comfortable is to be understood, to a significant degree; you know, actions like this really do

seem to depend on the consequences and not just simply on following the rule. But maybe we just don't have the right rules, right? Because there's also that principle of treating people as ends, and in pretty much all of these examples that we talked about, the bad result that we got really was a result of just treating people as disposable. You know, the suicide thing, the cheating thing, the genocide thing: all of these are cases where we actually didn't take into account the dignity of human beings at all.

So really, maybe it's a two-step process, right? So if we have a moral principle, the first question we should ask is: does it involve violating the dignity of a human being, or human beings generally? And if it does not, then we ask whether the maxim can be universalized. So now we have a test, right?
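That two-step test, dignity check first, universalizability second, is simple enough to express as a sketch. Both predicates here are hypothetical stand-ins, since the talk doesn't say how either check would actually be decided; the point is only the ordering.

```python
def passes_two_step_test(maxim, violates_dignity, universalizable):
    """Two-step moral test: dignity check first, then universalizability."""
    # Step 1: a maxim that violates human dignity fails immediately,
    # before universalizability is even considered.
    if violates_dignity(maxim):
        return False
    # Step 2: otherwise the maxim must also be universalizable.
    return universalizable(maxim)

# Hypothetical example: a maxim that is perfectly universalizable but
# violates dignity still fails, because step 1 screens it out first.
print(passes_two_step_test("always tell the truth, whatever the cost",
                           violates_dignity=lambda m: True,
                           universalizable=lambda m: True))  # False
```

The design choice worth noticing is that the dignity check is a hard filter, not one weight among many: no amount of universalizability can rescue a maxim that fails step 1.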

You know, should we lie to the Gestapo officer? Well, telling the truth doesn't really respect the dignity of the human beings who are hiding from the Gestapo officer. And so the answer would be: well, "never lie" is not a principle here. In this case, lying to the Gestapo is not morally required, or not morally objectionable

I should say, and that's a good thing. But now, you know, our principle has become more complex. We have a question of, you know, what does it actually tell us about how we should treat people? What does it really mean to violate, or not violate, the dignity of a human being? Well, there are different ways of expressing

this. One way is another one of these universal principles that people cite on a constant basis: the golden rule. The golden rule is basically the principle of treating others as you would like to be treated yourself, and, as Wikipedia says, it's a maxim that's found in most religions and cultures. Lots of people repeat this mantra. It's a terrible rule,

I'll say that right now, and I won't even be equivocal about it. First of all, how do you know how other people want to be treated? You could ask them, but they might lie, or you might misunderstand them, or they might not actually know themselves. So there are plenty of ways of getting that wrong.

Well, okay, just treat them how you would want to be treated yourself. But you are not them, and their tastes and your tastes might be very different. I see that in online forums all the time, where somebody comes into a forum and is rude, is direct, is, you know, littered with obscenities, attacks

people personally, and you say something to him, and it's always a him, and he says, "well, I don't mind if other people treat me like that; it's a dog-eat-dog world." And that doesn't seem like a good defense of that sort of conduct. It seems like a misapplication of the golden rule, you know.

And even more to the point: what about hypothetical situations? So, I mean, the golden rule is basically "treat someone as you would want to be treated," right? "Would want to be treated": this is what's called a counterfactual, and it's basically asking for your description of what would happen in a possible world,

not the real world, but the possible world where that counterfactual is actually factual. And it's hard to get that right. It's hard to know what you would want in a particular situation unless you are in that situation. We see that all the time, where people say, well, I would not want to have them pull the plug on me

if I was in that deathbed situation, and then you're in that deathbed situation and you realize, oh, well, yeah, maybe I do want that kind of thing, right? You know, or, I would not take the million dollars from that company, even if I knew I could get away with it, and then you're in a position where there's a million dollars on the table in front of you, all you have to do is pick it up, and a lot of people end up picking it up.

So the golden rule isn't a good principle. It's a nice idea, you know; it's an appeal to some kind of equity, and the recognition, as I used to like to say, that other people are as deep as you think you are, so a recognition of their humanity and their value in a non-monetary sense. But it's not a recognition of their uniqueness, their distinctiveness and their autonomy.

And that's a problem. There's another principle that's almost as widespread: from each according to his ability, to each according to his needs. And for today's times we can have a gender-neutral version of it, or we can even extend it to include things like animals and robots. It's, of course, the core principle of socialism.

It's kind of like a golden rule of economics, because it is a description of a society in which the dignity of each person is respected. You know, a society that gives to each person only according to their ability is very unfair, and I'm thinking of people who don't have very much ability: babies, invalids, the elderly, people who are disabled, etc.

But I hardly need to say that there's been considerable objection to the socialist principle as well. So that comes back to: what does it mean to treat people with dignity? What does it mean to respect the individual value of each person? And there's a more meta problem with the Kantian approach, and that's just the idea of defining the good itself

as reason. You know, if reason is what makes what we decide good, I mean, if this is it, then we have to ask: are those who have more reason than others intrinsically better? Now, one of the appeals of natural ethics is that it's the sort of ethics that any common person can come up with just by thinking about it for a bit.

So pretty much anybody, except, you know, infants and invalids, could come up with principles like "you shouldn't lie," "you shouldn't kill," etc., as morally good principles, just by their own innate capacity of reason. But what if somebody can't reason, and hence is not really a moral agent in that sense? Are we better than that person? Or, for that matter,

are we better than animals, because we can reason and they can't? Do our conclusions somehow have greater ethical purpose or ethical worth than theirs? Is our struggle for survival inherently ethically superior to, say, your dog's or your horse's? Or flip that around: suppose super-galactic aliens came to us who were demonstrably better at logic and reason than we are.

Because, after all, I mean, it's not like logic and reason are one unified systematic whole; there are all kinds of ways of doing logic and reasoning, which is a whole other issue, but we could talk about that. And so it's easily imaginable that super-galactic aliens could come along having solved logic and reason, and they have one single unified system.

Unfortunately, as in the Douglas Adams books, it means that Earth must be eliminated to make way for a bypass. Is that ethically right? Would we have to accept that? You know, the principle that Kant brings forward seems to suggest that we should. But that would be the end of humanity.

And that seems to me to be bad. But even more, it's just this idea of accepting reason as a value in and of itself, accepting reason as the locus of ethical good, that whatever is ethically good is so because we can arrive at it from reason. I've put a Ralph Steadman illustration in this slide to illustrate the opposite of that.

And again, there's no good reason to put a Ralph Steadman slide in there, but I did. But even more to the point: Ralph Steadman is to gonzo art what Hunter S. Thompson is to gonzo journalism, and the whole point of Hunter S. Thompson's journalism is that it's incredibly subjective and arguably insane,

certainly drug-informed, and yet undeniably brilliant. And that's the problem with reason: it's not the only game in town, and it's not simply that the alternatives are just, you know, luck and chance. The alternatives might produce the moral equivalent of a Ralph Steadman drawing, and, you know, from the point of view of reason we look at that and it seems repugnant.

But at a certain point we say, well, wait a second, that's a Ralph Steadman, and there's a lot more going on there than we thought was going on there. Another aspect to think about is autonomy. What's important in Kant's ethics is that we do not depend on an external moral authority to unveil moral law for us.

We discover it for ourselves. And, yeah, I personally really like that principle; that's a big one for me, because I don't like to be simply told what's right and what's wrong. This goes back to the objection to duty that I raised at the beginning of this talk.

But how autonomous are we, really? And here I reference the Stanley Milgram experiment, where people were basically convinced to apply greater and greater and greater electric shocks to victims. Now, spoiler: they couldn't really administer electric shocks to victims, but they thought they were, and that's what counts here. And if it's that easy to convince people to administer electric shocks to people, then how trustworthy is the autonomy of individual moral agents? You know, I mean, we think that they're going to come up with good ethical principles just by reflecting on them.

But, you know, folks' reflections can be manipulated; they might come up with actually very bad moral principles. And there are plenty of examples through history where that has happened, where entire populations have been swayed to believe moral principles that, objectively and with the hindsight of history, we now say were in fact very unethical. And that might even be happening now.

And part of the problem is: how can you know? How can you tell? How can you be sure that the ethical principle you think you understand and apprehend intuitively hasn't actually been fed to you slowly and carefully through an advertising campaign run by Bill Gates or the Koch brothers, or pick your villain, right?

And that's a problem. Now, we can manage for autonomy; we can develop social structures that preserve and promote real autonomy. I'm not sure if this diagram captures that. Does this autonomy require trust? Does it require responsibility? You know, it's not clear that either of those is the case, and, you know, part of the difficulty is coming up with a good account of just what we mean by autonomy.

But certainly the lack of it would be fatal to, you know, any sort of theory of moral intuitions. Part of the problem with all of these theories is also being able to pick which theory applies. I talked earlier about the inhumanity of some of these moral principles, and we tried to address that

by talking about viewing each person as inherently valuable. But even so, you know, if we've got, say, a list of seven principles like we saw earlier, which one applies? You know, it turns out, if you have a principle like "do not lie" and a principle like "do not murder," or "do not let somebody be killed" might be a better way of putting it,

that can't be your whole morality, because those two principles can't both be followed at the same time without some, shall we say, contradictory outcomes. And the Gestapo case is a perfect example of that, right? Let's say, through some trick, we thought that, no, overall, we're respecting people more by following the law and telling the truth than we are by lying in this case, right?

So if we have a principle "don't let people die" and we have a principle "don't lie," or "don't break the law," and we're faced with this Gestapo situation, then we're stuck. And so W.D. Ross came up with a concept known as prima facie duties. And the idea here is that they're not strict laws in the sense of commandments or something like that; rather, the expression "prima facie" means, you know, "at first glance."

So at first glance it looks like it's a duty; before considering anything else, this is a duty. But then, with a plurality of principles, in any given situation one might be overridden by another, and it really depends on the situation as to which one of these principles ultimately will take hold. In the case of the Gestapo,

the "don't let people die" principle will be more important. But in another case the "don't lie" principle might be more important, or the self-improvement principle, or the fidelity principle, or showing gratitude, or any of these others. That still leaves us with the problem of being able to find, you know, these seven principles or whatever.

And that difficulty, the issue of the genesis of these principles, is a problem. It's easy to say, oh yeah, they just spring into mind intuitively, but it's hard when different principles spring into different minds. But if we accept the idea that any of these principles, in any person, are prima facie principles, and that we can sit down as reasonable people and discuss and determine what the most reasonable outcome would be in the face of these conflicting principles,

then we could continue with a deontic system of ethics, a system of ethics based in reason, while at the same time finessing the problem of the origin of principles and of the organization of the priority of principles. So we come back to professional duties, which is where we landed when we were talking about ethical codes.

And we can look at these duties in a different light. If we look at the illustration on the right, we see that we have 15 duties to clients that CFP professionals must follow. The primary duty, of course, is fiduciary, because we live in a world of finance and economics, but then we have the professional obligations of integrity, competence, diligence, etc.

Client interactions: to disclose and manage conflicts, that's their version of the conflict of interest policy, to provide information, to represent compensation appropriately, etc., right? And we can think of these not as laws but as prima facie duties: they describe the sorts of things that ought to be important to a professional,

but in such a way that a person in the profession is able to evaluate and weigh these principles and select the most important. And even the primary fiduciary duty might take second place to some other duty, like, say, "comply with the law." And in fact, one of the issues that comes up in business ethics in general is that people interpret the fiduciary duty as being the overriding duty, actually overriding other duties such as complying with the law.

And that is arguably a misunderstanding of business ethics. And so, when somebody becomes a professional, or, to bring us back to our subject, when somebody undertakes the practice of using AI and analytics in a learning context, we have these values, we have these principles; we don't need to justify them, we don't need to argue for them, because everybody knows what they are.

We can sit down and reasonably think about them, and we don't even need a definitive list, because, you know, in any circumstance a reasonable person can come up with: here's a principle that applies in this case, right? We're collecting data; the principle that should apply here is consent. We all know this, right?

And here's another principle that applies in this case: accuracy. Our data collection should be accurate. And now these two things are going to conflict with each other, right? If we get consent from people, that might impact the accuracy of the data. So what's more important? Well, it really depends on what we're collecting data for.
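Ross-style weighing of prima facie duties, as in this consent-versus-accuracy example, can be sketched as picking whichever duty carries the most weight in the current context. The weights below are made up purely for illustration; the whole point of the discussion is that they depend on the situation, which is exactly what makes this harder than it looks.

```python
def strongest_duty(duties, context_weights):
    """Return the prima facie duty carrying the most weight in this context."""
    return max(duties, key=lambda d: context_weights.get(d, 0.0))

# Hypothetical weights: in one data-collection scenario consent outweighs
# accuracy; in another (say, safety-critical research) the weights flip.
duties = ["consent", "accuracy"]
print(strongest_duty(duties, {"consent": 0.9, "accuracy": 0.6}))  # consent
print(strongest_duty(duties, {"consent": 0.4, "accuracy": 0.8}))  # accuracy
```

Note that the sketch hides the real problem in `context_weights`: nothing in a duty-based system tells you where those numbers come from, which is precisely the objection raised a little further on.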

And so the argument works, right? And so that's how this kind of argument applies in the case of professional duties, and in the case of the ethics of analytics and AI, the ethics, really, of any practical discipline-based system of ethics. And I think that's a good argument.

The place where it's not a good argument, I think, is in the idea that we can rely on reason alone in order to come up with these determinations. Because as soon as you say things like, well, it depends on the context, it depends on how important the research that we're doing is, et cetera,

now we're appealing to something, some facts, outside our particular discipline, and that's where the problem comes in, right? Kant would say: well, now you're just depending on accidental circumstances; if you're right in this case, it was just purely by luck, and you're making morality conditional and contextual, and that's not real morality. That's what he might say in such a case, and certainly I've seen that view expressed.

But what's the resolution here? There isn't a way to simply use reason in order to weigh these prima facie duties. I mean, as soon as you start saying "prima facie," you know, the reason part gets you to the first glance, but then you have to check the actual facts, and that involves looking at actual cases and actual people.

And, yeah, that kind of gets at what is probably an overall problem with duty-based and rule-based systems generally, and it's that morality seems to be about more than that. Morality is more than simply following the rules, even if they're really good rules. You know, Hursthouse says: if right action were determined by rules that any clever adolescent could apply correctly, how could this be so?

Why aren't there moral whiz kids the way there are mathematical and quasi-mathematical whiz kids? You know, I mean, why don't we see evidence of these super-reasonable, super-moral people that, you know, we could just see as our moral authorities? I mean, if anything, when such a person shows up and claims to be such a person, we think of them more as cult leaders than anything else.

But more to the point, just simply following rules, following the dictates of reason, seems to go against our moral intuition. I mean, this is really effectively brought out through the narratives of Star Trek. Spock is this purely reasonable person, and yet Spock makes his determinations of the ethics of the situation according to various principles, some of them articulated through the course of Star Trek ("infinite diversity in infinite combinations" is one of them, and there are others),

And even though Spock's reasoning comes from a really good place, you know, as a way to put behind the violence of the original Vulcan race, it nonetheless seems to ring hollow to people, as just not capturing or grasping the humanity of ethics. And we see the same kind of scenario play out in Star Trek: The Next Generation, except instead of Spock,

we've got a robot, Data, who again is ruled by algorithm and principle. And again we have people suggesting he's not grasping the humanity of the situation, despite Data's own efforts over time to become more human, to find, as he says, "the human equation." And I think there's actually an argument offered by the writers of Star Trek in this situation

To try to convince us that no robot actually could pull it off, although he might need an emotion chip.

I think that, you know, the arguments are well made, and it's not the case that, you know, no robot could ever be ethical. It's not the case that no AI could ever be ethical. I don't think that's what follows from this. But I think that what does follow from this is that no system of ethics based simply on reason, duty and principles could ever be ethical.

Just because, you know, like the ethics slide in this diagram, it feels too much like plugging text into a template and hoping that ethics pops out the other side. As Fjeld and others write, the concept of ethical principles for AI has encountered pushback, both from ethicists, some of whom object to imprecise uses of the term in this context, as well as from some human rights practitioners

who resist the recasting of fundamental human rights in this language, you know? And it does go back to: how do we respect human dignity? How do we respect the worth and value of each person? I happen to believe that that's a good principle, that each individual human, each individual life for that matter,

and by that I even include trees, has inherent value and worth; not in the financial sense, because that's just a stupid way to measure the value of life, but just in the sense that it has a right to exist. It determines its own value, and it's not the sort of thing, really, that we should be using or commoditizing for our own purposes.

But you can't capture that with a rule, or any number of rules. It's not the sort of thing that you can just pull out of the air with an algorithm. It's going to require something more. And I think that, in a lot of the debates about the ethics of artificial intelligence, one of the key shortfalls of many of these discussions is that, coming from a certain technically oriented, machine-oriented perspective, the proponents don't necessarily grasp that ethics needs this something

that is more. And again, that's why in the ethical codes I went well beyond just the ethics of artificial intelligence and analytics, and went into other professions, like accounting and health care and teaching and journalism and the like, because in these professions the need for that something more, whatever it is, is that much more evident than it perhaps ever would be when you're working with and building purely artificial systems.

So that's what I've got to say on this, on duty. I think it's a really interesting way of approaching ethics. I think it says a lot of good things, but I think that the discussion of ethics does not begin or end with the principle of duty. I'm Stephen Downes, thank you for joining me, and I'll talk to you again next time.

Module 5 - Discussion

Transcript of Module 5 - Discussion

Unedited audio transcription from Google Recorder

Okay, so here we are for the module five discussion. Oh my, I should close that door there. You don't really need the local news in the background while we're doing this. It's just local news, and still nobody else but me. I have to fess up that this week has been a completely disconnected week.

So I would benefit from the discussion, but mostly by listening, okay. I think it's been like that for most people, and it also includes myself, oddly enough, although I really have basically done nothing this week but work on the course. But I don't know if I'm really getting the outputs to show for that.

But how about you, Mark? How did this week go for you? Well, it was normal, you know, slightly connected. Slightly connected? Okay.

Less than an hour ago, I did. So part of the disconnection was the activity centre, yeah, not updated. So there was a couple of days... I don't know how busy you've been, Stephen, you know. And then yesterday I found two videos more than an hour long. So yeah, I watched the long one.

Oh, yeah. Actually, I'm editing the text of the virtue video. Oh, cool. Yes, I'll send that to you once I get it done. It appears, at least on YouTube, it appears in the chat. Yeah. Right. And you can turn off the timestamps; there's a column of timestamps which, I figured out, you can turn off.

It was really, you know... But yeah, it ran for a while. So then you get all these lines and lines, but there's no punctuation, and, you know, no follow-up. Yeah. So I'm gonna clean it up into, you know, a readable text, and I'll send it to you. Okay.

Now, you do know that I'm also putting up transcripts of these, right? So there's also that transcript. I don't know if that helps or not. I didn't know that YouTube was creating them. Oh. Okay. It appears in the chat. I haven't looked for it. Is there a Zoom one, or are you posting a separate transcript from the phone?

Or? Yeah, it's a separate transcript from the phone. It's not taken from the YouTube. So okay, that's interesting, because I didn't know YouTube was doing transcripts of them. Yeah. I think it's kind of amazing. Yeah. Yeah, it appears in the chat, but it's broken into timestamps. Yeah. Every second.

So you can see the other ones... let me just quickly pop into the course here, and I'll show you. And then, of course, by showing you, I'm showing everybody who's in the course. So okay, let's just very quickly... I'll just share this screen here, if I can remember how to do it.

There we go. And there we go. All right. So if you're in the course you can see all the presentations, assuming the page updates. And again, yeah, it hasn't updated the last two. Let me just do that manually. Like I say, I'm having update issues with my pages; it's been since the beginning of the course, and it has been incredibly annoying. So, let's load the page.

And so I have to update everything manually, and you can see there's lots of pages; it's just something I haven't been able to get on top of. So this is "course videos," right? Yeah, which is really the list of course presentations. So I'll just publish that page. There we go.

Now, there we go. So, if we look at the one on duty, which by the way I was pretty pleased with... so here's the presentation. You see the video here. The slides are here; you can just step through the slides. You can also listen to the alternative audio recording here. And then down here are the links to the slides, audio, video, and also to the transcript. That's right, and it just jumps to YouTube.

So yeah, exactly. Okay. Yeah, so I don't know if that's better or worse than the YouTube transcript. They're both being produced by the same company, which is Google, but, you know, one is by YouTube and the other is from Google Recorder, and who knows? And did you get a chance to look at that?

No, I haven't had time to do any of that at all. So it comes with paragraphs and capitalization? It comes with paragraphs and capitalization, though, yeah, it's not perfect by any stretch. But, you know, having something like that is way better than nothing. Yeah, this is way better than the YouTube one.

Okay. Obviously, I'm cleaning up that text, and I'll send it to you when it's ready. Oh yeah, absolutely, that would be so cool, because you know what I'm planning to do? I've been creating transcripts of all the sessions that I've been doing, including our sessions with you and Sharita, and anyone else who joins us.

Hello! It's been basically you guys. And after the course, my plan is to clean up all the transcriptions and assemble them into a single large document. It'll be basically a book-length document, and that's the outcome of this course, for me anyways. But, you know, I mean, you guys are obviously valuable participants in the creation of that, because, you know, we have these interactive sessions, and then I'm thinking about these interactive sessions when I'm doing those other videos.

So there's this thing, this bouncing back and forth of ideas, that's happening. So, you know, it's like... I keep feeling guilty because, you know, just recording videos isn't really anyone's idea of a good online course, but it's been such a valuable activity for me that I've been spending a lot of time doing that.

So I hope you're getting something out of it. Well, yeah. And the thinking-out-loud part. Yeah, no kidding. I mean, yeah. So, like, you should put a foreword around that when you assemble whatever the output is. Yeah, the order on that. And that's what we're doing on Mondays and Fridays.

We're thinking out loud. Absolutely. Yeah.

And then there's the other element I'm finding interesting, the MOOC aspect to it, because it's a connectivist MOOC. It's this distributed associative system, which emulates a lot of the properties of artificial intelligence. It isn't AI, of course. But, you know, we're sort of modeling the way a lot of AI would work, and then that's feeding back into it as well.

And that was my other big thing that I did this week. And I'm not sure if you saw it, because it only showed up in the newsletter on Wednesday as, like, a "test page." But let me show you what I've done. So, here we go, go back in. Well, I remember seeing that.

I don't know if I clicked on it. Yeah. No, you probably didn't, because I haven't highlighted it, and, you know, "test page" isn't exactly an inviting name. But it always hides the top of my browser window when I do that. Okay, so here's the page that I created.

So remember before, we were doing graphs like this, right? Now, imagine two rows, right? So on the left-hand side here, you'd have all the different codes, and then on the right-hand side here, you'd have all the different values. Now, the way we were doing it, we'd draw lines: awkward and cumbersome.

Really visually nice, but awkward and cumbersome to do. So here (as the train goes by), instead of drawing a line, you just check a box. So if you check a box here, for example, that's like drawing a line between the IEEE code of ethics and "care." And this is more like how people who are in AI think of AI, right? You have this matrix; these things here are vectors; you can have the vectors down and the vectors across. And then, if you look at the courses and whatever on artificial intelligence, they do a whole lot of manipulation of these matrices, and a lot of AI constitutes matrix mathematics, because really, we've got one matrix in between each of these sets, right?

You know, like "autonomy." Well, each of these codes connects to each of these values. But in an AI system, you might have two, three, four, whatever, interconnected matrices. So you have the code at the one end, and then a couple of layers of these things, all densely connected, and then the value at the other end, and the AI basically fills in all of those squares, and they do that using matrix mathematics, for lack of a better term.
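The layered-matrix picture described here can be sketched in a few lines of JavaScript. This is a toy illustration only: the sizes, the weights, and the one-hot encoding of "code #2" are invented for the example, not taken from any actual system.

```javascript
// Toy forward pass: codes -> hidden layer -> values, as matrix multiplication.
// All numbers are made up for illustration.
function matmul(vec, matrix) {
  // Multiply a row vector by a matrix (given as an array of rows).
  const cols = matrix[0].length;
  const out = new Array(cols).fill(0);
  for (let j = 0; j < cols; j++) {
    for (let i = 0; i < vec.length; i++) {
      out[j] += vec[i] * matrix[i][j];
    }
  }
  return out;
}

// One-hot vector standing for "code #2" out of three codes
const code = [0, 1, 0];

// codes (3) -> hidden (2): each row holds one code's connection weights
const W1 = [
  [0.5, 0.1],
  [0.2, 0.9],
  [0.4, 0.3],
];

// hidden (2) -> values (2)
const W2 = [
  [1.0, 0.0],
  [0.5, 0.5],
];

const hidden = matmul(code, W1);   // picks out row 2 of W1: [0.2, 0.9]
const values = matmul(hidden, W2); // approximately [0.65, 0.45]
console.log(values);
```

A real network would also apply a non-linear activation between layers and learn the weights from data; the point here is just that propagating a code through densely connected layers to the values is repeated matrix multiplication.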

Do I understand all of the matrix mathematics that they do? No. Because, and I find this really interesting, I took all the mathematics courses from kindergarten all the way through and into first-year university, and matrix functions never came up. So I never did addition of matrices, multiplication of matrices, etc.

I never did anything like that, but there's this whole branch of mathematics that addresses it. So for me to understand, and for most people to understand, you know, the actual details of artificial intelligence, you have to go back and cover that mathematics that your foundational education, covering all the basics, never covered.

So to me, that's an interesting thing. But anyhow, the idea is, right, you click on these boxes. Now, I want to add a little helpful display, so that if you click on the name of the code, you get a display of the code here, or if you click on the name of the value, you get a display of the value,

so you know what you're connecting. But anyhow, once you do that... I also want to have ways of subdividing these lists, because it's a bit hard to work with. But anyhow, you go down to the bottom, you submit your graph values, and here are the graph elements that I submitted.

And now that actually is stored in the database, and that actually works. But I don't have a mechanism for displaying it yet; that's, like, a short job to do, assuming it actually went into the graph the way it should. So that's the other thing I did. What I wanted that to be was a task for this week, but it took me more than one day to write what you've seen so far.

And so I wasn't able to finish that task. It's been that kind of week. A good week, because, you know, I think that was interesting, but still. So, what do you think about that? Do you have thoughts?

I like the way you were saying how you might work this even further out: to have, you know, the particular code; if you click on, you know, I don't know, the code of psychology, right, then you could read that, and then you could click along the top in terms of exactly the meanings.

And that would be an interesting way, if I had it like that, you know, to begin to click. We don't have it like that. However, I do have multiple monitors. Yes, that's what I'm gonna do. Yeah, right. And take a look at that. That should be fun. It probably will not make me as frustrated as drawing the lines did, really, you know.

However, Melcher, what's his first name? Matthias. Matthias mentioned... yeah, he did, you know, message me, and I actually found not the demo but the actual thing. So I'm beginning to play a bit with that. His actual thing is very powerful; it's in Java. So yeah, and my hatred of Java is legendary. I love JavaScript, but Java is just... it's like using a battleship to, you know, whatever, go fishing.

So, but yeah, he's been thinking about this for a very long time. He's also helped me out; I've been in communication with him on how to use this. And I think that, you know, what I want to do, and again, this is not too far away

as far as actual work goes, is, you know, once you've done all of those squares, you can display the result using his system. And it will look pretty cool, because you'll see all the lines, right? Because it's all the same data; it's just two different ways of representing that data, and then presenting that data.

But the other thing, too, is that doing it this way allows me to ask some interesting questions. For example, one question might be, you know, how would I come up with the things to actually connect? Because that's a relevant question, right? Like, I've connected codes and values with this one, and I've connected other things with some of the other ones, right?

But codes and values: well, how am I picking them? The codes are fairly obvious, you know, there's an actual document. But the values are kind of nebulous. How am I picking out and naming these values, and defining these values? What's that process, right? Because that's something that's actually an input to our system, but it's just me making it up, right?

Well, not really making it up, but, you know what I mean. So, the way this was done in the Fjeld paper that I distributed to you guys in the newsletter was that they actually studied the ethical codes, identified the values that were mentioned in the codes, and listed those.

And that, in fact, is the same process that I used. And it makes sense, because, you know, we're mapping values to codes; we might as well extract the values from the codes in order to create that matching. It's probably a bit easier that way. But, you know, does the list of ethical codes comprehend the list of all possible values? And again, what terminology should I use?

I mean, these are interesting questions, you know. And, this is a different question: suppose we were a machine, or maybe designing a machine, that actually indicates whether we should connect a code with a value. What process should we use? You know, think about how we do it. Well, I mean, we're gonna have to, as you said, go back and look at the original, which is why it's so useful to have it sitting right there.

So, the code says this, the value is this, and we're doing some kind of mental matching task. But what exactly is that process? You know, if I wanted to write a piece of software to do that matching, instead of doing it personally, because it's going to take a long time,

how would I write it? I could just do simple keyword matching, yeah. But that presumes that everybody writing all of these ethical codes uses the same vocabulary and means the same words in the same way, and what are the odds of that? You know, maybe within the discipline of ethics in AI?

Sure. But, you know, I got codes from all over: from legal professions, teaching, journalism, psychology, medicine. The odds seem pretty slim there. So there's, you know, quite a bit of interpretation happening for me to pick the labels in the first place, and then quite a bit of interpretation happening in doing the associations.

So simple keyword mapping isn't really going to do the job, and so one of the... almost the fundamental question of AI, certainly approaching it this way, is: what would do the job? And that is a core question. And I'm not going to try to offer an answer to that here, because that's, like, all of AI.
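To make the limitation concrete, here is a toy keyword-overlap matcher. The sample code texts and the keyword list are invented for this sketch; the second "code" expresses the value of privacy without ever using the keyword, so naive matching misses it entirely.

```javascript
// Naive keyword-overlap scorer: counts how many of a value's keywords
// appear literally in a code's text. Sample texts are invented.
function keywordScore(codeText, valueKeywords) {
  const words = new Set(
    codeText.toLowerCase().replace(/[^a-z\s]/g, '').split(/\s+/)
  );
  return valueKeywords.filter(k => words.has(k)).length;
}

const privacyKeywords = ['privacy', 'confidentiality'];

// One code uses the expected vocabulary...
const codeA = 'Members shall respect the privacy of all persons.';
// ...another expresses the same value in entirely different words.
const codeB = 'Personal information must not be disclosed without consent.';

console.log(keywordScore(codeA, privacyKeywords)); // 1
console.log(keywordScore(codeB, privacyKeywords)); // 0: same value, no match
```

The zero score for the second code is the whole problem: matching codes to values requires interpretation, not vocabulary lookup.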

Yeah, I, like Sharita, liked his idea of having the text, and I replied, really great, that you could have the text and then click the value category, so that you don't have to scroll. Yeah, right. That's what I heard. That would be very useful. And then I also, you know, gave up mathematics after... somehow, yeah.

Oh, did I? Yeah, well, it's not necessary to go farther for most. Yeah, and certainly what I had isn't what I use now anyway. But I am interested in graphing, and the issue I want to bring up is the weights, or the definitions, of the connections.

Yeah, that's another level, and that's... factors. And, you know, it's one of the original parts of graph theory, that the connections have values themselves. And I'm just wondering, first of all, does AI do that? And can we do that? The short answer to both questions is yes.

AI definitely does that. Yeah, yeah. That's a big part of what these algorithms do: adjust the weights of the connections. Like, in this grid, it's just on or off, right? Which is nice, but it's too ham-fisted, really. You know, one code might talk about a value a lot and really depend on it;

another code might just sort of mention it in passing. You don't want to give them both a weight of one. And so part of the processing that AI does is to adjust these weights, and that is, you know, one of the things that's made possible in part by having multiple layers in between the codes and the values, right?

If you have multiple layers, multiple connected layers, you can really do some fine-tuning of those weights, really fine tuning of those weights, and get very good results. And again, that's what AI does. What I've done in my code is, each one of these values, each one of these associations, does have a weight.

So I have, like, the table: the first element, right, so code number 17; and then the second element, value number five; and then the type of connection between them, and I'm just assigning the name "user," right; and then the weight. And what I'm doing is, each time a user adds that connection,

I increment the weight by one. So that way I can use the same graph with a large number of people, and if a lot of people pick that particular square, the number is higher, and therefore it's weighted more. Yeah. And then you can represent that: like, in the grid thing, I can represent it with smaller or bigger dots, or, in the graphical representation,
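The weighting scheme described here can be sketched roughly as follows. The field names and key format are my own guesses for illustration, not the actual gRSShopper schema.

```javascript
// Connections keyed by (source, target, type); each time a user adds one,
// its weight goes up by one. Field names are illustrative guesses.
const connections = new Map();

function addConnection(codeId, valueId, type = 'user') {
  const key = `${codeId}:${valueId}:${type}`;
  const entry = connections.get(key) || { codeId, valueId, type, weight: 0 };
  entry.weight += 1;
  connections.set(key, entry);
  return entry;
}

// Three users check the square linking code 17 to value 5...
addConnection(17, 5);
addConnection(17, 5);
addConnection(17, 5);
// ...and one user links code 17 to value 2.
addConnection(17, 2);

console.log(connections.get('17:5:user').weight); // 3
console.log(connections.get('17:2:user').weight); // 1
```

The resulting weight can then drive the display directly: bigger dots in the grid view, or thicker lines in a drawn graph.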

I could represent it with thinner or fatter lines, although I don't know how to do that with Matthias's system; I'm not sure if it's possible. It probably is, because, you know, he uses, for the JavaScript anyway, an element called canvas in HTML, which is a very flexible piece of HTML.

Most people don't even know it exists, but it does exist, and you can draw all kinds of things with canvas, so I'm sure I could widen or narrow the lines based on the weight. So I was also wondering about semantic weights. Yeah, that's another order of complexity, I'm sure.

Okay, what do you mean by "semantic weights"? Well, one of them is the name, "user," right here; you're collecting "user," which is not... Well, here you're counting it, so you're turning it into a numerical property. Yeah, but it's an aggregate property. The name of the user... and then, again, I'm picking... in fact, I don't know enough about graphs, right?

Yeah. But there could be semantic connections also; you know, there could be a semantic representation of the weights; you know, it could be more of a label: that's "strong" or "weak." Well, it could also be "comes from the church," "comes from law," right? From an ethical code; again, those kinds, you know, so you can see it, in fact. But yeah.

Somehow it relates to AI, right? Theoretically, the computer can handle that. And then again, it's how you make those work. So, part of what I'm doing that addresses this is naming the type of connection, right? So take any two entities: we draw an association between them; that association has a weight, but I also give it a name, and I think the name maybe captures in part the semantic aspect of what you're thinking. So I could have...

Right now I'm just calling these "user" connections, because they were created by users. But, you know, if I'm using a different analytical approach, I could name them semantically, like something "is a part of" something, or "belongs to," or "is a type of." And there's a whole range of relations that are defined in the semantic web, in the world of RDF; it's called the Resource Description Framework, and it allows for all of these different types of connections to form, you know, semantic

meaning. I don't go full-blown RDF, although I could. I think the RDF, you know, label set is limited, because I think the relations are not simply semantical; they can be almost anything, right? And the way I've built gRSShopper, you can name the connection whatever you want, which makes it a lot more flexible.
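The idea of freely named connections can be sketched as triples with an arbitrary relation label. This is an illustrative data shape, not gRSShopper's actual storage format; the example relations and entities are made up.

```javascript
// A connection as a triple plus a weight, where the relation label is
// free-form rather than drawn from a fixed RDF-style vocabulary.
function makeConnection(source, relation, target, weight = 1) {
  return { source, relation, target, weight };
}

const graph = [
  // RDF-flavoured semantic relations work fine...
  makeConnection('fairness', 'is_a_type_of', 'value'),
  makeConnection('IEEE code', 'mentions', 'fairness'),
  // ...but the relation name can be anything at all:
  makeConnection('IEEE code', 'user', 'care'),
  makeConnection('duty', 'reminds_me_of', 'Spock'),
];

// Query by relation name, exactly as with a fixed vocabulary
const userLinks = graph.filter(c => c.relation === 'user');
console.log(userLinks.length); // 1
```

Allowing arbitrary relation names trades RDF's shared, machine-interoperable vocabulary for flexibility: any kind of association can be recorded, not only the "semantic" ones.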

That's something you just don't see anywhere, but it's in there. Yeah. Well, it exists, and it's there inside the software; it just never comes up in any of the applications of it, because it's hard to do. The other aspect is that these codes... sorry,

let's pick the values: the values aren't just related to the codes. They could be related to other things. So, for example, this week we've been looking at the different ethical theories. We're not finished that by any stretch, and I'm thinking maybe I'll delay the week, but that's a separate issue.

So, you know, we have duty-based theories, and there's a bunch of them; we have virtue-based theories, and there's a bunch of them. So we could do the same exercise: we could look at these values and ask ourselves, well, what ethical theory or what ethical tradition are they associated with?

And now we've got something kind of interesting, because we have, via this two-step linking, a way of thinking about how the ethical codes are related to the ethical theories: not directly related, but via the values represented. And so that's kind of a way of thinking about it. We could also directly relate them, right?

But I'm not sure that would be as easy.

And there's one more thing I wanted to bring up on the subject of connections. Yeah. And in the system, there's no undo button, and so I know I introduced an error, because of where the labels were. Yeah. You understand? Yeah. So that just, you know, brings up the issue of error, and I introduced some errors because I could not undo.

Yeah; realizing my mistake, I could only add another connection. Yeah, and, you know, once the database is cleaned up, then that goes away, but there are errors introduced. But there's other things here. So yeah, there's always going to be error, I agree. The way I'd like it all to work is that the errors stay over there on your website.

But no, there's always going to be error. One of the great things about this kind of approach, though, is that the error almost factors out. It creates noise in the system, but the system can stand noise. So, let's take a good example: suppose you created an ethical code by accident.

It's not really an ethical code, but you put in something; you thought you were putting in the right thing, but it went into ethical codes. It's not a code, but there it is in the database, and you can't remove it. Well, okay: people aren't going to check the box related to that ethical code, because they're not going to find a relation between what you wrote and the value, right?

So already we're beginning to see it abstracted out a bit. As well, each of these things, right, each value, each ethical code, each ethical theory, can also have a weight, just like the connections; the individual entities can have weights. And over time, the weight of the one that you made as a mistake would be very low, but the other things, which get mentioned a lot and used a lot and connected

a lot, those weights will be higher. So again, even though you made a mistake, and even though you can't remove it from the system, even though it's noise in the data, in the grand scheme of things it doesn't carry very much weight, and it's mostly just ignored by the whole rest of the system.

Yeah, this is the "we just need more data" approach. It's, you know, we just need more data. And yes, so often, you know, as you just pointed out, that's a true statement. Yeah, and I'm tempted... you know, like, we don't have thousands of people in the course, but that's okay.

I'm tempted to just take, you know, one of these grids, once it's done and has all the little displays, pop it out on Twitter, and see if people will just fill in that grid. They might, they might not, you know. You know, I have a limited following on Twitter; you know, I'm not Twitter famous.

So, you know, I'm not gonna get hundreds of responses, for sure, but I'd probably get more than one or two. I think it would almost double your data, which would double, maybe, eh? Yeah. So can I ask a question? Yeah. I keep coming back to the entity that develops...

I'm going to say "code," but I really mean the bias of the entity that tries to put this all together. Yeah. Now, most of the codes that we're reading really are, you know, in some ways based on Western thought. Yeah. So what happens if that entity that is going to program this, or whatever, sets it up

not to weight things as Western codes, as Western thought, a Western ethic? Yeah. So what happens then? Well, that's an interesting question. I mean, it depends on how you go about doing that. Like, how would you go about setting it up not to weight things based on Western values, you know?

Now, what a lot of people are talking about is, well, you just get opinions from different populations in the data. So the equivalent in our system would be, I take my grid, yeah, right, and I don't just show it to Western academics or, you know, first-year psychology students, which is what we usually do in education.

I take that grid and I make it available specifically to people in different cultures, and indeed, if I wanted to be really careful, when people fill in that grid they would also record maybe what their language, religion, culture, background, etc. is, which would feed into the system as well, so that when I calculated the weights of these connections, instead of just taking a numerical average, I could take a representative average, where each religion is weighted equally, or so on. I could do that, and that's the sort of thing that people do. Okay, but isn't making it a grid a Western approach?
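The "representative average" idea can be sketched like this: average within each demographic group first, then across groups, so that a large group can't swamp a small one. The groups and responses below are invented for the example.

```javascript
// Representative average: each group's mean counts equally, regardless
// of how many respondents the group has. Data is invented.
function representativeAverage(responses) {
  // responses: [{ group, value }], value = 1 if the box was checked, else 0
  const byGroup = new Map();
  for (const { group, value } of responses) {
    const g = byGroup.get(group) || { sum: 0, n: 0 };
    g.sum += value;
    g.n += 1;
    byGroup.set(group, g);
  }
  const groupMeans = [...byGroup.values()].map(g => g.sum / g.n);
  return groupMeans.reduce((a, b) => a + b, 0) / groupMeans.length;
}

const responses = [
  // Three respondents from group A, one from group B
  { group: 'A', value: 1 },
  { group: 'A', value: 1 },
  { group: 'A', value: 1 },
  { group: 'B', value: 0 },
];

// Raw average over-represents the larger group: 3/4 = 0.75.
// Representative average weights each group equally: (1 + 0) / 2 = 0.5.
console.log(representativeAverage(responses)); // 0.5
```

The same reweighting could be applied per language, culture, or any recorded background attribute, at the cost of having to decide which groupings count.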

By definition. And showing the grid in a lot of cultural contexts, that would be, like, I don't know, almost meaningless, okay. Well, there are ways of looking at that. But I mean, that's a great question, right? The first way is, okay, maybe it's just an interface problem, right? Grids are pretty square, pretty Western; maybe if you had something with fewer pointy corners... the graphs sort of help with that, you know. Or, you know, you could sort of represent things as gradients.

And, you know, you can imagine all kinds of different interfaces, where part of them are emojis instead of text, you know, just all kinds of things like that. Yeah, I'm just picturing in my mind, like, a completely analogue interface where people are just using their fingers on a touch screen, you know?

And that might be more appropriate for some cultures. I have no idea what kind of interface is appropriate for what kind of culture. There was an NRC study years ago that tried to associate different color palettes with different cultures, and I'm not sure what came of that. I'm not sure if you can.

But certainly we can imagine that intuitively. And we know that different colors represent different things in different cultures; you know, like the color white, for example, means something very different in Western culture than it would, say, in Chinese culture. So, you know, the other part is: what if the whole idea of breaking things down into parts and associating them is

itself a Western idea? It is reductionist, right? And, you know, you look at how I've structured this entire course here, right? Like, only an old English guy from an analytical tradition would take a subject like ethics and analytics and break it all down into the little pieces the way I've done it.

All right, who would do that other than someone like me? And there's a point to that. I agree, there's a point to that. And then the question, and it's not just an ethical question, but it is an ethical question, but it's also a methodological question: what do you do instead?

Yeah, all right. And I think that's a good question.

Where I hit up against it again, because my culture is kind of different from the classic analytical Western philosopher-researcher kind of mold, is to distinguish between symbolic elements and sub-symbolic elements. A symbolic element is a word, right? I might include an emoji, but it's a representational system: the word, the title I've given to a value or to an ethical code or to whatever, stands for something.

So basically we're manipulating symbol systems. I think words are blunt instruments, completely unsuited to the task, because you just capture too much with a word. Take... pick any word you want: a cup. There's a cup, right? Our understanding of a cup is much more nuanced than the word could possibly indicate.

And if we look at our minds, how do we have the concept "cup" in our brain? Yeah, we do have, you know, the visual symbol and the audio symbol "cup" that we can use, right? We can hear somebody say "cup" and we can say the word "cup," but we don't have the word "cup" in our brain anywhere, and we don't even have a physical image of a cup in our brain.

But what we have is several thousand neurons that fire whenever we see one of these, and that prompts us to associate this with the word "cup", or it does for me; it might not for you. That's one of the unfortunate attributes of language: its ambiguity. And you picked the word "cup", not me.

Yeah. But so I can say, wow, I hope that football player was wearing his cup when he took that hit. Yep. I don't mean one of these. I did not. Yeah, so, I mean, we're swimming around in the murk together, trying to make meaning, and that's what we're doing,

I think. And in discussing how machines can make meaning, my skepticism has grown as we've done this, but that's not going to stop people. Well, it shouldn't stop you from trying; it doesn't stop me from trying. Believe me, I've got more skepticism than you can shake a stick at. I love metaphors.

Metaphor adds a whole other dimension to this. Yeah. So if we're trying to do AI and we're trying to avoid bias, we're kind of stuck with the fact that any system of language, any labeling system, is going to have some kind of bias.

So for me, what that says is, well, ultimately we should probably eliminate that layer from AI, but that's kind of hard. Mitigation was the word I used, I think, one day when we were talking about pizza. Yeah, not a coincidence. And somewhere along here you convinced me that elimination is impossible.

So that's why I reached for mitigation. And so then I would want, actually, absolute transparency, so open source, so that the code could be critiqued from multiple points of view. And then mitigation would be part of the ongoing discussion as the stuff evolves, because that's another issue with this stuff: it evolves.

I mean, yes. So this mitigation problem is going to increase as it evolves, right? The mitigations will also grow and grow more complex as the AI grows and grows more complex. Yeah, it's going to be a problem. But also, let's bring this back to education for a second, especially since we're running to the end of our hour. A lot of educational processes and a lot of educational theory are inherently based around the idea of labeling. Think about how many theories are taxonomies of this

and categorizations of that, right? They're all based on labeling. Or alternatively, think about the work that's going on right now to develop systems for recognizing and teaching skills and competencies, where what they're trying to do is create this competency-definition taxonomy with the idea that everybody would use the same taxonomy. It's a labeling thing, right?

What you're doing is taking all of this raw data, which is sub-symbolic and very nuanced and out there in the world and partially in our own brains, and trying to associate it with a bunch of words. And then later on, we'll try to use these words to, say, evaluate a test or a performance or something like that.

So we've built this structural bias into our system, which maybe we could have avoided by skipping the part where we categorize everything. Maybe. Of course, the problem is, how would we understand what we've done without language to describe it? And interpretive dance only goes so far. Yeah, exactly.

You know, at some point somebody's going to have to sit down and say, well, when we see this motion, what the dancer means is that the moon is full and she is filled by the moon. We bring it back into language. But of course nobody listening to that would say, oh, she's filled with the moon? But she's way too small; the moon is huge.

So of course now we're in metaphor land. One of the issues I run into all the time in higher education, well, it's part of the system. And so the one thing I wanted to bring up, or bring up again, also goes beyond the cultural lens, you know, that we've been discussing.

I've just really been struck with this the last couple of weeks, in this class and in my regular college online class. And there's that word, "class", you know, used multiple ways. Okay. So all of this AI, not only is it, you know, built by the Western-educated, mostly men, probably mostly white.

Well, although the coding part is spreading around. Not only is it, you know, that particular point of view, but class also comes into this. Yep. This is all created and embedded inside of capitalism. So it's all taking place inside capitalism; it's all done by a particular class, and it's being paid for by the upper portions of that class.

And in my opinion it's being used against the lower classes, however you want to define them. Yep. And that's an argument that can certainly be made, and, you know, we might call it, say, critical class theory, and it's analogous to critical race theory. And you do see that kind of argument made by people like Marx and Engels 150 years ago, very similar sorts of arguments. Whether you see it in higher education depends on where you are; it really does depend on where you are.

I mean, there's no shortage of class-based thought out there in the world. Unfortunately, a lot of it is propagated by people who are in the upper class, and so there's a certain skewed perspective to it, but it exists. But let me challenge you with this: now, that also is a label, all right?

And, you know, I mean, it's not for no reason that people like Marx used what were almost artificial words, like "bourgeoisie" and "proletariat", to describe what they were talking about, because it's just hard to express those concepts in the language that was in common use, and even today it's kind of hard. I mean, we talk about the working class, right?

But we include in the working class a lot of people who aren't, strictly speaking, working, you know, if they're unemployed, for example. What we mean by the working class is probably delineated by their circumstances: by where they live, what size of house or apartment they live in, and so on.

And, as you alluded to earlier, also their values, their belief set, etc. And it would be interesting, I think, to draw one of these maps that we've been drawing, from values to different classes of people. And it would be interesting to see not just an overall graph, but how people in the different classes drew that graph.
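The kind of map being floated here can be sketched as data, just to make the idea concrete. This is a hypothetical illustration: the value names, group names, and edges below are all invented placeholders, not data from the course.

```python
# Hypothetical sketch: each group draws its own graph linking values.
# A "graph" here is just a set of (value, linked_value) edges.
graphs_by_group = {
    "group_a": {("fairness", "autonomy"), ("fairness", "loyalty")},
    "group_b": {("fairness", "loyalty"), ("righteousness", "loyalty")},
}

# The overall graph merges everyone's edges; the shared core is what
# every group drew; the remainder shows each group's distinct view.
overall = set.union(*graphs_by_group.values())
shared = set.intersection(*graphs_by_group.values())
unique = {g: edges - shared for g, edges in graphs_by_group.items()}

print(sorted(overall))
print(sorted(shared))             # edges every group agrees on
print(sorted(unique["group_b"]))  # e.g. where "righteous" means something else
```

Comparing `shared` against each group's `unique` edges is one crude way to see not just the overall graph, but how differently the different groups draw the same map.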

But again, how do you label it? You know, there are almost certainly values, as we've been using the term, that exist in the working class that other classes might not even have a word for. You know, think about the word "righteous" in motorcycle clubs: totally different meaning from the word "righteous" among people in the British peerage system, right?

Just totally different, you know. But the idea here, I don't know if it flies, but the idea here is: if you get enough input and enough of these graphs, and you think of these words, these symbols, just as placeholders, not actually as representations of anything.

They're just the actual symbols people use, and you get enough data, then maybe you can get past the bias of labeling. Maybe. See, I bring this up again; I'm the American here. Yeah. And this is a very important issue, but the reason I'm spending the time these weeks is that this is such an important issue in American politics, because of the influence of the American corporations, the people who built AI: Google, Facebook,

the largest applications. They are having real material consequences in this culture, or across these cultures, in this territory, if we want to use that word. And so that's why I bring these things up: it's because it's actually a pressing issue that's not getting solved. It's not going to be solved by the next election, which is one year from now.

And these are the issues that should be discussed in the newspaper every day and on the radio every day, and they're not. Yeah. And just to give you an example of that: in the thing that I did on duty, which I just did yesterday, I'm very proud of it; maybe proud is the wrong word, but pleased with it.

There's a section in it on autonomy, which is what you need in order to make ethical decisions as a free agent. And the question is just how autonomous we are, in fact, especially when you think that all of these labels, all these words that we use, embody a bias, and then all of the media that we're exposed to underlines and reinforces that bias.

When we, as free agents, express an ethical opinion, are we actually doing so autonomously? Or are we, in a very serious way, simply reflecting back the bias that's been fed into us? And that's a core question. You know, that's one of these questions, and that's why the ethics of artificial intelligence is so important: because it also comes back to the ethics of complex systems like society. We think about biased input for an AI, but we've got biased input for a society as well.

But how do you tag that bias? How do you put a finger on that bias when the very use of words themselves incorporates bias? And that's a hard problem, like the really super-duper hard problem. You know George Lakoff? Yeah, so he's been working on this.

You know, it's kind of been his career. Yeah. And I also think there's, I don't speak French, Jacques Ellul. Yeah, Propaganda: The Formation of Men's Attitudes, and that's what you're talking about. Yep. And that also ties in with the problem. All right. I'm the only person I know that, when I left home at age

17, never owned a television. Mm-hmm. And I'd often say to people, you should stop watching television. Okay? Because what I hear, exactly, I hear something coming out of their mouth and I know where they heard it. Yeah. Or where it was created. And so I ask people, is that your thought?

Why do you think that, you know? But people don't like it when you do that. So I went for many, many years without a television, like from the day I left home to, I don't know, maybe 30, something like that. I forget how old I was when I finally got one.

But yeah, I mean, so you can look at people who watch television and see the influence. And so, you know, they're being, I don't want to say programmed, because that's not the right word; trained is maybe a better word. And the same way we train AIs, we use television to train people. And that speaks directly to the subject of learning and development, and courses, and schools.

And, you know, people try to say, well, you can't teach people using video, but tell advertisers that. Yeah, anyhow, I think we're going to leave it here, because it is one o'clock and I don't want to go over time. But this was an interesting discussion, and definitely related to this week's topic, because, you know, this does tie back into these ethical theories. I'll continue developing that grid, and, you know, it's generalized, so I can use it with some of the other data sets that we have. And if you can think of different sets of labels we can use to describe whatever, that would be useful.

Let me know, because one of the things that my system has that pretty much no other system does is that we can build these on the fly. We can build them from scratch; it doesn't matter what they are. You know, if we want to build a category of items called "foufas", we can, and nothing prevents us from building them and then linking them into the entire grid graph network.
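What "building a category on the fly" might look like is sketched below. This is only a guess at the idea, not the actual implementation behind the course software; the node names and the "foufas" category are placeholders taken from the example in the passage.

```python
# A minimal node-and-edge graph where categories (node types) are not
# fixed in advance: creating a node with a new type creates that
# category on the fly, and nothing stops us from linking it in.

graph = {"nodes": {}, "edges": set()}

def add_node(node_id, node_type):
    graph["nodes"][node_id] = {"type": node_type}

def link(a, b):
    # Both endpoints must already exist in the graph.
    assert a in graph["nodes"] and b in graph["nodes"]
    graph["edges"].add((a, b))

add_node("course", "course")
add_node("foufa-1", "foufas")   # a brand-new, made-up category
link("foufa-1", "course")       # linked into the existing network

categories = {n["type"] for n in graph["nodes"].values()}
print(sorted(categories))
```

The point of the sketch is that the set of categories is just whatever types the nodes happen to carry; nothing in the structure privileges one taxonomy over another.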

That network is this course. I don't know how foufas would be useful, but that's why I'm asking for your input; you know, maybe there are things, sets of entities, that we should be thinking about that aren't here yet. Okay? So, till next time, thanks for popping by.

I hope you're finding this interesting still, and I know it's completely wrapping me up, to the point where I'm expecting phone calls, you know. Anyhow, talk to you all later; have a good evening. Thanks, you too. So that was Mark Wilson, Sherida Ryan, and Stephen Downes in conversation for Ethics, Analytics and the Duty of Care. That was for the people listening on audio. Bye, everyone.


Consequentialism

Transcript of Consequentialism

Unedited audio transcription by Google Recorder

Hi everyone, I'm Stephen Downes, and we're back again with another video from the course Ethics, Analytics and the Duty of Care. We're in module five, which focuses on approaches to ethics. And as you can see from the title, this is the video on consequentialism. I'm Stephen Downes; I'm offering this course.

I'll touch my nose and adjust my glasses and get ready for the glare of video, and try to make this as interesting as possible, although I admit, if you're not inherently interested in it, it can be pretty dull stuff. I personally think this is all fascinating stuff, and so that's why I go on about it for a while; it's because I think it's fascinating stuff that I want to go on about it

for a while. Normally, in a traditional connectivist course, I'd go out and find some resources for you and throw them into the MOOC and create newsletters out of them and invite people to discuss those resources, and that would be that. And that's a perfectly legitimate way of doing it.

And, you know, if you look at the slides, I have been doing that: there are resources on every single one of these slides, or pretty much every single one, all of which could be recommended, and eventually they'll all be incorporated into the MOOC website. But I also want to add to those resources, because I want to do something over and above simply pulling these things together.

I think that we're working on something that's a bit new here, in the sense that we're bringing together these three distinct topics: ethics, analytics, and the duty of care. And so when we visit some old subjects, like ethics and consequentialism, I think we're going to have some new things to say, or at least a new perspective to offer.

And so that's why I want to do these videos. So it does change how I go about doing these MOOCs. Maybe in a second round of this MOOC, and I'm thinking there will be one, we'll go back to the original way of doing it, and the videos will be available as a resource. We'll see how it goes.

Maybe I'll never do this again. Who knows? But I want to get these thoughts on the record for now. So with that as a preliminary, today we're looking at consequentialism. Consequentialism is a catch-all phrase for a host of ethical theories, including some of the most widespread and well-regarded ethical theories today.

And as you might expect, not surprisingly, they all stem from the concept of consequence. And consequence, as you can see from this definition, which I got from Google, which gets its definitions from Oxford Languages, is the result or effect of an action or condition, or alternatively, relevance or importance.

For example, the saying here, "the past is of no consequence", or another example, "he wanted to live a life of consequence". Consequentialism is what we might call a kind of teleological ethics. Anything that's teleological is something that has to do with the essential end, or the essential outcome, in mind.

Teleological means goal-directed. There's, you know, the whole study of teleology, which is the goal or the meaning or the purpose of life, the universe and everything. Indeed, one of the big differences that I draw between a network and a system is that a system is teleological: it's a whole bunch of interacting parts

moving with a goal or direction in mind, whereas a network is just a bunch of interconnected parts, but there's no inherent goal, no inherent purpose to it. That makes it kind of hard to have an ethics. And so, to me, it's not surprising that people want their networks to be teleological, to be systems.

In other words, they want society to be teleological, where we unite around a flag and a way of life, and they want their ethics to be teleological. And I get that. So, the concept of consequentialist ethics has its origin, at least in Western philosophy, with people like Epicurus, who articulated what might be called the pleasure principle. And here

I'm quoting from the Stanford Encyclopedia of Philosophy: "a view of the goal of human life (happiness, resulting from absence of physical pain and mental disturbance) combined with an empiricist theory of knowledge (sensations, together with the perception of pleasure and pain, as infallible criteria)". And you kind of need both parts, right?

You kind of need to have the sensation itself; otherwise you have nothing to build on. And then you need to say that some of these sensations are good and others of them are bad. And the most obvious candidates here are pleasure and pain. So you say pleasure is good and pain is bad, or you could say pain is bad

and the absence of pain is good, or maybe pleasure is good and the absence of pleasure is bad. It's not altogether clear, even on first blush, how to articulate this. So Epicurus was what they call a hedonist, which means that he taught that what is pleasurable is morally good

and what is painful is morally evil. But as Wikipedia points out, he idiosyncratically defined pleasure as the absence of suffering and taught that all humans should seek to attain the state of ataraxia, meaning untroubledness: the state in which a person is completely free from all pain or suffering. And that's not the same as hedonism as we understand it today, right?

Hedonism as we understand it today, now, I was going to put a sexy picture on the slide here, and I decided to go with sexy Greek men, is a philosophy that much more reflects our idea of pleasure: the pleasures of the senses, the physical pleasures, for example.

And, yeah, it doesn't include the absence of pain, unless pain is your pleasure, in which case it includes pain. But it sees pleasure as something more of a positive to be gained. There's a lot of discussion by Epicurus and others around that concept, around the original hedonism formed by Aristippus of Cyrene.

They were called the Cyrenaics, and they went for this idea of pleasure as, you know, the physical pleasures. And you can see that: who doesn't like, you know, a nice cold beer and a ball game, and maybe some popcorn or peanuts, or a nice warm sunny day?

How can that be anything other than good? And something that produces that result certainly could be seen as something that's good. But there's this sense in which pursuing that for its own sake isn't really what hedonism is all about; rather, preventing, you know, the pain and the anguish that comes with just being human is more ethically

Good, you know? And I don't think we ever get past this one particular distinction right here, but we'll keep plugging away nonetheless. You know, this idea of the absence of pain also reminds me of the Buddhist concept of dukkha, and that's a Pali word most commonly translated into English as suffering, or something like suffering.

That's the basis of the Four Noble Truths of Buddhism, including the existence of suffering, the nature of suffering, and how to end suffering. And according to this philosophy, we are living beings trapped in a cycle of existence known as samsara. And in samsara we experience unbearable suffering because of the tight grip of our grasping self; it is in wanting permanence in a world that is forever changing that results in suffering.

It is wanting to be an unchanging, eternal being that makes us afraid and suffer at the thought of death. And the secret to escaping suffering is to cease this endless clinging. And that's not an uncommon sort of approach in philosophy, and indeed in many religions either: this idea that happiness is attained through mechanisms

other than the pure physical pleasures. In effect, abstinence from the pure physical pleasures, and abstinence from clinging to these physical pleasures, is what actually produces pleasure, or at least reduces pain. So I think there's a point to it, and I think that that strand of reasoning, as well as the hedonist strand, is with us today, and we'll come back.

We'll talk about that more later on. And I thought about putting a slide in here with all the varieties of pleasure, but actually it's a bit more accurate in this context to think about the range and varieties of suffering,

because there are different kinds of suffering, different degrees of suffering, and they impact people in different ways. I forgot to turn on my recorder. No, I didn't. Oh good. And it's interesting: I know some people, if they're slightly hurt, their suffering is extreme. And on the other hand, I know people you could cut off their foot and they'd sort of call it a minor inconvenience, to be sure, but I couldn't say they're suffering. And everything in between, right?

I mean, it's interesting: the way we approach suffering, the way we allow it to impact us, also has this ethical dimension, I should say. I talked earlier about Doug Gilmour playing while hurt, and clearly he was suffering, but his ability to work toward a higher goal

despite the suffering gives him ethical value, at least in some eyes, and I may as well say my own as well. So we have these ranges, everything from impatience to annoyance to desperation to misery and agony, and we look at this list and you sort of want to ask: where do we draw the ethics line?

If we were going to draw a line, right? I mean, is it unethical just to irritate someone, you know? If I do this, am I unethical, or is it just annoying? But what if you cause sadness? What about that? What if you offend someone, but it wasn't something that would have offended you? Is that unethical? I could say things right now.

I won't. But I could say things right now that would offend precisely half my audience and not the other half. And it's not clear to me that we can make a determination one way or another whether one of these is ethical or unethical. But okay, there's an intuitive idea here, right,

that is worth pursuing: that the prevention of suffering and the promotion of pleasure does seem to be an overall good thing, right? That's why we have doctors. If it wasn't a good thing, we wouldn't really think there was any purpose to having doctors. So it does seem to matter. The other aspect, or another aspect, of this entire discussion can be couched in terms of moderation.

Now here I cite Abu Bakr al-Razi, who's recognized in various sources as having a theory of pleasure. Now the interpretations vary, and there's two interpretations that I present here, and for our purposes it doesn't matter which one is the correct interpretation of al-Razi, but rather the fact that these positions exist. One is the idea of moderation.

Moderation becomes a value because the way to have the most pleasure is through moderation; going too far with a pleasure is more painful in the long run. And certainly there's no shortage of people who follow that philosophy, except, say, for Robert Heinlein, who says moderation is for monks; live to excess. On the other hand, al-Razi can be interpreted as saying pleasure is not a good to be sought in itself; pleasure can be had only as a result of a process of removing a harmful state. And that seems more likely to be the correct interpretation of him, given his stance on spirituality, and also given that his training and influence is as a physician, a doctor influenced by people like Galen, whose life's work is to remove a harmful state.

So it's a positive act; it produces something that we might call pleasure, but it's the pleasure of living without pain. You know, there's an observation here I think is relevant, and that is that in certain respects it's actually impossible to conceive of pleasure without corresponding pain. And indeed, we could argue that a person would not know what pleasure is without having experienced

some sort of corresponding pain. And this is the sort of thing that we see in society. It's like, you know, the rich kid who's never known lack of anything in his or her life, right? And they don't recognize what it is like for somebody to have to go without a meal, or not be able to fly to Paris in the spring.

They just don't have a conception of that. Or even supporters of sports teams that have been very successful don't know what it's like to have an unsuccessful season. But on the other hand, one of the appeals of sports to me is the reality that your team's not always going to win, especially this year, and the idea that this makes it much more satisfying, that much sweeter, when you actually do win. And winning is more than just the absence of losing; winning is something that's positive on its own, right?

So, you know, that's the other side, and the problem with this idea of pleasure as only the removal of pain: if there is no pleasure, how can you know when the pain has ended? And I don't think it's clear that you can. So you're going to have this balance either way, and so you're going to be making this calculation either way.

So in the end it doesn't matter which of these you want to support; you're still kind of doing the same kind of thing, and that's why we lump them together under the heading of consequentialist theory. So, what might be thought of as the next major move in consequentialist ethics is the representation of the objective, or the value, not as pleasure specifically, but rather as happiness.

And that's attributed to the Irish philosopher Francis Hutcheson, who says that action is best which procures the greatest happiness for the greatest numbers. And that's probably the first original expression of the philosophy that has come to be called utilitarianism. The term is attributed to David Hume, who uses it to describe the pleasing consequences of actions as they impact people.

So now we have two concepts that we're working with. One is pleasure, which is directly tied to the sensations. And then we have something else, happiness, which is also tied to the sensations, but not quite in the same way, not quite the same connotation as pleasure. Again, though,

we're looking at the outcome of an act, specifically that it produces happiness, as conferring ethical value to that act, okay? No problem. So that gives us utilitarianism. And so there are two basic principles, and you need them both, right? The rightness or the wrongness of an act is determined by the goodness or the badness of the results; that's sometimes called the consequentialist principle. And then the utility principle:

the only thing that is good in itself is the specific state of pleasure, happiness, or welfare, or... and I'll just let that tail off there, right? I got this little pigeon graphic from Google; I guess it's created by some thesaurus bot or something. One pigeon asks, what are other words for utilitarianism? And the other pigeon replies: pragmatism, advisability, benefit, convenience, effectiveness, fitness, helpfulness, opportunism.

Now, none of those are synonyms of utilitarianism, or even of utility, so that bot isn't exactly very smart. But they all do express one or another aspect of this concept. And this list is particularly useful because, in the modern context, we see these things all the time. I don't talk about American pragmatism in this presentation specifically.

But American pragmatism, that is to say, the philosophy of Charles Sanders Peirce, William James, and John Dewey, forms the basis for a whole line of thinking about the pragmatic way of knowing: what is true is whatever works, right? And so again, it's the outcome of the act, in this case whether it's possible, feasible, practical, etc. In business writing, we see words like benefit and fitness used a lot.

Sometimes they also use efficacy, effectiveness, efficiency. These are all consequentialist principles being applied in certain circumstances, and if they are applied in a normative sense, that is to say, in a sense where you can infer the rightness or wrongness of an action based on them, then they are expressing a kind of utilitarianism, right?

So if you see that supporting an action is good because it produces a benefit, either to oneself or the corporation or whatever, that's consequentialism. That's utilitarianism. We also see it discussed in terms of convenience; that often comes up in market studies. People pick the convenient option, or offering a product that provides greater convenience is a good thing.

We see that presented a lot, you know, as a justification for a lot of products and services. And indeed, when we come back to artificial intelligence and analytics: benefit, helpfulness, convenience, fitness, all of these words come up over and over again. There's a very wide swath of utilitarian justifications and arguments in favor of AI and analytics.

I mean, go back to the beginning of this course. It was my purpose in the module on applications of AI and analytics to show the benefit that people believe they realize from this technology. It wouldn't be an ethical issue at all, I argued, and I still argue, if there were not beneficial consequences, if these technologies did not produce, in one way or another, pleasure,

happiness, goodness of some sort. So yeah, utilitarianism is a widely held theory today. But how do we calculate this? Because this is the thing. Go back to the definition:

the rightness or wrongness of an act is determined by the goodness or badness of the results. Well, then we need to be able to determine the goodness or the badness of the results. And how do we do that? I threw up a couple of diagrams here. One of them is from a paper that suggests that a utilitarian would argue that a Machiavellian would approve of uploading brains to computers.

I honestly don't know what they thought they were proving with this, but that's what the paper said. The other paper here, on relationships and environmental ethics in higher education, just shows the dense causal web of actions and interactions in a fairly narrow space. Now, we've got cost-benefit accounting and all of that.

But we've also got things like gratitude, social relationships, emotional safety, and all of that, in a context of complexity, uncertainty and challenge. How do we calculate all of that? It's a mess. And one of the major arguments against utilitarianism is that no person, individually, is capable of making such calculations. But let's take it as a hypothesis,

just as a hypothesis, for the sake of moving forward: that in the world of artificial intelligence a computer could do it, because the volume and the complexity of the calculations don't matter to a computer, especially one equipped with AI, right? So hypothetically, we could put the question to a computer and give the computer all the data it needs.

It would come out with the result: x amount of happiness will be produced. And then that, in theory, should tell us the rightness or the wrongness of the action. So, if we accept that as a hypothesis, we can dismiss the complexity argument as an objection to utilitarianism. And to be frank, I don't think anybody seriously advances the complexity argument as an argument against utilitarianism.

It's one of these things, you know; they've got their other reasons and then they'll pull out this reason too, just to add to the pile of reasons. But I don't think it's an actual objection, because I think, off the cuff, people know whether their actions are producing happiness or not.

And I don't think we need the calculation down to the last dime of happiness; we know whether or not we're doing it. So I'm not so concerned about the calculating-utility argument. In any case, we have all the parameters. Jeremy Bentham, who you see preserved in his dead state, came up with something called the felicific calculus.

You can see the play on words there: felicific, scientific, right? It's also called the hedonic calculus, and it's often commented that in utilitarian circles the unit of happiness is known as the hedon. So one hedon is one unit of happiness, and we'll come back to that in a bit.

So there are seven principles that he brings forward. Intensity: how strong is the pleasure or the happiness? Duration: how long will it last? Certainty: how likely is it to happen, or is it really a long shot? Propinquity: how close is it? You know, are we going to get immediate gratification, or is this a case of deferred gratification?

What is the fecundity: the probability that the action will be followed by sensations of the same kind? And if you wonder about that, think about taking drugs, right? You take drugs, they give you this high, pleasurable, but then you go into withdrawal, which is miserable, right? So that's not a good thing.

So the question is, you know, if you take drugs, what's the probability that you'll keep on feeling happy, that the effects will be long-lasting and you won't be thrown into the pits of depression? And that's similar to the idea of purity: the probability that it will not be followed by sensations of the opposite kind, regret, remorse.

And then the extent: how many people will be affected? Now, with seven variables, here is the calculation problem, right? We can come up with all kinds of different ways of writing the calculus, and it's almost certain, no, it is certain, that not one of them will be the calculus.
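To make the calculation problem concrete, here is one of those "all kinds of different ways of writing the calculus" as a small Python sketch. Everything in it, the combining rule, the sample numbers, the names, is an assumption made up for illustration; Bentham never specified an actual formula, and this is certainly not the calculus.

```python
# A toy sketch of Bentham's seven variables -- purely illustrative.
# The combining rule and all the numbers below are assumptions made up
# for this example; Bentham never gave an actual formula.

from dataclasses import dataclass

@dataclass
class Prospect:
    """One anticipated pleasure (a negative intensity would be a pain)."""
    intensity: float    # how strong is the pleasure?
    duration: float     # how long will it last?
    certainty: float    # how likely is it to happen? (0..1)
    propinquity: float  # how close is it? (1 = immediate)
    fecundity: float    # chance it is followed by sensations of the same kind
    purity: float       # chance it is NOT followed by opposite sensations
    extent: int         # how many people are affected?

def hedons(p: Prospect) -> float:
    # One arbitrary way to combine the variables: the expected core
    # sensation, plus a follow-on term discounted by fecundity and
    # purity, multiplied over everyone affected.
    core = p.intensity * p.duration * p.certainty * p.propinquity
    follow_on = core * p.fecundity * p.purity
    return (core + follow_on) * p.extent

# The drug example from the lecture: a short, intense, impure high
# (withdrawal likely follows) versus a milder but purer pleasure.
# Under these made-up numbers, the high scores lower.
drug_high = Prospect(10, 0.5, 0.9, 1.0, fecundity=0.2, purity=0.1, extent=1)
quiet_walk = Prospect(3, 2.0, 0.95, 1.0, fecundity=0.6, purity=0.9, extent=1)
print(hedons(drug_high))
print(hedons(quiet_walk))
```

The point of the sketch is exactly the lecture's point: every choice in it (multiply or add? discount how?) is arbitrary, and a different choice gives a different verdict.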

There is no E = mc² of happiness. And I think back in Bentham's time they really did think that there would be, you know, maybe not an E = mc², but certainly a Newton's principles of happiness. Because, you know, they were looking at this method being applied in other areas, and it was working so well.

So why wouldn't a similar sort of scientific approach to the calculation of ethics work? Why wouldn't it be the same? And, you know, now we would probably say something very different, we have very different intuitions, but back then, you know, coming up with a scientific formula was new for everything.

So, there wasn't any reason to suppose that we couldn't come up with one and couldn't run these calculations in some way to determine the ethical value of an act.

Well, if we run these calculations, we produce some results that maybe are counterintuitive, and one example is Machiavellianism. Machiavelli long predates Jeremy Bentham, so he should have thought about this. But basically, Machiavellians are characterized by the manipulation and exploitation of others, with a mocking disregard for morality and a focus on self-interest and deception.

A recent American president could be characterized as Machiavellian, and even more effective at it. But it's still the same idea here, right? A Machiavellian will say, basically, the end justifies the means, and that that's the reality of political life. And you certainly do hear that a lot, even among people that might otherwise be regarded as ethical and upright people, right?

You know: they're great people, but they go into this political situation, and the end justifies the means, and they're going to do what they need to do, because that's politics. And there are other people who just see all of their engagements in life this way. I put up a little thing here, the signs of gaslighting, and I could have picked any number of different examples.

I picked gaslighting because it was handy. And think of all the things that somebody who gaslights somebody does: their actions contradict their words, they break promises, they erode your self-esteem. They try to make you believe that something is the case, even when your senses say that it's not the case; they manipulate you.

They deny that conversations or events ever happened, even though you know they did. Well, that's the ends justifying the means, right? That's consequentialism. And somebody who gaslights is trying to pursue something that they perceive as a good, namely their own happiness, and the ethics of it is, well, the ethics is whatever works. It's pragmatism.

It's "all's fair in love and war." And that, I think, no small number of people would find not ethically sound, to say it mildly.

There are different kinds of pleasures, and John Stuart Mill, following up on Jeremy Bentham's work, famously wrote: "It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied." I think this is interesting because here we're going beyond a fairly, well, sensory or sensation-based concept of pleasure or happiness and making it a broader concept.

And on one hand, it's a concept that sometimes feels more intuitively appealing, right? I mean, you look at somebody who really is what we think of as a hedonist today: all they do is live for pleasure, that's it. And we think, yeah, they're happy, but it doesn't seem very meaningful.

And we look at somebody who, even though they struggle, seems to be pursuing a higher good through writing, literature or art. We have this concept of suffering for your art, of working for a higher outcome; even the Doug Gilmour example could sort of play out here. And I think a lot of people believe that; John Stuart Mill certainly did. But at the same time it creates more of a measurement problem. Because, you know, we can directly determine whether or not we have sensations of pain or pleasure, but our sensations of whether we're happy from the higher pleasures are a bit less reliable, shall we say. You know, if we're challenged by a difficult work of philosophy:

are we really enjoying it, or do we just think we're enjoying it because we know we should be enjoying it, even though all we're feeling is pain? I think that's a good question. And that's what the Machiavelli example brings up: a Machiavellian or a gaslighter has some kind of higher pleasure in mind, and it overrides the pig-like sensations of pleasure and pain experienced by their victims or subjects, you know. It doesn't matter if people are in pain because of starvation; we're working toward the higher value of a good society, or however they justify it in their heads. And, you know, on one hand, we can just chalk this up to a calculation failure on their part.

But on the other hand, it's really hard to come up with, within the context of utilitarianism, an argument against them, and the next case will show even more clearly why. John Stuart Mill, in On Liberty, said: "The only freedom which deserves the name is that of pursuing our own good in our own way, so long as we do not attempt to deprive others of theirs, or impede their efforts to obtain it."

And on the one hand, this is something I embrace, because I think it is a matter of empirical observation that different people define the good in different ways. What is good for one person is not good for another person. For example, I enjoy cycling, but not everybody enjoys cycling.

I know some people who do not enjoy cycling, and indeed even wonder why I would enjoy it. Other people enjoy cooking. For me, cooking is something I do in order to get food, and I do the minimum of it to get my food; it's not something I enjoy for its own sake. Here we have in the image a man preferring the pleasure of his own lawn: "It's more fun to mow with a REO," from an old advertisement. Despite, you know, the women in the fancy car, no, that's not for me; I just prefer my lawn. On the other side of that, though,

I've put a little image of Fleetwood Mac's Rumours album here, because there's a song on it, "Go Your Own Way." And it's a song about separation. And, you know, when each person defines their own good in their own way, there isn't this coming together toward a common good anymore.

Each person has gone off their own way, pursuing their own good. So although it's true we each have our own good, maybe it's not good that we all have our own good, because here's the result, or at least one result: egoism, and dude-bro on the right here. There are two kinds of egoism that we can draw: psychological egoism, which is the idea that the motive for all of our actions is self-interest, period.

That just is a fact, according to psychological egoism, as compared to ethical egoism, where the argument is that the motivation for all our actions should be self-interest. And you see the distinction between them, right? I think that psychological egoism is probably demonstrably, empirically false. I do think that, as a point of fact, some people perform actions which are contrary to their self-interest, or at least indifferent to their self-interest.

A mother caring for her child, for example, isn't doing this just out of self-interest. Although, you know, you can rationalize anything anyways, and there's no shortage of people out there who would argue: well, yeah, but she feels good, and she's satisfying herself when she takes care of her child.

And that really is why she does it. Or, you know, it's the innate instinct to care for a child, and by caring for the child she satisfies that innate instinct, and that is serving her self-interest. So you can twist and bend the argument around.

But I think, in point of fact, not everything everyone does is for their own self-interest. I have a thesis that I've talked about on various occasions in the past; it's called the butterfly thesis. Not that butterfly thesis; it's a different one. You drive around or cycle around places here in Canada, and no small number of people have wooden butterflies attached to the front of their home, just as decoration. And they actually spend money on them, because they're hard to make.

So usually people buy them from the market or the craft show or whatever. They're not getting anything out of doing that; nobody's paying them, nothing like that. They spend most of their time inside their house, so you just know it's not like they're looking out and seeing these beautiful butterflies.

They're doing it because it makes the neighborhood nicer. And to me, that's a good example of an unrewarded action that people do anyways. The other side of this is ethical egoism, the idea that all of our actions should be based on self-interest. And, you know, that's become a much more common argument in recent days, and deserves some discussion on its own.

When I was young, someone called Ayn Rand was becoming popular, not so much in philosophical circles, because philosophically she's, well, not believable, but in political circles, and other discussion circles. And the argument was that basically promoting your own self-interest is good, or, in the words of the movie Wall Street, "greed is good."

We've certainly heard that. Now, in the movies, of course, the greedy person gets their comeuppance, spends time in jail, and never really enjoys the fruits of their greediness. But we know that the real world is not like the movies, in that selfishness and greed are often richly rewarded.

And again, think of a recent president. It's hard to argue, strictly on utilitarian grounds, against egoism, particularly if you allow for a relativism of happiness and value. Why shouldn't you work toward your own self-interest? What actually obligates you to work for the good of someone else?

I mean, if I work toward my good and you work toward your good, arguably the maximum of good is being served. Certainly there's no easy way to say that it isn't, right? And in fact, if I sacrifice myself for you, there might be, in fact there will probably be, certainly from my perspective, an overall reduction of happiness in the world.

And, you know, there's no guarantee that what I'm doing is actually leading to your happiness. I might think I'm supporting your happiness, but probably I'll get it wrong, right? You know, the only person who can really decide what's good for you is you. And we see this argument made with respect to government all the time: government spending imposes its own value of what is good on a person, and really what should be done is to eliminate all taxation.

Let each person decide for themselves what to do with the money, because what they decide for themselves is most accurately going to reflect what they believe is good. Government is always going to fall short in this regard. Or any charity, or any sort of common pool, or whatever. It doesn't rule out interacting with other people, of course you do, but from an ethical perspective, your interactions with them are perfectly ethical if they are motivated by self-interest.

That's how the argument goes. It's a very strong argument, and an argument that has swayed a lot of people in the present day. I think that in the end it's unsuccessful, because I think in the end it's not possible to simply work only in your own self-interest.

But how do you couch that in terms of happiness and utility and consequences? It's not clear. And it's certainly not a slam-dunk case that you can just go out and say: well, look at what you've produced, isn't that terrible? Because what's been produced is, to a lot of people, not terrible.

This is especially the case if you combine egoism with a concept that Robert Trivers came up with in 1971, the idea of reciprocal altruism, which is a type of enlightened self-interest. The idea of enlightened self-interest is that you understand what deferred gratification means; you're not always trying to get the advantage in the exact present moment.

You can play the long game. And a lot of our characteristic examples of egoists, including the former US president, don't do that. They can't think beyond the next interaction with the next person and how to leverage that into some sort of benefit. But someone who has enlightened self-interest will work with other people, will use things like friendship, go beyond contractual obligation, perform altruistic acts,

get the good feelings that come from that, but even more to the point, create this virtual, or sorry, virtuous circle where all of our interactions lift all of us together: a rising tide floats all boats, something like that. Of course, a lowering tide lowers all boats, but that's what happens if you're not enlightened, right?
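This idea, that enlightened self-interest pays off over repeated interactions while short-game egoism sinks everyone, is usually modelled as an iterated prisoner's dilemma. That framing comes from Axelrod's tournaments, not from the lecture itself, and the payoff numbers below are the conventional textbook ones, just for illustration:

```python
# A minimal iterated prisoner's dilemma -- the standard toy model of
# reciprocal altruism. Payoffs are the conventional ones (3 for mutual
# cooperation, 1 for mutual defection, 5/0 for exploiting/being exploited).

PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    # Enlightened self-interest: cooperate first, then mirror the partner.
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    # The short-game egoist: grab the advantage every single round.
    return "D"

def play(p1, p2, rounds=20):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h1, h2), p2(h2, h1)
        s1 += PAYOFF[(m1, m2)]
        s2 += PAYOFF[(m2, m1)]
        h1.append(m1); h2.append(m2)
    return s1, s2

# Two reciprocators build the virtuous circle: 3 points each, every round.
print(play(tit_for_tat, tit_for_tat))      # (60, 60)
# A defector edges out a single reciprocator in their pairing...
print(play(always_defect, tit_for_tat))    # (24, 19)
# ...but still earns far less than mutual cooperation would have, and
# two defectors sink together: the lowering tide lowers all boats.
print(play(always_defect, always_defect))  # (20, 20)
```

The design point is the one the lecture makes: the defector "wins" each individual encounter, but over the long game the cooperators accumulate more than the defectors ever can.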

So again, this is how the corporate world works. Companies have a fiduciary duty to act in their own self-interest, so how can they justify it? How do they make it work? Well, through a process like this: cooperation, forming consortia, forming supply chains and networks, forming product- or domain-based ecosystems, market ecosystems, with the idea that this cooperation (certainly not collaboration) helps them all earn more money in the long run. And again, where, on utilitarian grounds, is there an argument against this? It is very difficult to come up with one. Well, here's how this worked out for me; your results may vary.

In fact, just this morning, in a different context, I posted the following on Mastodon (Mastodon is like Twitter, but without the evil). I wrote: the funny thing about time is that if you spend it on yourself, it will always feel like a waste of time, and it's only when we are doing things for others that our use of our time seems meaningful.

I put here an image from Bob Dylan's album Slow Train Coming, in particular reference to the song "Gotta Serve Somebody." And that's an important principle. And that makes, I think, the difference between the corporate practice, which I think most people would say isn't ethical or unethical, it's just amoral: it has no moral value at all. When all you do is seek to improve your finances, there's no ethical value in that. But serving someone, whether as an individual or as a corporation, when you're actually doing things for others, working towards some noble purpose, becoming, as they say, part of something that's bigger than ourselves, that's where this ethical feeling of value comes from.

Now, the question here is: is this a real kind of happiness, or is it just something I made up? And by that, what I mean is: does it correspond to real sensations, or something that I could at least in principle measure empirically, or at least recognize empirically? Or is it one of these things where I could never know for sure whether or not I was actually having that experience?

There's no way to know, no way to falsify, at least personally, a claim that I'm having that experience. And I think that's a good question. For me, personally, it made all the difference. You know, it's the difference between studying just to become smarter and studying so that I can apply the results of that studying to a good cause. Just becoming smarter seems pointless.

Applying it to a good cause is not pointless. And that is consequentialist thinking, but it's not egoist thinking. The value of the action, the ethical goodness of the action, the happiness of the action, comes not from serving myself but from serving someone else. Now, is it going to apply to everyone? Probably not.

Indeed, it might be the basis for one of the fundamental divisions in our society. Note that you can't achieve unanimity on that question: for some people, serving others creates pleasure; for other people, serving others does not create pleasure. And so you have two competing ethical systems with no real way of deciding between them.

Well, to address the problem of measurement, and to address even the problem of, you know, how are we going to calculate happiness, how are we going to distinguish between the value of egoism and whatever else, there is a principle we can appeal to called rule utilitarianism. This, again, goes back to Mill.

The idea here is that, instead of evaluating the goodness or badness of actions on an act-by-act basis, which for one thing is difficult to do (nobody does it, really) and for another might lead to some unintuitive results, you come up with rules, where following the rule will result in more happiness, overall, than not following the rule.

So that relieves us of the pressure of doing all these calculations; all we need to do is get the rule right. And then, just as in the case of duties, we can have strong rules and weak rules. A weak rule is kind of equivalent to a prima facie duty: it's a rule, but it's kind of a recommendation, and might be overruled by other things, as compared to a strong rule, for which there are no exceptions, period, end of story. And clearly you can see that any time you get into a rule-based system, you're probably going to want a little bit of fudge factor around the edges of a rule, because language really is a blunt instrument.

Well, there is the danger that a retreat, we'll call it that, into rule utilitarianism leads us almost inevitably to moral conservatism. This is an argument that has been advanced, and it's the idea that there are some rules that it would always be wrong to break, no matter what the particular consequences.

And I've got the image of protesters in Texas, because the recent anti-abortion law in Texas is an example of this, where they just say abortion is wrong, period, end of story. Doesn't matter if you were raped, doesn't matter if the child would not be viable, doesn't matter if your own life is in danger.

The rule is the rule. And the thing with this sort of approach is that there's always a higher good that can be appealed to. Again, it's a consequentialist position, and it's these ultimate long-term bad consequences that argue for the inflexibility of the rule. Now, in the case of abortion, the principle here is life, right?

That's why they call them pro-life people, right? And preserving the sanctity of life. The argument is that if you allow things that end life, you are eroding the sanctity of life. Now, that's a core principle of, especially, the Catholic Church: life is sacred. The Catholic Church has had over the years, you know, prohibitions against, for example, suicide.

And of course the longstanding prohibition against murder, which dates back to even before Catholicism. So there is a higher good here, a higher kind of consequentialist good. And the good, in this case, is being used to justify the idea that this rule should never be broken.

Well, unfortunately, it's not obvious that that results in a position that is ethically defensible, because you need agreement on this higher good. And even if you agree that life is sacred, you know, even the people who oppose abortion can find exceptions to that: many of them will support the death penalty,

many of them will support the use of force by police, many of them will support the use of the military in international conflicts, just to name a few examples. So it's not the case that there is unanimity about this higher good, and it just becomes something that's very convenient, and less and less about the consequences and more and more about the conservatism.

Another aspect that gets raised often in this context, and often from the perspective of moral conservatism, is the idea of responsibility. And we could do, you know, an entire course on the subject of responsibility. But essentially it amounts to the idea that individuals, and maybe corporations, and maybe governments, and maybe whatever, are ethically accountable for the consequences of their actions. And responsibility goes hand in hand with a consequentialist theory of ethics, right?

Because if you're not worried about the consequences, then you're not so worried about responsibility for the consequences; everything depends on the intention of the act and not the result. That's what leads to unintuitive ethical consequences in other fields, right? Talking about virtue ethics or duty: somebody sticks to a particular duty, or promotes a certain aspect of character, and is completely ethical,

but ends up killing somebody, a bad consequence. And that's unintuitive, but it doesn't matter in those principles of ethics. In utilitarianism and other consequentialist theories, though, it would matter, it does matter. And so people have to own up and take responsibility for their actions, which means dealing with the consequences, whatever that means. Sometimes it means accepting the punishment, because you can't fix the consequences; other times

it means paying reparations; sometimes it just means saying you're sorry, or accepting that, yeah, I did this, and it was wrong, and I promise not to do it again. What we mean by taking responsibility varies a lot. But when we're talking about responsibility, it's relevant whether or not the consequences were predictable, whether or not the person intended the outcome, or, conversely, whether or not the person displayed indifference to a bad outcome.

So there are, you know, consequences and there are consequences. I've mentioned this before: if I step on a butterfly and cause it to rain in China, and then flood, I am not personally responsible for the costs of the flooding in China, and nobody would expect that I am, even if it was a direct consequence, and even if we could trace the causal path from that

butterfly to the flood in China. Nonetheless, nobody's going to ask me to pay for it, because, you know, it wasn't something that I ever thought or ever intended would happen.

And because intent matters a lot, even in a consequentialist theory, you can assign responsibility to someone even if the consequence never happened, because the intent does matter. It takes more of a rule-based approach: you know, like, pointing a gun at someone and pulling the trigger is an action that should be considered ethically wrong.

That's an example of a rule, which is why we can assign a penalty to attempted murder even though the consequence did not happen. We have a case here where it could have happened, it was predictable that it would have happened, and it was intended to happen by the person who pulled the trigger.

So responsibility doesn't include accidental consequences, and does include intended consequences that did not actually happen. And I think people are generally happy with that concept. There's always going to be someone who argues around the edges, but I don't think, for the most part, that people feel they should be responsible for things that happen completely by accident.

Well, then, what are the problems with utilitarianism? I read Matthias Melcher's post yesterday or the day before, reviewing this and asking: well, what is it that bothers people about utilitarianism? What is the problem? And, you know, at first glance it seems to make a lot of sense. Even with the problems of Machiavellianism or egoism, we can work our way through that. And I think that's how we approach it, mostly: by trying to show that, in the long run,

Machiavellianism and egoism produce bad results for everybody. They produce more unhappiness than they produce happiness, whether in the simple sense that, you know, being selfish doesn't make you happy, or in the longer sense, the broader sense, that being selfish creates a dog-eat-dog kind of world that isn't really very pleasant to live in and witness, right?

So we can address those. But there are some really intractable problems for utilitarianism and consequentialist theories generally. I couch them here in a couple of sweeping generalizations. One of them is the question of the one versus the many; at least, that's how I'm characterizing it. I put that in the form of a few questions.

Here's one: is it better to give one person a million dollars, or to give a million people one dollar? Well, one answer is that they're both equal, right? But they're not, obviously. You give a person one dollar and their happiness is really only marginally improved, not by very much.

In fact, they might not even bother to bend over to pick it up. On the other hand, a million dollars is life-changing, and allows that person not to worry about money for the rest of their life, and to spend their entire life doing good, however that may be conceived.
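One way to see why this question resists an easy answer is to actually run it through an assumed utility curve. The logarithmic utility of wealth used below, and the $50,000 starting wealth, are modelling assumptions made up for this illustration, not anything asserted in the lecture:

```python
# A back-of-the-envelope look at "one person gets a million dollars"
# versus "a million people get one dollar each," under an assumed
# log-utility model of wealth: u(w) = ln(w). Both the model and the
# starting-wealth figure are illustrative assumptions.

import math

def utility_gain(wealth: float, gift: float) -> float:
    # Extra "happiness" from receiving `gift` on top of existing
    # `wealth`, under the assumed model u(w) = ln(w).
    return math.log(wealth + gift) - math.log(wealth)

AVG_WEALTH = 50_000.0  # assumed starting wealth for everyone

one_winner = utility_gain(AVG_WEALTH, 1_000_000)              # one big gain
million_recipients = 1_000_000 * utility_gain(AVG_WEALTH, 1)  # many tiny gains

print(one_winner, million_recipients)
```

Under these particular assumptions, the million tiny gains actually sum to several times the single windfall; pick a steeper or flatter curve and the verdict flips. That sensitivity to modelling choices is exactly the calculation problem the lecture is describing.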

We see this argument used a lot by people arguing against taxing rich people. Because, according to the argument, we could tax these rich people and collect a certain amount of money from them, but if we turn around and spread that money around the rest of the population, the amount is so small for each individual person that we're not really doing any good.

So there's no point in taxing the rich person; we might as well let them keep the money and let them do the good that they're able to do with it. That's an argument, and it's not a bad argument. On the other hand, does that mean that having rich people in society is ethically good?

That's something, I think, a lot of people find a little less intuitive. But okay, maybe we can work our way around that. So let's try this one: is it worth the sacrifice of one life in order to save five? Now, this is Philippa Foot's trolley problem, of course. The trolley problem is: if you pull the switch, you're going to kill one person.

If you don't pull the switch, the trolley will continue on its path and kill five. So the stickiness here is that you actually have to pull the switch; you've got to kill the person. And if you don't like it put that way, well, there's another example

I read in the Pojman and Fieser book. You come into a small town where there's an execution about to take place; a bunch of people are lined up against the wall, and the firing squad is there, and they're ready. And the captain comes up to you and says: oh, it's a special day that you're here.

I'll tell you what: these people here, they're all guilty, but since it's a special occasion, we'll have you shoot one of them, and we'll let the rest go. So, do you shoot one of them? Well, I mean, most of the people up against the wall are going to say you should shoot.

The captain, obviously, is going to say that you should shoot. Even the firing squad would say you should shoot, if only so that they don't have to be responsible for shooting people. Is it worth it? It's not clear. That's a hard question, because it's hard to actually put a measure, a value of happiness, on a human life.

Indeed, the question I ask is: is the hedon a common currency? And if you're wondering, those are two silver Hedon pennies that came from a place actually called Hedon in the UK. So there is a Hedon currency, but there are only three coins in existence.

So, is the hedon a common currency? Can we, for example, trade a life to slightly improve the happiness of everyone else? Now we're not saving anybody's lives, just making them a bit happier. You know, for example, we could argue that everyone would be happier if I went and... I'm trying to think of somebody whom everybody hates; I really shouldn't.

So let's just pick Charles Manson. Let's suppose, you know, everybody would be happier if Charles Manson didn't exist anymore, so I'll go shoot him. And let's suppose that the calculation, which is done by our AI happiness calculator, actually works out: yeah, it would produce a lot of happiness in the world if I did that. Is it then ethically right for me to go shoot Charles Manson? Well, you know, by that calculation, maybe I could just shoot an innocent person to make everyone happy. Especially an innocent rich person with no will: I'll shoot them, and then take their money and give it away to people. How about that, you know?

Suppose that made people happy; would that work? Or, by contrast, are there things, like, say, a human life, that we can't express as a value, that we can't trade off in that way? And this is the difficulty of utilitarianism: it does invite the possibility of these trade-offs.
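The lecture goes on to float the idea that a life might count as "infinite happiness." It's worth noting what that move does to any calculus: once one good is infinite, comparisons stop giving guidance at all. A tiny illustration using floating-point infinities (the finite number is arbitrary):

```python
# What happens to a happiness calculus if some good (a human life, say)
# is assigned infinite value? An illustration with IEEE-754 infinities --
# this is just a demonstration of the point, not a claim about how any
# real utilitarian calculation works.

life = float("inf")   # "a life is infinite happiness"
comfort = 1000.0      # some finite amount of ordinary happiness

# Saving one life vs. saving five: both come out "infinite", so the
# comparison gives no guidance at all.
print(life > 5 * life)         # False
print(life == 5 * life)        # True
# And no finite amount of comfort ever registers against a life:
print(life + comfort == life)  # True
```

So assigning an infinite value protects the good from being traded off, but at the cost of making the calculus useless exactly where we wanted it, in the one-versus-five cases.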

You know, it's kind of like carbon pricing for the soul. Because we can start trading. You know, maybe we're going to agree: no, a life is infinite happiness. Okay, well, how about freedom? If I enslave a certain portion of the population, that would certainly make other people happier, because they'd be richer, because they'd get all this free labor. Would the economics of that work? Is that okay?

For a long time, the economics of that did work, and at the time people thought of it as ethically fine to have slaves. Today we don't think so, but it's not just because the calculation changed. You know, freedom of speech: our society would be a lot calmer, a lot more harmonious, if we didn't have freedom of speech. That argument could be put forward, has been put forward.

In many cases, you know, freedom from arbitrary search and detention, or any number of other actions, you could run the numbers and get the calculation to come out your way, right? You know, maybe you don't deny freedom of speech to everybody; you just deny it to a certain subgroup of society, and that could produce the result.

You know, if I squelched the freedom of anti-vaxxers to be anti-vax, that increases the happiness in society, because it makes it less likely that people will resist being vaccinated. Say, oh, now that argument's sounding a little bit better, isn't it? What if I shot the anti-vaxxers? That would also have the same effect.

Maybe that's too strong. See, that's the problem, right? It's hard to think of ethics in those sorts of terms. So it's not the question of the one versus the many, either way of depicting it. It's calculating this versus that that creates the problem, and it seems like ethics shouldn't be that kind of thing.
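The "AI happiness calculator" being criticized here can be made concrete with a toy sketch. This is a hypothetical illustration of the counting problem, not any real system; the function names and all the numbers are invented.

```python
# Toy utilitarian "happiness calculator" of the kind criticized above:
# it sums the change in happiness an action causes for everyone affected,
# and approves any action with a positive total. All figures are invented.

def net_happiness(effects):
    """effects maps each affected person or group to a change in
    happiness (arbitrary units, positive or negative)."""
    return sum(effects.values())

def is_approved(effects):
    """A pure consequentialist tally: approve whenever the total is positive."""
    return net_happiness(effects) > 0

# One person loses everything; fifty people each gain a little.
# The tally still comes out positive, so the calculator approves it,
# which is exactly the tradeoff problem described in the text.
shoot_the_rich_recluse = {"victim": -1000, "fifty beneficiaries": 50 * 25}

print(net_happiness(shoot_the_rich_recluse))  # 250
print(is_approved(shoot_the_rich_recluse))    # True
```

The point of the sketch is that nothing inside such a calculator can mark a human life or a freedom as untradeable; whatever value you assign can be outweighed by enough small gains to others.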

And those were cases where we agree about the calculations. What about cases where we disagree? And there are two types of this: one, where it's an internal disagreement, and two, where it's an external disagreement. The diagram on the right demonstrates an internal disagreement. Same government in both cases: on one hand, the government is saying peace on earth,

good will toward men; on the other hand, the government is saying arms and ammunition for sale, orders filled promptly. So to that particular government, both of those are ethical values. Both of those produce good and benefit, right? Peace is good for everyone, but so are good sales. So we have this conflict, and it's not clear how to resolve this conflict.

Similarly, we can have two distinct people with conflicting calculations of happiness, and that's the case in the anti-vaxxer case, right? Some people will agree: yes, we should shut the anti-vaxxers up, because that'll produce more happiness in society. Other people will say: no, shutting people up, in the long run, will produce less happiness in society, because we as a democracy depend on being able to express these minority opinions.

How do you do the calculation? That kind of question is coming up all the time. Any time people are talking about political correctness, or being cancelled, or burning books, as, again, Texas brings us another example, it's a question of balancing these two objectives. On the one hand, the speech or the book or whatever seems to produce a harm, but on the other hand, squelching the speech or burning the book also produces a harm.

How do we decide? And underlying this is the question of whether there really is an objective standard of happiness, an objective standard of what counts as good. It really does seem to just depend on your point of view, and that's a problem for anything that presents itself as an empirical approach to addressing the question of morality.

Even if we could have our ethical AI system run the numbers, different AIs will produce different results, and that leaves us with a problem, unless we can somehow, all of us, get together and determine what the actual objective standard for happiness is. Now, in our society, it's money. And I've been presented in my own work with that argument a lot of the time, right?

I need to show what the benefit of my work is, and the only way to show what the benefit of my work is, is to show how much people are willing to pay for it. Now, happily, that hasn't been the prevailing sentiment over the 20 years I've worked for this one organization, but it has come up from time to time.

It's certainly something that I see expressed a lot. But it sets the stage for what I think is the final, ultimate objection to consequence-based theories in general and utilitarianism in particular, and that's the question of moral luck. All right, I think about it this way, because I see this happen

a lot. A person goes out, gets drunk to the gills, gets in their car, drives down the highway, and kills somebody, and spends several years in jail. On the other hand, on the very same road, from the very same bar, another person gets drunk to the gills, gets behind the wheel,

drives down the highway, nothing happens. No time in jail. The two acts are identical. The only difference between them is a matter of luck. Why do I say luck? Well, because they're drunk to the gills; they're not capable of hitting or avoiding anyone. I mean, that's why drunk driving is a crime, right?

Because you are not, in fact, in possession of the ability to drive. So it is a matter of luck whether you hit someone or didn't hit someone. But we address these cases differently, because the result was different, and that seems odd. We put one in jail;

we don't put the other in jail, unless somehow they're caught on something unrelated. And that seems like luck, and it doesn't seem to me that ethics should depend on luck. I put a diagram here as part of this final slide: how self-made billionaires got their start, right? So we have Bill Gates, whose mom sat on the same board as the CEO of IBM and convinced him to take a risk on her son's new company.

Or we have, I'm not sure who that is, who started Amazon: Jeff Bezos started Amazon with $300,000 in seed capital from his parents and more money from other rich friends. I think this is Warren Buffett, but I'm not sure, the son of a powerful congressman who owned an investment company. Or Elon Musk, self-made

billionaire whose dad happened to own an emerald mine in apartheid South Africa. Now, I raise these examples because people like this, first of all, just in and of themselves, are very often depicted as instantiating ethical goodness. But certainly they take their money and they do things like start foundations,

or even pay their taxes with it, and people applaud the ethical virtue of this. But they are in their position simply because of luck. The fact that one person is super rich and can spend a ton of money addressing disease, and another person is dirt poor and couldn't spend a dime doing that,

there's no ethical difference between them. One person was just lucky enough to have all that money to spend; the other person wasn't. And that, generally, is what characterizes utilitarianism one way or another. The difference between an ethical action and an unethical action, when it is evaluated strictly according to the consequences of that action, or even of that type of action, is a matter of luck, no matter what a person's intents were,

no matter what a person's means were. It's a question of luck. And I find that coming up with a system of morality that is based on consequences, and therefore assigns outwardly extra-large ethical value to the extra-large actions and contributions of the rich and powerful, is very convenient for the rich and powerful. When you want to look at the outcome, it allows you to translate being powerful into being good.

In a consequentialist theory, power becomes goodness. And that doesn't seem to me to be an ethical theory. It's a theory, all right, I won't deny that it's a theory. It's a way of calculating how much, maybe, society finds worth or value in an act or in a person or whatever. But, you know, I don't see it as determining the ethics of an action simply to look at where that person just happened to find themselves

and what that person just happened to do as a result of that. So, as I say here on the slide, whatever utilitarianism is, it's not ethics. So that's my presentation on consequentialist theories. I hope you enjoyed that. I hope it was informative, first of all, and filled in some of the background on where this line of thinking comes from.

And I hope it also offered some thoughts about where this line of reasoning can go wrong and why, and, you know, if we take this theory now and apply it to artificial intelligence, why it won't work for artificial intelligence, like the other theories. And so I'll leave the discussion there, and I'm sure we'll have more to say on this as time goes on and as we get into the other sections of the course.

So thank you for joining me. I'm Stephen Downes, and I'll see you next time.

Module 5 Part 2 - Introduction

Transcript of Module 5 Part 2 - Introduction

Unedited audio transcript from Google Recorder, which may include AI-generated profanity.

Okay, so, a couple of minutes of setup done, and I've got the live session up and running for Ethics, Analytics and the Duty of Care. We are beginning module six, which is focused on the duty of care. And I've got Mark with me here, sort of, in the live Zoom meeting. I say sort of because I see his picture, but I haven't heard his voice or seen his face beyond the picture. But I trust he'll make it when he does.

And now his picture's changed to blank purple. So Mark's obviously having issues on his end, but we can work our way through that. Oh, there you are. Are you not hearing me? Let's see, why would that be the case? Okay, you hear me, you can't speak, is that right?

Let's go to the chat.

And because I'm notoriously bad at interpreting sign language... You hear me okay. Good, awesome. But I can't hear you, and that is probably not your fault. Oh yeah, my... there we go, let's try that. And here now... I hear you beautifully now, okay. Yeah, I thought it was me.

No, it wasn't you. Every once in a while my sound resets to sound off. I don't know why, but there you have it. So, how's it going? I don't know, it's been exceptionally busy. Hmm. And I see you snuck another video in on Saturday; I just saw it.

I mean, I just saw that it was there. So, yeah, it's not gone in yet. Yeah, and I'm now three videos behind. That's how many I need to do to finish off the previous module. There's a lot of content in that module. And so you mentioned perhaps pushing it a week. Yeah, I think that might be a good idea. We can make that decision right here.

Thanks. So you don't think it'll bother other people? I was worried that extending the course a week would extend it beyond what they had committed to. Of course, that's mostly you, and... yeah, yeah, we can have a democratic, yeah, outcome. But I agree, I mean, that was so much information last week.

Yeah. And admittedly, you know, I only spent an hour or two; I usually spend more. Yeah. But now you say you have three more videos, so to me that says this is the proper time to improvise. Yeah, yeah. And I don't have a problem with that either, because that would allow me to catch up.

Yeah. And, you know, so it's the mid-whatever break. Yeah. Yeah, that's it. Yeah, yeah. We're not rational animals, but we are rationalizing animals. Yeah, yeah. And I'm pretty happy to rationalize, and that would be good. Yeah. So let's see what Sherida says if she pops in; she's normally a Friday person more than a Monday person, but I don't think she'll object.

I don't think so either, and she's retired, like, yeah, so, whatever that means. But especially since I really would like to nail down this section on approaches to ethics, because there's stuff still to say there that maybe we haven't gotten to. Yeah. Yeah, this is the turning point in the course, you know. And, yeah, you know, I'm trying to sum up all of ethics in a week.

Yeah, that's a bit of an overstatement, but... So, for those who are listening to the video, here's what we've done so far. We introduced the topic of approaches to ethics, and to this point I have done three videos. The first one was on virtue, or ethical virtues: the idea of ethics is to develop the best character that you can, whatever that amounts to. The second video was on the concept of duty, which is a pretty core concept in this course, given the name of the course.

And we're actually going to revisit the concept of duty again a bit in the next module, but this was the treatment of the ethical concept of duty. And then the third one, which I snuck in, as you say, on Saturday, was on consequentialism, and that is mostly utilitarianism,

but mostly the idea that the ethical value of an act is based on its consequences. And, yeah, both duty and consequences ended up as hour-and-a-half-long videos, and that was the minimum I thought I could give them to give them a proper treatment. And even then, you know, I look back on those videos and I said, well, I missed this and I missed this and I missed this, but what are you gonna do?

Especially since I didn't want to just give, you know, the standard philosophy-text version of these theories. But I also wanted to, you know, give them a contemporary perspective, linking them where I could, for example, to, you know, the stuff that happens on the internet, some modern-day tropes and memes, and, of course, the topic that we're looking at: ethics and AI.

So, and I think I've done that with those. So, what I need to do next... And so, yeah, I think we'll spend this week doing that, and also I can upgrade my code a bit, because I know Sherida especially wanted to see the explanations along the sides of the grid.

So this will give me time to upgrade that and get it working nicely. But the next video I need to do is on social contracts, which is more of a political theory than an ethical theory, but there are huge ethical implications, and there's certainly a not-small school of thought that says that ethics are determined by social contract. Then a video on meta-ethics,

or, as my old philosophy professor used to say, "meta-r ethics". I could never get that; it went over my head, I always lost it. No, he's from Britain, right? And the British always pronounce their A's with an R, well, not always, but certainly at the end of a syllable or word.

So, you know, like, the Queen always says "Canadar" instead of "Canada", and it's one of the things that the Queen does that drives us nuts, because there's no R in Canada. But, so, meta-ethics is basically the study of what is it, exactly, that grounds our ethical theories.

You know, and I think a lot of... well, it's about 50/50. Some ethics courses do that first, but it's so abstract and so theoretical, it's really hard to get a handle on, right? So a lot of them do it after talking about the individual theories,

because by the time we get to meta-ethics, you've got four candidates, basically, as an approach to theory, right? Virtue, duty, consequentialism, social contract, and you can probably throw in a few outliers there as well. And so the question is, well, how would you ever decide between them? Or, you know, if you want to mix and match, on what basis would you mix and match?

So meta-ethics is kind of important. But then the key part of this module is one I'm calling the end of ethics. And this is one of those double meanings for words that I love so much, because we can say the end of ethics, what is the goal of ethics?

But also, the end of ethics: is this the end of ethics? And I think there are elements of both in that. And that'll wrap us up, and that puts us in a good position to do the stuff on duty of care, which is a whole, completely different approach.

And that's why I wanted to do the course, is, you know, it takes this whole discussion of ethics and turns it on its head. But to see how it turns it on its head, we need to go through this. This course is something like 90% preliminaries and then 10% of,

okay, now that we've got all the preliminaries out of the way, let's see what work we can do. So, yeah, that would give me three videos to do for this week. So Tuesday, Wednesday, Thursday, or, yeah, something like that. Usually I'm sort of falling into a pattern where I allocate Wednesdays as a day when I do programming and the other days as video.

So it might be Tuesday, Thursday and Friday, but either way is fine. So, yeah, that way you'll get your content in the course, you know, rather than, say, you know, at the end of the course, like, you know, I need to do those three videos, but all of a sudden that's it.

Yes, that's what happened today. So I think this would be good. And then, you know, those of us that didn't study philosophy, it actually catches us up, yeah, on the fourth, yeah, approach, so that we can then participate in the meta-ethics. Exactly. And that's part of the purpose of this, because most people in this field haven't studied ethics, or philosophy in general.

They just haven't. So in most cases they're working from their intuitions about ethics, and their intuitions are fine, but, you know, there are all these alternatives that maybe they haven't thought about, or, not having considered them, hadn't been exposed to. And so, yeah. I had thought about that; I'd spent a lot of time on ethics in my 20s,

yeah, as a young person, you know, and, you know, surveyed the religions. And I was from a not-religious family, except my grandfather was a preacher on the weekend. So one month a year I lived in a very religious household, right, because we visited them for a month every year, right?

And then the other eleven months it was in a household that, yeah, we might go at Easter, and, you know. Yeah, so, cultural Christians. And, as an aside, I grew up in a Quaker town, which I just returned to, and for my other class... there's my phone. Yeah.

For my other class, the writing class I'm taking at college: yesterday I attended my first Quaker meeting. Oh yeah. So, it's very cool. Anyway, I don't want to get distracted. So, the Quakers are meeting by Zoom, yes, for them, yeah, yes. And to get there I had to walk past a prosperity gospel church, right,

that has been meeting in person every day, and they have events, and they bought an old business... it's like, yeah. Originally I was gonna go there; then I checked them out for my classes. I went because it's the closest church and there's so much activity, right? Then I checked it out and realized it was prosperity

gospel. God wants you to prosper, yeah, and this... you want your church to prosper, they're there for your community, yeah. And they put five giant screens behind what used to be an altar; now it's a stage, you know. Anyway, so I went to the Quakers. Yeah, not really distracted, though. So, in my 20s I was aware of the Quakers, but they were conservative old white Republicans, and this meeting, which was, I think, actually in Nixon's

hometown. I might add, Nixon's presidency is when I was looking at this, after avoiding the draft, not dodging, but, yeah. I love going to the ground and looking at a different thing. So, you mentioned Lao Tzu. I looked at Taoism; yeah, you know, Buddhism attracted me more, so I've stuck with Buddhism for, let's count, 40 years.

Yeah, as a philosophy. Never joined any, yeah, sangha or group, but have studied it. It's been my moral teaching, my, sure, whatever. And the most attractive feature is the very first thing you learn is: don't take things on authority, check it out for yourself. Yeah, there's a word I hate so much,

I couldn't even say it. And so that, you know, attracted me in the beginning, and I stuck with that, and that's the primary teaching, and it's: don't believe me, check it out yourself. Yeah. And so there was a point here. Oh, and so, I spent a fair amount of time on it, compared to most Americans, and compared to most college-educated people.

Yeah, right there too. And I think that you're pointing to what you're calling intuition, or whatever precedes it. All right, I keep pointing to class as a possible distinction. Yeah, and now I'm sort of clear on why I think it's important to bring that up. And now I see that it is, because with different starting points we're obviously going to differ.

Yeah, and I wouldn't deny that. And the only caveat that I would throw into that is that class is just one of many starting points. And you've mentioned the whole intersectional discussion of ethics in the past, and that's to come up again in the next module as well,

because, you know, all of these issues, all of these backgrounds, come into play. In care ethics, background really is critical and important, unlike these systems of ethics that we've been talking about here so far, where the suggestion is, well, there's one system of ethics that applies to everyone, and, you know, basically we just need to figure out what that system is. And there's so many... you know, I mean, it's a pattern just in reasoning generally, and in thinking about a lot of these subjects generally, the idea that there should be one X for everyone, where now name your X, you know, one taxonomy,

I mean, one, you know, one ontology, one web, one diet, yeah. Yeah, one, yeah. You know, or even a limited range of a lot of things, you know, like, you know, clothing, you know, people should dress a certain way, etc. And that still exists. You know, even before this session I had to go through the thinking, well, do I keep wearing this shirt,

you know, seeing as I'm on video and everything, or do I put on the nice shirt? And usually I put on the nice shirt, because there are standards. But today I decided, no, I think I'll wear this shirt, because I like this shirt and I'm comfortable in it. And, you know, I'm not going to be impressing anyone with my appearance anymore.

I've long since passed that stage of my life, not that I ever impressed anyone with my appearance. But it's interesting, you know, and so we think about background and experiences, you know. And, like, the ethics of beautiful people is different from the ethics of non-beautiful people. The ethics of men is different from the ethics of women, which is a key point in the next module. The ethics of the rich, insofar as they have any, are different from the ethics of the poor, or the working class.

And as you've pointed out, we could go down the list. You know, in some of the work that I did on care ethics, I also looked into indigenous ways of knowing and indigenous ethics, which again constitutes another range or another classification of ethics. And, you know, I don't feel like I'm in a position to say about any of these that they're wrong,

even the rich person ethics, because, you know, if you're a rich person, you're seeing the world in a certain way, and in many respects it's not your fault that that's how you see the world. How could it be? You know, I mean, on the one hand, I say, you know, wealth

very often is arrived at on the basis of luck, either that or larceny. I should write a book: Luck or Larceny, How Wealth Is Created. That might be great. But, you know, I'm quite willing to criticize the larceny. But the luck aspect, I mean, you know, what do you do, right?

You know, somebody was born early on, like, even like Donald Trump, given I don't know how many millions of dollars by his father, right, 400 million, okay? Now, admittedly, he lost it all, but, you know, it wasn't his fault that he was given 400 million dollars. And, you know, most of us, I think, would take the money.

And I don't think, you know, from the perspective of Donald Trump, I don't think we'd say it was wrong that you took 400 million dollars. We could say, in a general sense, maybe, that it's wrong for any one person to have 400 million dollars. Yeah. Well, yes, yeah.

But, you know, I mean, even that... why do we think that applies to everyone? You see how we so easily fall into this discourse, right? Nobody should have 400 million dollars. And, you know, you could argue... it's just that, you know, away from the personal: I don't think the Pope should have control of this much.

Well, yeah. As much as the Catholic Church has? Again, here we have a case of a small number of people in charge of a vast quantity of wealth, and in so doing tipping the scale, if you will. And that has political implications. That's a point I made at the end of my consequentialist ethics video, because, you know, consequentialist ethics says that, you know, the goodness or badness of an act or a rule is, you know, the consequences.

Well, rich people produce more consequences than poor people; it's just a fact of their wealth. So this kind of theory sort of automatically gives rich people more ethical standing, and that doesn't seem right to me. Now, we could calculate their ethical responsibility; nobody talks about that. Well, but, you see,

responsibility isn't part of consequentialism, right? That's a totally different theory, you know? Okay. Yeah. I mean, now we're... you know, I mean, because in a sense it doesn't really matter, say, what percentage of his wealth Jeff Bezos spends, right? What matters is the result, you know. And we can say he has an obligation to do good and produce good consequences,

but does he have an obligation to produce more good consequences than you or I? He could do that with his loose pocket change, right? He could do that with money he's lost underneath the cushions of his couch. Yeah, he could end homelessness. Well, he has so much more than you or

I could do. Like, you or I could... he could, yeah. You know, there's hundreds of millions of dollars. And I saw a document from the United Nations this morning saying that to end illiteracy around the world would cost $17 billion, and that that would produce literacy for something like 770 million people, in that range.

And I don't know if I have the exact number who are still illiterate, so, you know, a bit less than one seventh of the world population, maybe one eighth of the world population, right now are illiterate. 17 billion dollars to solve that. So he could. Ending illiteracy wouldn't really put a big dent into his fortune.
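The back-of-the-envelope arithmetic here is easy to check. The $17 billion cost and the 770 million people are the figures quoted in the discussion; the net-worth figure below is an assumed, illustrative number, not a verified one.

```python
# Rough check of the literacy arithmetic quoted above.
literacy_cost = 17e9           # dollars (UN figure quoted in the discussion)
illiterate_people = 770e6      # people (figure quoted in the discussion)

cost_per_person = literacy_cost / illiterate_people
print(f"about ${cost_per_person:.0f} per person")  # about $22 per person

# How big a dent would that put in a very large fortune? The net worth
# here is an assumption for illustration only.
assumed_net_worth = 190e9      # dollars, hypothetical
print(f"about {literacy_cost / assumed_net_worth:.0%} of the fortune")  # about 9%
```

At roughly $22 per person, the quoted figures hang together, and under the assumed net worth the whole program would consume less than a tenth of one fortune, which is the point being made.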

He could end illiteracy on the planet. And the United States government increased the defense budget and found more than that, over what the Department of Defense requested; I mean, it just gave them more money. Yeah, yeah. So, yeah, it's all relative. Yeah. So the thought crossed my mind:

if Bezos gets rich enough, can he become a country too, and join the United Nations, like the Catholic Church? Well, no, no. But again, you know, if we look at illiteracy: you or I could, well, let's give ourselves a lot of capacity, and say that over our lifetimes we can help a hundred people become literate, you know, taking them from illiteracy to literacy.

If we really focused ourselves on that, I mean, that would have to be probably our life's work, and we would think that's pretty ethically noble, right? We're definitely producing good consequences. So on what basis do we require more from Jeff Bezos? Well, there would still be 769,999,900 of them. But he didn't cause that, right?

Yeah. These people were illiterate, well, I mean, not before he was born, because they were born after he was born, but you know what I mean, right? It's unrelated to him. So if he goes out and makes a hundred people literate, he's done as much for the world as you or I, right?

That's what consequentialism tells us. Okay, so, I still haven't... I have to watch that video. Yeah. Well, it basically doesn't say much more than what I've just said right now. So, yeah. And it says other things too, covers other aspects. But this is the thing that bothers me with consequentialism, right?

Any sort of progressive contribution to society, of, you know, the sort that, you know, the right wing would say, you know, is basically theft, or, you know, taxation at the point of a gun, or however you want to phrase this, right? Or just unfair. I mean, the right wing typically calls for a flat tax and feels that that's pretty generous, because the flat tax is a percentage of an income.

But we'll leave that aside. Any progressive allocation of responsibility for solving a problem like illiteracy isn't based on consequentialism; it's based on something else. And that's what creates the difficulty. Because, you know, I was reading Matthias Melcher, who said, you know, I don't really see what's wrong with consequentialism.

And at first blush there's nothing wrong with consequentialism: you produce good consequences, that's ethically right. The problem is it's a counting kind of theory, you know? And it's not based on percentages, or, if it is, how do you justify that? How do you justify asking, you know... or, you know, maybe there's a way of counting.

But then, if you have different ways of counting how you've created good, how do you choose between those? It just becomes this quagmire, where I agree producing good consequences is ethically good. Yeah. So, we're halfway through the hour, and I actually brought a little agenda. Hmm. So perhaps I can ask a couple of questions. Absolutely. Unless you have something else you want to get to.

Oh, I mean, why have these live sessions unless we can do something like that, right? And, you know, I mean, I can produce videos until I, like, drop from exhaustion, and I almost have. But, yeah, this is why we have the live sessions, okay. So, again, you know, I didn't see the last video, but, yeah, I did the one before.

So I'll start with... so, continuing on from my first comment: as a youth I put in some time trying to decide on my big picture, okay? And despite that, it ended up in... well, at the time it seemed like a good idea. And then going back to school as an adult, middle-aged, and not the 24-year-old, which is what they call a mature student, going in with the 23-year-olds.

Yeah, I was asked in my program to examine my ethics, and so I went through another review. So, you know, I've been through it more than most people I bump into. And in school I did have one particularly good professor who went through ethics. It was in a school leadership program, but it was the ethics component, and he boiled it down... he's a big Star Trek fan,

so he had one prime directive, what he called the life ethic. So for him, all the other ethical considerations or systems came under: does it promote life, or does it not? And then everything under that got sorted. Right, and that was, you know, a good touch, though. Keep going.

Yeah. And then I was looking through the slides. You know, I watched the presentation and then went through the transcript and flipped the slides. And so I'm thinking about... I can't get the autonomous AI out of my mind. Every substance seeks the preservation of its own being, according to its nature, right?

So I selected a little piece from the slides: every substance seeks its preservation. And then Rousseau, you know, talked about autonomy. And then Kant said there's no good without will, and his character... yeah. No, that's fine. So this, you know, again points at autonomous, intelligent, yeah, vehicles.

We're talking about vehicles and such. And so, if I add those together: that thing is going to preserve itself, and, you know, it's been given autonomy by its makeup, right? It's going to preserve itself, but without will it can do no good, right? So, you know, that got me thinking, and to bring it to AI:

it's, you know... so my intuition is that AI needs oversight. Current AI. My intuition is that autonomous robots, and artificial intelligence, is a bad idea. That's my intuition. I probably won't... I don't know. And we came to the conclusion it's too late; it doesn't matter, right?

Right. Intuition, autonomy, you know, autonomous things are out there, objects. But a perfect illustration was, I was looking at the transcript, and my favorite AI translation was "the tragedy of the comments", right, which I think is a book title, certainly a blog post. Yeah. And the one that I'm going to mention here without saying it points to the need for all transcripts to be reviewed by a human, because Google transcribed the philosopher Kant as CUNT.

Thank you. We don't want that on the internet, and that occurred several times. And so, an example of the need. And then a person could say, well, you know, it's learning, it's a young AI, and it will eventually recognize the context, realize you're talking about the philosophers, because it also misspelled Rousseau.

And so, you know, in that case it would improve. But I think that points to just a basic problem with AI: that it's constantly improving, right, by its nature, right? It can never be perfect, right? If it's constantly improving, isn't that an admission? I'm not a philosopher, but I think we can safely say it will never be perfect, depending on your definition of perfect.

But, you know, I mean, our definition of perfect is probably always going to exceed whatever an AI is capable of. Well, and then, yes, that's where it leads directly. Then, to me, that requires oversight to ensure that autonomous things, objects, do not kill or, you know, cause harm; and well, what's harm? That opens things up.

But if we go just to the prime directive that autonomous things do not kill, and then if they do kill, then, you know, something has to be done. Now, I'm not for a death penalty for humans, right? But I'm for the death penalty for human-created objects like corporations.

I've been for the corporate death penalty for a while now. But, you know, every time I bring up the conversation with regular people I have to explain it, but it seems obvious, when corporations like Pfizer, a currently much-discussed corporation, have over the years racked up billions of dollars in fines for harm to humans, fines for regulatory misconduct, I mean, on and on, right.

Yeah. So I would be for the death penalty for Pfizer. Now, of course, you know, practically, it would split off into multiple corporations, so that's why. I mean, we did that one time with American Telephone and Telegraph. Yeah, that happened. Yeah. It was arguably a good thing. Though it reassembled itself in a new manner, and arguably it could be done again.

And that may be what regulation requires. I don't think so. And then, so that led me, and yeah, that's fine, to wrap up, to the idea: so, are these artificial objects, so the machines or computer programs, whatever, are they then slaves? Exactly. And that brings in that whole discussion.

And then, since slavery and capitalism co-developed, and capitalism is basically based on property rights; slavery is property, capitalism is property, however you want to say it; they both allow for slavery of different sorts. Then I think that points to: before we even put these objects in the world, I think we should have that discussion about property, for sure, and slavery.

Maybe. Yeah. So, yeah. And so, the form of argument, just for fun, that you've offered here is what's known as a reductio; the full name is reductio ad absurdum. Yeah. So you've taken the premises, you've followed them through to their implications, and you've come to the conclusion that the implications are absurd and you couldn't accept them, right, in some way; or at the very least, they present intractable contradictions that are really hard to untangle, you know?

And it's interesting, you know, we could formulate your argument even a slightly different way. So let's again begin with the premise that all life is good, right, based on the idea that anything living will attempt to preserve itself. And let's take that as true, even though in some cases it's obviously not true, like suicides and that, but we can just say, well, suicide is bad.

All right? So let's just say that all life seeks to preserve itself, and so the preservation of life is the highest good. Now let's take a shortcut, because you went through and talked about artificial intelligence, and artificial intelligence can't be trusted. Well, the same is true of humans, right?

There's no perfect human either. I mean, it's the whole point of a lot of religions. So if you give humans autonomy and will, and just as a bracket, I think we would have to retranslate that as agency or something like that, but we'll set that aside; there is a sociology there.

Yeah. So we know that humans who are free and autonomous will kill each other. It happens. Look at the murder totals. Especially if you give them weapons, they're even more likely to kill each other, and the deadlier the weapons, the more so. And in groups they're even more deadly.

So clearly, and there have been any number of science fiction stories along these lines, clearly, if we take life, the preservation of life, as our highest virtue, then, except for a very few enlightened philosopher types, the rest of humanity needs to be enslaved. That's the only answer: actually put them in chains to prevent them from violating the injunction, right?

Okay. See, right, so that would be consistent, right, with this principle. In fact, that's the only way we can actually carry out the principle, right? And in fact, we could, you know, I mean, there's an awful lot of wasted opportunity for life in humanity, right? And, you know, I mean, once they're enslaved, you know, we can remove the whole...

There's a huge period in a young man's or young woman's life where they're still trying to find their way, to meet a mate, etc. We can circumvent that; we'll just match them and require that they procreate and create more life. And why not just do that? Yeah, yeah, I mean, that's what we do.

So if that's our purpose, you know, be fruitful and multiply, we can create a lot; we can create hundreds of billions of people. And of course, with slavery, really the only limit on how many people we have would be the carrying capacity of the planet, right? And we could extract every last calorie out of the planet.

In order to feed the hundreds of billions of people that we produce. And, you know, and then the idea here, too, is that you set it up in such a way that it's basically perpetually sustaining: as long as the sun is shining, as long as the rain falls, we can have these hundreds of billions of people, and we just carry on like that for all time, preserving life in some sort of stasis.

Yes, yeah. And presumably that's a bad thing. Doesn't sound very good, doesn't sound very good. So this culture developed law, not to prevent the taking of lives but, well, initially imprisonment was to rehabilitate takers of life, but now we've gotten to punishing the takers of life.

There's a very famous trial going on right now in the United States about a 17-year-old that illegally acquired a weapon and shot people. Yeah, I heard about that here too. We've seen it in the United States. It's, yeah. But so, I agree with you: enslaving everybody to prevent the taking of life is a bad idea, because, see, I'm not a philosopher, but I know, you know, we can never predict which slave would transgress, right?

Well, that's why we assume that they all would. That's the only way, on that assumption. Because I was talking about AI, but you brought it to humans. Yeah. I know that was kind of a nasty move of me, but we know that humans do have some level of agency; but, of course, they don't all kill.

Yeah, only a small percentage, a very small percentage, but we don't know which ones; that's why we have to enslave all of them. Well, that's one approach, you know? But the approach this culture has taken is a different approach: to remove them from the group. Yeah, but that approach is another failure.

Look how many murders happen every year. So, from this, you know, nothing is going to get us out of this; it's human slavery. Yeah, you know, to move that discussion another step: in this culture we took a different approach. Yeah. And so, if you want to call it punishment, or whatever it originally was supposed to be.

Yeah, redemption. But, but again, my concern was with AI; so these are not organic. Well, again, you know, they came out of the same universe that we did, so there's a continuity with us. Yeah, but they're not an organic life form, and at this point they don't self-engineer. Though there's another issue: that, you know, it won't be long and there'll be something.

Yeah. And we already have AI that writes code. I mean, that exists now, so they can reproduce in software. Now, to me, that has the potential to end humanity, and almost anything else. If they can reproduce enough, and if somehow they can learn how to make collective decisions, they might decide we're vermin.

Mm-hmm. Yes, vermin. Yeah, but the question here now is: on what basis do we say they are not life? If they have a desire to preserve themselves, which we say is the hallmark of life, and they have autonomy, or agency, and will, you know, they can carry out their intentions in the world, right?

And maybe some other tests. I mean, we could give them an IQ test or whatever, you know. Certainly, if they've learned to reproduce, they've probably satisfied certain minimum conditions. Yeah, there's the Turing test, right? Could they pass for human? But, I mean, a lot of that...

As we've learned on the internet, a lot of humans can't pass for human. I know I can't pass those things where you pick out the street lights in the grid. What? How much of a street light counts as a street light? I don't know, but a machine trained on the right data could pass the lot.

Yeah, yeah, exactly. So, you know, and for that matter, I mean, okay, they're not organic life, meaning specifically they're not carbon-based; they're silicon-based. But what about animals, you know? I mean, if all life is good, is the life of animals good? A lot of people...

Yeah, a lot of people argue, and not without justification, that, yeah, I mean, if the idea here is to preserve life, the idea should be to preserve life in all its forms. But we eradicated smallpox, and, you know, we wiped out that entire species, and we've wiped out other species as well.

And we continue to wipe out species, and that's not even talking about eating them, you know, or using them as slaves. You know. Now, sure, they're not as bright as we are, but that wasn't the criterion, right? Maybe we could reformulate the original proposition, right? You know, all ethics are based on preserving the right of human life to exist.

We have to say human life, because we can't use intelligence as a criterion, because that allows us to kill stupid people, and again, not OK, right? So, all human life. But that seems really arbitrary. Yeah, yeah. I mean, what if really smart aliens landed? They're not human, but presumably they would be allowable.

You know, they would be covered under this, you know, right to protection of life. What if these aliens were silicon-based? You know, I mean, are we gonna give them some test or other, right? Are we gonna give them, like, a carbon test to determine whether they're persons and allowed to live?

I mean, again, it seems ridiculous. So, yeah, I had to suppress myself from shouting, "It's a cookbook!" Yeah. "How to Serve Man." Yeah, it's from the old one. Yeah, yeah, yeah. So, I know this is very murky, and here we are coming up on the hour again.

Yeah, and so that's why I brought these things up. Thank you for, you know, pointing out the reductio. But I don't want to leave here in despair. Oh, sure. Well, thank you. But as I chose to be a craftsman in Silicon Valley and supported the development of silicon chips, I may have contributed to the ending of probably all carbon-based life on this planet. I might have.

Well, if my intuition is at all correct, then once these silicon-based platforms learn to self-reproduce, there will be no stopping them. Yeah, it's the Terminator scenario, basically. And, you know, I mean, I think, first of all, it's not your fault if it happens. You know, I mean, we can't ever say that we knew for sure, or even had a reasonable degree of certainty, that this would be the outcome.

Even now, as I work in this field, I don't have a reasonable degree of certainty that this would be the outcome. I can certainly imagine a world in which humans and AIs live side by side, and both have rights as persons. And therefore, knowing humans as I do, I can certainly imagine race conflicts between humans and non-humans; species conflicts.

I guess that's more accurate, because of our long history of racism, which we will probably pass on to AIs. But that doesn't mean extermination; that doesn't follow. And, well, we had competing species, you know? So, you know, those things you raised: apparently we're a single species now, but apparently we have absorbed other species along the way.

Yeah. But anyway, that's one species. Wouldn't it tend to be monocultural? Wouldn't it tend to, I'm trying to think of the guy who says we'll all come into one pool. Oh, you're thinking of the singularity. Yeah. So wouldn't the artificial species, with its infinite capacity for change...

Well, right, at first blush, and in the stories, wouldn't it tend towards a singularity? And wouldn't it seem that it was reaching a logical conclusion, at some point in space and time? Isn't that a possible logical conclusion? I'm just talking about possibility; I'm not talking about the Terminator.

Yeah, I don't know; it's a possibility. But, you know, it's a possibility that could happen with humans too. And as you point out, talking about other species, I'm not sure what we would call them generally: proto-humans, probably; humans, Neanderthals, etc. I mean, so, yeah, I mean, they've either been wiped out or, as he suggests, absorbed. And either way, you know, I mean, yeah, maybe the ultimate form of life becomes cyborg; that's also possible.

But I don't think that there's any reason to suppose that artificial intelligence moves toward a single point. Now, if you're a follower of the philosopher Hegel, then, yeah, you think it all comes to a point. But I'm not, you know; I don't think that history has a direction.

I don't think that moving toward a single point is in any way inevitable. It's, you know, it's like, it would be equally likely to say that it is inevitable that all of humanity will evolve toward a single world government under a single world leader with a single philosophy. Nobody would take that at face value, and I think there's enough diversity in the world of machines that the same applies there.

I mean, we're developing distributed learning systems, right? Not one big global supercomputer, because one big global supercomputer is a really bad idea, because all you need to do is unplug it and you're done. So there are going to be different flavors of machines; there are going to be different flavors of machine philosophy.

Probably, somewhere along the line, there will be a flavor of machine that is genocidal, because we saw that in humans. But with any luck we can prevent that from resulting in billions of deaths, and I think machines, other forms of machines, will probably help us in that endeavor. Because, in the end, there is a little section in the consequentialist paper.

I think it's the consequentialist paper, it's in there somewhere, where I quote Bob Dylan, along the lines of "you gotta serve somebody." What is it that makes something good? Right. You know, developing your own capacity without any regard to why you're doing it is kind of pointless. Creating good in the world only for yourself is kind of pointless.

You know, you don't get any joy from that, you don't get any happiness from that. To actually achieve happiness, our efforts to promote good need to be outwardly directed in some way, and that's probably going to be as true for machines as it is for us. Which suggests that even a consequentialist ethical system is going to require some kind of altruism.

And if you have altruistic machines, then you have the response to genocidal machines. And again, there are other science fiction stories that have been along those lines, right? The ultimate purpose of a machine is to serve humans; the ultimate purpose of a human is to be served. Still dystopian, but at least we get to live.

Yeah, this is just one instance in artificial intelligence, and I think of this course. Yeah, it's just one instance of creating things where we can't possibly know the outcome. Yeah. And, you know, they have, in this case, the potential, yeah, for tremendous destruction, and we can't even deal with the destruction we've already wrought; we can't even regulate the last set of inventions.

Yep. We're politically unable to do that. And then you bring in the power structure, you know, back to, you know, the ethics of the powerful. Yeah. Which is: who will control the original production of artificial intelligence? And my intuition is this will not end well, and I'm just glad I'm old, as a carbon-based life form.

I have an idea. That's another problem. That is a problem. Well, a problem with carbon-based life forms. I mean, I consider that a design flaw and not a design feature, personally, and I haven't been convinced otherwise, but that's a different discussion. But silicon-based life forms will have lifetimes that to us would seem geologic.

Yeah. Now, they're not; they won't be. Well, but to us they'll seem nearly geologic. Yeah. So they will outlive us anyway. And then, back, you know: yesterday, the Quakers were very rational. So yesterday nothing came up about revelation; that's old school. But perhaps, yeah, you know, you never know, perhaps there were some revelations about the end of the world, you know, exceedingly old life ending in a firestorm and all like that.

I don't really, but, sure, I'm not happy. Now, we've got to end this, because I've got something else coming up at one, but this was good. I think, you know, we have a hundred followers or so on the newsletter, so unless I hear any objections, I'll follow that advice.

I'll extend this week to another week and extend the course by a week. I think that will just make more sense altogether. Yeah, I agree. So now I'll proceed under that assumption. All right. So, yeah, Friday. All right, talk to you Friday.

Have a good one.

Social Contracts

Transcript of Social Contracts

Unedited Google Recorder transcription from audio

Hi everyone. I'm Stephen Downes; welcome back to Ethics, Analytics and the Duty of Care. We're in module five, looking at approaches to ethics, and as you can see on the screen, in this talk we'll be talking about social contracts. And this is a huge area. And to some degree it carries personal meaning for me, because I've always lived in a world basically governed by social contracts, or what people have called social contracts, and I've often pushed back against them.

I'm called to think of a case recently where I had an interaction on Twitter; doesn't everything start that way? Where somebody basically said, you know, "corporations, stop commenting on my tweets," and my response was, "who died and made you king?" And so the exchange went back and forth; they disapproved of my language, I disapproved of their disapproval and said, basically, I am not governed by what you think are the social conventions.

But they said, you know, there is an agreement that what happens on Twitter, at least in small discussion groups, is private. And, you know, it raises the question of what these agreements consist in, how they're created, what their applicability is, and what the long-term impact of them is. And, you know, I'll come back to that discussion.

Maybe a bit later in the talk. But, you know, this sort of interaction has an effect on me, and I think it has an effect on a lot of people. And so, you know, although we think of social contracts as within the domain of political science or political philosophy, it's their instantiation as an ethical principle.

That, I think, really strikes home for most people. So that's what I want to talk about today. And, you know, we're going to go through a wide range of theories; too much content. This could be a whole course, this could be a whole degree program, so I'm going to miss some things, but that can't be avoided. I hope this gives you a sense of some of the debates and some of the discussions in the field.

So what are social contracts? Well, the core idea of a social contract is the idea that ethics, whatever we think it is, somehow results from an agreement within a community; and every word of that core idea could be questioned. The major components, at least to my mind, of social contract ethics are:

First of all, the process or method by which agreement is reached; secondly, the determination of the actual contents of the resulting agreement; and then third, the motivation to abide by the agreement. And we can see that all three of these things are necessary. There needs to be some means of reaching an agreement, whatever it is.

And we'll look at two major types. There needs to be some understanding of the contents of the agreement. But it wouldn't be ethical unless individuals also feel compelled to follow the principles; so we need all three. On the diagram here, this is a diagram of the political dimensions of social contracts, but, you know, these spill over into ethical dimensions as well.

And so we have dimensions of personal liberty and economic liberty, contrasted with economic security and personal or group security. We can draw the scale between left and right, anarchist and totalitarian, and we could probably draw it along any number of other axes. And I sort of want to caution, before we get too deep into this, that we need to keep the ethical domain and the political domain.

Separate, at least, I think we do. I mean, not everybody would agree with that. Some people would say, you know, the ethic that governs our society should also be the law that governs our society. But I think that there are good reasons for keeping this discussion separate.

Well, we'll return to some of those as we go through this talk. So I'm going to begin with the concept of mores. Now, mores aren't strictly speaking ethical principles, but they give us a sense of where social contractarian ethical principles arise, and they're important to understand both in terms of their genesis and their content. Mores are not deliberately invented or thought of or worked out by some people in society.

They're not created or constructed, we might say; rather, they emerge gradually out of the customary practices of the people, largely without conscious choice or intention. And so in this way they're similar to folkways and they're similar to social norms, but they carry a bit more of an edge to them, in the sense that if you violate one of the mores you will probably be subject to some sort of social sanction. Social mores will cover conventional practices regarding relationships and sex.

Maybe things like treatment of animals, probably things like honesty, keeping your word, perhaps non-violence, or at least the appropriate use of violence. They tend to be fuzzy and unclear; their enforcement is not always even-handed. And, you know, it's all a very loose system, but it's also a system that most of us understand.

I would not walk out naked in the community, not because it's against the law, actually; I'm not sure if it is; but because I'd be violating one of the mores, I'd be violating a community standard of decency, if I did that. And that's how mores work. But over time we see a need for more of a formalization of these, and more of a reasoning, you know, to come up with the reasoning behind their development and their implementation.

And that's what leads us into social contract theory. We can think of it as arising from situations where, you know, things like mores don't really seem to help us. The classic example is called the prisoner's dilemma. And the way this works is each prisoner is given the opportunity to betray the other.

So there are two prisoners, A and B. You can betray the other person or you can stay silent. If you betray the other person, you get off scot-free, but the other person suffers; if you both betray each other, you both suffer; but if you both stay silent, you both benefit.

Now, the benefit isn't as much as you could get if you betray the other and the other does not betray you. So there's an incentive there to hope that the other person is altruistic and you're not, so that you can get off scot-free and let them pay the whole price.

But you can see how that breaks down, because if you look at the overall benefit, if you're both silent, you both pay a little bit, but not very much; and certainly nothing compared to the result, for either one of you or both of you, if you betray the other. We see the diagram of the calculation here, right?

So here we have the payoff matrix: here we have the stay-silent calculation, here we have the both-betray calculation, and here we have one betraying the other. So the idea of the prisoner's dilemma is that the rational behavior here is governed not only by what happens to you, but also by what happens to your friend; and in that calculation, the rational behavior here is to stay silent. But typical ethics...
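The stay-silent and betray calculations just described can be sketched as a small payoff table. This is a minimal illustrative sketch only; the specific sentence lengths are assumptions chosen to match the standard textbook version of the dilemma, not values taken from the talk.

```python
# Illustrative prisoner's dilemma payoffs (years in prison; lower is better).
# payoffs[(a_choice, b_choice)] = (years for A, years for B)
payoffs = {
    ("silent", "silent"): (1, 1),    # both pay a little
    ("silent", "betray"): (10, 0),   # A suffers, B goes scot-free
    ("betray", "silent"): (0, 10),   # A goes scot-free, B suffers
    ("betray", "betray"): (5, 5),    # both suffer
}

def total_cost(a_choice, b_choice):
    """Combined years served -- the 'overall benefit' view from the talk."""
    a, b = payoffs[(a_choice, b_choice)]
    return a + b

# Mutual silence minimizes the combined cost, even though each prisoner
# individually is tempted to betray.
best = min(payoffs, key=lambda pair: total_cost(*pair))
print(best, total_cost(*best))  # ('silent', 'silent') 2
```

Notice the tension the talk describes: from either prisoner's individual point of view, betraying always looks at least as good as staying silent, yet the outcome with the lowest combined cost is mutual silence.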

Traditional ethics doesn't seem to work that way. Certainly, something like this would be dramatically underdetermined by a system of social mores; although, I might add, you know, there are unwritten, unspoken social codes that say, you know, like, you do not betray the other. So, you know, in a sense, a lot of social mores and social customs have addressed this prisoner's dilemma.

But how do they do it? What is the thinking behind it? Well, what we find is that overall, in the wider scheme of things, we haven't solved it. Look at pollution, right? Suppose you live in a society that has highly polluting cars, right? You can put a catalytic converter on your car; it's gonna cost you money, but although you'll reduce pollution a little bit, it really won't change the overall scheme of things. For it to work...

Everybody has to do it, which means everybody has to pay a cost; but if nobody wants to pay the cost, nobody will do it, and we end up with really bad pollution. So we need some mechanism for reaching some kind of agreement. We're seeing a similar sort of dilemma playing out on the global stage.

We just had some recent climate talks, which once again ended up in futility. And the argument is often advanced: well, why should we do anything to address climate change, because, you know, probably China and India aren't going to do anything, and so we're paying a cost but we're not getting any result.

And the idea of the solution to the prisoner's dilemma is that you have to take that leap of faith; but it's really hard to make that leap of faith, you know, without some sort of formal structure, which is why they have these talks. Similarly with the current pandemic: you know, everybody benefits...

If each of us wears a mask. Wearing a mask is a bit of an inconvenience for each of us, and it really only works if everybody wears a mask; but not everybody wears a mask. So again, we have this same sort of dilemma. So what we need, it is argued, is some kind of system of morality by agreement. And typically this expresses itself as an explicit agreement; social mores will take you a certain distance.

But at some point we've got to come to an agreement. Specifically, you know, to quote James Rachels from his well-known book The Elements of Moral Philosophy: morality consists in the set of rules, governing how people are to treat each other, that rational people will agree to accept, for their mutual benefit, on the condition that others follow those rules as well.

And so you see the different elements here of the solution to the prisoner's dilemma: we're going to agree on rules, it's for our mutual benefit, and the condition is everybody has to follow the rules. We see this come up all the time in society, you know? Unionism, you know; a unionized workplace is basically this kind of thing: we will benefit...

If we bargain collectively in the workplace. But the thing is, everybody has to agree to be bound by the bargaining, one way or another; you can't have some people bargaining and others, as they say, acting as free riders. Okay, so what's the justification for all of that? Well, we go back to Thomas Hobbes, who didn't actually write in 1986; he wrote...

I think it's the 1600s, and that's a time in Britain when there was significant debate about the role of the monarchy and the role of the barons, about who should basically control the state. And the argument advanced by Thomas Hobbes is that we, and by "we" he means people who have their own armies, should willingly cede power to the monarch in order to escape the state of nature, in which no rules exist, and where, as he says, there are, quote, no arts, no letters, no society, and which is worst of all, continual fear and danger of violent death.

And the life of man solitary, poor, nasty, brutish, and short. You see the appeal here, right? The consequences to ourselves of not ceding power to a monarch are significant: you know, if we're all warring among ourselves, if we all take justice into our own hands, then we're in a state of perpetual conflict.

And Hobbes argues this is the natural state, right? This is how things would be if we did not reach this agreement. So we agree to cede the right of executing punishments, of enforcing the law, to the monarch; we don't take it into our own hands, and that's what keeps everybody safe.

So the appeal here is safety and security. Another approach to creating a social contract was advanced by John Locke, writing around the same time, a little after Thomas Hobbes. And again, Locke depicts the contract as a mechanism of people working together, but instead of protecting us from each other, as Hobbes describes it, in the case of Locke it's a mechanism to defend the rights of citizens against the sovereign, the king, and in particular to protect their right to property.

Locke's political philosophy is based to a large degree on property, and his view of property is that anything that we find in nature and that we add our own labor to, by that very fact, becomes our property, and we have a right over it; we get to keep it, to sell it, to do what we want with it.

And this fundamental right needs to be defended against monarchs who, despite not actually having done any labor themselves, would attempt to seize our property. You can see this reflected in a lot of political discussion today, where people say, you know, my property, my land is mine; government, stay away, government, stay out of it.

That's the Lockean sort of view. And the way this is enforced is by the creation of the social contract, whereby people work together in order to protect this right to property. And Locke says, basically, if the sovereign violates that, if the state becomes too repressive, then there are two means of remedy. The one means is to move away, and in John Locke's time a lot of people did exactly that.

And so we have the migration, for example, of the Quakers to North America, or even the exile of criminals to Australia. The other way is to take up arms and overthrow the monarch. And here we have the idea that the right to, well, take up arms and overthrow the monarch is based on this social contract.

We are giving ourselves an agreement that if it comes to that, that's what we'll do. Again, you know, there's this presumption here, not so much of the inherent badness of humans, but of the inherent badness of monarchs, or power structures, or the state generally; and it's the state that we most want to protect ourselves from.

And it's interesting to see how much of that is reflected in some societies today. Sometimes legitimately; I mean, sometimes there are certainly legitimate reasons to fear the state; other times, maybe not so much. So, in the 1700s we reach the Enlightenment and a more rationally conceived structure of morality and civilization.

We get Jean-Jacques Rousseau, writing a number of years before the French Revolution, but no doubt influential on it. And he writes, and I've said this before, "man was born free, yet everywhere he is in chains." And here the oppressor is not each other, and it's not the monarch specifically, but society as a whole.

And the effect of society is to constrain the natural freedom of people, and instead to enslave them, to serve the will of the master, whoever that is. And that wasn't an exaggeration in Rousseau's time; people didn't have individual freedom, and in places like, for example, Russia, the idea of freeing the serfs was a real question.

And what Rousseau also thought was that although it's a social contract, the objective isn't simply to protect us from harm from each other, or to protect our property, but rather to ascertain what could be called the general will, which would be expressed by the unanimity of citizens.

That's a hard concept to put a finger on, although this concept of a social will or a general will is going to echo through philosophy since the days of Rousseau, and you see it in Hegel, in the Philosophy of Right, with a world spirit moving through history, or even in Marx's dialectical materialism.

And again, thinking of the will of humanity moving through history, that's where we get the expression of being on the right side of history, and so on. And Rousseau is very careful to caution against putting important functions, like, say, education, into the hands of individuals or into the hands of interest groups.

Because, he says, inevitably they will turn their power around to work to their own advantage, rather than to the advantage of the will of the entire people. So this is a representation of Rousseau's theory; it contains elements of all the theories. For those of you listening on audio, what I've got up here is a diagram that I grabbed from Pinterest. In the center we have state and sovereign, linked together by laws.

And the laws are executed by government, which may be a democracy, maybe an aristocracy, maybe a monarchy; but the idea here is, it's the executive branch, if you will. And these laws are basically declarations of a general will, and that general will results from a social contract that people agree to support and obey.

So we've got government; the state, which is the subjects; laws; the sovereign, which might be the citizens; a will; and then individuals. And these individuals express their general will through a combination of civil freedom and natural freedom. We'll come back to the subject of freedom, but, you know, without freedom it's not possible for individuals to express, and see implemented, their will in the general will.

So these tend to be the major components of the social contract model of ethics, and the players can change, but you're going to get the same sort of idea no matter what: you're going to have the will or the overall values, you're going to have the laws or the principles, you're going to have an enforcement mechanism, you're going to have a deliberation mechanism, and then, you know, the social conditions that make all of this possible.

It's a pretty sophisticated and elegant theory of ethics. I mean, there's a lot to recommend it, but there are problems. Oh yes, there are problems. One of the most significant problems is enforcement, and I would even say that enforcement sometimes feeds back in on itself. So if we're looking at political philosophy, we jump right into the question of police powers.

We jump into forms of sanction or punishment. And social contract theory, it may be criticized, gives government too much power, and I'm quoting here, "to make laws under the guise of protecting the public. Specifically, governments may use the cloak of the social contract to invoke the fear of a state of nature to warrant laws that are intrusive."

That's from an OpenTextBC text on social contract theory, and we've seen this play out, haven't we? We've seen more than one politician raise the spectre of anarchy if we don't give the police, or the army, or whatever, enough power to protect us. But the problem is, who protects us from the police? And even more to the point, the idea of what the protectors want feeds back into what should become law.

It feeds back into our idea of what is morally right. You know: whatever is good for the police, whatever is good for the military, that's good for society; supporting these things therefore becomes ethically good, or at least the social good on which other ethics may be based. But it can go to extremes.

And here I go back to that original Twitter debate that I had, which was just a Twitter debate, so it doesn't mean anything. But the idea being expressed by the other person was that (a) there is this expectation of privacy, and (b) this expectation needed to be addressed in tweets addressed to companies in order for it to be enforced.

But also in order for it to be created in the first place; like, there wouldn't be this violation of ethics unless people enforced it. So it feeds back into itself. But it comes back down to my first question: who died and made them king? We have the idea here that the enforcement mechanism creates the ethics.

And that's not really what we had in mind when we came up with a social contract. The second question has to do with consent generally, and David Hume, in some withering critiques of Hobbes and Locke, addresses this head on. Hume is a contemporary of Rousseau, and they knew each other; they were actually friends for a while, and then they were enemies for a while. But Hume has two questions.

First of all, Hume questions the adequacy of social contract theory as a historical account. He says, "Almost all the governments which exist at present, or of which there remains any record in story, have been founded originally, either on usurpation, or conquest, or both, without any pretence of a fair consent or voluntary subjection of the people."

Now, since then there have been some exceptions to that, but not very many, and they haven't always worked out well. The second thing he does is question the validity of the consent claimed by these theories. Because, let's face it, right? People can say, yeah, there's a social contract, but none of us signed anything.

And if you look at the remedies offered by Locke, those aren't really remedies, are they? Especially the one where you leave the country. Usually, leaving the country is not an easy proposition, and Hume says, "We may as well assert that a man, by remaining in a vessel, freely consents to the dominion of the master; though he was carried on board while asleep, and must leap into the ocean and perish, the moment he leaves her."

Now, if the only way to escape a social contract is to jump into the sea and die, that's not really consent. And these are pretty important considerations. I mean, absent any actual mechanism of creating a social contract or consenting to it, the whole idea of a social contract as a basis for either government or ethics is a bit of a farce.

And, you know, it's based on historical circumstance, and not one which usually benefits either you or I. So time goes by; the world looks at other grounds of ethics. Kant comes along not long after Hume and gives us the concept of duty, and that's influential for many years. But in the 1970s liberalism rises again in the voice of John Rawls, who comes up, in his monumental book

A Theory of Justice, with a social contract that results in a theory of justice as fairness, and therefore, we can infer, of ethics as fairness. So how do we arrive at this? Well, what Rawls does is he sets up a mechanism whereby we can negotiate what we want in society; but it's hypothetical, it isn't really happening, and it's ahistorical.

So we're not saying society was actually founded this way, but that had we been in that position, we would have founded it this way. So what he does is he puts everybody into what he calls an original position. He puts us in a hypothetical room; we're all going to sit down and negotiate

what government will be. But we need to abstract ourselves from who we actually are, because otherwise wealthy people will argue for the interests of the wealthy, powerful people will argue for the interests of the powerful, etc. So, what we do, and you can see it in the diagram there: you set up a veil of ignorance.

So in this hypothetical situation, we are all arguing from the same stance: we don't know who we will end up being in society. So presumably what that means is we have to take into account all possibilities. We might be the rich person, but we might be the poor person. And so what Rawls says is that what we would come up with in such a contract is a set of rules that treats us all as equal, and then, as well, a range of basic rights and freedoms for everyone, and then, finally, mechanisms to ensure prosperity, so that there's enough for everyone.

Well.

It's not clear to me that that's what we would choose, because it's not clear to me that everybody in the original position is going to be super-rational about what they're going to argue for. Because if we look at actual politics and actual opinion polls, people tend to vote, and therefore to argue, in terms of their aspirations, and not their actual situation.

So they vote as if they were a millionaire, hoping to be a millionaire someday, rather than against the interests of the millionaire. Now, that critique is a bit different from the critique that originally surfaced after Rawls, but I think it's an important critique. The other thing about Rawls's position is the discussion of fairness.

Again, it's a principle of justice as fairness. So in both the original position, where we have a presumption of it, and in the actual society, where we have an implementation of it, fairness is fundamental. And for fairness, I'm going to quote a couple of things from Fjeld: "the fairness principle was defined as equitable and impartial treatment of data subjects by AI systems."

We're going back to the ethical codes here, because fairness is brought up a lot in these codes. And similarly, "the principle of equality stands for the idea that people, whether similarly situated or not, deserve the same opportunities and protections." And that kind of gets at our intuitive understanding of fairness, right?

Something like equality, something like equity, something like, you know, it's based on what we do, not who we are; that sort of thing. But there are some questions, and at the bottom of the slide I have three: two in text form, one in cartoon form. One question is: is fairness something that can be addressed algorithmically?

In other words, is fairness an actual and real measure of anything? And it's not clear to me that it is. Also, we're still faced with a different kind of version of the prisoner's dilemma. We see this argument expressed a lot; in the article "The Problem with Too Much Fairness,"

we read: "we care so much about fairness that we are willing to sacrifice economic well-being to enforce it." So it's almost like a reverse prisoner's dilemma, right? If we were all self-interested, that would actually earn us more money than if we try to be fair. It's not clear that that's an empirical position that can be sustained with the evidence, but it certainly is clear that it's an argument that people raise. And then in the cartoon, one person says, "One person's freedom isn't as important as fairness," and the other person replies, "Who decides what's fair?" The first person

says, "Me." And, yeah, that's my definition of fairness: I decide what's fair. Okay, that's not my definition of fairness, but you see the issue, right? Who decides what's fair? How do we decide what's fair? And how far do we take fairness, if it turns out that fairness doesn't optimize for, say, consequences?

The other major element of Rawls's theory of justice, and indeed of a lot of discussions of ethics and political science over the years, is rights. And the assertion here, to quote from BC Human Rights, is that "simply by existing in the world, you are entitled to certain basic rights, your human rights."

So the first question that comes up: what are these rights? And the diagram suggests a few that show up fairly commonly: assembly, association, movement, religion, speech, information, freedom of the press, thought, education. But, you know, none of these rights is absolute, and especially if we look at them on a global basis, they don't always really exist at all. Movement, for example:

well, you can't just pack up and leave the country; it's just not an option. Education: so many hundreds of millions of people in the world are uneducated. I could go on. And these definitions of rights show up differently depending on who's doing the defining. We have the US Bill of Rights, based on a concept of life, liberty, and the pursuit of happiness, which we now know is a utilitarian objective.

We have the Canadian Charter of Rights and Freedoms, based on peace, order and good government, which makes us think more of a Hobbesian approach. And then the Universal Declaration of Human Rights by the United Nations, which I would argue is very aspirational. So we have all of these different definitions of rights.

And I would argue that, when you come down to it, these rights can be extended in numerous ways. People often talk about "freedom to" and "freedom from." So here we have a section of "freedom to": freedom to associate, freedom of the press, freedom of speech, etc.

But it's often been pointed out that these freedoms aren't very useful if you're living in poverty. And so there is a corresponding concept of "freedom from," right? Freedom from oppression, freedom from want, freedom from poverty, freedom from exploitation. And then, you know, these rights change and adjust in our sense over time. Privacy, which was the focus of the Twitter debate,

you'll notice doesn't appear in the standard definition of rights, and it's not clear that privacy is a right. Similarly, the right to bear arms is a right that exists in one country, but not in most others. And I sometimes like to think of rights

as defining a share not only of, you know, the political aspirations, but economic aspirations. I would argue, for example, that simply by existing in the world you are entitled to a share, an equal share, of the wealth of the world. How does that fly, though?

I mean, is that bounded by region? Does my share of the wealth of the world include a share of the wealth of the world as produced in Russia? And how does that respect, say, indigenous rights? Do some people have more of a share of the wealth of certain areas; do indigenous people have more of a share of the wealth of North America? And does that mean that my share of the wealth of the world has to come from Ireland,

where there are currently Irish people? And we get into these debates once we start treading into debates about rights. It's hard to know where to stop.

So what do we do? Well, there's an alternative set of discussions that parallels the discussions of rights, and we can look to the work of people like Michael Polanyi or Friedrich Hayek to distinguish between, and we'll be rough and loose here, constructed orders and naturally occurring or emergent orders. Or, you know, Hayek distinguishes between self-generating order and directed social order.

Or you can talk about a system of mutual adjustment versus an established corporate order. Those of you who have followed my career over the years well know that the sort of distinction we're drawing here is the distinction between a rules-based kind of mechanism and a connectionist kind of mechanism,

but without all the technical details. And so we can approach the question of how we generate the social contract in the same way. All the examples that we've looked at so far are deliberately established corporate orders, where the idea is that people sit down and draft some kind of contract or agreement.

But, you know, we started this talk looking at social mores, and that's not how social mores work at all. Social mores just kind of happen, by all the little attitude adjustments that we undergo in our interactions with each other. And that's what I think was the problem with the Twitter comment, right?

The idea that by trying to enforce some kind of moral order through Twitter comments, you were trying to set up some kind of deliberately established corporate order. But if there's any ethics of Twitter use, it's going to be one of these self-generating orders. So it's not going to be created by a person saying "this is the rule"; it's going to occur without any such specification.

And indeed, we're just waiting for the train to go by, because why not? Speaking of imposed social orders. So there's a contrast between the two, between the corporate order and the self-generating order, and let's explore that a bit. The self-generating order theory, we'll call it,

at least in some contexts, has its origin in the economic thought of the Scottish Enlightenment, and especially Adam Smith, who's writing around the same time as Rousseau and Hume, all part of that same group. And basically, the argument here, and we'll get Buchanan and Tullock to state it for us:

"Modern social scientists have tended to neglect the individual decision-making that must be present in the formation of group action in the public sector." So modern social scientists are saying, look, it's not like something has been created and then everyone follows it; rather, everybody makes their own individual decisions, and that's how we get our order.

And they reject the contract theory of the state as an explanation of either the origin or the basis of political power, which in itself was appropriate, but they've tended to overlook the elements within the contractarian tradition that provide us with a bridge between the individual choice calculus and group decisions. And basically it boils down to this:

a group decision is essentially the result of a whole bunch of individual decisions. Expressed in terms of economics, it's the invisible hand of the marketplace: all of the selling and purchasing decisions made by individuals create the overall economic rationale that we see for macro-phenomena, such as the cost of things or the price of things.

These individual decisions create supply and demand. But this happens not just in economics. The existence, say, of a social sanction against walking outside naked is the result of all of the individual cases, real or hypothetical, where people have walked outside naked and been resisted by members of the community, or, even more to the point, where people have made the individual choice not to walk out naked. See how that works, right?
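The idea that a shared practice can emerge from individual adjustments alone can be sketched in a toy model. This is my own minimal illustration, not anything from the sources discussed here; the number of agents, the update rule, and the round count are all assumptions chosen for simplicity:

```python
import random

def converge(n=50, rounds=2000, seed=1):
    """Each round, one randomly chosen agent adjusts to the prevailing practice."""
    random.seed(seed)
    # Agents start with one of two arbitrary behaviors, 0 or 1.
    behaviors = [random.choice([0, 1]) for _ in range(n)]
    for _ in range(rounds):
        i = random.randrange(n)
        # The agent simply copies whatever most others are currently doing.
        behaviors[i] = round(sum(behaviors) / n)
    return behaviors

final = converge()
dominant = max(final.count(0), final.count(1))
# Nearly everyone ends up following a single practice, though no one decreed it.
print(dominant / len(final))
```

Nothing in the model says which behavior is "correct"; the uniformity itself is the emergent order, which is the sense in which mores "just kind of happen."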

And, you know, it's a logic that I've explained in other places: a logic where you have a network of interacting individuals, and then emerging from that is some kind of order; the murmuration of starlings, say. So that's the theory here. Well, how does that work? Well, the problem is, it doesn't really address things like, say, market failure.

A good example of a market failure is scarcity: if something becomes scarce, and the demand for it is inflexible, then the price rises through the roof, and the result is that you've got a very unequal distribution; some people get to eat, other people starve. And we've seen that market failure

play out in a variety of ways over the years. Another example of market failure is pollution, where there is no mechanism for pricing pollution, because there's insufficient demand for non-polluting things, and so people can pollute for free, resulting in a market failure and coal-clouded skies. That market failure still exists today,

even in places with planned economies and planned markets. So Gauthier comes along and says, well, there needs to be a rational justification, at least, for a minimal set of rules, based on a principle of rational self-interest. And so contractarianism is a response to these cases where everyone following their self-interest would be harmful to everyone.

They're self-interest would be harmful to everyone. So, it's going back. This original justification for social contract theory based on things like the prisoners dilemma. And you see this trope over and over again in this discussion. And so, the collection or the collective rationale of moral rules, basically is a device to secure quote, the cooperative outcome.

Now people again who've listened to me over the years, have heard me argue that, I mean, favor of things like cooperation rather than collaboration. So and cooperation is far better than everybody just doing their own thing. So to be rational, in this context, is to be disposed to act in a way that maximizes the satisfaction of one's interests.

If you're still being self-interested, it's an enlightened self-interest and it leads you to enter in to a contract with others. Not necessarily to work for the same end but to cooperate in such a way that you know, arising tide floats all boats and that's the idea here. And again, you've certainly heard about this and that almost invariably means giving up some of your own self-interest in order to produce the wider gain for everyone.
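The prisoner's dilemma structure being appealed to here can be made concrete with a small payoff table. The numbers below are illustrative assumptions; any payoffs with the same ordering tell the same story:

```python
# Payoff table: (my_move, your_move) -> (my_payoff, your_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(your_move):
    """The individually 'rational' reply to a fixed move by the other player."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, your_move)][0])

# Defection is the best reply no matter what the other player does...
print(best_response("cooperate"), best_response("defect"))  # defect defect

# ...yet mutual defection pays each player 1, while mutual cooperation
# pays each player 3: the "cooperative outcome" the moral rules secure.
print(PAYOFFS[("defect", "defect")], PAYOFFS[("cooperate", "cooperate")])  # (1, 1) (3, 3)
```

The gap between the two bottom lines is exactly the gain from the enlightened self-interest described above: each player gives up the tempting payoff of 5 so that both can reach 3 instead of 1.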

Now, you can see why this is necessary. Here we have, from Garrett Hardin, the tragedy of the commons, where people are maximizing their own self-interest. The idea here is that rational individual decision-making will harm resources held in common. So we have a common pool of water, the water table in the ground; nobody owns it,

so you can just take as much as you want. But the result, talk to California, is the water table gets lower, and lower, and lower. The thing is, there are two different responses to that. The typical response is to say, well, okay, that just proves that the water should be owned by somebody, who would then have a self-interest to take care of it.

And here we have John Locke again, right? We'll make it property; we'll give people a guarantee of security over their property, and that also solves the problem, right? Now the water, they'll sell it to everyone else. But we know that that doesn't work, because the person who has the water has an incentive to sell it, and sell it, and sell it, until it runs out.

And in fact, as it runs out, they can keep raising the price, and they've created, in fact, a situation of scarcity, and exactly the sort of chaos that this was supposed to prevent. So the other side of that is: okay, we should have a system of rules and regulations in order to manage the water. And that's an approach

a lot of governments have taken. But again the question comes up: could a government manage the water table any better than an individual? Because a government has an incentive to give out more and more water until the water table runs low. We saw that happening in the case of the Canadian fisheries, where government after government after government refused to lower the rate at which fish could be caught, with the result, a number of years ago, that the Atlantic cod fishery,

for example, basically ended. And those aren't the first of those sorts of cases, nor the last of those sorts of cases. So there needs to be a mechanism to reach some sort of an agreement, and it needs to be a rational mechanism, such that it won't result in the draining of all the water or the catching of all the fish.
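The over-extraction dynamic in the water and fisheries examples can be sketched numerically. This is a toy model with made-up numbers (the user count, recharge rate, and draw sizes are all assumptions for illustration), not a hydrological claim:

```python
def run_seasons(draw_per_user, users=10, table=1000.0, recharge=50.0, seasons=20):
    """Simulate a shared water table: everyone draws, then a fixed recharge flows in."""
    history = []
    for _ in range(seasons):
        table = max(0.0, table - draw_per_user * users) + recharge
        history.append(table)
    return history

# If each user limits their draw to a share of the recharge (5 x 10 = 50),
# the table holds steady indefinitely.
print(run_seasons(draw_per_user=5)[-1])   # 1000.0

# If each user draws what seems individually "rational" (20 units), the
# table collapses to bare recharge: each season's inflow is drained dry.
print(run_seasons(draw_per_user=20)[-1])  # 50.0
```

No individual draw of 20 units is ruinous on its own; it's the sum of the individually sensible choices that empties the commons, which is why some agreed constraint on everyone is needed.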

Which means it's got to protect against self-serving interest on either the part of government or the part of individual property holders. It should, as Rousseau would say, represent the general will. The problem is, the invisible hand of the marketplace might not, arguably will not, produce that kind of result.

That's not the only problem with social contract theory; that's just one of them. Another problem, and this comes from Martha Nussbaum, is that the social contract tradition, especially in its Rawlsian form, cannot give justice to disabled people; moreover, it cannot supply global justice beyond the nation state; and moreover, it cannot render justice to animals.

And this all has to do with how we've set this up originally. The decision makers in the original position: well, first of all, they're all human. Animals can't be represented here, because animals aren't capable of negotiating a satisfactory outcome, and it goes well beyond the scope of Rawls's theory to have some people imagining that they're animals, or at least imagining the possibility

that they might be animals. Now, that's a bit difficult to accept at face value, because most of us have the capacity to imagine we are an animal, or at least to empathize with the condition of an animal. And, you know, I could prove that pretty simply by bringing in a cat here and torturing it.

Now, I'm not going to do that, nor would I actually seriously contemplate doing that. But any negative reaction that you had to what I just said is evidence that we can imagine it. Similarly, and by inference, we're not really able to put ourselves in the position of the disabled, at least not

unless we are actually disabled, because we can't imagine the barriers that they face. And this is true whether they're in a wheelchair, or whether they're blind, or whether they're cognitively disabled. You know, we're just not able to conceive of what their rights and interests and needs would be. And then, in the case of global social contracts, we're just not in a position to care one way or another about the result of the contract. You know, we can imagine that we might live anywhere or everywhere, but we're not really imagining that we're living in Kurdistan, really.

So the unique conditions of the Kurds, we're not able to bring those into our calculations. You know, ethics and morality only go so far. We're only concerned about, you know, our immediate area, or perhaps our immediate country, but we don't really project ethics around the world to other nations and other states.

And perhaps neither should we. So we need something else, and thus is introduced, and here I quote from an article, "how a view that finds human dignity, expressed in a variety of life activities" (I've given some examples) "translates into demands of justice." It's not going to be produced through a social contract, and indeed it points to what we might call the fragility of goodness.

Here's Nussbaum again: "More people and more beings deserve justice than those who make the rules. Just because you aren't so reflective doesn't mean you don't have a dignity that demands respect. There is more to life than profiting off each other; for human beings, fellowship and compassion are ends in themselves too."

And it's hard to see how any of that (self-dignity, respect, fellowship, compassion; we could add a number of other things) is going to work its way into a social contract. The privacy of a public chat on Twitter? I can't imagine seeing that come up; I can't imagine that being decided one way or another, because it just really has nothing to do with rights, fairness, or anything else as

we currently conceive them, and yet there's an ethical dimension there. And that's the thing about contracts: contracts, and the concept of contracts, presume something like radical individualism and self-interest. Now, I know corporations can make contracts, but in order for that to happen we have to think of corporations as individuals, or even corporations as persons, acting in their own rational self-interest, and indeed persons with a fiduciary duty to act in their own rational self-interest.

But there are issues with this idea of radical individualism and self-interest. First of all, we can't all be self-sufficient, and we can't even imagine a world in which we are all self-sufficient. In fact, it is arguable, and I would argue, that nobody is self-sufficient. And also, we have preferential attachments.

We can't treat the rest of the world equally. I treat the people in the next room far more preferentially than I treat people in the next country, much less people around the world, and many people would argue that's the way it should be, you know: family first. Also, contract theory presumes that the only obligations we have are those that are freely chosen, which allows for the objection of,

well, then I don't freely choose any obligations; or, there are some obligations I can't choose, I'm just not capable of it. So we need some way of imagining how, non-punitively presumably, we deal with people who are living outside the contract. And again, they can't just pack up and move away; the slogan "love it or leave

it" is not a practical option, particularly if it's somewhere where you were born, right? You know, being born Canadian, it's just not an option for me to leave Canada and go live somewhere else if I don't like the way we do ethics here in Canada. And yet, being in Canada, there's no way really for me to live outside the socially constructed

ethical framework that we find ourselves in, and there's no way to change it either. These are issues.

We have this idea of individualism. Maybe we can just think of it as theoretical, or, as Buchanan and Tullock say, methodological individualism. And the way we'll think of it is that human beings are conceived as the only ultimate choice-makers in determining group as well as private action. So how is the corporation going to decide?

Well, think of the individual running the corporation as making that decision. And we've looked at this quite closely (well, economists have looked at this quite closely), just how individuals make decisions in what we sometimes call the market sector: corporations, companies, and all of that. It's a huge field of study, and it's pretty easy to be cynical about that field of study.

You know, I call it market rationalization, with the emphasis on the word "rationalization." It's pretty easy to make unethical decisions when you're acting on behalf of the corporation instead of as an individual, instead of for yourself. And in fact, the whole mechanism of incorporation and corporate bankruptcy allows people to avoid responsibility for the decisions that they make for their corporation.

That's the whole point, right? And although some people have said in this course that there should be a method of allowing corporations to die, or of actually killing corporations, and I can certainly see the justification for that, in an important sense that's to punish the wrong party, at least on this analysis, because the party who made the decision wasn't the corporation; it was actually the CEO. And that's why,

sometimes, in Canadian law, we had a case recently where the company was allowed to avoid legal liability that would have essentially ended the company, because it agreed to remove all of the people who made the decision from the company. And so wiping the slate clean allows the company to survive.

And it still doesn't really punish the decision makers, does it? There's also, you know, a cultural calculus of consent, which is a little bit different from the calculus of consent that Buchanan and Tullock talk about in corporations. And this is a way of depicting the way cultures decide things: it's a relation between power, faith, gender, health, illness, relationships, etc.

Again though, you know do we distinguish the individuals in a culture who make a decision in the name of a culture from the culture itself? And that's not a, that's a non-trivial question. Particularly, as we move on through these next few considerations here,

The idea of a non-individualistic sort of ethic is captured in the idea of collectivism. In their work on individualism and collectivism, Harry Hui and Harry Triandis talked about collectivism as incorporating: concern, where this means concern for the impact of one's actions on other people; sharing of material and non-material resources, including cultural resources. But collectivism also includes susceptibility to social influence (think, for example, of peer pressure); self-presentation and facework, that is, the face that you have, the way you face the world (not makeup, although it can include makeup); factors where there's a sharing of outcomes, and here a good way to think of it is collective responsibility, or the corollary, collective punishment, but it's also, you know, a father taking pride in a son's accomplishments, or one brother feeling guilty about the actions of another brother; and then the feeling of involvement in other people's lives.

You know, you don't live as a single individual; you are actually a part of this larger social organization, the collective. And it's this collective that generates ethical roles and responsibilities, rather than, say, rational individualistic decision making, and there's a range of theories based on the idea of creating ethics collectively.

And this is what my Twitter opponent was appealing to, right? He was saying, basically, there is this collective ethos of privacy that has developed over time on Twitter, and that's the ethic that's being violated. Also, there's an ethic of polite conversation; you're violating that too if you continue. And, you know, there are some questions.

So, you ask, right? How do we know that that's the collective ethic of Twitter, how do we know that I'd violated it, and what's the sanction for it? Well, part of this story comes from Finnemore and Sikkink and the life stages, or the life cycle, of a norm. Think about how this works.

There's norm emergence, where quote-unquote "norm entrepreneurs" seek to persuade each other; then there's the norm cascade, where gradually there's a tipping point; and then norm internalization, where it's just taken for granted. You can see those play out in social communities online. I'm a devotee of a website called Imgur.

I-M-G-U-R, and trust me, we see this process there. There are norms on Imgur that kind of defy explanation. For example, Imgur is a photo or image sharing site, and it also includes short videos. The way it works is, there's a thing called "user sub" where anybody who's a user of Imgur submits an image.

And then you can look at all those images as they're submitted, and you can vote them up or vote them down. When they're voted up, when they get popular enough, then they end up on the most popular page, and that's the page people usually see when they go to Imgur.
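The "user sub" mechanism just described is easy to picture as a tiny bit of code. The sketch below is entirely hypothetical: the class name, the threshold value, and the net-score rule are all made up for illustration, and this is not Imgur's actual algorithm. It just captures the submit, vote, and promote-when-popular loop:

```python
from dataclasses import dataclass

# Assumed promotion threshold; Imgur's real rule isn't public here.
FRONT_PAGE_THRESHOLD = 50

@dataclass
class Submission:
    title: str
    ups: int = 0
    downs: int = 0

    @property
    def score(self) -> int:
        # Simple net score: upvotes minus downvotes.
        return self.ups - self.downs

def front_page(user_sub):
    """Return submissions that cross the threshold, most popular first."""
    popular = [s for s in user_sub if s.score >= FRONT_PAGE_THRESHOLD]
    return sorted(popular, key=lambda s: s.score, reverse=True)

user_sub = [
    Submission("superb owl", ups=120, downs=10),
    Submission("my selfie", ups=3, downs=40),   # norm violation: voted down
    Submission("Caturday cat", ups=80, downs=5),
]
for s in front_page(user_sub):
    print(s.title, s.score)   # prints "superb owl 110" then "Caturday cat 75"
```

The point of the sketch is that the sanction is structural: a submission that violates the community's unwritten rules simply never crosses the threshold, so it never reaches the page most people see.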

So basically the ethics consist in the rules, quote-unquote, because they're not really written down, of what gets voted up. One of the rules is: no selfies, except on Christmas or for cosplay, which includes Halloween. But there are exceptions to this, you know; weight loss photos would be an example of that.

It's not actually written down anywhere, but every once in a while someone will come along and say, remember the rule, no selfies. But hardly anyone ever enforces that. You know, other rules: Wednesdays are for Wednesday Addams, there's Caturday, and on Super Bowl day the site is filled with owls, or "superb owls."

And so on; there's a whole range of these things. So there's kind of an ethic of the website, but it's not really ethics, although there is a sanction: you won't get voted up if you violate them. Although sometimes people vote them up anyways. So there is a sense in which ethics can be created collectively, but it's less clear that people argue about these rationally, through debates and impositions of sanctions, through Twitter posts or whatever. You know, you can't argue your way to an ethic on a social network site.

It just doesn't work. You can't bully your way to an ethic on a social network site; again, it doesn't work. And so the question is just: what is that process that's happening when an ethic appears on a social network site? We sort of want to assume that the community somehow collectively decides, but that is very often accompanied by a statement that, you know, "here's what they decided, let me tell you."

And that's when it becomes problematic, at least for me. And the same is true more widely, right? People talk about community values, and again, nobody's sitting and writing down what the community values are. But somebody's always willing to come along and say, "there are the community values, and here's what they are, let me tell you," and that's a problem.
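The norm life cycle mentioned above, emergence driven by a few norm entrepreneurs, a cascade past a tipping point, and then internalization, can be illustrated with a toy threshold model. This is a Granovetter-style sketch of my own, not anything from Finnemore and Sikkink themselves, and all the numbers are made up: each agent adopts the norm once the fraction of adopters exceeds a personal threshold, so a handful of unconditional "entrepreneurs" can eventually tip the whole population.

```python
import random

def simulate_cascade(n=100, entrepreneurs=5, seed=1):
    """Rounds of norm adoption: returns the adopter count after each round."""
    random.seed(seed)
    # Entrepreneurs adopt unconditionally (threshold 0); everyone else
    # waits until some fraction of the population has already adopted.
    thresholds = [0.0] * entrepreneurs + \
                 [random.uniform(0.01, 0.6) for _ in range(n - entrepreneurs)]
    adopted = [t == 0.0 for t in thresholds]
    history = [sum(adopted)]
    changed = True
    while changed:
        changed = False
        frac = sum(adopted) / n   # visible adoption at the start of the round
        for i, t in enumerate(thresholds):
            if not adopted[i] and frac >= t:
                adopted[i] = True
                changed = True
        history.append(sum(adopted))
    return history

# Emergence (a handful of adopters), cascade (rapid growth once the
# tipping point is crossed), internalization (adoption levels off).
print(simulate_cascade())
```

The counts start small, then jump once enough early adopters make the norm visible, which is the tipping-point shape the theory describes.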

There is a whole set of communitarian ethics and communitarian ethical theory. People like Michael Sandel and Charles Taylor have responded against the individualistic conception of the self and its role in the social contract. And, you know, we can allow for an ethic of individual decision making, but at the same time we need to understand that what constitutes a self also includes the social background, or the cultural background, in which life choices gain importance and meaning. You know, a student responding to social pressure in a public school has a very different realization of self than a student responding to social pressure in a home school.

It's just different, and the differences in culture result in differences in self. So to follow a rule involves this cultural or social self, and not just a rational, individualist self. Or, in other words, ethics are, because of these cultural components of ourselves, in an important sense non-rational. You know, there isn't a calculation with the ethics coming out the other side.

This leads to the ethic of ubuntu, and it is based on a matter of one's relationality with others and with the environment, all interdependent parts. It is a recognition, again, that there is no self-sufficient person, that each person is inherently the product of, and related to, everything around them.

And so the perceived infallibility and supremacy of rationality in traditional social contract theory, especially as administered through machines, as we see in artificial intelligence, exacerbates marginalization. It forces us to neglect, or ignore, or push away those relations, or the impact of those relations, that are non-rational or can't be calculated to add to our personal or collective happiness or benefit.

Now, we could talk about all the different ways in which these relationships exacerbate marginalization: from the treatment of indigenous people in the settlement and development of countries such as Canada and Australia, to the marginalization of old people who, you know, no longer offer contributions to society or innovation or productivity, to the treatment of animals, to a lack of respect for the environment and a view of our relation with the environment as exploitative. You know, all of these things fail to recognize the interdependence of the parts.

And it's funny, I want to add a science fiction story where, long story short, there's a disease that threatens humanity. Everyone's going to die, but they find the solution to that disease in the genetic makeup of a homeless person who's on the verge of death, and the argument advanced in the science fiction story is that we are all collectively the holders of our genetics. You know, the human genome isn't held in one individual person; it's held across all of us.

And the important piece of genetic material might be in any one of us. We see a similar argument raised with respect to the rainforest: the huge diversity of life in the rainforest that is in danger of being lost, the potential for medical treatments to be found in the rainforest, and our dependence on that, you know, intertwining the fate of the rainforest and the fate of ourselves. And that might sound overly consequentialist, and it is kind of consequentialist, and it's kind of transactional.

But at the same time, if there is a connection between who we are and what the rainforest is, then there's a connection between our ethics vis-à-vis ourselves and our ethics vis-à-vis the rainforest. We can see that connection, and that connection has to inform ethical thinking, and it takes us into the realm of Peter Singer.

And the idea that, you know, when we talk about rights: humans have rights, animals have rights, whatever is sentient has rights, nature has rights. We have to think of all of these as intertwined and interconnected, and you can't give rights to one while at the same time oppressing the rights of the others.

Well, it sounds great, but there are criticisms, because of course there are criticisms. One of the criticisms of ubuntu is that, particularly with respect to cultural rights, it entrenches some of the existing and unchallenged discriminatory practices that are based on age, gender, and social standing. I had an image on the slide here of a scene from Tracks, which is an Australian movie that I like a lot, about a woman who takes some camels and crosses the Australian outback.

In so doing, she has to interact with Aboriginal peoples and Aboriginal cultures, and that forms a theme of the movie. Well, one of the expressions in the movie is that women do not handle the knife. She was about to cut an animal that she had caught for food, and then realized: no, women do not handle the knife.

The movie doesn't say this, but that happened to save her from being poisoned, because the animal had been poisoned, so you can see the reasoning for it. But at the same time, you can feel the tension, right, of having to accept that there is a rule here that says that I, as a woman, cannot handle a knife.

That doesn't seem really right. Again, with ubuntu, it enforces group solidarity at the expense of individual well-being. There's no end to the number of examples of that. You know, this sort of concern was the origin of my original groups-versus-networks range of arguments, which I began in 2004 and have continued since.

It tends to enforce conformity; it tends to enforce groupthink. You know, we all have to have the same values, the same ethical principles. When we are part of a group, the way we think is the way the group thinks, typically. I mean, not necessarily, but there are problems with that.

And, you know, we see this in a discussion like James Surowiecki's The Wisdom of Crowds, pointing to the dangers of groupthink and of not allowing for independent thought and independent points of view, even in group interactions. And then, finally, it reinforces and perpetuates existing imbalances and power relations. For example, think of indigenous people in Canada, and whether it's ethically right to run a pipeline through their lands.

Now, there are all kinds of ways we could decide on this, but one way we could decide is fairly straightforward: do the indigenous people want us to run a pipeline through their lands? Well, here's the problem. The band councils, which are elected, say yes; the elders, who are hereditary, say no.

How do we decide? If we follow the principle of ubuntu in this case, well, then we have to take what the elders say seriously, because they achieved their position as a result of long-standing tradition and ethics in these communities. And so we have to respect that. But it's really hard, you know, coming from an outside perspective, to accept that somebody who does not represent, you know, the will of the entire community can speak on behalf of the community.

On the other hand, these democratic band councils are something that was imposed on these nations by the white Western governments. There was the requirement that band councils be operated democratically, and that was considered a condition for self-governance. No easy answers here. And there seriously aren't easy answers here.

You know, I would think, you know, ethics also benefits from discretion and just plain niceness: you wouldn't ram a pipeline through their land if a significant representative, whether or not democratically elected, said no, please don't do this. But not everybody is governed by that principle; other people derive their ethics from other principles, and that forces them to argue for putting the pipeline through. These are hard questions. I don't think social contracts answer these questions, whether we define the social contract as something that we negotiate, or hypothetically negotiate, or come up with as a principle of each person's individual actions, as in an invisible hand, or as defined by community or socially determined values.

I don't think any of those stories gives us satisfactory answers to the questions that we come up with when we're looking at specific ethical dilemmas, and that's a problem. It's especially a problem because I've run out of ethical theories. We've looked at ethics as virtue, looked at ethics as duty, looked at ethics as determined by consequences, and looked at ethics by agreement, and there are significant gaping holes in all of these theories. So in the next video I'm going to look a little bit at metaethics: how we talk about these theories, how we arrive at them generally.

And then finally, excuse me (that tells me I should be finishing this video), I'll be looking at the end of ethics, or: where do we go from here? So, I hope you enjoyed this romp through social contract and ethics. Again, I've left out far more than I've included here, but the purpose, as usual, isn't to give you a bunch of stuff to remember; the purpose is to have you thinking about these issues along with me as I work through them, and perhaps coming up with new ideas, or new thoughts, or new ways of looking at some of these issues than you may have had in the past.

So that's it for now. My voice is going hoarse again, another sign I've been going too long. Thanks a lot. I'm Stephen Downes.

Metaethics

Transcript of Metaethics

Unedited audio transcript from Google Recorder.

Hi everyone. I'm Stephen Downes. Welcome to another edition of Ethics, Analytics, and the Duty of Care. Today we're going to be looking at the study of metaethics, or as my old professor used to say, "metarethics." What metaethics is, is the study of ethical reasoning itself: questioning where ethical theories come from, how we evaluate them, and how we apply them.

Let me set this up by putting it into a bit of context, and by posing a question. And the question is: does might make right? Well, Wikipedia says, "might makes right" or "might is right" is an aphorism... let's try that again: "might makes right," or "might is right," is an aphorism on the origin of morality with both descriptive and prescriptive senses.

And it's, of course, the idea that, you know, ethics is whatever the strongest person in the room says it is, or the biggest bully on the block, or the most powerful nation in the global community. And it's enough to make you a bit skeptical about the concept of ethics generally, isn't it?

Because the idea of might making right kind of removes the whole aspect of, you know, moral reasoning and thinking about morality entirely. In effect, and I'm horribly paraphrasing Max Weber here, so forgive me, those of you who have studied his work, we could say something like this: the study of morality really is the study of power. That, ultimately, is what ethics gets down to.

We don't want to lose sight of that fact. And I'm going to call it a fact, because I think we can't really ignore that aspect of ethics when we're talking about ethics. Of course, the modern equivalent is the new golden rule, and the new golden rule is this: whoever has the gold makes the rules.

And again, it's like another sort of power-ethic position, right? I mean, here the might isn't expressed in terms of force or military power, but rather in the persuasive capacity of a lot of money. And we see, in the cartoon there on the slide, some people who belong to the 99%, who've been laid off, who are losing their homes, etc.

The politician is getting 99% of their campaign dollars from the top one percent, which is the reason, says the third person, or the fourth person, why the 99% have zero chance. You might think it's kind of a cynical approach to ethics. You know, I kind of do, but at the same time, we actually see this instantiated pretty much on an everyday basis, baldly and in front of us.

You've probably heard the expression "voting with your dollars," and, you know, things like boycotts or selective purchasing, or even consumer activism, as a way to make a point, generally an ethical point. But all that does is concede that whoever has the most dollars is going to get the most votes.

And in fact, with the 1% having most of the money, not just more than everyone else, but most of the money, that leaves the rest of us effectively without a vote. So, you know, there's a reality to the new golden rule. Well, is that the case? Should that be the case?

There are different ways we can approach ethics when confronted with the reality of ethics as it exists in the world. I'm going to describe three of them here, which I've taken from a web page, but, you know, we could probably come up with a few more. It gives you a sense of the range of thinking that's possible.

So, one type of ethics is descriptive ethics. Quoting from the Learn Religions web page: "people tend to make decisions which bring pleasure and avoid pain." That's an example of descriptive ethics. What it's doing is taking a look at the world and describing what ethics actually are, as seen in the actions of people in the world.

So when we express a principle like the new golden rule, whoever has the gold makes the rules, we could be describing the actual ethical state of affairs in the world. But that doesn't end the discussion. There's also normative ethics, and by "normative" we don't mean something that is normal; what we mean is something that establishes what should be a norm.

So the example here, from the same source: "the moral decision is that which enhances well-being and limits suffering." If we go out there in the world, we'll see a lot of people making decisions that they call ethical that don't address these at all. But what we're saying here is that what they should be basing their ethics on is enhancing well-being and limiting suffering, okay?

And then there's a third kind of ethics, which we could call analytic ethics, and it's more about understanding what the meaning of ethical talk is, and breaking it down. You know, if we express an ethical principle, what does that mean? How does that cash out in terms of actual practice, etc.?

So the example here is: "morality is simply a system for helping humans stay happy and alive." We're not talking about the truth or falsity of it, we're not saying we should or shouldn't, but we're breaking down what these statements are and expressing what they are in terms of meaning.

So again, these three don't exhaust all possible types of ethics; we could probably come up with a few more types, but they give you a sense that when we start talking ethics, it's not all going to be the same thing. So let's break that down more. The study of metaethics, and I've combined it into one word in the title, but a lot of the time you'll see it with a hyphen just to make the two words clearer, is essentially the study of what grounds an ethical argument. We're going to come back to the precise wording of that in a little bit.

And, you know, when we look at where we are in the context of the current inquiry, we've looked at all the uses of learning analytics, we've looked at the ethical issues that have been raised, all of the ethical codes, and the ethical theories that may ground those codes.

We don't really have a good mapping yet, and that's actually an exercise we could and probably should undertake, assuming it's even possible, and I'm not sure that it is. But then we need to turn around and ask: what is the basis, or the fundamental ground, for one approach or another? You know, because we need to choose between utilitarianism, or deontic ethics, or a social contract model of some sort, and on what basis do we make those choices?

I mean, it's not like we're just going to put some paper slips in a hat and pick one, and go, oh yeah, that one. No, we need to have some basis for making these decisions. There isn't just one ethics where we can say: yeah, this is it, you know, after all of our study we've solved it.

There are many different approaches and many different flavors, and if nothing else in this module, you should have seen that. So, okay, let's look at some of these breakdowns. One flavor of ethics can be called cognitivist ethics, based on the principle of cognitivism. Now, in this context, cognitivism is, and I'm quoting here, "the idea that moral statements have the capability of being objectively true or false, since they are descriptive of some external reality in the world."

Sorry about the typo on the slide there. So there's two things that are happening here, well, three things actually, if we think about it. First of all, we have moral statements, propositions if you will, like, for example, "killing people is wrong," and these propositions are semantical in nature.

What I mean by that is, as the other line says, they have the capacity of being true or false. In other words, we could say it is true that killing people is wrong, or we could say it is false that killing people is wrong, and we'll have the usual truth conditions come into play.

But the third thing about cognitivism is that the truth or falsity of these statements is established in some way. Now, they've glossed over a huge area of epistemology by saying "descriptive of some external reality in the world." For the semantics to actually apply, we need to say that they correspond to some external reality in the world, or are consistent with descriptions of the external reality in the world, or something like that.

They refer to some external reality in the world; there are different semantics for establishing the truth or falsity of statements. But we're not going to worry about that. The idea here is that we're saying moral statements can be true or false. The opposite of that is non-cognitivism, tripping over that word there a bit, which views moral discourse as a way to express attitudes towards certain actions. And I might broaden it and say something like this: if we have moral statements, then non-cognitivism is the attribution of other propositional attitudes to those moral statements. So let's stick with the statement "killing humans is wrong."

A non-cognitivist moral discourse might be "I feel that in my bones" or "the thought disgusts me," and you see how that's much different from saying that the statement is true, or the statement is false. And it removes us from having to appeal to some kind of semantics in order to ground our moral discourse.

The classic example, and we'll refer to him again, is David Hume, who assigns moral distinctions to affect, or emotional appeal. Now, I want to be clear here: when we say cognitivism, we're not really meaning cognitivism in exactly the same sense that it's meant in theories of teaching and learning.

Now, I've used an example from a BCcampus publication, Teaching in a Digital Age, which actually looks at the cognitive domain. And, you know, we can think of it as, say, part of Bloom's taxonomy or whatever, and that's not exactly what's meant. But consider all the tools that we see in the cognitive domain: evaluation, synthesis, analysis, application, etc., along with even some of the minor concepts like data, information, facts, concepts, problems, etc.

All of those apply to cognitivism, and I don't think it's a stretch to say that cognitivism is probably the dominant approach in ethical thinking today. I don't think it's a stretch at all, because cognitivism is the dominant approach to pretty much everything today. Within cognitivism, though not necessarily only within cognitivism, and, you know, the diagram back at the beginning here kind of represents it as though it belongs with cognitivism, is the idea of realism.

Now, I think realism could also be a non-cognitivist perspective, but, you know, now we're not really looking at questions of semantics, so it gets a bit tricky. The idea of realism is that moral ideas are true or false independently of what we think. In other words, there is some fact of morality, whatever it is.

So, for example, we look at a moral proposition such as "it is morally wrong to torture an innocent child for fun." A moral realist would say this is objectively and independently true; it doesn't matter what we think about it, it is true. And on the side of the slide here, I've got an expression of what might be called robust moral realism.

So that's the idea that, well, there are three conditions. First of all, the moral proposition is irreducibly normative; it's not reduced to other facts about the world, for example, the way things are in nature or the way things are in society. The moral proposition is itself a basic, fundamental truth. And it's also objective.

The idea here in robust moral realism is that the truth of the proposition is independent of our existing attitudes. Whether or not we believe that it's morally wrong to torture an innocent child for fun, it is independently morally wrong to torture an innocent child for fun. And then the third thing is that it's optimistic.

It actually kind of lines up with what our deepest moral beliefs are. And in fact, we could say that perhaps through moral intuition, or moral apprehension, we are seeing into these deep truths directly, much the way we know what a triangle is just by thinking what a triangle is, or the way we know what the properties of space and time must be just by thinking about the properties of space and time.

Similarly, just by thinking of the concept of morality, we can see directly that this proposition is objectively true. And, you know, that really accords with a lot of people's intuitive thinking about morality: it doesn't matter what you say, torturing children for fun is wrong, and there's nothing that could make that right.

Now, we can come up with slightly less stringent definitions of moral realism. For example, we could drop the requirement that we actually know the principle: there could be moral principles that are objectively true out there in the world that we don't know about. Or we could say something like, these truths are true by definition, as opposed to true by fact, or whatever, different flavors of this idea. But it is the idea that there are fundamental moral truths. This leads sometimes to an idea of moral universalism. Well, I've talked about this a few times in this course, and I'm going to come back to it again.

Moral universalism is the idea that there is, well, a universal set of moral principles, or, you know, to be a little less precise, perhaps a universal morality. Again, there are two ways to approach this, and we could even make it three ways if we added analytic, but really we have descriptive moral universalism.

And that is the idea that there is one universal morality shared by all cultures. Now, I personally think that that proposition is descriptively false. And here we have a diagram of some examinations of different types of cultures, and of their attitudes toward moral virtues. If there were moral universalism, descriptively, then all those bars would be the same length, and we can just see by looking at the diagram.

They're not all the same length. So there are differences in morality between cultures, and some quite significant ones, you know, especially with respect to property, especially with respect to deference to authority, or reciprocity, etc. Prescriptive moral universalism, on the other hand, would say: even if there isn't actually a universal morality, there should be. We can base that on several grounds.

We can base it on, as I said, moral realism, but we can also base it on something like a moral pragmatism: you know, life would be a lot easier if we all had the same beliefs. We see that kind of thinking expressed a lot in the technical world, right?

The way to move forward, it goes, is for all of us to have a shared, common standard of whatever. And we see this working its way through to ethical discussions, where people say, and mean it quite literally, that we should have one ethical standard that applies to all instances of AI or analytics in learning.

And not just one shared standard, but a shared vocabulary, a set of shared methods, a set of shared tests, etc. That's moral universalism.

Against universalism is relativism. And again, there are different flavors of relativism, but you get the idea: relativism is the idea that, one way or another, there are no universal moral truths, that there are ways in which ethics and morality can vary. And the three types here are three ways we can think of them varying.

One of them is descriptive, and I actually happen to adhere to this: that we can observe different moral norms, we can observe different systems of ethics, out there in the world, and especially from a historical perspective. We look back at what people believed, you know, 20 years ago, a hundred years ago, a thousand years ago, and that certainly seems to be true.

But also, I've observed, you know, that the moral basis for ethics from one society to the next is quite different. There's also meta-ethical relativism, and that's the idea that there are no universal or extra-cultural moral truths. So it kind of allows for a moral universality within a culture.

So everybody living in the culture is governed under the universal morality, but it doesn't extend beyond it. And then normative relativism is the practical application of relativism in the world, and essentially it's the affirmation of the expression "live and let live," right? We have different moral principles, and nothing is going to be gained by going and having a fight over them.

You follow your ethics, I'll follow mine. It's a view that bothers a lot of people, and I find that interesting; it's an interesting characteristic of ethics. And I don't really have a slide for this, but, you know, we can think of ethics as something inherently personal, as a way of guiding our own life.

But people very rarely stop there. They believe that, whatever their principle of ethics is, it's something that should apply to the community around them. And the size of that community can vary: from a set of ethics held within a family, to a set of ethics held within a village or a town, to a set of ethics held within a country, or a religion, or a culture, to maybe even a global set of ethics.

So people resist the idea of live and let live, and find that a violation of their own personal ethics is a good reason to take action, whether advocacy or force, on someone else. And that often returns us back to the principle of, you know, ethics according to whoever is the most powerful. Now, as soon as you begin to apply your ethics to other people, in anything more than just a "you know, I really think you should avoid that," then we've moved ethics from one domain to a different domain. We've moved it, for example, from the domain of rationality to the domain of politics, and sometimes even to the domain of geopolitics. And that's something we need to be aware of. It's one thing to be justified in our own morality.

It's quite another to be justified in extending that morality to other people, and that's one of the bases behind relativism: the idea that there is no ground for asserting, either descriptively or normatively, that other people should follow the same ethic that you do. In fact, we can take it even further, as people have, and talk about moral skepticism.

Moral skepticism is more of an epistemological position; that is, it's more based in the philosophy of knowledge than it is an ethical position. But ultimately, it's the assertion that we cannot know about morality one way or another. So, again, you know, it's philosophy, so we're going to break it down into a whole bunch of categories, right?

So we could actually make it an epistemological argument. We could argue that there is no moral knowledge, that we can't acquire it, that there are no justified moral beliefs. We may think we can justify moral beliefs, but consider applying the skeptical argument to them: imagining, for example, that the opposite of our belief is true and finding there's no contradiction.

We can't justify our moral beliefs. Similarly, we could be dogmatically skeptical and simply assert that, you know, we can't be certain about moral knowledge or justified moral beliefs. We can even go beyond that and talk about the aptness or the relevance of moral discourse, whether it even makes sense to apply moral judgments in some cases.

And I'll give you an example. This is just my interpretation of moral aptness, right? But, you know, I have a badge here. I could drop it, or I could continue to hold it. In fact, I dropped it. So was that right or wrong? Well, it doesn't even make sense to ask the question, does it?

And similarly, in a lot of our actions, and maybe all of our actions, it doesn't really make sense to ask: was it right? Was it wrong? We can ask about other things, like, say, should we punish that act? And clearly no, right?

I mean, I don't think it was a punishable offense to drop my badge. It might be a punishable offense for me to give my badge to a Russian agent, but that's a different story. So even if there's a moral reality, you know, it just doesn't enter into our judgments about the aptness of moral statements and so on, right? There are many ways we can be skeptical about ethics and morality.

There are many ways, in effect probably an infinite number of ways, we can undermine our knowledge of the truth of a particular moral statement. You know, it's hard enough to be sure about a simple truth like "here is a pen," and there have been long, involved arguments to the effect that I cannot know that "here is a pen" is true.

And that's with respect to a pen that I can actually see and hold and, you know, open and write things with, etc., and I guess it's not really a pen, but anyway. Moral truths are a lot more ethereal than pens, and, you know, it's not like we can hold one in our hands very easily.

So there are really good grounds for moral skepticism. And even more to the point, consider the way ethics is often set up, and indeed even the way it's been set up in this course. I did that a little bit deliberately, because that is the standard way ethics is presented.

It's presented as some sort of moral choice. I was going to use this image, but you all know the picture, right? The person with the good and the evil on their shoulders, and they have to choose between one of them. Ethics is always presented to us in the form of making some kind of decision, making the right choice, picking the morally good action.

Even developing the morally good virtue, or signing a morally valuable contract. But we could ask whether we even make moral decisions, whether we can, whether it's practical, whether morality and ethics even present themselves as the sort of things where we make choices. All right, I don't know why we got this buffet model of ethics.

It's almost like the buffet model of educational theories, right? We sit back and say, well, I'll pick this theory, right? And, you know, first-year philosophy students are famous for that. They'll take their ethics course, and at the end of it they'll write a paper in which they say: out of all the possibilities, I will choose utilitarianism, because such-and-such.

And having made that choice, now they're committed for the rest of their life, because of the fallacy of sunk cost. They'll forever be defending the decision about what ethical theory to follow that they made when they were 18. But it's not so simple as that. A lot of these choices are imposed on us.

A lot of these choices are Hobson's choices, where there are no good, right answers. A lot of these choices are choices we may think we're making, but we've actually somehow been influenced into making. You know, we often participate in a practice of what can be called moral rationalization. And here's a little study that I found.

It's from one of the PLOS journals. What they did is they took a bunch of moral statements and then revised them, so that each statement expressed the opposite of what it originally said. And what happened was, when they actually presented these statements to people and asked them to indicate whether they agreed or disagreed with each statement, they found that people were agreeing with something that was the exact opposite of something they had agreed to not 20 minutes before.

And in the discussions of these choices, they would offer elaborate arguments in favor of the choices that they had made, even though these arguments were contradictory to each other. So the wording of these moral choices, the way these things are presented to you, can influence whether or not you believe you support them.

And I think, you know, survey researchers know that that's true in general, and it's also true in ethics, and it's especially true in ethics when we start analyzing the meanings of moral words themselves, right? Are you for freedom, or are you against anarchy? You know, I mean, we can use different words to express different positions, or use different words to express the same position, wording it so that you support it in one case and oppose it in another: a well-known phenomenon.

That doesn't mean people are utterly clueless about ethics and morality. In fact, I don't think that's true at all. I think even the young know about ethics and morality; just ask a child if something was fair and they'll tell you so. But I think that the idea of presenting ethics as choices, putting it into words or principles and then saying "pick one," leads us to a really bad understanding of what ethics actually is.

And of what people actually believe about ethics. Which kind of takes us over to the non-cognitivist approach to ethics and morality. Now, as I said earlier, non-cognitivism expresses or describes attitudes to moral statements. Again, these could be descriptive attitudes, right? People find rotting flesh disgusting. Or they could be normative attitudes.

People should find rotting flesh disgusting. Non-cognitivist approaches break down into several categories, three of which follow. First, emotivism: they're just expressing a moral judgment, "it's wrong." They're not saying that the statement "eating cats is wrong" is true or false. They're just saying eating cats is wrong, nothing more than that.

Or it could be prescriptivism: you shouldn't eat cats, right? I'm not saying it's right or wrong; I'm not saying that it's true or false that eating cats is wrong. I'm just saying you shouldn't do it. You see the distinction there, right? Or it could be expressivism.

I find the idea of eating cats distasteful. So here, instead of expressing a semantic value, true or false, I'm expressing a feeling or a sentiment, such as a feeling of disgust. So these are non-cognitivist approaches to morality. And what's important here is that these aren't the sort of things that respond well to arguments, right?

How are you going to argue about non-cognitivism? You feel it's disgusting; well, there you are. And what that does, to a significant degree, is make moral and ethical judgments subjective. And I think a lot of people believe this at the same time as they believe in moral universalism. So go figure.

What we mean here is that ethical truth, whatever it is, depends on a perspective or point of view. Now, there are many different ways of describing a perspective or point of view. It doesn't just mean from the point of view of I, the subject, right? It could be individually subjective.

And that's the kind of subjectivism we normally think of in North America. But it could also be culturally subjective. So a culture might not have an opinion about whether something is right or wrong, properly so-called, certainly not one it has argued about; but in that culture, it's just wrong, you know. And we can all think of specific ethical values in different cultures that seem to be culturally subjective.

You know, killing cows in India, for example, is wrong, or typically taken to be wrong, right? That's an ethically, or culturally, subjectivist principle of morality. Here in North America, it's big business. See the difference? We can also have subjectivism in the form of an ideal observer. Now, that's what Rawls did when he came up with his theory of justice.

And in particular, the original position that we described in the previous video. Instead of actually taking real existing subjects, we take an idealized version of a subject and say, well, from their point of view, what would it be? And a lot of ethical theorizing works that way; pretty much any thought experiment will work that way.

Philippa Foot's famous trolley problem kind of puts us in the position of an ideal observer. What would an ideal observer do if a trolley was coming along and, by pulling the lever, they could kill one person but in so doing save five; whereas by not pulling the lever, they condemn five people to die, but the one person they would have killed stays alive?

Now, that's an ideal observer theory almost by definition, because it's a counterfactual; nobody faces that particular problem. But it's subject to some of the weaknesses of the ideal observer theory, and that is: how do you know what the ideal observer would do? And, you know, a lot of people accuse Rawls of this. He sets up these ideal observers who just happen to believe the same ethical theory he happens to believe in.

How handy is that? And then, finally, the subject in question could be God, or gods, depending on your religion, or even, you know, the reality behind reality, as in Daoist principles: something like a divine command theory. And that raises a question,

Certainly for religious people, the question is: is something moral because God decided that it is, or because it independently is? The hypothetical question is: if there were no God, would it still be wrong to kill another person? And if you don't like that phrasing, put it this way. Do you refrain from killing people

Only because God says it's wrong? That feels like, you know, kind of a not-very-good foundation for a morality, you know? I mean, indeed, consider the whole concept of: you do it because God told you to do it, and if you don't do what God tells you to do, you're going to go to hell.

It sounds like, you know, a religious version of egoism. And, you know, if you ask people about egoism, they'll tell you, no, that's very wrong, it's subjective. And yet divine command theory is subjective in exactly the same way. So there are some interesting problems here that can be posed from the perspective of subjectivism. We come back to Hume as the basis for a lot of our responses to these questions, and Hume takes an explicitly anti-rationalist approach to ethics.

That is to say, an approach that is not based on argument and reason and truth and falsity. And there are a couple of principles here. Well, there's Hume's entire complex argument, out of which I've pulled a couple of principles. One of them is that reason alone cannot persuade us to act.

Hume famously writes that "reason is, and ought only to be, the slave of the passions." So we have both a descriptive and a normative force in that sentence. And, you know, descriptively, I think there's a lot to be said for it. You can provide a completely, 100% airtight argument that something is wrong, and people will go and do it anyway.

Because they feel like it. And that's what Hume observed. He also observed that, again, people have a sense of right and wrong even though they have not reasoned about it or thought about it. Now, this doesn't necessarily apply to animals, although, you know, we could argue about that, but it does seem to apply to children, as I pointed out.

They have this sense of fairness but haven't actually sat down and worked out the reasons for it, yet they're certainly making these ethical judgments. And then here's the second argument: "Truth is disputable; not taste. What exists in the nature of things is the standard of our judgment; what each man feels within himself is the standard of sentiment."

So if we're going to make statements about cause and effect, and natural laws, and, you know, what exists and what doesn't, then we look out into the world. But that doesn't tell us about morality. The way we feel, the way we respond, perhaps when our mirror neurons flare up, or our empathy kicks in, or our revulsion kicks in.

That is the standard of ethics and morality: the standard of sentiment. And Hume argues that we actually do have this sentiment or sense of morality. Just like we have a sense of sight or a sense of balance, we have a sense of rightness or wrongness, and, you know, it's no more reliable and no less reliable than any of our other senses.

But it's that sense that is the basis for our morality, our ethics. And it is that sense, plus the combination of what we know to be true out there in the world, that provides us with the reason to act or not to act. I think there's a lot of sense to be made of that.

And certainly a good part of the rest of this course is going to be devoted to drawing out many of the implications of that idea. When we think about moral sentiment, we can think about investigating moral sentiment: investigating not just what the sentiments are, but how they change, and not just the sentiments of one person like myself or yourself, but of people in general. And I've got a link to a paper here that does this.

It basically does text-based analyses to examine changes of moral sentiment in the public. These are, you know, not a constant thing, and why would we expect that they are? So we can make queries of the text; we can find positive sentiments or negative sentiments being expressed in the text.

And then from that we can find specific statements about things like care or harm, fairness or cheating, purity or degradation, etc. Now, how closely these words correspond to the actual human feeling, that is, the sentiment, is hard to say, right? There's the question of the accuracy or the reliability of sentiment detection. But remember, all the way back in module 1, we saw that we can do sentiment detection with AI.
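To make the idea of text-based moral sentiment detection concrete, here is a minimal sketch. It is a hypothetical toy, not the method of the paper just mentioned: the tiny word lists and the `moral_sentiment` function are illustrative inventions, standing in for a validated resource such as a full moral foundations lexicon.

```python
# Toy lexicon-based moral sentiment scoring. The word lists below are
# made-up stand-ins for a real moral foundations lexicon; they only
# illustrate the counting approach, not any published method.
from collections import Counter
import re

# Three moral-foundation pairs: (virtue words, vice words).
FOUNDATIONS = {
    "care/harm":          ({"care", "protect", "kind"}, {"harm", "hurt", "cruel"}),
    "fairness/cheating":  ({"fair", "equal", "just"},   {"cheat", "unfair", "bias"}),
    "purity/degradation": ({"pure", "sacred", "clean"}, {"filthy", "degrade", "disgust"}),
}

def moral_sentiment(text: str) -> dict:
    """Return a virtue-minus-vice word count for each foundation in `text`."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    scores = {}
    for name, (virtue, vice) in FOUNDATIONS.items():
        pos = sum(words[w] for w in virtue)   # virtue-word occurrences
        neg = sum(words[w] for w in vice)     # vice-word occurrences
        scores[name] = pos - neg              # >0 leans virtue, <0 leans vice
    return scores

print(moral_sentiment("It is cruel and unfair to harm those we ought to protect."))
# → {'care/harm': -1, 'fairness/cheating': -1, 'purity/degradation': 0}
```

A real analysis would use far larger lexicons or a trained classifier, and would have to cope with negation, sarcasm, and context, which is exactly where the questions about the accuracy and reliability of sentiment detection come in.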

So the question is: can we get at what human sentiments are through some kind of process using AI, or using some sort of analysis? And I think that's an important question. Certainly people will say that it can be done, and my evidence for that is that here is a case where someone has said it can be done.

And I think the argument can be made that sentiment is the basis for morality, and therefore what we find as actual sentiments are the bases for actual statements of morality out there in the world. But it's not so simple as that.

And I'm going to throw in, on my last slide, the final twist in our story, and that is: who owns this? And by "who owns this," I don't mean it in an economic sense, although there's certainly a sense in which sentiment data is data that can be commodified and monetized. But I mean something a bit deeper than that.

Who gets to say what someone's sentiment is? Now, we would think that, in the first instance, each person gets to say what their own sentiment is. But we already know that not everybody has the same capacity to make a statement. We already know that the statements individuals make need to be interpreted in some way.

And we already know that the statements people make can actually be influenced by the questions that they're asked, or the problems that are posed, or just the context in which they find themselves. And we've seen, over the years, different ways of, if you will, mining the sentiments of people and converting them into something like an ethics. We'll call it "ethics mining."

I don't know if anyone's coined that term before, but it works here. So we've seen this explicitly in the philosophy of science, where there are certain scientific virtues, like simplicity and parsimony, right? Well, there's no reason why, say, simplicity should be a reason to choose one scientific principle over another, because the fact that a principle is simple in no way bears on whether or not it's true.

The world just might be really complicated. But scientists prefer simple principles, so much so that simplicity is a virtue when it comes to selecting scientific explanations. We really love things like E = mc squared; things that take four, five, ten pages to write, not so much. There are other properties of models (we won't get into the details here), and properties of the ways in which we create models, that have basically given us an ethics of science.

And indeed, when I sit on research ethics boards, I'm not thinking just about the moral principles of a scientific inquiry. I'm also asking whether the inquiry follows the scientific virtues: whether it's a useful inquiry, whether it's asking a real question, whether its standards of evidence are of a certain quality, etc. All of these play into the ethics of science, and have pretty much come to define the ethics of science.

Similarly with business ethics. Business ethics isn't concerned about ethics; business ethics is concerned about, quote, real concerns and real-world problems of the vast majority of managers p