Ethics and Analytics: Getting a Feel for the Subject


(Unedited auto-transcription by Google)

Hi everyone. I'm Stephen Downes. Welcome to the latest episode in Ethics, Analytics, and the Duty of Care. Today we're looking at ethics and analytics, and the purpose of this video is to give us a feel for the subject. The way we'll do this, to begin with, is to look at a few examples of where ethics and technology have clashed.

Here's one example. Consider this case: a patient is required to see a healthcare robot instead of a human. What's interesting about this case is that it's not simply a choice made by the patient; rather, it's a requirement. So the element of choice is removed: they can either see the healthcare robot or they see nobody.

The ethical question that's raised here, I think, is a question of access. The same sort of thing can happen in education. If you look up robot tutors on Google, you will see dozens and dozens of results. There's even one case, the case of Jill Watson, where students were taught by robot tutors without being told that they were being taught by robot tutors, and that again raises a question of ethics, with respect to their choice and with respect to how much information they should get.

Here's another case. A little while back, something called Project Nightingale was revealed, in which Google was accused of secretly gathering personal health records. This is reminiscent of the Cambridge Analytica scandal at Facebook, where again records were secretly gathered and used for research purposes.

Now, Google also offers a classroom application, and it's relevant to ask: are they secretly gathering classroom records? Are they not-so-secretly gathering classroom records? What are the ethical implications of this? Should they tell people? Should they do it at all?

Here's another example: analytics data is being used to adjust health insurance rates. The insurance company looks at what you're doing online, maybe watches your videos, perhaps you're skydiving or bungee jumping, and then adjusts your health insurance according to what they see.

Now, in a country like Canada, where everybody receives health insurance, this isn't the case, because we don't have health insurance rates. But we do have tuition and other costs for education, and it's no stretch to imagine companies adjusting education, whether it's the cost of education, access to education, or any other factor related to education, based on the analytics data they can get from trolling through social media sites.

And again, this raises ethical questions: what data are suitable for use for educational purposes?

Here's another one. This involves Facebook again: the company experimented with the use of news feeds and other data to actually alter the emotional states of users. When this came out, of course, it was a scandal. But what if we knew ahead of time that companies were doing this?

And what if, ahead of time, we were able to identify beneficial purposes for this? We can easily imagine, for example, experiments on educational data feeds allowing researchers to alter or adjust the emotional states of learners so that they're more receptive to the education they're receiving. Is this right? Is this wrong? Under what conditions would we countenance doing such a thing?

Here's something more down to earth: a cafe and deli using facial recognition software to bill its customers. There are a number of stories like this in the media: stores where you no longer have to go through a checkout; they just use cameras, watch what you take off the shelf and put in your bag, and then charge you for it, based on things like facial recognition.

School districts have been using facial recognition for some time now. The most ostensible purpose is security: the US has a problem with school shootings, and as a result they're screening everybody who comes into the school. Is this ethical?

It's certainly a good purpose, right? Charging people for what they take, preventing violence. But it's facial recognition software that lets you do this. What about facial recognition software used by an examination company, say Proctorio, to proctor exams? Now the ethics are a bit different, aren't they?

And sometimes it's not the use of the technology but the refusal to use the technology. In many cases physicians, perhaps for religious reasons, have refused to apply certain technologies on the grounds of ethics, some of them even life-saving technologies. We've certainly heard of cases where physicians don't want to perform certain operations, don't want to perform blood transfusions, don't want to perform transplants on people who have already had COVID.

Educators may also refuse to use learning analytics for similar reasons. If an educator believes, for example, that video proctoring is ethically wrong, they may refuse to use it. Is the educator ethically right in such a case?

Well, let's take that another step further. Some technology companies are refusing to provide services to customers that they believe are ethically wrong. That was the case with Google Cloud services, for example: they may decline a contract to an abusive government or agency. They may put their finger on the scale, if you will, to, for example, equalize error rates across protected classes of people.
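
To make that last idea concrete, here is a minimal sketch, in Python, of one way error rates could be equalized across groups. Everything here, the group_thresholds function, the target_fpr parameter, the threshold-per-group approach, is an illustrative assumption, not a description of what Google or any other provider actually does.

    import math
    from collections import defaultdict

    def group_thresholds(scores, labels, groups, target_fpr=0.05):
        """Pick a per-group score threshold so that flagging items with
        score > threshold yields a false positive rate of about target_fpr
        within each group. (Illustrative sketch only.)"""
        by_group = defaultdict(list)
        for score, label, group in zip(scores, labels, groups):
            by_group[group].append((score, label))

        thresholds = {}
        for group, pairs in by_group.items():
            # Scores of the negative ("innocent") examples, lowest first.
            negatives = sorted(score for score, label in pairs if label == 0)
            if not negatives:
                thresholds[group] = float("inf")  # nothing to calibrate on
                continue
            # Leave at most a target_fpr fraction of negatives above the cut.
            cut = max(min(math.ceil(len(negatives) * (1 - target_fpr)),
                          len(negatives)) - 1, 0)
            thresholds[group] = negatives[cut]
        return thresholds

On this sketch, innocent members of every group end up flagged at roughly the same rate, whatever their group's score distribution looks like; that is one way of putting a finger on the scale.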

There are all kinds of practices that a company may adopt based on the ethics of actions undertaken by other people, specifically their clients. Are the companies ethically entitled to do this? Is it up to the company to decide on the ethics of a certain action? I recall in the most recent federal election, we had a political candidate here in Canada who was flagged by Twitter for posting what Twitter thought was a misleading or misrepresentative video. It was an advertisement, and it was saying that the other candidate held a certain position; Twitter said no, they did not, and they flagged the video. It's very arguable that the candidate did hold that position, and that Twitter was wrong.

But let's suppose that they were wrong. Is it up to Twitter, an American company, to apply ethical standards to politicians running in a Canadian election? It's a good question. So what do all these cases have in common?

There are a number of things that they all have in common, and these will define the scope of our study. First of all, and most obviously, all of these are cases where a company, individual, government, or institution uses advanced computing applications and learning analytics, which we'll call simply "analytics" for brevity.

These technologies may vary; we'll talk in a later video about their types and applications. But that's what they all involve: they're instances of this intersection between advanced computing technology and ethics, and they raise similar questions. In each case the specific question is different, but the questions overlap in the sense of asking how we should address these practices, whether these practices are ethically acceptable, what would constitute ethical acceptability in educational circumstances and in wider circumstances, and on what basis we should decide one way or another.

These cases also aren't simply cases of individual ethics. They aren't simply cases of whether this company is doing the right thing or the wrong thing, or whether that person is doing the right thing or the wrong thing. These are all cases where the use of analytics, artificial intelligence, data gathering, and the rest of the infrastructure that supports all of this may be pushing society as a whole in a direction that we're uncomfortable with.

We sometimes label this, for example, the surveillance society, the data society, or the information society, and these terms suggest that the fabric of society is changing as a result of the ethical decisions, or the unethical decisions, that we are making with respect to this new technology.

There's also the sense in which there may be misuse or deliberate harm caused by the people who use these technologies. Sacha Baron Cohen, whom many people know better as Borat, argued recently that the platforms created by Facebook, Google, Twitter, and others constitute the greatest propaganda machine in history. Recently, in testimony before a committee, a Facebook whistleblower argued that it's not the informational content of the disinformation that Facebook produces that's the problem; rather, it's the algorithm itself, the use of these particular technologies for the purpose of nothing more than making money, and that there's a structural problem here. Either way, we're looking at not simply ethical lapses, but deliberate harms that are being inflicted on society, either by individuals, by companies, or by the overall structure of the system that we've put together collectively.

And finally, these technologies are a lot like people, and I mean that in the most literal sense. These technologies are either already able to perform tasks that humans have traditionally performed (we'll look at some of those in future segments of this course) or they're potentially able to do so. In the case of robot tutors, arguably we're not there yet, but we can imagine, based on what we've seen so far, that robot tutors could replace teachers.

It's conceivable. It might be technically impossible, I don't know, but it's conceivable. And the ethics of robots, if you will, aren't the same as the ethics of humans. One of the consequences of using analytics or artificial intelligence is that they may make their ethical decisions differently than we do. A driver might swerve to avoid a deer on the road; a machine might not. A human teacher might find grounds for forgiveness for a student who, for some reason, skipped a question on their test; a machine might not. And so the replacement of humans by intelligent machines poses a whole class of ethical questions, and they break, I would say, into two categories.

One is: what are the ethics that humans apply in these cases? And second: what should the ethics that the machines use be, and how will we create them, if machines are replacing humans? And of course there's a third, overall question: should machines do the ethical tasks that humans have done in the past?

So that's a broad suite of some of the issues that are involved in this course. It's by no means comprehensive; we hope to be very comprehensive in module three, on the issues, but the idea here is to give us a sense of the scope and the scale of the problem that we're wrestling with.

So I hope this causes you to think about it, and I hope looking at some of these issues in particular gives rise, perhaps, to new thoughts about the ethics of using analytics in these particular situations. That's it for this video; I'm going to stop it now. In fact, this is the end of this video, and if you're watching, here's what's going to happen.

I'm going to turn everything off, set up the next video, and then we'll do the next video. What you should do is give me a couple of minutes and then reload the activity center; just refresh the screen, and the new information should show up.

So, thanks a lot and see you in just a few minutes. 
