Ethical Practices: Part One



Hi everyone. I'm Stephen Downes. Welcome back to Ethics, Analytics and the Duty of Care. We're in module 8, the final module of the course, looking at ethical practices in learning analytics, and this is a core part of the module as we work toward the final conclusion of the course. We're going to start what is basically a five-part series.

Sorry about that: a five-part series on ethical practices. I could have done one big long presentation, but I thought I'd break it down into five parts and try to keep each one reasonably short, so that you don't go numb listening to them. So, okay, let's take stock of what we're up to in this video and in the rest of the series.

We're going to begin by looking at the concept of managing for risk, which motivates the need for a focus on ethical practices rather than ethical principles. Then I'm going to outline some simple practices, or more accurately, some simple mechanisms for describing ethical practices. After that, I'm going to do a deep dive into some frameworks for ethical practices.

As you can see there on the slide, I've got four overall types of frameworks that I'll be considering. Then, finally, I'll be doing a bit on the activation of ethical practices. So, when we were looking at regulations, we saw that regulation is essentially an approach based on managing for risk; that was the overall purpose of those regulations.

They began with the premise that activities that have the potential for greater risk ought to be subject to closer scrutiny and greater regulation. And indeed, we identified a number of cases where it was felt that the risk was so great that the practice should not be undertaken at all.

That's fair enough. But what I want to say here now, and I suggested this in the previous video as well, is that managing for risk is not the same as ethics. First of all, it's a broadly consequentialist approach to managing anything, right? If you're managing for risk, what you're trying to do is avoid bad consequences.

Sometimes, though, you need to run the risk of bad consequences in order to do the ethical thing. I came up with three reasons here on the slide. Sometimes the most ethical route... okay, I've got a typo in that one. Well, let's go with the second one.

Sometimes the safest path is the least ethical; sometimes the most ethical route is the most risky. And then finally, and I think pretty significantly, risk measures some parameters, like cost, but not others. And that's the point: risk and ethics aren't the same thing. We see this pretty clearly as soon as we get lawyers involved, because lawyers will attempt to minimize risk; that is their job.

And what they want to do is keep people out of the courtroom, because the courtroom is going to consume a lot of time and a lot of resources. But very often reducing risk means that you have to play it safe. For example, suppose you're considering whether to use a resource, and you're pretty sure that fair use (or, in Canada, fair dealing) applies to the use of that resource, which means that you could use the resource, and might even ought to use it for some reason or another. But it's not a hundred percent, right?

Nothing's a hundred percent when it comes to things like fair use or fair dealing. So the lawyer advises: don't do it. But what's happening here is that, gradually or maybe not so gradually, the right to fair use or fair dealing is being eroded by the fact that institutions seeking to minimize risk simply won't assert it.

That leaves it in the hands of individuals like me to assert that right, you know. And yeah, I can live with a certain tolerance for risk, because big corporations probably won't go after me. But still, even as an individual, I have to manage my own risk.

Again, I'm not asserting my right to fair use or fair dealing. So the need is for someone with the resources, which probably means either a very wealthy individual or a corporation, to assert their right to fair dealing or fair use. But if managing for risk is how you approach these sorts of things, you're never going to do it, and gradually we end up with the ethically less desirable outcome of less fair use or fair dealing.

So there needs to be something more than just managing for risk. We need to see ethics as something a little more positive. Another aspect of managing for risk is what might be called the fog of war. Managing for risk, and systems of regulation in general, are basically composed of rules and principles. And they need to be that way, because it's the nature of law that the same law ought to apply to everyone; the same regulation ought to apply to everyone.

But not everyone's circumstances are the same, and we see that in practice. That's what the idea of the fog of war captures: the uncertainty that results when complex, real-life situations are encountered. In an actual war, once you engage, strict adherence to the rules and principles will often be waived for the purpose of, you know, surviving the battle, perhaps, or achieving your ends in the battle, or just adapting to the complex and uncertain circumstances.

And I think it could be said that rules and principles and regulations apply to complex or complicated but relatively definable environments; they fall apart, or rather, they don't fall apart, but they become less effective, when dealing with less defined, more complex environments. And that is what propels us down the staircase.

I've picked this metaphor deliberately. I could have picked the metaphor of going up the staircase, or, you know, stages of progression or something like that. But I picked going down the staircase because I think it's the direction of the easiest transition from step to step. But there are two directions,

as there are in any staircase: going down and going up. Going up the staircase, we find a more formal approach, a more institutional approach, focused much more on wrongs, much more on risks. It's a doctrine based, essentially, in fear: the fear of bad things happening, the fear of unethical things happening.

On the other hand, going down the staircase, it becomes less formal, it becomes more personal, and it is focused more on the good. As I said way back at the beginning of this course, it's about ethics based not on fear but on joy. And so now we're seeing the steps: we begin with regulation, and in this section we'll talk about practices; in the next section

we'll talk about culture; and then finally we'll get to the place where there is joy. So, let's talk about practices, then. And again, the idea is practices, not principles. The idea here is that the actual outcome, the best decision in any particular environment, cannot be predicted; following a standard will not necessarily lead to an optimal outcome in the given situation.

We see a typical sort of model here, where you do some planning, then you execute, and you monitor the results of what you've done; you assess the results, and then you go back to planning. It's an ongoing, iterative process where you're learning from the practice rather than simply following a set of regulations or principles. That makes a lot of sense to me.
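
To make that loop concrete, here's a minimal sketch in Python; the stage names and their placeholder bodies are my own illustration of the plan-execute-monitor-assess cycle shown on the slide, not code from any particular framework.

```python
# A minimal sketch of the iterative plan-execute-monitor-assess cycle.
# The stage functions are placeholders; a real practice would replace
# them with actual project activities.

def plan(lessons):
    """Draft a course of action, informed by lessons from earlier cycles."""
    return {"actions": ["collect data", "train model"] + lessons}

def execute(planned):
    """Carry out the planned actions and report what actually happened."""
    return {"completed": planned["actions"]}

def monitor(outcome):
    """Observe the results of execution."""
    return outcome["completed"]

def assess(observations):
    """Turn observations into lessons that feed the next planning round."""
    return ["review consent procedures"]  # placeholder lesson

lessons = []
for cycle in range(3):  # an ongoing process; three cycles for the demo
    planned = plan(lessons)
    outcome = execute(planned)
    observations = monitor(outcome)
    lessons = assess(observations)
    print(f"cycle {cycle}: learned {lessons}")
```

The design point is the feedback edge: what `assess` produces flows back into the next `plan`, which is what distinguishes learning from practice from simply following a fixed set of rules.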

So what are these practices, then? How do we describe them? How do we express them? Well, here's an overview of the elements of artificial intelligence and analytics practices, and we can see some of the major topics that we've talked about throughout this course coming into play. For example, we have fairness, which really is something that belongs to the social contract tradition:

the idea, as described by John Rawls, of justice as fairness. But we also draw from the deontological tradition, the idea that each person is valuable in and of themselves, and so we're focused on things like safety, autonomy, sustainability. We're also looking to achieve the benefits of artificial intelligence and analytics.

So we're looking for robustness, we're looking for generalization, and for performance generally, but with an eye on security. And then on the risk-reduction side, at least that's how I would put it, this diagram puts it as technical requirements: things like explainability, transparency and reproducibility. And then the fact that this is an ongoing practice creates the need for something like accountability, with auditability and traceability all being a part of that.

And so you have this framework, this practice-based, management-based framework, that we can look at as an overview of how to describe what an ethical practice would look like. How do we do that in practice? Well, it cashes out in a variety of ways, some of them relatively simple

and some of them, as we'll see through this set of videos, rather more complex. One mechanism that I'm sure many people will be familiar with is the decision tree. These are used when, in the course of working on an AI or analytics application, you're posed a series of questions about that application, and the answers point to a recommended course of action.

For example, it might ask you to consider whether the action you're undertaking is legal; obviously, if not, you should not do it. It might also ask whether it adds value, and then, finally, whether it is ethical. Well, the third question here is, of course, the most important to us, and this is where Bagley leaves us off,

in either 2001 or 2013; I'm guessing 2013. We could continue down the decision tree, right? Does it respect autonomy? Does it put people at risk? Is it open? Is it transparent? Etc. The difficulty with a decision tree, however, is that it's inflexible. There's no range of options, no way to say, well, it's not transparent, but here's the reason why, and we'll go ahead and do it

anyway. If it's not transparent, and you're using a decision-tree approach, then whatever the result of that decision tree is, that's what you need to follow. So essentially it's just an application of a set of rules or principles, and as with regulation and ethical codes, as we've seen earlier in the course, this is going to be insufficient.

So a decision tree is an approach based in practice, but it's simple and rigid, and it's probably not going to be sufficient for our purposes.
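
As a rough illustration of that rigidity, here's my own sketch in Python of a Bagley-style tree; the three questions are paraphrased from the slide, and this is not a published tool. Each answer forces a single branch, so there's no way to record a justified exception.

```python
# A sketch of a decision tree as a fixed cascade of yes/no questions.
# Whatever the answers, the recommendation is fully determined:
# there is no branch for "not transparent, but justified anyway".

def decision_tree(is_legal: bool, adds_value: bool, is_ethical: bool) -> str:
    if not is_legal:
        return "Do not proceed: illegal."
    if not adds_value:
        return "Do not proceed: adds no value."
    if not is_ethical:
        return "Do not proceed: unethical."
    return "Proceed."

print(decision_tree(is_legal=True, adds_value=True, is_ethical=False))
# -> "Do not proceed: unethical." (even if we had a good reason to continue)
```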

Another very common approach is the use of a checklist, and we'll actually look at a few of these. Checklists are popular, and they've proven to be effective in various cases. The most common applications of a checklist that you may be familiar with are pilots in an aircraft going through a checklist, you know, the landing checklist or the "engines have suddenly stopped" checklist, and they'll pull out their book.

It used to be a big manual; now it's an iPad. And they'll run through it: one person, the co-pilot, say, will read off the items in the checklist, and the pilot will say yes, done that; yes, done that; yes, done that. Same with surgery. You have two people in a surgical environment:

one person is reading off the checklist, the other person is saying, yes, we did that; yes, we did that. Like I said, this has a beneficial effect on practice. It's proven: it prevents stupid mistakes, and you know, even in ethics, stupid mistakes happen, and they happen to the best of us.

They happen to the most professional of us, and a checklist prevents some of them from happening. Checklists ensure that the people involved do not omit essential steps or procedures. I recall, for example, an airplane crash in Toronto, I think it was, caused when the airplane had almost touched down but then decided to do a go-around.

That's a fairly common procedure; I've been involved in a half dozen of them just as a passenger, and it could be for any number of reasons: you're not squared with the runway, there's a plow on the runway, there's another plane that's too close, whatever. Only this time, as they were taking off again, the pilot forgot to retract the spoilers. Spoilers are flaps that go straight up and basically reduce the aerodynamics of the airplane to zero,

turning it from something that can fly into a large hunk of metal that can't. So, needless to say, if you don't disengage the spoilers, you're not going to be able to fly and do a turnaround, and of course, the plane crashed. That's the sort of thing that a checklist prevents.

And that's obviously an ethically good result, because crashing your airplane is an ethically bad result. So in ethics, we might use a checklist to consider whether all the ethical matters have been considered. What are the ethical matters? Well, all of those things that we covered from all of the different ethical theories, except maybe virtue theory, because virtue theory doesn't speak to practice, but consequentialism does, social contract does, and deontology does.

And so the idea of a checklist is that it's not going to prescribe the solution, but it is going to ask you if you've thought about it, and asking you if you've thought about it is enough to prevent you from ethically crashing the airplane. We need to keep in mind, though, that a checklist is not a decision-making tool: it's not going to actually get you to do ethical practices, but what it is going to do is make sure you didn't leave anything out of your thinking. So we might say that it's necessary, but not sufficient.
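
As a sketch of that "necessary but not sufficient" role, here's a toy checklist in Python; the three items are my own paraphrases of the theories just mentioned. The point is that the code flags omissions without deciding anything for you.

```python
# A toy ethics checklist: it does not prescribe a solution, it only
# verifies that no consideration was left out of your thinking.

ETHICS_CHECKLIST = [
    "Considered the consequences for everyone affected?",  # consequentialism
    "Consistent with the duties owed to each person?",     # deontology
    "Acceptable under the relevant social contract?",      # social contract
]

def run_checklist(responses: dict) -> list:
    """Return the items that were not explicitly confirmed."""
    return [item for item in ETHICS_CHECKLIST if not responses.get(item)]

missed = run_checklist({ETHICS_CHECKLIST[0]: True, ETHICS_CHECKLIST[1]: True})
print(missed or "All items considered: necessary, but not sufficient.")
# -> flags the social contract item as not yet considered
```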

A framework is a more complex evolution of a checklist, because a checklist, aside from the items on it, doesn't actually provide you guidance for action. You know, it says, did you consider the bad consequences? But you need to have some idea of what counts as a bad consequence.

So a framework uses a process-based approach, and it'll do basically four things: identify the things that ought to be done, name the issues to be considered, identify the people involved in considering them, and note the resources that need to be consulted. But the actual consideration, the actual outcome, is not determined ahead of time by rule or by fiat; it's determined by the process.

So you go through the process described by the framework, and the idea is that this is more likely to generate an ethical outcome. As Jessica Baron says, it's the ability to organize thoughts into a formal framework that allows ethics to move forward, instead of whirling around as a series of open-ended questions.

And I think she's right.
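
To show how a framework differs structurally from a checklist, here's a minimal sketch in Python; the four field names are just my labels for the four elements listed above, and the deliberation process is a stand-in.

```python
# A sketch of a framework: it names what a deliberation must include,
# but the outcome comes from the process, not from a rule or by fiat.

from dataclasses import dataclass, field

@dataclass
class EthicsFramework:
    things_to_do: list = field(default_factory=list)  # what ought to be done
    issues: list = field(default_factory=list)        # issues to be considered
    people: list = field(default_factory=list)        # who does the considering
    resources: list = field(default_factory=list)     # resources to consult

    def deliberate(self, process):
        """The outcome is whatever the supplied process produces."""
        return process(self)

framework = EthicsFramework(
    things_to_do=["review data collection"],
    issues=["student privacy"],
    people=["instructor", "students", "data steward"],
    resources=["institutional privacy policy"],
)
print(framework.deliberate(
    lambda f: f"Convene {', '.join(f.people)} to weigh {', '.join(f.issues)}"))
```

Note that, unlike the decision tree, nothing in the structure fixes the answer; the same framework with a different deliberation process can yield a different outcome.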

Here's an approach, for example, that focuses on the many disciplines that might be involved in an ethics-in-AI environment. If you look here, we've got academia and government at one end, we've got the users involved, and we've got industry, who are typically the ones making or designing the software.

We've got academic research, we've got governance and management, and then four areas that they need to attend to: data management, algorithm design, development, and deployment. Those four steps we should recognize from the previous module as elements in the AI and analytics workflow. And we can make that workflow a lot more complex, because it is a lot more complex, and it includes problem framing at one end

and product delivery at the other. And what we see missing from this framework, and we should see it almost instantly based on our previous work, is that there's no step for evaluation, assessment, testing, and so on.

We can describe elements of good practice in a kind of grid. Here's one that was proposed, and sorry about the small text; let me pull it up from the article and we'll look at it in a bit more detail. This is on page 22, so we'll zip down to page 22 here.

There it is. Now, unfortunately, it's kind of sideways, but I don't know of any good way of improving that. Across the top here, and again this is sort of in reverse order now, right, are the stages in the workflow, from data preparation to algorithm design, development, deployment, and management.

Now, we can look at each of the stages in more detail, and we know there's going to be more in data preparation than that. But then for the rest of it, right, training, maintenance, we've got metamorphic testing and neuron coverage testing, so we have our testing here; formal verification; attack monitoring for security; human intervention; trusted execution environments; auditing; etc.

Again, as we saw in the previous module, this could be expanded quite a bit. The trustworthiness metrics are across the other side, and here they are along the top, and they're sideways, so I'm sorry about that. Let's make this even bigger.

There we go. So we have robustness, generalization, explainability, transparency, reproducibility, fairness, privacy protection, value alignment, accountability. So basically it's a selection of the values associated with ethics in artificial intelligence and analytics. It's certainly not a complete list of values; we know that because we've looked at that list.

And so the cells here point to where these things intersect. So, for example, in the step of adversarial training, which is part of algorithm design, we have generalization; adversarial training is a classic mechanism there. Then for explainability we'll have explainable model design; for fairness, pre-processing methods, which is a practice, an ethical practice, in algorithmic fairness; and for privacy protection, secure MPC.

I'm not sure what MPC stands for, but you could look that up on Google right now. Part of the weakness here is that a lot of these boxes are blank, and I don't think it's because nothing applies; I think it's because this model is a bit incomplete. Look at data preparation,

for example. For robustness, all we have is anomaly detection, and there's a lot more we could do to ensure our data is robust than that. For generalization we have nothing, but as we'll see a bit later, there's a lot we could do there. Then there's explanation collection, data provenance for transparency, bias mitigation for fairness, and data provenance

again for accountability. Again, I think there's more that could be done there, but still, it gives you the idea, right? It gives you a pretty good sense of what the elements of good practice are going to be. Now, this chart still isn't going to tell us what is ethical in each of these areas,

but it does break down the different components of what it means to be ethical, and you can begin to see why a rule-based framework or an ethical code is hardly going to address this. There's far more in here, and far more variability in each one of these boxes, than could be covered in an ethical code.

We're going to need some kind of practices document in order to manage that.
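
One way to see what such a practices document has to manage is to treat the grid as a sparse mapping from (workflow stage, trustworthiness metric) to named practices. This is my own sketch in Python, filled in only with the cells read off the chart above; the blanks are the point.

```python
# A sketch of the practice grid as a sparse mapping. Only the cells
# mentioned above are filled in; the many blank cells suggest the
# model is incomplete rather than that nothing applies there.

PRACTICE_GRID = {
    ("data preparation", "robustness"): ["anomaly detection"],
    ("data preparation", "transparency"): ["data provenance"],
    ("data preparation", "fairness"): ["bias mitigation"],
    ("data preparation", "accountability"): ["data provenance"],
    ("algorithm design", "generalization"): ["adversarial training"],
    ("algorithm design", "explainability"): ["explainable model design"],
    ("algorithm design", "fairness"): ["pre-processing methods"],
    ("algorithm design", "privacy protection"): ["secure MPC"],
}

def practices_for(stage: str, metric: str):
    """Return the named practices for a cell, or note that it is blank."""
    return PRACTICE_GRID.get((stage, metric), "blank (arguably incomplete)")

print(practices_for("algorithm design", "fairness"))         # pre-processing methods
print(practices_for("data preparation", "generalization"))   # a blank cell
```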

So a process is very often described in steps, and these steps generally correspond to the workflow elements; here is the workflow, right, that we talked about in the previous section. So, for example, here we have something from Fournier and Sylvester on a classroom conversation model. All right, I know it's not an AI or analytics kind of thing, but we still have the same sort of process happening.

And this is a process from the Canadian Code of Ethics for Psychologists for a discussion around ethics. So: identify the individuals potentially affected by the decision; identify ethically relevant issues and practices; consider one's own biases; develop alternative courses of action. That's a key step, because it shows we're going to be doing almost an A/B-testing kind of approach here. Then: analyze the likely short-term and ongoing risks;

choose a course of action, with a commitment to assume responsibility for its consequences; evaluate the results; assume responsibility; and then take appropriate action to prevent the dilemma from coming up again. So what you should notice is that this process is very different from that process, right,

that process being the one in green here. So the steps involved in developing AI and analytics don't necessarily mesh with the steps involved in coming up with good ethical responses to ethical dilemmas.

Oh, I've got two here; I accidentally switched to the Canadian Code of Ethics for Psychologists. The other one I wanted to look at, which makes the same point as well, is the Fournier and Sylvester classroom conversation model: establish an open, respectful environment; help students move beyond opinions and emotions;

help them learn how to identify a weak argument; establish ground rules and anticipate the issues; let everyone have a voice; decide on the role of the teacher (maybe you should have done that first); and close the discussion. Again, this is a very different process from either of the two processes that we've just looked at: the code of ethics for psychologists or the workflow model in the grid.

So we're going to look at a number of different frameworks in the upcoming videos, because what we need is something that will take us from this rough concept of a framework, which I've outlined here, to the sorts of things that we can actually use in practice in an ethics and analytics environment.

Now, I'm not going to be able to cover all possible frameworks, or even all possible types of frameworks; this video, or this series of videos, is already too long. But I'm going to look at four approaches: management frameworks, data governance frameworks, IT governance frameworks, and then human rights frameworks.

And what we'll observe for these frameworks, beyond the simple sorts of practices, is that in addition to listing the sorts of things that we need to consider, the frameworks are going to invoke and bring to bear things like principles, values and purpose. So that's the overview for frameworks

and for practices and frameworks, and that'll conclude this video. Then, with the next video, we'll begin looking at our frameworks in earnest. I'm Stephen Downes. This has been Ethics, Analytics and the Duty of Care.
