Ethical Practices: Part Two



Hello and welcome to Ethical Practices, part two, in the course Ethics, Analytics and the Duty of Care. This is part of module eight, Ethical Practices in Learning Analytics, and as the title suggests, this is part two of a five-part series in this module on ethical practices. In this video, we'll look at some basic types of ethical practices for the management of ethics in AI and analytics.

As we left off from the previous video, the difference between a framework, and in this case a management framework for ethics, and some of the simpler practices is that the simple practices, like decision trees or checklists, are just lists of things to do; they don't actually tell you what you should do. Well, actually, there are two sides of that, right?

The decision tree is a bit too prescriptive; on the other hand, the list isn't prescriptive enough. You'd like some sort of happy medium. But how do you choose where to go, if you're in that happy medium? Well, the deeper frameworks have the answer to that: they'll give you the list of things to consider, and then they'll provide principles, values and purpose as a means of governing your practices, or your decisions, in that area.
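
Just to make that structural difference concrete, here is a minimal sketch of my own, in Python (it's illustrative only, not taken from any of the frameworks discussed here), of the three shapes of guidance: a checklist is a flat list of questions, a decision tree branches and so prescribes a path, and a deeper framework pairs the list with the values and purpose that are supposed to govern the answers.

    # A checklist: a flat list of things to consider. It steps you through
    # a process, but says nothing about what the right answer is.
    checklist = [
        "Have the relevant facts been gathered?",
        "Have all stakeholders been consulted?",
        "Have creative options been identified?",
    ]

    # A decision tree: branching, and so more prescriptive; each answer
    # forces the next step, whether or not it fits your situation.
    decision_tree = {
        "question": "Could someone be harmed?",
        "yes": {"question": "Is the harm avoidable?",
                "yes": "redesign the product",
                "no": "reconsider the project"},
        "no": "proceed",
    }

    # A deeper framework: the same list of considerations, but paired with
    # the principles, values and purpose that govern the answers.
    framework = {
        "considerations": checklist,
        "values": ["autonomy", "non-maleficence", "justice"],
        "purpose": "determine not just how to deliberate, but what counts as ethical",
    }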

So, like I said, we'll look at some simple ones in this video, and then in the next couple of videos we're going to get into much more complex approaches to frameworks for ethics in AI and analytics. So, trying to find my cursor here. There we go. Let's begin with management frameworks for ethics.

This one is kind of typical. It's from the Markkula Center for Applied Ethics at Santa Clara University. It goes into more than what I've just put on the slide here, but you get a sense of what we're up to with the five steps: recognize an ethical issue, get the facts, evaluate alternative actions, make a decision and test it, and act and reflect on the outcome.

Now, you can see that what's being described here is kind of a process, but it's not very deep, is it? And it's not getting at what we would base our ethical decisions on. 'Evaluate alternative actions' is actually a very common approach to making ethical decisions, but of course your ethics in that situation is going to be limited by the range of alternative actions you can think of.

And that certainly is an issue, particularly when you're working in a very bounded context, like, for example, a university, where the alternative actions presented to you might not include an ethical action, and that would be a particular issue in this case. Here's another one.

It's very similar. It's from a company and program called Digital Catapult, developed to help AI companies design and deploy ethical AI products, and it consists of seven of what they call concepts: be clear about the benefits of your product or service; know and manage your risks; use data responsibly; be worthy of trust; promote diversity, equity, and inclusion; be open and understandable in communications; and consider your business model.

Again, based on the hours and hours of discussion of AI and ethical theories and practices in analytics, we can see that even if they spell out what they mean by these to a greater degree (and obviously they do), this is still going to be far too shallow a framework for us to rely upon when approaching the question of ethics in analytics and AI. We're looking for something that actually is going to bring out what we mean by these areas of, for example, values and purposes.

Another framework is the SHEILA framework, which uses the ROMA, or Rapid Outcome Mapping Approach, and we'll look at that in a bit more detail in just a second. But the SHEILA approach is basically a three-step plan: identify the problem, develop a strategy, develop a monitoring and learning plan. So, okay, this is better. It's certainly still pretty shallow, but like I say, we'll look at it in a bit more detail. It's better in the sense that it actually identifies three major areas of things to look at, rather than just a step-by-step, recipe-like approach.

So let's look at ROMA in a bit more detail. Here it is, and let me make it big for you so you can see it easily. Again, we have a workflow sort of thing happening here, right? Map political context, identify key stakeholders, identify desired behavior changes, develop a strategy, analyze your internal capacity to effect change, establish monitoring and learning frameworks, and back around to map political context.

Now, in many respects it's similar to, although not exactly the same as, the sort of AI and analytics workflows that we looked at in the previous section. It's also bringing in aspects of the ethical theory that we talked about. For example, when you're identifying the key stakeholders, that's bringing in elements of the social contract approach to ethics.

When you're doing some of the evaluations and monitoring here, we're looking a little bit at consequences, or consequentialist approaches to ethics. And of course, it's based on defining and redefining your policy objectives. There are some specific mechanisms that they would use to map the political context, which I think really is the starting point for this (even though it's in a circle, it does have a starting point): things like the RAPID framework, or the 'drivers of change' (I prefer 'attractors of change', but that's a separate presentation), power analysis, SWOT (strengths, weaknesses, opportunities, threats), influence mapping, and force field analysis. These are all various tools that you can use in all of these stages of your model.
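
As a rough illustration of the shape of this workflow (a sketch of my own in Python, not an official ROMA implementation), the cycle is just an iterative loop over the stages, refining the policy objectives on each pass. Notice what's missing: nothing in the loop itself says what an ethical outcome would look like.

    # The six ROMA stages, run as an iterative cycle.
    stages = [
        "map political context",   # tools: RAPID, power analysis, SWOT, ...
        "identify key stakeholders",
        "identify desired behavior changes",
        "develop a strategy",
        "analyze internal capacity to effect change",
        "establish monitoring and learning frameworks",
    ]

    def revise(objectives, stage):
        # Placeholder: in practice each stage applies its own tools
        # (influence mapping, force field analysis, etc.) to the objectives.
        return objectives + ["refined at: " + stage]

    def roma_cycle(objectives, iterations=2):
        # Iterative: after the last stage, loop back to mapping the
        # political context and define the objectives again.
        for _ in range(iterations):
            for stage in stages:
                objectives = revise(objectives, stage)
        return objectives

    print(roma_cycle(["initial policy objective"]))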

Nonetheless, I was going to say it's one-dimensional, but it's not one-dimensional; it's two-dimensional, right? It still doesn't really get into the sort of depth that we want. We see the influence of values and principles, but we don't know where they come into play; it's just a vague area in the center.

Otherwise, it's just a step-by-step kind of process that loops back. So it's iterative, but it's not driving us deeper into what an ethics of analytics and AI looks like. Really, with a lot of these guides, there's what we might call failure at the first step.

What I mean by that is that we often look at the ethical issues for something in AI and analytics in hindsight, and when we're preparing a product or a service, the ethical issue isn't actually detected at the start. You know, the Santa Clara guide looks at three sorts of questions that might arise: could someone be harmed? Could the action be considered good or bad? Is the question about more than just what is legal? From my perspective, the answers 'yes, yes, and yes' could be applied to just about any situation, and so if those are the kinds of questions you're asking, you're not asking questions that are sufficiently pointed.

Yes, of course, any action could be right or wrong. The real question here is: in what way could it be considered wrong? And now, if you're doing it in this process-oriented sort of way, you're going to have to go through all the ways in which something could go wrong and be considered bad. Similarly with the legal thing, right?

Strictly speaking, you would need to go through all possible laws. Well, you could probably narrow that down pretty quickly, but nonetheless you're going to need a bit more of a sharp focus to identify what kind of law this could violate. Would it violate copyright law? Would it violate, well, whatever; pick your law, right?

Similarly with harm: you need to actually look at what the outcome is and ask yourself about possible harms, which means thinking about possible harms. And if you, say, limit your thinking ahead of time to physical harm, which is a very common approach and one I see quite a bit, then you're not going to see the possibility of psychological harm or social harm that might be caused through an AI process.

And so you miss the ethical problem right at the first step. That's why it's good to have an iterative approach: you might circle around and find it when you get back to that first step again. But, you know, without this ethical focus to begin with, it's very easy to miss the ethical issue.

More to the point, I think, and we'll wrap up this video here: I don't see any of these simple frameworks as being more complicated in kind, really, than a checklist. They're basically asking us to consider the following: what are the relevant facts of the case?

What individuals and groups have an important stake in the outcome? Have they all been consulted? Have I identified creative options? Really, that's kind of what it boils down to. But we're basically back in the position of someone who's an airline pilot, or someone who's a surgeon, who is very attentive to the what and how of what they're doing,

but perhaps less attentive to the why. And the why isn't covered in the checklists. A checklist will step you through an ethical process, but it won't distinguish between what is ethical and what isn't. All of the questions that we've looked at through these three framework approaches involve a series of judgments that could be made incorrectly: for example, judgments about relevance, importance, or creativity; about how creative we've been, how complete we've been, how comprehensive we've been. These frameworks don't offer the answers to those questions. And it's not so much that they don't offer the answer (that might be too prescriptive); rather, they don't point to the basis on which we'll find the answer.

There is, at best, in the ROMA framework, a vague allusion to (what was it?) policy objectives. So frameworks make sense. Theoretically, or rather conceptually, it's better to do more than just have a list of things to think about. But at a superficial level they're not going to be sufficient, because they haven't gotten to the point of what exactly it is that you should be thinking about, in the sense of what exactly it is that guides your decisions or your purpose in each of these things.

I mean, the point of an airplane checklist is to make sure you don't crash the plane, and that's a really good motivation for using the airplane checklist. We don't have a similar sort of thing in any of these three frameworks. We don't have the 'what does crashing the plane look like, and why is it wrong?' approach.

Or maybe 'approach' is the wrong word. 'Attitude'? That's also the wrong word. But you get what I mean. So in the next three frameworks, we're going to see the checklist part of it, but we're going to go more deeply into what the values, the purposes, and the motivations for following these frameworks are. So that's it for this short video, and we'll move straight on to the next. I'm Stephen Downes. See you at the next one.
