Unedited
Welcome back to Ethics, Analytics and the Duty of Care. I'm Stephen Downes. We're in module eight, ethical practices in learning analytics, and this is part four of the videos in this module on ethical practices. These ethical practices constitute one part of the four-part staircase that embodies all of module eight.
The four parts of the staircase are regulations, practices, culture, and then what I call an ethics of harmony. And it's interesting, as we descend that staircase, how at the top of the staircase we're in a more institutionalized, more formal, more legalistic kind of environment, one which is based on reducing risks
and is almost based on an ethos of fear. As we descend the staircase, we get less formal, less institutional, more personal, more individualized, and it becomes more about the good that can be done rather than avoiding risk, and it becomes an environment of joy rather than fear.
And I chose the staircase because I think going down the staircase really is the more natural direction, the direction that we want to flow in. But as I say, we're constantly pulled back up the staircase, back into the grip of the formalist institutional environment. Part four is going to look at IT governance frameworks. And the slide's set a little too big there, isn't it? Let's just close that up a bit there, make that more the right size, and close that gap. Just attending to these little details of presentation. It's not the world's most professionally done video, if I do say so myself, but considering I've done, I don't know, 60 or 70 hours of video for this course, all by myself, making professional videos on top of everything might be just a bit too much. Anyhow, on to IT governance frameworks. Again, this is an area that most of the discussions of ethics and learning analytics that I've read say nothing about. It's like it doesn't exist, and that's too bad, because there's a lot to be learned from these IT governance frameworks. I'm only going to slightly dip my toe in the water with this video. Again, it's one of these things that could be an entire course. Even if the role of ethics in these frameworks is just a small part of the overall framework, it's one of those small parts that could become implicated in every aspect of the framework. And it's the sort of thing where we want to say: yeah, I know I'm working in an IT governance framework, and I'm on security or quality or whatever, but I should be thinking about ethics too. And where does that fit in? It's not just fitting into a little row somewhere down in a corner beside, I don't know, usability or accessibility.
It should be something that permeates the entire framework. But I get ahead of myself, perhaps a bit too quickly, here. So here I'm just quoting from the core document, which says that many respected IT organizations and standards-setting bodies have established frameworks to identify the "risks and mitigation strategies" with the evolving cloud paradigm. And that's where you'll find most of these frameworks: not talking about ethics or anything like that, but talking about new paradigms in computing. And we'll get, say, an IT governance framework for cloud environments, which is what the core document talks about, or for risk management generally, or hosted IT facilities, or maybe new technologies, like supporting blockchain, or supporting VR, or supporting conference systems, or student information systems, etc.
The framework landscape is covered with a number of standards and specifications, and like I say, this is something that has been well discussed over the years. The field of software generally has evolved from a creative, perhaps mathematically based, but certainly artistic kind of activity into a type of engineering. It's known, literally, as software engineering, and there are very good reasons for that. And in the area of software engineering we have learning software, which leads us to the badly named discipline of learning engineering. The idea is to take approaches like these governance frameworks and apply them to the development of technology generally, and learning software in particular. Like I say, there's a lot of good that can be said about that, but there are certainly some weaknesses to this approach, and we will discuss them in this video. But again, fair warning: we're only touching the surface of this, and not getting into the sort of depth that it really deserves.
So let's begin with one of these frameworks. I'm working from this overall diagram here, and I'll be looking in particular at COBIT and ITIL. You can see that they play different roles; they have different scopes of coverage. You might say that COBIT is focused on the "what" and ITIL is focused on the "how", though the "what" isn't necessarily going to be ethical standards or anything like that; it's actually quite different. So the "what" can be summarized as Control Objectives for Information and Related Technology, COBIT, an IT governance framework that, and I'm quoting here, helps "organizations address the areas of regulatory compliance, risk management, and aligning IT strategy with organizational goals". So, risk management we've been talking about so far, same with regulations. Notice that other thing, aligning IT strategy with organizational goals: that's where the ethics is going to come in. It can be described as a four-step process: first, understanding the enterprise; second, determining the scope of the governance system; third, refining that scope; and finally, concluding the scope. And what this is, is not a blueprint for a governance system; it's a framework for creating a governance system. So it's sort of like taking the whole thing that we are doing and backing it up a step. And that's why we're looking at understanding enterprise strategy, goals, the risk profile, etc., then considering the strategy, considering IT-related issues, compliance requirements, implementation methods, etc., and then finally concluding the governance system.
So that's what we need, at least according to this approach, right? What we need are those things in order to create a governance framework for IT. So how is this governance framework going to work? That's what takes us into the realm of ITIL, which stands for IT Infrastructure Library. And again, I'm quoting: it provides a set of best practices that have "become the most widely accepted approach to IT service management in the world". I've no reason to believe that's not true, and certainly every experience that I've had with IT infrastructure and IT service strategies has involved a model pretty similar to this one. It's interesting to note, and I don't know if it really comes through in the diagram: remember before, I was talking about one-dimensional and two-dimensional approaches? This is a multi-dimensional approach. These aren't just hexagons here; these are actually boxes. Each one of these boxes is going to contain various axes of factors and considerations, and it all revolves around the core central value that you're trying to produce. I would argue that that value should include ethical value, but in the typical enterprise environment it's going to include other things as well. That defines the products and services. And then around the products and services we have organizations and people, partners and suppliers, information and technology, and value streams and processes. The data management framework that I talked about is a subset of all of this, right? It's going to contain elements of all four of these things plus the center, but it's going to be focused specifically on data. So it would be just one row of each of these boxes. Again quoting: ITIL "advocates that IT services must be aligned to the needs of the business and underpin the core business processes. It provides guidance to organizations on how to use IT as a tool to facilitate business transformation and growth."
And it's interesting to see it phrased that way, because when I first got involved in the development of educational technology, and learning management technology in particular, one of the truisms was that when you introduce one of these systems into a learning environment, like a college or university or whatever, you're not simply layering the technology on top of whatever it was you were doing before; you actually change what you were doing before in response to bringing the technology in. For example, at Assiniboine Community College we implemented a student information system. And one of the things that was discovered, I believe the system was Colleague, but I know we evaluated both Colleague and Banner at the time, one of the realizations was that the information system required a lot more data, and a lot more precise data, than we were collecting at the time. So it was impossible to do a simple transfer from the existing system into the new system, because the categories that the new system required simply didn't exist in the previous data. We had to redefine what all of our business processes were before we could even think about using the new system. We see a similar effect with learning management systems, where a course as taught by a professor in an in-person class might not be nearly as structured, and definitely not nearly as pre-planned, as you need for a course in an online learning environment. In fact, one of the things that I've done with my courses is that I've continued to do them much the way I did them in person, which was, you know, I had all my readings and my background and my notes and all of that, but I would basically make up the course as I went along. And there are really good reasons to do that: it's a much more engaging, responsive approach to offering a course, and it worked really well for me. It's a lot harder to do online, though, and in fact there's a lot of pressure to adapt the way you create and organize your course before you offer it online.
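The migration problem just described can be sketched in code. This is a toy illustration, with hypothetical field names and category codes (none of them from the actual systems mentioned), of why legacy records couldn't simply be transferred: the new system demands fields and categories the old data never captured.

```python
# Required schema of a (hypothetical) new student information system:
# field -> set of allowed category codes, or None meaning any
# non-empty value is acceptable.
NEW_SIS_SCHEMA = {
    "student_id": None,
    "program_code": {"ARTS", "SCI", "BUS", "TECH"},
    "enrolment_status": {"FULL_TIME", "PART_TIME", "WITHDRAWN"},
    "residency": {"DOMESTIC", "INTERNATIONAL"},
}

def migration_gaps(legacy_record: dict) -> list:
    """Return the reasons this legacy record cannot be migrated as-is."""
    problems = []
    for field, allowed in NEW_SIS_SCHEMA.items():
        if field not in legacy_record or legacy_record[field] in ("", None):
            problems.append(f"missing required field: {field}")
        elif allowed is not None and legacy_record[field] not in allowed:
            problems.append(f"unmapped value for {field}: {legacy_record[field]!r}")
    return problems

# A legacy record that predates the new categories: no residency field
# at all, and a free-text status the new system cannot ingest.
legacy = {"student_id": "1993-0042", "program_code": "ARTS",
          "enrolment_status": "evenings only"}
print(migration_gaps(legacy))
```

Each reported gap is a place where the business process itself, not just the data, had to be redefined before migration could proceed.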
So that sort of thing happens here as well with something like an ITIL IT management framework, where you need to do a lot more thinking about the various factors involved and the various components of your overall IT system in order to put it into place. Surrounding this model, I think, are political factors, environmental factors, legal factors, technological factors, social factors and economic factors, and perhaps around all of those, because all of that is society, right? All of that is culture. So all of that is also going to include ethics, and that's going to need to come in from outside. That's an important realization, I think, that we need to have: the system is going to demand that we define ethics perhaps more precisely than we're used to. That doesn't mean these ethics didn't exist before, but the way they existed in no way resembled what they need to look like in order to be used by an IT management framework like this.
We see a similar approach to IT governance in higher education. Now, this is a completely different way of representing it visually, but you still have your multi-dimensional element to it. In this case, the dimensions are structures, processes and relational mechanisms, which is really kind of interesting: data, function and connection, perhaps, is a way you could put it. So the structures are going to be your organizational structure, your roles and responsibilities, project management and all the rest. The processes are the things that you're doing with your IT: student information systems, frameworks and standards, dashboards, portfolio management, etc. And the relational mechanisms are where these things connect with the rest of the world, so you have knowledge management, knowledge sharing, the university's training and education (interesting that that finally comes up), corporate communication, and the rest. So all of these are aspects of IT governance for higher education. Somewhere in there is ethics, but if ethics is just one of these little boxes, it's basically going to be swamped by everything else. So you need to think of what surrounds all this as the learning or corporate culture of the higher educational institution, which includes, among other elements, ethics.
In any case, the MDPI document here argues that the best configuration is one where both worlds "have a federal structure where the infrastructure, strategy, roles and procedures are centralized to avoid wasting resources and the execution and operations are decentralized". Now, that takes us back to the governance models that we talked about at the beginning of the previous video. And so this approach is arguing for something like, well, a federal approach; but by "federal approach" what we have is a little bit of monarchy and a little bit of fiefdom. And again, notice the core objective here: avoid wasting resources. That is the argument, and always has been the argument, for centralization. Empirically, it's not clear to me that centralization avoids wasting resources. There are many cases, for example, where governments have taken a number of individual communities and amalgamated them into one big community, and it's not clear that this ends up saving anyone any money at all, and it certainly has an impact on the quality and style of government.
Related to this is an effort specifically on ethics by the IEEE. This has been going on for five years or so now, maybe longer: the 7000 series of standards, or specifications, though they're calling them standards, based basically on the ethics of IT management. There were 15 of these. Two of them have been discontinued: the one on personal AI agents and another on facial recognition. I actually sat on the personal AI agents group for a short time, and then priorities shifted at NRC and I wasn't sitting on it any more, but I followed their proceedings throughout, and it was disbanded just a few months ago, actually. Others of them are still in progress, and a few of them have delivered standards. In fact, four of them have: ethical system design, employer data, an ontology for robotics, and the impact on human well-being. The mechanisms for considering impacts on human well-being and the rest are all still in progress. The problem with the IEEE standards is that once they finalize a standard, for most of us it just disappears, and you have to buy it. So it's very much geared toward companies and enterprises that have the money to buy IEEE standards. I do not, and therefore I haven't actually seen the contents of those four standards. And, you know, IEEE is beginning to change this: they've now released one standard openly that I know of, and that was their first, and that happened just a few months ago, and they may release more openly in the future.
But that's been, to my mind, the big problem with IEEE standards: how can it be a standard if you have to pay for it? In any case, the IEEE is just one organization. ISO is another one well worth keeping in mind, addressing standards for all of these. And what's important here is that the process they're undertaking for ethical standards in AI and analytics is similar to the process they've undertaken to create other standards for IT governance generally. And it's not a bad process: it is comprehensive, and it has demonstrably led to systems that are safer and more secure. Certainly, you wouldn't want to fly in an aircraft, or eat food, or take medicine, that did not follow these standardization practices.
So there's definitely a positive role being played by these standardization processes. On the other hand, and this is what I encountered when I was sitting on 7006, it's engineers doing ethics. And my experience was that the engineers doing ethics didn't have the background in the various ethical theories, and certainly did not have the background in things like the ethics of care that we've discussed. They approached ethics as though it were an engineering problem, such that if you engineered the process properly, you'd get good ethics out the other end. It's not clear to me that that's the case. In all of these systems, not just ethical systems but also those for safety, security, etc., what comes out the other end depends very much on what goes in. For example, with safety and security, the presumption going in is that people want safe and secure software, and that everybody in the enterprise is involved in the process of creating safe and secure software. And although there are mechanisms and processes you can use to augment or even increase the awareness of safety and security in software design, or in enterprises generally, it's not clear that the same exists for ethics. I think we've shown pretty much definitively in this course that there isn't the same agreement that things should be ethical the way there is agreement that things should be safe. What counts as ethical varies from place to place to place. Even the idea of talking of autonomous systems or autonomous agents, as we were in 7006, and considering that these agents or systems might, because they are autonomous, be responsible for their own actions, and therefore leave their owners or controllers not responsible for the actions the systems undertook: that was very troubling to me. But sure, it's a nice solution to the engineering problem, and that's how it was presented.
So I'm kind of glad 7006 didn't produce anything, although that wasn't the result of my intervention by any means. In all cases, with all of these frameworks, there's always going to be a bit of a measurement problem and a bit of a chicken-and-egg problem. And this diagram really illustrates that quite well; in fact, well enough that I'm going to pull it up on the screen here so we can all have a look at it.
So let's open it up with Firefox. And what page was that on? That was on page 14, so let's go have a look at page 14 here. Oh yeah, right, let's not get diverted. That's just the review process that they followed, and here are all the different studies that they looked at: a really typical sort of research process. Here's the diagram that they came up with, the ITSM benefits conceptual model. So let's make that bigger, so that everybody can see it. Full right screen, right? Bigger still, so we can see this whole thing here. Let's do it this way. There we go. And we'll put it here. There we go.
A brilliant video experience for you there. So these are the factors that tend to produce benefits, whatever they are. And we can look at some of them: better process controls and documentation, mature processes, tangible improvements in process metrics, higher efficiency, staff reduction, decrease in IT expenses, increase in customer satisfaction. These just loop around to the bottom. So, for instance, staff reduction results in an increase in organizational revenue; similarly, a decrease in IT expenses results in an increase in organizational revenue. Now, the recommendation here is that the process should focus on the beginning points rather than the end points, which is actually pretty insightful. So many business process initiatives start with an objective like, say, increasing organizational revenue, which leads them to say: okay, well, the way to do that is to decrease IT expenses and reduce staff. And they just do that, but they haven't thought of the impact on all of the other processes. For example, and there should be an arrow here, if you reduce staff, you're probably going to have an impact on IT service quality improvements. If you decrease IT expenses, you may negatively impact your information system business environment, etc. But the main point here is that there's no single starting point. If you start here, for example, with better process control and documentation: well, in order to get that you need tangible improvements in process metrics, but in order to get that you need mature processes, but in order to get that you need better process control and documentation, etc. And we could draw more and more loops through this when describing our practices. So, to come back here: organizations should first focus on the benefits that promote, in other words, that have arrows leading out, more than on the benefits being promoted, in other words, those with arrows leading in. And that seems pretty reasonable to me.
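That rule about arrows can be sketched as a small graph computation. The edge list below is an illustrative subset I've reconstructed from the discussion, not the paper's full diagram; the point is the recommendation itself: start with benefits whose out-degree (arrows leading out) exceeds their in-degree (arrows leading in).

```python
from collections import defaultdict

# An edge A -> B means "benefit A promotes benefit B".
# Illustrative subset only; the real model has more nodes and arrows.
edges = [
    ("better process control and documentation", "tangible improvements in process metrics"),
    ("tangible improvements in process metrics", "mature processes"),
    ("mature processes", "better process control and documentation"),  # the loop
    ("mature processes", "higher efficiency"),
    ("higher efficiency", "staff reduction"),
    ("higher efficiency", "decrease in IT expenses"),
    ("higher efficiency", "increase in customer satisfaction"),
    ("staff reduction", "increase in organizational revenue"),
    ("decrease in IT expenses", "increase in organizational revenue"),
]

out_deg, in_deg = defaultdict(int), defaultdict(int)
for a, b in edges:
    out_deg[a] += 1
    in_deg[b] += 1

# The paper's advice: focus first on benefits that promote more than
# they are promoted, i.e. out-degree greater than in-degree.
nodes = set(out_deg) | set(in_deg)
starting_points = sorted(n for n in nodes if out_deg[n] > in_deg[n])
print(starting_points)  # ['higher efficiency', 'mature processes']
```

Notice that end points like "increase in organizational revenue" have arrows leading in but none leading out, which is exactly why they make poor starting objectives.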
Oops. There we go. So that's the IT framework side of things. I have a slide later on that talks about the weaknesses of frameworks overall, but I want to preface, or maybe anticipate, that here with a few remarks. The first remark, and I've suggested this already, is that none of these models is in any way democratic, or anything remotely resembling democracy. We have this monarchy on the one hand, or anarchy on the other. Now maybe it's a caricature, but we could say that engineers view the world that way. That's totally unfair to engineers, and I know that they are more multifaceted than that, but when you have governance processes that assume an organization is either a monarchy or an anarchy, there's a whole range of business processes that haven't been considered. But this really has to do with the nature of these frameworks themselves.
These frameworks are not designed for society as a whole, and mostly we wouldn't govern a society that way, I hope, though Europe is maybe trying to show us differently. They're for the governance of specific companies and specific institutions that have clear lines of accountability and responsibility, that are authority-driven, and, even more to the point, that are mission- and values-driven. And that's why these frameworks are all based around mission and values. These missions and values sometimes include ethical components, but very often do not, and sometimes explicitly do not, particularly if the institution in question is a for-profit corporation, which has, basically, a responsibility to promote its own interests first. And you might consider: okay, the promotion of ethics is sort of like enlightened self-interest, a kind of version of egoism. But it's not clear that egoism is a good basis on which to organize companies, or to organize a society consisting of companies, and that's the friendly analysis. I've heard it said, not completely inaccurately, by people like Harold Jarche among others, that corporations are literally psychotic, in that they have no real concern for anything that isn't them. And to a certain degree the same is true of institutions like colleges and universities, which very often, when they adopt these frameworks, do not look at the wider implications. Some of these frameworks do take stakeholders into account, but it's interesting to draw the distinction between stakeholders and people.
Right? A stakeholder is someone with a direct interest in the product or service being discussed. So a stakeholder is almost like a shareholder of a company, but their interest isn't based on owning shares of the company; their interest is in buying services from, or selling services to, the company. It's still going to be a financial, and perhaps commoditized, kind of interest, as opposed to an ethical or social or cultural interest in the company. I'm interested in what Disney does not because I'm a stakeholder, I don't even buy their movies, but because of the wider impact they have on society. And I've said various things over the years about Disney and how I think the content Disney produces is in various ways harmful to society; that's a different argument, but nonetheless, a framework that includes stakeholders does not include me when we're talking about Disney, while a framework which includes society does include me. But these frameworks are based, for the most part, on stakeholders and not society. And although some of them, particularly those focused on ethics, do look at impacts on wider society, it's not clear how those impacts come back to influence how the corporate or institutional governance operates.
So one of these wider frameworks is human rights, and there has been, I'd say, a fair amount of discussion about the role of human rights frameworks in designing or implementing ethical frameworks. So, first of all, what is a human rights framework, and what does it look like for algorithms?
Well, here, from the rankingdigitalrights.org paper, we have, and I quote, a human rights framework for algorithms that would "not just set forth standards on how to do no harm or be ethical, but would help hold companies accountable for those standards by providing mechanisms for risk assessment, enforcement, redress when harm has occurred, and individual empowerment for technology users". A bunch of stuff is happening there, right? Again, it's kind of an effort to drag us back up the staircase, because it's talking about holding companies accountable, providing mechanisms for risk assessment, providing redress, etc. But there's a little bit of down-the-staircase influence as well, when we talk about individual empowerment for technology users.
So while a human rights framework is kind of consequentialist in its intent, seeking to root out and prevent violations of human rights, there's still something else there: maybe not quite social contract ethics, but maybe a sort of deontological focus in the individual empowerment. At any rate, with respect to human rights, companies are not doing well, as this diagram indicates. Companies like Apple, Amazon, Verizon and Samsung just don't give us any disclosure about how, say, users' online content is curated, ranked, or recommended.
We get a little bit from Microsoft and Facebook, but again, nothing near adequate disclosure. And without disclosure, how can you tell whether they're violating human rights? So this suggests the need for something like a human rights impact assessment. And this is one of these processes again, so we're almost backing into the land of checklists. Joy. So we have planning and scoping; data collection and baseline development; stakeholder engagement throughout the process (that's nice); analyzing impacts; impact mitigation and management; and reporting and evaluation, leading back to planning and scoping. You see how this is a very basic framework: it doesn't actually say what the human rights are, and what they are going to be is going to be defined by stakeholders.
But remember, stakeholders are going to be only those directly affected. So, for example, I am not a stakeholder in the plight of the Rohingya. The Rohingya are an ethnic group in Southeast Asia who have been basically exiled from their homeland, with a whole lot of, what's the phrase that they use, ethnic cleansing, and who are unwelcome and unsafe in their new landing spot in Bangladesh, on islands that are just a few feet above sea level, and sometimes a few feet below sea level. I'm not a stakeholder. So how do I get involved in this human rights impact assessment? Well, I don't, you know, unless I work for one of these companies.
So you see sort of a weakness of this framework, and it's a similar sort of weakness to what we saw with the IT governance frameworks: again, it's very institutionally focused; again, it's based on interest rather than society or culture; again, it's process-based, but the process kind of reduces to a checklist. And again, we don't really have this core essence of what "ethical" is, although at least in principle the idea that it's human rights means something like: if it promotes human rights, it's ethical; if it does not promote, or opposes, human rights, it's not ethical. But we talked about this already.
We talked about this when we were talking about social contracts, and how a human-rights-based approach to ethics is going to be insufficient, because a lot of the scope of ethics goes beyond human rights. But it's less of a criticism in this case, because it's just one of many governance frameworks that we could be considering, and there may be other governance frameworks for other aspects of ethics that are not engaged with, or do not have anything to do with, human rights. Here's an application of this thinking of human rights: designing for human rights. Now, it's interesting how the article gives us three categories of human rights violations. First, humiliation, that is, being put in a state of helplessness or insignificance, losing autonomy over your own representation; to me it's an odd way of putting it, but you can see how it's based on perhaps a deontological point of view of the individual. The second violation, instrumentalization: treating an individual as exchangeable and merely as a means to an end.
And that very clearly is a deontological approach. And then third, rejection of one's gift: making an individual superfluous, not acknowledging one's contribution, aspiration and potential. And again, I think that's a deontological perspective. So this particular application of a human rights framework is based on, I would say, a deontological representation of human rights, and that kind of is reflected in the process, which is to begin with values, then expand to norms, and then talk about how to actually design for these things. So one of the values, not surprisingly, is privacy, which leads to the norms of informed consent to processing, confidentiality, and the right to erasure, and people will recognize these as elements of Europe's GDPR. And then there's designing for these things, which is what's new in this kind of approach: positive opt-in, homomorphic encryption, and data removal from live and backup storage. Okay? So the homomorphic encryption is kind of a fun one, and I'm not going to try defining that, because I don't know what it means necessarily, although we could look it up on Google and we would probably know.
Well, let's look it up on Google and then we'll know. So what are we gonna do? And just this on the side if we have to look up something on Google it's probably not a basic human, right? But okay, so homomorphic encryption. So,
Homomorphic encryption is a form of encryption that permits users to perform computations on its encrypted data without first decrying it. Okay. So in the world of encryption and distributed ledger technologies, these are known as zero knowledge proofs or zero knowledge computations and it's really quite interesting. And it's based on a principle.
Something like this, suppose you have a piece of data and you add a number to it and you have a new piece of data. Well, if you encrypt this piece of data, now you have the encrypted piece of data. Oh, we're not saying I'm not seeing my hands here.
Okay, so right screen. So okay you have this piece of data, right? And you do a calculation, you have a new piece of data. Now if you encrypt this piece of data, there it is, and you perform the calculation on that. Then there's a mathematical relation between the effect of the calculation on the encrypted piece of data and the effect of the calculation on the unencrypted data such that you can test or perform the calculation on encrypted data and get the results that you wanted in an encrypted data.
This means that you can take your own real data, encrypt it, send it to a third party, have that third party do stuff with your data and send back the results, also encrypted. You decrypt them, and now you have your results, or can verify your results. Zero-knowledge proofs. Or, if you want to get really fancy, look up zk-SNARKs. Lots of fun.
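The principle described above can be seen in a tiny demonstration. This is a hedged sketch, not anything from the report being discussed: it uses unpadded "textbook" RSA, which happens to be multiplicatively homomorphic, so multiplying two ciphertexts gives the ciphertext of the product of the plaintexts. The parameters are toy values for illustration only, never for real use.

```python
# Toy demonstration of a homomorphic property using "textbook" RSA.
# RSA is multiplicatively homomorphic: Enc(a) * Enc(b) mod n == Enc(a * b).
# Tiny illustrative parameters -- completely insecure in practice.

p, q = 61, 53            # small primes
n = p * q                # modulus: 3233
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # 3120
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 6
c = (encrypt(a) * encrypt(b)) % n   # the third party computes on ciphertexts
assert decrypt(c) == a * b          # we decrypt and get the product: 42
```

So a third party could multiply our encrypted numbers without ever seeing them. Fully homomorphic schemes extend this idea to arbitrary computations, at considerably greater cost.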
Okay, so aren't we glad we did that? This is what happens, though, with terminology, right? Different people use different terminology to refer to the same sort of thing. All right, so back to human rights frameworks. Let's get back up here. There we go. So again, human rights is typically taken from a deontological perspective of ethics.
It has an overlap with a social contract picture of ethics. And the idea, expressed as fairness, is something like: if we were signing a contract counterfactually, then we would probably sign a contract that respected various human rights. Why? Well, to give us equal opportunity, etc. Justice as fairness, and all of that.
But we know that there's much more to ethics than justice as fairness. And so designing for human rights is going to partially, but by no means completely, address the applicable issues. Still, you see the application here, right? It's the use of a framework. The framework is run through a process.
The process takes elements of that framework and produces concrete, tangible outcomes. And these outcomes (things like positive opt-in, homomorphic encryption, data removal from live and backup storage) generally tend to be elements of this workflow as informed by ethical considerations. So there's a lot to be said for the approach.
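The values-to-norms-to-design-requirements progression described here can be sketched as a simple mapping. This is my own illustration, not a schema from the approach being discussed, using the privacy example from the GDPR discussion above:

```python
# A minimal sketch of the values -> norms -> design requirements progression,
# using privacy as the example value. Structure and wording are illustrative.
framework = {
    "privacy": {                                    # value
        "informed consent to processing": [         # norm
            "positive opt-in",                      # design requirement
        ],
        "confidentiality": [
            "homomorphic encryption",
        ],
        "right to erasure": [
            "data removal from live and backup storage",
        ],
    },
}

# Walk the framework to produce a concrete design backlog.
for value, norms in framework.items():
    for norm, requirements in norms.items():
        for req in requirements:
            print(f"{value} -> {norm} -> {req}")
```

The point of laying it out this way is that each abstract value bottoms out in a requirement a developer can actually implement and test.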
So how does this work on a more broad-based basis? Well, basically, what we're doing here in the field of ethics for analytics and AI is drawing on international human rights law, shall we say, as a means for assessing harm.
Again, there are other harms, but we'll leave that aside. That gives us clearly defined obligations and expectations that apply across the algorithmic life cycle. Here I'm quoting from a report in the Cambridge International and Comparative Law Quarterly on international human rights law as a framework for algorithmic accountability. So there's the framework in the diagram.
We can see in it prevention, monitoring and oversight, and effective remedies. These are all elements that by now have become familiar to us, and they're playing a role at each step. And this is a basic workflow, right? Conceptualization and analysis, design, testing, deployment, monitoring and evaluation: a basic workflow. We know that our AI and analytics workflows are more complex, but we'll just plug in the AI and analytics workflow here.
And then let it be informed by this international human rights law framework. So what's that going to do for us? First, it's going to identify the roles and responsibilities across the full algorithmic life cycle, or the full algorithmic workflow; 'life cycle' and 'workflow' are going to basically mean the same thing here.
Second, operationalize, that is, describe in operational terms, the measures necessary to ensure rights compliance. And then, third, integrate a rigorous accountability framework. By 'rigorous' we don't mean strict or really hard to comply with or anything like that. What it means is an accountability framework that has actual numbers and principles attached to it, so it's not based on someone's perception that, yes, I think we were ethical here, or, you know, our ethical compliance was high. It'll be focused on specific indicators, and specific ranges of values for those indicators, that allow for what we could call an objective assessment of rights compliance in this case. Not unreasonable, not wrong.
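To make the idea of 'rigorous' concrete, here is a hedged sketch of what indicator-based accountability might look like in code: specific indicators with target ranges, yielding a pass or fail rather than someone's perception of being ethical. The indicator names and thresholds are entirely hypothetical.

```python
# Illustrative indicator-based compliance check. The indicators and their
# acceptable ranges are hypothetical examples, not from any real framework.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    value: float
    low: float    # acceptable range, inclusive
    high: float

    def compliant(self) -> bool:
        return self.low <= self.value <= self.high

indicators = [
    Indicator("opt-in rate before any processing", 1.00, 1.00, 1.00),
    Indicator("erasure requests completed within 30 days", 0.97, 0.95, 1.00),
    Indicator("error-rate disparity across demographic groups", 0.02, 0.00, 0.05),
]

# An objective assessment: every indicator either is or is not in range.
report = {i.name: i.compliant() for i in indicators}
assert all(report.values())
```

The contrast with a subjective self-assessment is the point: anyone running the same numbers through the same ranges gets the same answer.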
Okay, so what are the limitations of this approach? There's a good discussion by Nathalie Smuha from 2020, looking at what we need beyond a human rights-based approach to AI governance. A very good paper, definitely recommended. So she considers four objections. She's going to argue against these as objections, but I think that we could maybe still make a pretty strong case for them.
Nonetheless, here are the objections that she's going to dismiss. First of all, that these rights may be too Western; an Eastern or Southern analysis of human rights might be different, a point I think is reasonable, though she's going to argue against it. Second, some people argue that human rights frameworks are too individualist, too focused on individual rights, but I think there's a widespread acknowledgment and acceptance that there are such things as cultural rights or community rights that can also be violated in the context of human rights. Like, for example, genocide isn't a thing that you do to a person; it's a thing that you do to a culture. It involves people, obviously, but it's the culture that's being targeted. Third, that human rights frameworks are too narrow in scope.
Although it is arguable that everything that's ethical could be covered under human rights frameworks, I don't agree, because I don't think all questions of ethics are questions of human rights. But, you know, there's room for debate there. And then the last is that the use of human rights frameworks is too abstract to form the basis of sound AI governance.
But I think we've seen that this isn't the case: we can work, as we saw before, from values through norms through to specific design requirements. And that's not abstract; it gets right to the point of design requirements, well into the software development process, and you've got concrete stuff you can work with. Rather, she says that human rights frameworks in some important ways require democracy, and she writes, 'without securing an underlying societal infrastructure that enables human rights in the first place, any human rights-based governance framework for AI risks falling short of its purpose.'
And we could probably broaden that and say 'ethical' instead of 'human rights,' and the same point could be made: without an underlying societal infrastructure that enables an ethical stance in the first place, any ethics-based governance framework for AI risks falling short of its purpose. For society needs to support the framework, just like I said before, right? We all agree that aircraft should be safe; there's nobody out there arguing for unsafe aircraft. We all agree that surgeons should not leave their gloves inside patients.
Nobody is out there on the pro-glove side. But with ethics, we're not in that situation, at least not obviously. With a lot of the ethical principles that have been discussed even in these videos (autonomy, consent, etc.) there isn't a clear societal infrastructure that enables them, and that's the problem.
And so there isn't this overlap of, you know, ethics and democracy that we need to take place. I mentioned earlier that community can be defined by consensus, where a community is identified by the way it defines a source of truth. That, perhaps, is the first failing in our current communities, where we don't have structural mechanisms that allow us to come to an understanding of what will constitute a single source of truth in our society. Now, would a common shared ethics follow from that? Well, probably not, but at least we would have the grounds on which to have that kind of conversation, so that we could design systems and methods that allow for differences in ethical perspectives, but still allow us to work together in order to create software that respects those differences.
But if we're in a situation where one side is right and the other side is wrong, period, end of story, you don't have the grounds for those kinds of discussions, and right now that's where we are. And that's why these frameworks, you know, the data management frameworks, the AI frameworks, the human rights frameworks, they're nice, they're useful, they're great for addressing risks.
They're less stringent than regulation and therefore more widely applicable, and, you know, can take into account exceptions and special circumstances, context-specific applications (call them application profiles), etc. But they don't have this surround that they need that would lead people to use them, and that's the problem.
And this is an issue with governance frameworks generally: they are not designed for wider society, where you do have these dramatic differences, where you do have a more democratic governance structure. And that's pretty much true almost all around the world, where individuals who are not necessarily stakeholders do have a point of view. Governance frameworks are designed for organizations with a single point of view, a single set of values, a single perspective, a single mandate. Governance frameworks depend on agreement on shared presumptions: things like vocabulary, what's right, what's wrong, what's valuable, what's not valuable, what counts as a benefit and what doesn't. And in the end, they're not actually based on ethics. They are governance frameworks. They tell us how to manage things, what all the processes are and what they should be and not be. They're incredibly useful, but they don't tell us why we're doing this management, you know, and they put culture, ethics and behaviours in a little box.
It gets overwhelmed by all the rest of the stuff. And finally, the problem with governance frameworks is humans. A human will follow a governance framework for as long as it's convenient to do so. Convenient is maybe the wrong word, but as long as it's acceptable to do so, or profitable to do so, pick your word, right?
But at a certain point they won't, and this is where we have a clash between ethics and governance frameworks. You can identify, sometimes very precisely, what the person has done that's unethical, or at least goes against the governance process, but not why they shouldn't have done it. Simply noting that they didn't follow the process isn't enough of an ethical motivator for people. I hear all the time people saying, you know, we really need to focus on the process and not the results, the process is what matters. That's fine until you're harmed by the results, or that's fine until the results reduce you from the status of being a person to being a non-person, as happened to the Rohingya.
And then you can't trust the process anymore. And finally, it is arguable (this is the point of things like critical race theory, and critical theory in general) that the process is rigged. Now, this is a much bigger discussion. It's one that Chomsky attended to in Manufacturing Consent, and we saw how well that was accepted by the keepers of the process.
It's also covered by people like Naomi Klein. The process does favour authority; there is no democratic mechanism in the process. It does favour some particular kind of order within it, you know, a deontological, rule-based sort of order, and it views things that aren't consistent with the process as being something like anarchy, which I don't think is the case at all.
And, you know, it rewards the powerful and kind of shrugs its shoulders when people are harmed by the process. Now, perhaps these are all unfair criticisms, and, you know, not all processes in the world, not all systems, not all organizations or institutions favour the wealthy and harm the poor, etc., but there is a preponderance of this, and there's nothing inside the mechanism of governance frameworks that even suggests that there's a problem with that. And to me that says that a governance framework, like a regulation, could be used as a mechanism for implementing something, but not as a mechanism for deciding what to implement.
All right, one more video in this series, and then we're done with ethical practices and we'll move on to the next part of the module. So that's it for this one. I'm Stephen Downes. See you in part five.