Unedited audio transcript from Google Recorder
Welcome to Ethics, Analytics and the Duty of Care. We're in module eight, the last module, talking about ethical practices in learning analytics, and we're just starting off this module with a reasonably short look at efforts to regulate artificial intelligence and analytics. To begin with, let's consider what the scope of regulation is. Basically, the idea here is to try to put some parameters or boundaries around what we're talking about.
I said at the very beginning of this course that I would interpret the concepts of analytics and AI very broadly, and I have, and that I would consider the context of ethics very broadly, and I have, taking into account the wide range of applications and the wide range of issues that arise. These principles continue to apply as we talk about the scope of regulation.
But let me draw from Corinne Cath, and we'll identify a few of the major areas covered by what regulatory agencies are trying to do. First, there is what might be called ethical governance. This is really an attempt to zero in on what Cath calls the most pertinent ethical issues: for example, fairness, transparency and privacy. Now, my argument, to some degree, is that over time we will probably find that our tastes and our interest in the issues raised by AI will vary. Right now we're focused on fairness, transparency and privacy; ten years from now we might be focused on three very different things.
Similarly, from society to society, different issues have different levels of importance. For example, privacy in a large city is one thing; privacy in a small town, like the one where I grew up, is very different. But nonetheless, the idea here is that there will be a continuing interest in the particular ethical issues that are raised by AI.
Secondly, the scope can be thought of as covering what Cath calls explainability and interpretability: for example, the right to an explanation of algorithmic decisions. We've already covered the idea of explainability in AI, and we're going to revisit it in this presentation as well. But there is this sense that people should know what's happening in an artificial intelligence or analytics application, so that they have some understanding of what they need to fear and what they don't, or even whether they need to fear anything at all, or whether they can just go on with their lives.
This is a bit different from the idea of issues; this is more about looking at AI and analytics from the perspective of risk, but informed risk. That's the intent here. The third area of regulation could be covered under the heading of auditing. These are, as Cath says, mechanisms that examine the inputs and outputs of algorithms for bias and harms. I think it goes beyond that, and we'll see some cases where it does. But the idea here is that even if we can't explain everything, and even if we can't come to decisions about the ethical issues on a broad basis (and of course I've argued that we can't), there are still grounds for regulation, just based on the principles of good management, good software practices and good corporate practices. So auditing and accountability would certainly be within the scope of regulation.
As you can see, these scopes don't really narrow the field at all, at least I don't think they do, but they give us a sense of the different sorts of approaches taken by different efforts at regulating artificial intelligence and analytics, and arguably any attempt to regulate without addressing all three of these would be incomplete.
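To make the auditing idea a little more concrete, here is a minimal sketch, in Python with invented data, of the kind of check an auditor might run over an algorithm's outputs: compare selection rates across groups and flag a disparate-impact ratio that falls below the commonly cited four-fifths threshold. This isn't any regulator's prescribed test, just an illustration of what "examining the inputs and outputs of algorithms for bias" could look like in practice.

```python
# Illustrative only: a toy audit of an algorithm's outputs for group-level bias.
# The data and the four-fifths threshold are assumptions for the sketch, not a mandated test.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, where selected is True/False."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group selection rate to the highest (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    sample = ([("A", True)] * 60 + [("A", False)] * 40 +
              [("B", True)] * 35 + [("B", False)] * 65)
    ratio, rates = disparate_impact(sample)
    print(rates)            # {'A': 0.6, 'B': 0.35}
    print(round(ratio, 2))  # 0.58 -- below the oft-cited 0.8 "four-fifths" threshold
```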
I said we'd revisit explainability, so let's do that now. When we talk about explainability, for the most part we have talked about it in the sense of the first of these types, which is titled rationale: the reasons behind the decision. To a large degree, I've argued that we're not really going to get explainability in that sense. We know an AI system isn't going to follow a nice, neat if-then type of rule or regulation that we can appeal to, and even when we try to approach the issue of explainability counterfactually, through, for example, the semantics of possible worlds, there are going to be different perspectives, different points of view, that will realize the explanation differently. An explanation very much has to do with point of view: the range of options you were expecting, and the alternative thing that you thought might have happened. But that's just the rationale.
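To illustrate what a counterfactual rationale might look like in practice, here is a minimal sketch in Python. The "model", the features and the thresholds are all invented for the example; the point is only that a counterfactual explanation names a small change that would have produced a different decision, and that the answer depends on which alternatives you consider.

```python
# A toy counterfactual explanation: find a single-feature change that would have
# flipped the decision. The rule, features and thresholds are invented for illustration.
def approve(applicant):
    # a deliberately simple, transparent rule standing in for a real model
    return applicant["grade"] >= 70 and applicant["attendance"] >= 0.8

def counterfactual(applicant, step=1, max_steps=50):
    """Nudge one feature at a time upward until the decision flips."""
    original = approve(applicant)
    for feature in applicant:
        trial = dict(applicant)
        for _ in range(max_steps):
            trial[feature] += step
            if approve(trial) != original:
                return f"if {feature} had been {trial[feature]} instead of {applicant[feature]}"
    return "no single-feature counterfactual found within range"

student = {"grade": 64, "attendance": 0.9}
print(approve(student))         # False
print(counterfactual(student))  # if grade had been 70 instead of 64
```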
The slide here shows that the ICO and the Alan Turing Institute actually identify six main varieties of explainability. Now, I'd argue on technical grounds that the other five aren't really explainability, but they are things that need to be considered, whether or not we call them explainability. These are the sorts of things that someone regulating AI would want to know about with respect to an AI or analytics application.
For example, questions of responsibility come up: who made the system? Who designed it? Who created the model? Who implemented it? Who owns it now? And if it does something or makes a decision, who do you talk to to get some sort of review? The presumption, and I think it's a fair presumption, is that what the AI system does shouldn't necessarily be the final word. Another area of explainability, and we've covered this in a set of four videos, is data: what data went into the model, and how the data was used. We know already, from looking at it previously, all the different questions that entails, and we haven't even finished talking about data in the context of this course. There's artificial intelligence and analytics, and then there's this thread about data that runs through it. I suppose that's always been the case with computer science, and it's certainly the case with modern neural network AI, which depends on data. Another type of explainability is fairness. That one is harder to get at, because it's hard to know what counts as fair.
We can look at social contract theory, or at John Rawls and the idea of justice as fairness, but what counts as fair might be perceived very differently from one person to the next, and might be applied very differently from one context to the next. And fairness does not simply mean that the AI is unbiased and individuals are treated equitably. When we talked about the duty of care, for example, we talked about the importance of getting the perspectives, and indeed even the involvement, of those who are impacted by the AI in the design of the AI. That's not something you can just measure as a fairness metric, but it certainly does seem to play a significant role in our idea of whether an AI is fair or not fair.
Another dimension of explainability is safety and performance. These align with typical and, I think, reasonably well understood computer science auditing procedures, and they address issues of accuracy, reliability, security and robustness. All of these are already governed under standards set by the IEEE or the ISO, and all of these are indicated through various common practices or methodologies.
And then finally, and we talked about this in the previous module, there is impact: understanding what the impacts are of the use of analytics or AI, how the effects are monitored, and how decisions are monitored. But again, impact isn't a simple phenomenon, and it's not easy to trace a specific impact back to a specific thing you did in setting up the AI model or providing it with data to learn from. Still, this gives us a framework for thinking about explainability, or thinking, perhaps more accurately, about the way we would like people who create AI systems to be accountable for what they've created.
So this is perhaps a good way of describing the information that regulators are going to need from the providers of AI and analytics systems. And if you're in the process of building or implementing a system and can't provide answers to all of these questions, then I think regulators will be in a good position to question exactly what it is that you're doing and ask for some sort of guarantees or safeguards, because what you're doing might be, in significant respects, dangerous.
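As a way of picturing how those six dimensions might translate into the information a regulator asks for, here is a minimal sketch of a documentation record, essentially a checklist a provider could be expected to fill in. The field names are my own shorthand for the categories above, not an official ICO or Alan Turing Institute schema.

```python
# A toy documentation record covering the six explanation types discussed above.
# Field names are illustrative shorthand, not an official schema.
from dataclasses import dataclass, fields

@dataclass
class AIAccountabilityRecord:
    rationale: str = ""           # reasons behind a decision, incl. counterfactuals offered
    responsibility: str = ""      # who designed, built, owns, and reviews the system
    data: str = ""                # what data went into the model and how it was used
    fairness: str = ""            # how equitable treatment was assessed, and with whom
    safety_performance: str = ""  # accuracy, reliability, security, robustness testing
    impact: str = ""              # how effects on individuals and society are monitored

    def missing(self):
        """Return the dimensions a provider has not yet documented."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

record = AIAccountabilityRecord(
    responsibility="Built by the analytics team; reviewed by the registrar's office",
    data="Three years of LMS activity logs, de-identified before training",
)
print(record.missing())  # ['rationale', 'fairness', 'safety_performance', 'impact']
```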
This, to a large degree, is the approach taken in Europe. Earlier this year, that would be 2021, a draft proposal called the AI Act was presented: basically, a regulation laying down harmonized rules on artificial intelligence. I've got the link to it here on the slide, plus links to a couple of summaries; the web is littered with summaries, so you don't need to depend on me for this information. But there are a few things worth noting. The AI Act takes what may be called a risk-based approach. In fact, I read in several documents that there is a general consensus that AIs posing different levels of risk should be treated differently, and obviously, AIs that pose the greatest risk should be regulated most stringently, while for AI that poses the least risk it's almost like anything goes.
At the high end of that risk scale are uses of AI which the proposed AI Act says should simply be banned; these are all listed under Title II of the proposal. This is a quick summary, so obviously read it for more details, but it bans AI that is subliminal, or is based on subliminal methodologies, so that the individual isn't aware that they're being manipulated in some way. Whether that would cover things like dark patterns, I don't know, but if it does, I'm not going to complain. Also banned are AI and analytics that exploit vulnerabilities, and here we can ask whether that includes things like the dopamine hit you get from receiving a like on a Facebook account, or engaging in addictive behaviours like doomscrolling, just endlessly scrolling because it never stops. Also banned are applications that create some sort of social score or ranking based on your social media activities, which is interesting because it's a contrast with China, which has taken the opposite approach. And then, subject to a number of conditions,
in fact the description of the conditions is the longest part of Title II, it bans real-time biometrics for identification in public places. So those are the things that the European Commission feels are the most risky applications of AI. Interestingly, as I look at that and think about it, it doesn't include things like what we've been calling in the chat and in the discussions armed autonomous quadrupeds. It's interesting that there isn't a specific ban on joining AI with weapons, although you could interpret that as exploiting vulnerabilities. There's also a fairly significant set of regulations regarding compliance with existing rights and freedoms in areas that are considered high-risk AI. And then, just generically, where there is a potential risk, AI needs to carry CE marking of product compliance, showing that it's been developed under processes and procedures that provide a reasonable degree of assurance about its safety and security. On the whole, I think it's a pretty reasonable approach, although I would want to say explicitly that we don't want weapons on our AI.
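To make the risk-based structure a little more concrete, here is a rough sketch of the tiering logic as I read the draft: prohibited practices, high-risk systems subject to conformity requirements and CE marking, and everything else. The categories and example use cases are my own illustrative reading of the proposal, not the legal text.

```python
# A rough, illustrative sketch of the AI Act's risk-based tiering, not the legal text.
# The example mappings below are my own reading of the draft proposal.

BANNED = {          # Title II: prohibited practices
    "subliminal manipulation",
    "exploiting vulnerabilities",
    "social scoring by public authorities",
    "real-time remote biometric identification in public spaces",  # with exceptions
}

HIGH_RISK = {       # permitted, but subject to conformity assessment and CE marking
    "education and vocational training",
    "employment and worker management",
    "access to essential services",
    "law enforcement",
}

def risk_tier(use_case: str) -> str:
    if use_case in BANNED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high risk: conformity assessment, CE marking, monitoring"
    return "limited or minimal risk: transparency or no specific obligations"

print(risk_tier("social scoring by public authorities"))
print(risk_tier("education and vocational training"))
print(risk_tier("spam filtering"))
```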
The other thing, and I think this is very much worthy of note, is that the things we think are risky now might not be the things that are actually really risky, and the things we don't think are really risky now might turn out to be the really risky ones. For example, consider using engagement as the primary consideration when designing recommendation algorithms. We've talked about this already: people seem to like to engage with fake news and controversy. It turns out that that's really risky for society, and that it can have bad effects, exposing a society by making it vulnerable to being, if you will, hacked by fake news and fake information, which arguably is what has been happening to the United States, and the Western world in general, recently.
So these risks are going to move around, and in a framework there needs to be sufficient flexibility to move specific practices in and out of different risk categorizations, to allow for a reasonably flexible response to them.
This is the sort of thing that comes up. People, especially in the United States, have decided in many places that facial recognition algorithms are too risky to use at all, and so there have been bans in places like Massachusetts, Maine and Minneapolis, and companies like Amazon, IBM and Microsoft are reported as saying they will stop selling facial recognition technology. All of this is documented. But as TechCrunch points out, they run a service called Crunchbase, which tracks investments in technology companies, including AI companies, and according to them the investment cash is just rolling in, suggesting there's a bit of ambiguity, society-wide, as to whether facial recognition is a really risky technology that should be banned or a really good business opportunity that should be funded. The cynical among you, and I include myself in that, will say they're probably both right.
And so that's going to create some pressures, and that's one of the risks of regulation, of taking the regulatory approach: often the people who write the regulations are the same people who benefit financially from certain regulatory regimes. It's a lot like letting cable companies write your country's communications policy, or pharmaceutical companies write your country's drug patent law. Here, there are cases where AI and analytics companies will have a hand in writing the regulations that govern AI and analytics, and they make money from what we might consider risky behaviours. Now, in Europe, the government is taking the approach that their money doesn't buy freedom from regulation, but that's not something that applies across the board.
The United States is in a bit of flux, obviously, because of the fairly dramatic changes of government they've had recently. The NIST plan describing an American AI initiative was developed under the Trump administration and is dated 2019. Reading it, though, while there are elements of Trumpiness in it to be sure, like the America-first approach, to a large degree it reads like a reasonably well thought out set of principles, and the five principles that we could say guide the initiative are the sort of principles we would expect from the United States generally; I can't see Joe Biden changing them dramatically. They want to drive technological breakthroughs; there's certainly a sense that getting better at AI is an imperative. They also want to drive the development of appropriate technical standards, and again, that's an approach that characterizes what the United States has done in other areas.
A lot of the impetus for things like the International Organization for Standardization (ISO) or the IEEE comes from the United States, and they do have standards in a lot of product areas and services. They're also focused on training workers with the skills to develop and apply AI technology, which presumably is a good thing. But now, here are, I was going to say, here are the Trump things, but I don't think these are Trump things specifically: protecting American values, including civil liberties and privacy, and fostering public trust and confidence in AI technology. This reflects the tension that always exists in the United States. It's very much a nation that believes in civil liberties and privacy, and these will be of primary importance. At the same time, the United States is a nation that believes in and trusts its industries, including its technology industries, and industry has always had a large say in American policy development. So there needs to be a mechanism to enable people to be confident that their rights and freedoms are being respected, and also confident in the companies that are developing and deploying this technology.
And then, finally, they're looking for an approach that protects the American technological advantage in AI while still promoting an international environment that supports innovation, and here, longer term, we can expect to see things like copyright, patents and other IP regulations come to the fore.
Canada has adopted, basically, at least according to the report by the Privacy Commissioner, a rights-based framework that would explicitly allow personal information to be used for new purposes, but within that rights-based framework, and would create provisions specific to automated decision making. And, as I suggested on one of the earlier slides, it would include accountability requirements that could, for example, take into account the different types of explainability listed previously, although subject to the difficulty of providing explainability of artificial intelligence and analytics applications. There's quite a bit of movement, and I'm not really sure where we'll go with respect to copyright and intellectual property generally. I'm also not really sure where Canada will go with respect to ownership of data and data privacy. There is a strong streak of wanting to protect the privacy of Canadians, and at the same time there is an openness to allowing companies to develop and implement technology with the broadest freedom possible.
So there are a lot of open questions, I think, still with respect to Canada. Follow Michael Geist on that; he's probably the national expert on the subject.
China: there are various discussions of China, and obviously I don't read Chinese, so I haven't been reading these in the original. I did look at them, but they were all in Chinese. Still, this is something we've seen develop over the years. Most recently, they have released a set of draft guidelines on recommender systems, and what I noticed was a provision in there that allows individuals to turn them off, which I think would be very disruptive to, say, Facebook's or even LinkedIn's business model. And although China hasn't emphasized privacy in the past, there's more of an emphasis now on requiring user consent for the use of personal data.
But what's really been interesting in China recently has been the limitations it's been imposing on tech companies generally, and there have been a number of high-profile announcements in that regard, including prohibitions on tutoring and education services companies selling services to people under the age of 18. These limits would also prevent platforms from violating user privacy and engaging in practices that are harmful, like encouraging users to spend money or promoting addictive behaviours. We haven't seen that sort of emphasis in the Western world, but I wouldn't really be opposed to it if I saw it implemented. There's a big gap between subliminal practices and exploiting the vulnerable on the one hand, and encouraging people to spend money or encouraging addictive behaviours on the other. You might think of them as two of the same sorts of things, but I think there's a bit of a level-setting issue here: the companies in the West, I think, will have greater freedom to engage in practices like these, in the interest of providing them with good business models. Similarly, in China, especially recently, there have been strong antitrust and anti-monopoly laws, with specific liability for abusing a dominant market position through discriminatory pricing. And as well, I've noted some expression of concern about working conditions in China, especially with respect to things like AI surveillance.
I found this article as well: a statement on accountability from the Global Privacy Assembly. I really think that's where a lot of the emphasis in regulation is going to be, especially over the next few years. There are six provisions that I've listed here, and there are more in the document, but accountability includes things like assessing the potential impact on human rights; testing the robustness, reliability, etc. of the system; keeping records (which I'm sure Facebook wishes it didn't have to); disclosing data protection, privacy and human rights impact assessments, again so that they don't need to be disclosed by whistleblowers or court orders; disclosing the use of data and the logic involved in the AI; and ensuring that accountable human actors are identified.
Again, you see a consistency between these principles of accountability and the definitions of explainability that we saw earlier. So overall, the regulatory framework is going to require some kind of mechanism for accountability, but the danger here is that it will be very prescriptive. We'll talk about that.
But first, let's consider some other regulatory areas outside of AI and analytics generally. The one that comes to mind right away is data regulation. We have the European General Data Protection Regulation, and there are more in Canada, the United States and elsewhere; there are regulations on data. I've considered them kind of marginal to this, in that they have an impact on the ethics of AI, but they also have wide applicability outside that specific scope. Another area, and I mentioned this previously, is intellectual property. There are various issues that arise; the big one recently is defining the authorship of AI-generated content, and the question is, who owns the content, if anyone? If a monkey can take a selfie, so can an AI, and if that happens, who owns the results? It's an interesting question. But there are also intellectual property issues about data models, data in general, specific data, the processes involved in cleaning data, etc., etc. Right now, artificial intelligence is pretty free and easy. Most of the initiatives are open source, and even those that are proprietary are sharing quite well. As the dollar values rise, and as the importance of AI to the economy and to culture generally rises, I would expect more and more debates about ownership and IP.
And then, finally, there's the general area of civil wrongs, or torts as they're called, covering things like manufacturing and design defects generally in AI, particularly if they can be identified as causing harm. That's going to be a hard one, because again, there isn't going to be a simple causal chain from the design of an AI to somebody getting injured. Where you can establish that, sure, and we might see that in the case of obvious things like automated cars or self-driving vehicles generally. But for things like, say, an education recommender? You can wreck a person's life by badly managing their education, but how could you ever prove the consequences in a court of law? I just don't see it happening. Similarly, it's not just design defects; there's also the failure to warn people of risks. This will be especially the case in the data management area, where people might not be aware of the risks they incur, not just to themselves but to other people, when they share data. Even simple things: I've had arguments with survey companies. They call me, they want to do a survey, fine, I'll do the survey, and then they ask me personal information about my wife. It's not my information to share, and if I'm careless about sharing that information, she might be harmed. If I'm not thinking about that ahead of time, that's the sort of thing that could be a problem. Companies, I think, have an obligation to warn if the information you're sharing is a risk not only to yourself but to others.
The thing with regulating, well, anything, but especially regulating artificial intelligence, where it's really not clear that you can identify the benefits, is that it becomes subject to something called Goodhart's law. The idea, and I quote here, is that any metric ceases to be a valid metric the moment it becomes a target for optimization, and hence gets "metric hacked". We've seen it in research: the phenomenon of p-hacking to increase the significance of experimental results. The example given in various places is probably apocryphal and is certainly colonialist, but the idea is that the British administration in India wanted to eliminate cobras in the city, and so they put a bounty on cobras, which sounds like a good idea, and it did work at first. But, as it transpired, because there was a bounty on cobras, people started raising cobras and then killing them and handing them in to collect the bounty. Raising cobras became very profitable, and inevitably some escaped, and inevitably, once the administration realized that the system wasn't working, they rescinded the bounty, at which point all of the cobras that had been raised were released, and the city had many more cobras after the bounty than it ever did before.
That's an example of Goodhart's law, and you can see that people zeroed in on the bounty and forgot about the point of the policy, which was to limit the number of cobras. Now, if it had been possible to reward the city overall for lowering the number of cobras, that might have been a more effective mechanism, but without really accurate data on cobras, and accurate ways of identifying what measures a city takes to eliminate cobras, it's rather hard to regulate. And that illustrates some of the issues of using regulation to manage something as complex as artificial intelligence.
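A tiny simulation can make that dynamic vivid: the moment the bounty (the metric) becomes the target, breeders optimize for the bounty rather than for the actual goal of fewer cobras. All the numbers here are invented purely to illustrate Goodhart's law.

```python
# Toy illustration of Goodhart's law via the (probably apocryphal) cobra bounty story.
# All numbers are invented; the point is only that optimizing the metric (bounties paid)
# diverges from the goal (fewer wild cobras).
wild_cobras = 1000
bred_cobras = 0

# Phase 1: bounty in force -- some genuine catches, but breeding is more profitable.
for month in range(12):
    wild_cobras -= 20      # genuine catches, rewarded by the bounty
    bred_cobras += 100     # cobras raised purely to collect the bounty

# Phase 2: the administration realizes the metric is being gamed and rescinds the bounty;
# the now-worthless bred cobras are released into the wild.
wild_cobras += bred_cobras
bred_cobras = 0

print(wild_cobras)  # 1960 -- nearly double the original 1000, despite a year of bounties
```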
Instead, one writer, and I'm afraid I don't have her name here, it's on the previous slide, let me check in a moment, well, first of all she observes that, given the pace of AI progress, laws will often be outdated by the time they're passed. That's kind of true and kind of not true, and it really depends on the law, right? If the law specifies, for example, "you must not leave CDs open for anyone to read", well then, yes, it's going to be outdated. But if the law is something like "you should not leave data media open to be read", then that's a law that can be written to outlast at least some of the technological advancements taking place. Although, of course, once you move from data media to streaming data, your law no longer works. So there is a lag there, and it can be addressed to some degree by more careful wording of the laws.
Nonetheless, outdated regulations are a problem, and regulations that try to set specific metrics are a problem. So what's suggested in this article, and I really do want to get the name of the person behind it, so I'll just link to it, okay, the idea is based on work that Gillian Hadfield does; it's an interview article, and the article itself is written by Jeremy Harris. There it is. So the idea is to create regulatory markets for AI systems, whereby governments set safety targets, there are some examples, and then drive whole sectors of the economy to compete on those targets. But the trick, again, is Goodhart's law, and it's going to be very difficult to write these regulations in such a way that they don't become targets for optimization and manipulation.
And I forget where I saw this, I saw it just recently: any time you write a fairly detailed regulation, you're drawing a line, and you're pretty much guaranteeing that all the operators are going to squeeze up right along that line. I can't see what I'm streaming, so, there we go, I'm holding up my hand, I just want to make sure you can see it. Okay. So here's the line, right? And they'll be right up against the line, coming as close to violating the regulation as possible, because the regulation is, in their perception anyway, preventing them from making more money than they would if only the regulation were changed, or even better, if only the regulation were removed. I mean, I saw an article in an Indian newspaper talking about the AI regulation being proposed in Europe, saying that it would cost 35 billion euros, which is a very specific figure. And maybe it does, maybe it would; maybe that's the amount of money they would not be able to make by optimizing for predatory AIs that use subliminal mechanisms and exploit vulnerabilities, etc. But of course, that article doesn't look at the wider social cost of not having such a regulation, and the reason for that is obvious: those social costs can be offloaded to other people, to the rest of society. And so that's why there's the line, and that makes sense, and it's a good argument for regulations generally. What it's not a good argument for is clear and precise regulation, because that shows exactly how far you can go.
The argument that I read, and I'm in support of this argument, is that regulations should be written more vaguely, so that there isn't really the sense of a line you can approach without quite crossing. Even if you're here, and not quite at the line, you're in the danger zone, and then, the closer and closer you get to the line, the more and more danger you're in. It's kind of like speed limits. We have a speed limit on the highway of 110 kilometres an hour. Everybody knows that the line for enforcement is not 110 kilometres an hour, although I did see a police officer on TikTok saying, nope, you can never go more than 110 kilometres an hour, which is ridiculous, because that would make you a danger on the road, according to many people, including me. So you can go faster. But how much faster? Well, if you enter the province from the Quebec side, there's a great big sign that is actually like a buffet menu, right? If you're caught doing 20 kilometres an hour over the limit, here's the fine and you get a demerit; 30 over, here's a bigger fine and more demerits; 40 over, and so on, until it's stunt driving, where you receive something like a $10,000 fine, roadside suspension of your licence, seizure of your vehicle, and all of that. So it's back to a sort of vague law: it's vague in the sense that as you approach what really becomes socially unacceptable, the risks become greater and greater.
So most people drive around 120-ish, maybe approaching 125, and then there are a few people who drive 130; they're willing to push it a bit. But it's a four-lane highway, so you can have variable speeds on the highway, and it's all okay. That's actually a system that works, and it's quite the contrary of a system where a precise speed limit is defined and enforced exactly, because then you're going to have everybody driving at precisely that speed limit, no matter what the conditions are, whether or not it's safe to do so, and whether or not they're capable of driving at that limit.
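To see why the graduated approach behaves differently from a single hard line, here is a small sketch of that "buffet menu" idea: the penalty escalates with how far over the limit you are, rather than flipping at one threshold that everyone drives right up against. The specific thresholds, fines and demerits are made up for illustration; they're not any province's actual schedule.

```python
# Illustrative sketch of graduated enforcement: the penalty grows with distance past the limit.
# Thresholds and penalties are invented for the example, not any province's actual schedule.
def penalty(speed_kmh: float, limit_kmh: float = 110) -> str:
    over = speed_kmh - limit_kmh
    if over <= 0:
        return "no penalty"
    if over < 20:
        return "rarely enforced, but risk rises as you approach 20 over"
    if over < 30:
        return "fine plus demerit points"
    if over < 40:
        return "larger fine plus more demerits"
    if over < 50:
        return "large fine plus many demerits"
    return "stunt driving: major fine, roadside licence suspension, vehicle seizure"

for speed in (115, 125, 135, 145, 165):
    print(speed, "->", penalty(speed))
```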
At least, that's the argument. Again, I'm not sure you can create targets to optimize on without invoking Goodhart's law. I like the idea of vagueness, but at the same time, I think regulation really only works well for those egregious situations, like stunt driving, where everybody knows you're doing something wrong. The regulations won't prevent people who are unethical from doing unethical things, and they won't persuade people who are unethical to do ethical things. A way of saying this is: you can't legislate morality. You can't create a law that people don't actually believe is morally right. Now, in some societies and in some cases you can, you can get away with it for a while, but maybe you shouldn't, and maybe what you're really doing when you pass a law that isn't supported by the bulk of the population is legitimizing breaking the law, because nobody who's ethical would follow that law.
I think that, with respect to legislation, we as a society need to be aware of our limitations. I'm not saying that there should not be regulation of artificial intelligence and analytics; certainly in the case of the most risky applications I think there's a good argument for it. But our discussion of ethics neither begins nor ends with those regulations, and those regulations are indeed dependent upon our discussion of the ethics. If it turns out that there's nothing unethical in using subliminal tactics, then they wouldn't be considered risky or against the law. So we need a better understanding, or we need to be able to understand, what the ethical foundation is for our beliefs about what's risky and what's not risky. It's tricky: it's tricky because people don't understand AI, especially legislators, and it's tricky because even those who do understand AI don't necessarily agree that there are the sorts of risks that people say there are. So any legislation, any legislative approach, that we take to artificial intelligence today is going to be tentative; it's going to be a best-effort sort of thing.
We should think of it as a draft; we should think of it as beta. We're not sure yet what the impact of the legislation will be; we're hoping it's good, and we're using our best judgment, I would hope, to make good regulations. The incentive for regulation really is to avoid the worst harms, which is a consequentialist sort of position, and it's kind of a, I don't want to say crassly materialist position, because that's not quite it, but it's one that doesn't get at the nuances of what we think is right and wrong in artificial intelligence. So regulations and legislation will only take us so far, and they depend on what our ethics already are. We're going to take a step down our staircase and look more at what we think good practice is generally, without thinking about how we should legislate good practice into law. But that's for another video.
So I'm going to stop here. I'm Stephen Downes; we'll keep moving forward in module eight. Bye for now.