This module addresses the idea of ethics in general, outlines what we mean by analytics, and looks at how the two come together. It frames the course as an exploration of how we as a society ought to address ethical issues in analytics. What is the scope of the problem facing us? What are the historically relevant approaches to social dilemmas, and why do they seem unequal to the task before us today?


Module 1 Introduction - The Search for the Social Algorithm, Oct 12, 2021

The Joy of Ethics, Oct 13, 2021

Ethics and Analytics: Getting a Feel For the Subject, Oct 14, 2021

Ethics and Analytics: What We Mean By Ethics, Oct 14, 2021

Ethics and Analytics: What We Mean By Analytics, Oct 15, 2021

Module 1 - Discussion, Oct 15, 2021

Live Events

2021/10/12 12:00 Module 1 - Introduction - Overview of the Course

2021/10/15 12:00 Module 1 - Discussion


What Does Ethics Mean to You?

This task is another request for a blog post. In this post, considering the sorts of issues that come up when working with analytics and artificial intelligence, what does ethics mean to you? Ideally this shouldn't just be a list of ethical values (like justice, fairness, equity, etc.) but a consideration of how you came to arrive at those values, what form they take, how they apply to your work, and how they should (?) apply to the field generally. Record your answer in your blog and tag it with #ethics21, and if you haven't already, make sure your blog has been submitted to the course (if for some reason it can't be submitted, send me an email when you've written your post and I'll make sure it's included manually).

Due: May 25, 2022


This course is a comprehensive study of what analytics actually are and how they're established in our field, and maybe more generally. After setting out some basic terminology and ground rules in Module 1, we begin by looking at the applications of artificial intelligence and analytics in learning technology in Module 2. Then, in the second half of the course, in Module 7, we're going to look at what decisions we actually make when we apply artificial intelligence, analytics, and neural networks to any of the applications that we've been talking about.

People talk about, for example, the need to avoid bias in the selection of the population that we study. Quite so, and I agree. But I'm looking at this from a different perspective: we are selecting a population to study, so what are the decisions that we make when we do that? We're still in old-world thinking: we want to say simple things like "bias causes bad results." We want simple explanations. But there's a range of decisions that we make when we're selecting a population for a study as input data for a neural network analysis, and we need to know what they are.

Then we apply the ethical dimension to all of this. Module 3 looks at ethical issues. The test here is whether the issue exists. Some people do literature surveys where they winnow the list of papers down to a small number of methodologically rigorous studies, but for our purposes, if somebody raises an ethical issue, it doesn't matter what the context is: that issue exists. Now we can argue about whether it's salient or not, but the existence proof is simply the fact that there's a piece of writing or an infographic or a video in which the issue is raised.

Similarly with approaches to ethics, discussed in Module 5. The discussions around ethics and learning analytics, and ethics and artificial intelligence generally, skip over this step. They assume that the ethics have been solved: "we know what ethical uses of AI are, and we just shouldn't do what's not ethical." But I'm going to argue, and, I think, pretty conclusively, that these issues are not solved, that the 2,500-year-long quest to find reasons for deciding what's right and what's wrong was ultimately a failure, and that we haven't been able to find reasons to make these determinations. We can certainly rationalize things after the fact, but the manner in which we actually determine what's right and what's wrong is not a rationalist project.

And that leads us to the duty of care, which we discuss in Module 6. The duty of care is a feminist theory that has its origins in recent years in the writings of people like Carol Gilligan and Nel Noddings, who approach ethics from the perspective of practices, from the perspective of context, and especially cultural context, and from the perspective of putting first the needs and the interests of the patient or, more generally, the client.

And there's a whole discussion there. This is not a rationalist case of "I reasoned out that this is the right thing to do in all cases." It's nothing like that. It's not universal. It's not argued for. It's based on - well, it's hard to say what it's based on: the caring intuition, the specifically female capacity and need to show care toward the young. I think there's a reasonable argument there, and I don't think it's specifically a feminist argument. I think we all have the capacity to make ethical decisions for ourselves in a non-rationalist way, and this is a way of approaching that subject.

And that leads us to the practices. In Module 4, we look at ethical codes. People say there are common things about ethics here that we all agree to, but when we look at these ethical codes, we find very quickly that there is no common definition of ethics as it's codified in the different disciplines and in different circumstances. There is some overlap - fairness is something that comes up a lot, for example. But what we think is fair varies a lot from one circumstance to another. Similarly with equity, diversity, and other ethical values. Justice - people think, "yeah, ethics should be about justice," but the understanding of justice is very different, not just from one society to the next but from one person to the next.

So that leads us to the question: if not ethical codes, then what are the ethical practices? That's the question I'm going to use to finish off the course in Module 8, taking all the material that we looked at before and thinking it through. How do we actually decide what's right and wrong? What are the processes involved? What do we actually do?

And looking at this from the mesh perspective that I talked about, we get an understanding of how we can move from ethics as determined for us by an authority, an ethical code, or a set of rules, to something that we can determine for ourselves as individuals and as a society. That's the objective.