The ethics of analytics is particularly complex because there are issues that arise when analytics works, issues that arise because analytics is not yet reliable, and issues that arise in cases where the use of analytics seems fundamentally wrong. To these three sets of issues we will add a fourth describing wider social and cultural issues that arise with the use of analytics and AI, and a fifth related specifically to bad actors.
Pages
All Ethical Issues
This page lists all the ethical issues found in our literature search for ethical issues in learning analytics.
Media
Live Events
2021/10/25 12:00 Module 3 - Introduction
2021/10/29 12:00 Module 3 - Discussion
Tasks
Examples of Ethical Issues in Analytics
Take a look at the list of all ethical issues in analytics.
First, are we missing any important types of ethical issue? If so, please write a short description of that issue and place it into the correct category using this form.
Second, take a look at a specific type of issue from the list provided. Can you find an example or discussion of this issue on the web? If so, at the bottom of the page there is a place where you can suggest the resource. In the form that appears, provide the title and link as well as a short summary of the resource.
Due: Nov 21, 2024
Add to the Graph
In this task, draw connections between the analytics applications (left-hand side) and the ethical issues (right-hand side). Consider why the application you've selected raises the issue you've selected. The overall course graph will be modified by the sum of all the graphs input by course participants.
To access the graph tool, click here.
Note:
To draw a line, click on a node, then press the Alt key and drag the line across to the other node.
To submit your graph when it is completed, right-click on the graph and select 'Export'. You will see a message confirming that the graph has been submitted.
Due: Nov 21, 2024
Your Posts
M. Caldwell, J. T. A. Andrews, T. Tanay, L. D. Griffin, Crime Science, 2021/10/29
This is "report on a scoping project concerned with the crime and security threats associated with AI." In the article "examples were collected of existing or predicted interactions between AI and crime, with both terms interpreted quite broadly. Cases were drawn from the academic literature, but also from news and current affairs, and even from fiction and popular culture."
Web: [Direct Link] [This Post]
Synopsis
In the previous module we hope to have established that there is a wide range of uses for learning analytics and AI in education, from tools institutions can use to manage resources and optimize their offerings through to tools individuals can use to learn more effectively and quickly. If there were no benefits to be had from analytics, then there would be no ethical issues. But in part because there are benefits, there are ethical issues. No tool that is used for anything is immune from ethical implications.
As a result, as de Bruijn (2020) writes, "There is widespread demand for applied AI ethics. This is perhaps unsurprising in relation to government, academia, regulators and the subjects of algorithmic decision-making. However, for industry, a deluge of negative press over the last year could be seen as evidence of the reverse - a disregard for the ethical implications of AI-driven products and services. Yet this would be an oversimplification of reality."
Indeed. There are many extant documents devoted to identifying and tracking these issues. The issues are captured not only in the criticisms arising from the application of analytics, but also in the principles and codes of conduct developed in response to those criticisms. While what follows is by no means an authoritative or complete catalogue of the issues that have been raised, it is a reasonably comprehensive listing, intended not only to identify the most oft-cited and common concerns, but also to dig more deeply into the ethical implications of this technology.
The ethics of analytics is particularly complex because issues arise both when it works and when it doesn't. Consequently, in an approach we will follow, Narayanan (2019) classifies these issues under three headings: issues that arise when analytics works, issues that arise because analytics is not yet reliable, and issues that arise in cases where the use of analytics seems fundamentally wrong. To these three sets of issues we will add a fourth describing wider social and cultural issues that arise with the use of analytics and AI, and a fifth related specifically to bad actors.
Thus, this module is divided into five sections:
When Analytics Work
Modern AI and analytics work. As Mark Liberman observes, "Modern AI (almost) works because of machine learning techniques that find patterns in training data, rather than relying on human programming of explicit rules." This is in sharp contrast to earlier rule-based approaches that "generally never even got off the ground at all."
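To make the contrast concrete, here is a minimal sketch of the difference Liberman describes: a hand-written rule versus a classifier that learns its own pattern from labelled examples. The data, threshold, and scenario are invented for illustration and are not drawn from the course materials.

```python
# Illustrative sketch only: hand-written rule vs. learned pattern.
from sklearn.linear_model import LogisticRegression

# Toy data: hours of study per week -> passed the course (1) or not (0).
hours = [[1], [2], [3], [4], [6], [7], [8], [9]]
passed = [0, 0, 0, 0, 1, 1, 1, 1]

# Rule-based approach: a human writes the decision rule explicitly.
def rule_based_prediction(hours_studied):
    return 1 if hours_studied >= 5 else 0   # threshold chosen by hand

# Machine-learning approach: the model infers the boundary from the data.
model = LogisticRegression().fit(hours, passed)

print(rule_based_prediction(5.5))      # the rule applies its fixed threshold
print(model.predict([[5.5]])[0])       # the learned model makes its own call
```

The point of the sketch is only that the second approach discovers its decision boundary from training examples, which is why its behaviour depends so heavily on the data it was given.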
As we have seen previously, analytics can be used for a wide range of tasks: some involving simple recognition, some involving deeper diagnostics, some making predictions, some generating new forms of content, and some even making determinations about what should or ought to be done.
In such cases, it is the very accuracy of analytics that raises ethical issues. Sometimes we don't want to know everything or see everything. In many cases there is a virtue in not knowing something or not being able to do something, a virtue that is challenged when analytics reveals everything. This section considers a few examples.
When Analytics Don't Work
Artificial intelligence and analytics often work and, as we've seen above, can produce significant benefits. On the other hand, as Mark Liberman (2019) comments, AI is brittle. When the data are limited or unrepresentative, it can fail to respond to contextual factors or outlier events. It can contain and replicate errors, be unreliable, be misrepresented, or even be defrauded. In the case of learning analytics, the results can range from poor performance and bad pedagogy to untrustworthy recommendations or (perhaps worst of all) nothing at all.
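As an illustrative sketch only (synthetic data, not drawn from any of the systems discussed), the following shows one common way this brittleness appears: a model fit on a narrow, unrepresentative slice of data gives confident but badly wrong answers outside that slice.

```python
# Illustrative sketch only: a model trained on a narrow data range
# fails when asked about cases far outside that range.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Training data covers only x between 0 and 1, where y = x^2 looks nearly linear.
x_train = rng.uniform(0, 1, size=(200, 1))
y_train = (x_train ** 2).ravel()

model = LinearRegression().fit(x_train, y_train)

# Tolerable inside the training range; badly wrong far outside it.
for x in (0.5, 10.0):
    predicted = model.predict([[x]])[0]
    actual = x ** 2
    print(f"x={x}: predicted {predicted:.2f}, actual {actual:.2f}")
```

Nothing in the model signals that the second prediction is unreliable; the failure is visible only because we happen to know the true relationship.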
Bad Actors
Bad actors are people or organizations that attempt to subvert analytics systems. They may be acting for their own benefit or to the detriment of the analytics organizations or their sponsors. The prototypical bad actor is the hacker, a person who uses software and infiltration techniques to intrude into computer systems. Bad actors create ethical issues for analytics because they demonstrate the potential to leverage these systems to cause harm.
When it's Fundamentally Dubious
Narayanan (2019) describes the following "fundamentally dubious" uses of learning analytics: predicting criminal recidivism, policing, terrorist risk, at-risk kids, and predicting job performance. "These are all about predicting social outcomes," he says, "so AI is especially ill-suited for this." There are good examples of cases where analytics fail in such cases; Narayanan cites a study that shows "commercial software that is widely used to predict recidivism is no more accurate or fair than the predictions of people with little to no criminal justice expertise" (Dressel and Farid, 2018).
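As a rough illustration of the kind of comparison Dressel and Farid ran (using entirely synthetic data and invented features, not their dataset or method), the sketch below checks whether a statistical model actually outperforms a very simple baseline when the underlying signal is weak and noisy.

```python
# Illustrative sketch only: does a trained model beat a trivial baseline
# on a noisy prediction task? All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic "case records": two weakly informative features, one binary outcome.
X = rng.normal(size=(1000, 2))
noise = rng.normal(scale=2.0, size=1000)
y = (X[:, 0] + noise > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Sophisticated" model versus a one-line baseline using a single feature.
model = LogisticRegression().fit(X_train, y_train)
model_acc = accuracy_score(y_test, model.predict(X_test))
baseline_acc = accuracy_score(y_test, (X_test[:, 0] > 0).astype(int))

print(f"model accuracy:    {model_acc:.2f}")
print(f"baseline accuracy: {baseline_acc:.2f}")   # often nearly identical
```

When the outcome is dominated by factors the data do not capture, the elaborate model and the trivial baseline land in much the same place, which is the pattern Dressel and Farid report for recidivism prediction.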
It is arguable that the ethical issue with such uses of analytics is not that they will be inaccurate, but rather that analytics shouldn't be used in this way, for any number of reasons. The complexity surrounding social outcomes is one factor, but so is the impact on individual lives of decisions about (say) recidivism or future job performance. Even if analytics gets it right, there is an argument to be made that it should not be applied in such cases, or applied in this way.
Social and Cultural Issues
This is a class of issues that addresses the social and cultural infrastructure that builds up around analytics. These are not issues with analytics itself, but with the way analytics changes our society, our culture, and the way we learn.
So what do we learn from these five categories?
In the work above we've identified some areas that lie outside most traditional accounts of analytics and ethics. We found we needed to widen the taxonomy of learning analytics to include deontic analytics, in which our systems determine what ought to be done. And we had to extend our description of ethical issues in analytics to include social and cultural issues, which speak to how analytics are used and the impact they have on society.
And it is precisely in these wider accounts of analytics that our relatively narrow statements of ethical principles are lacking. It is possible to apply analytics correctly and yet still reach a conclusion that would violate our moral sense. And it is possible to use analytics correctly and still do social and cultural harm. An understanding of ethics and analytics may begin with ethical principles, but it is far from ending there.
There are some studies, such as Fjeld et al. (2020), that suggest we have reached a consensus on ethics and analytics. I would argue that this is far from the case. The appearance of 'consensus' is misleading. For example, in the Fjeld et al. survey, though 97% of the studies cite 'privacy' as a principle, the consensus is much smaller if we look at it in detail (Ibid:21). The same is true if we look at the others, e.g., accountability (Ibid:28).
And these are just studies strictly within the domain of artificial intelligence. When we look outside the field (and outside the background assumptions of the technology industry) much wider conceptions of ethics appear.