The Decisions We Make


What is it to use learning analytics? In this module we look more closely at the nature of artificial intelligence and machine learning in order to understand where the decisions we make have ethical outcomes. We examine the entire lifecycle of an analytics application, including but not limited to the framing of the problem, the data set, application, and testing.

Media

Module 7 - Introduction, Nov 29, 2021

The Machine Is Us, Dec 01, 2021

The Learning Context, Dec 02, 2021

How AI Works, Dec 03, 2021

Module 7 - Discussion, Dec 03, 2021

Data, Dec 04, 2021

Classifying Data, Dec 04, 2021

Organizing Data, Dec 04, 2021

Working With Data, Dec 04, 2021

Tools and Algorithms, Dec 08, 2021

Models and Interpretations, Dec 09, 2021

Testing and Application, Dec 13, 2021

Evaluation and Impact, Dec 14, 2021

Explainable AI, Dec 15, 2021

Using AI, Dec 16, 2021

Live Events

2021/11/22 12:00 Module 6 - Introduction

2021/11/26 12:00 Module 6 - Discussion

2021/11/29 12:00 Module 7 - Introduction

2021/12/03 12:00 Module 7 - Discussion

Your Posts

An Overview of the End-to-End Machine Learning Workflow
Larysa Visengeriyeva, et al., MLOps, 2021/12/01

"A high-level overview of a typical workflow for machine learning-based software development ."

Web: [Direct Link] [This Post]
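
To make the workflow concrete, here is a minimal sketch in Python of the stages such an overview typically covers: data preparation, training, and evaluation. The dataset, model, and parameter choices below are illustrative assumptions, not taken from the article.

# Minimal end-to-end ML workflow sketch: prepare data, train a
# model, evaluate on held-out data. All choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Data: a synthetic stand-in for the collection/preparation stage
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# 2. Training: preprocessing and estimator bundled in one pipeline
model = Pipeline([("scale", StandardScaler()),
                  ("clf", LogisticRegression())])
model.fit(X_train, y_train)

# 3. Evaluation: the held-out set approximates deployment performance
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))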

Let’s not forget: Learning Analytics are about Learning
Dragan Gašević, Shane Dawson, George Siemens, 2021/12/02

"The paper stresses that learning analytics are about learning. As such, the computational aspects of learning analytics must be well integrated within the existing educational research. "

Web: [Direct Link] [This Post]

Module 7 - Discussion
Stephen Downes, 2021/12/03

We consider what sorts of factors an AI takes into account when it performs an essay grading task or an image recognition task, relating these decisions to the accuracy of the result and the ethics of using the AI for these purposes.

Web: [This Post]
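
As a toy illustration of what "factors" can mean here, the sketch below grades essays from a few surface features (length, vocabulary diversity, a naive punctuation proxy). Everything in it, essays, grades, and feature set, is invented for illustration; real essay scorers use far richer inputs, which is exactly where the accuracy and ethics questions arise.

# Toy essay "grader": a linear model over crude surface features.
# The point is to make visible which factors drive the grade.
from sklearn.linear_model import LinearRegression

def features(essay):
    words = essay.split()
    return [len(words),                            # essay length
            len(set(words)) / max(len(words), 1),  # vocabulary diversity
            essay.count(",")]                      # naive syntax proxy

essays = ["the cat sat on the mat",
          "learning analytics raise questions of fairness and accountability",
          "analytics, ethics, and care interact in complex, contested ways"]
grades = [2.0, 4.0, 5.0]  # invented scores

model = LinearRegression().fit([features(e) for e in essays], grades)
print(model.coef_)  # the weights are the "factors" the model attends to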

The End of Theory: The Data Deluge Makes the Scientific Method Obsolete
Chris Anderson, Wired, 2021/12/05

"All models are wrong, but some are useful." So proclaimed statistician George Box 30 years ago, and he was right, writes Chris Anderson. He argues that in the era of big data, we have no more need for classifications and taxonomies, no more need for theories that are only broad generalizations of what the data describes precisely.

Web: [Direct Link] [This Post]

Hopfield Nets [Neural Networks for Machine Learning]
Geoffrey Hinton, YouTube, 2021/12/08

Lecture from the course Neural Networks for Machine Learning, as taught by Geoffrey Hinton (University of Toronto) on Coursera in 2012. This lecture describes Hopfield nets, and offers an easy-to-follow explanation of how neural nets can be used to remember.

Web: [Direct Link] [This Post]
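
As a rough sketch of the "remembering" idea: a Hopfield net stores binary patterns in a weight matrix built from Hebbian outer products, and recalls a stored pattern when the units are repeatedly updated from a noisy cue. The toy below (plain numpy, invented patterns) illustrates the mechanism; it is not code from the lecture.

import numpy as np

# Toy Hopfield network: store +1/-1 patterns with the Hebbian
# outer-product rule, then recall one from a corrupted cue.
def train(patterns):
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    state = state.copy()
    for _ in range(steps):
        state = np.sign(W @ state)  # each unit takes the sign of its input
        state[state == 0] = 1       # break ties toward +1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
cue = patterns[0].copy()
cue[-1] = 1              # flip one bit of the first stored pattern
print(recall(W, cue))    # recovers [ 1 -1  1 -1  1 -1]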

The Legal And Ethical Concerns That Arise From Using Complex Predictive Analytics In Health Care
I. Glenn Cohen, Ruben Amarasingham, Anand Shah, Bin Xie, Bernard Lo, Health Affairs, 2021/12/13

This article has good sections on the evaluation and application of AI-generated models in health care environments, which I adapted for the current work on the testing, application, evaluation and outcomes of learning analytics.

Web: [Direct Link] [This Post]

Explainable AI
IBM, 2021/12/15

"Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms."

Web: [Direct Link] [This Post]
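
One concrete instance of such a method is permutation importance: shuffle one feature at a time and measure how much the model's score drops, so large drops mark features the model relies on. A minimal sketch with scikit-learn follows; the dataset and model are illustrative stand-ins, not part of IBM's definition.

# Permutation importance: a simple post-hoc explanation method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and record the drop in test-set score
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")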

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller, arXiv, 2021/12/15

"There exists vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people define, generate, select, evaluate, and present explanations.... This paper argues that the field of explainable artificial intelligence should build on this existing research."

Web: [Direct Link] [This Post]

Researchers Publish Survey of Explainable AI
Anthony Alford, InfoQ, 2021/12/15

The survey covers the work of 67 papers and charts recent trends in the field. "Deep-learning pioneer Geoffrey Hinton downplayed the need for explainability, tweeting: Suppose you have cancer and you have to choose between a black box AI surgeon that cannot explain how it works but has a 90% cure rate and a human surgeon with an 80% cure rate. Do you want the AI surgeon to be illegal?"

Web: [Direct Link] [This Post]
