The Learning Context


Unedited audio transcript from Google Recorder

Hi everyone. I'm Stephen Downes. Welcome once again to Ethics, Analytics and the Duty of Care. We are in module 7, titled "The Decisions We Make". Now, the overall purpose of this module is to look at the learning analytics and AI workflow from beginning to end and consider it as a whole, thinking about, at each point, the decisions we make as we go through it and, therefore, what the ethical implications might be. It's an attempt to get past a fairly simplistic look at learning analytics that focuses in on a particular thing, like say data bias, or on a particular part of the analytics process, such as course completion.

We're looking at this much more broadly, and so it's useful to take a beginning-to-end look at it. So that's the objective for today: the learning analytics and AI context. Analytics and AI do not operate in a vacuum. I'm going to quote quite a bit in this talk from Dragan Gašević, Shane Dawson and George Siemens' paper on learning analytics, titled something like "let's not forget: learning analytics are about learning".

They write that "learning analytics needs to build on and better connect with the existing body of research knowledge about learning and teaching". Some of the key questions that we need to ask as we look at learning analytics: What are we trying to do with AI and analytics?

What are we trying to measure, or predict? Who is involved? And a range of other questions setting, in other words, the overall learning analytics context. There is not, and probably won't be for years, maybe decades, a general artificial intelligence, or general AI. So all applications of analytics and AI are built specifically for a particular context, and when they're built for a particular context, how you define that context immediately shapes how you define that application of AI and analytics.

Here we have a model, a framework basically, provided by Greller and Drachsler: a pedagogical model that contains basically six dimensions: competences, constraints, method, objectives, data and stakeholders. Now, I've taken competences out of that, and the place to look at that might be in the final module, and I'm deferring a look at data to the next video.

So what we're going to be looking at in this video will be basically theory, objectives, stakeholders and constraints, though not in that order, because why would I follow the order I've just put on the slide here? So let's begin, then, with stakeholders.

When we think of stakeholders we can perhaps begin with the concept of responsibility for artificial intelligence, responsibility for analytics. And basically what we're looking at here is the concept of complicity. That is, anybody who's involved in some way or another, in other words anyone who is complicit, is responsible for the outcomes of an AI or analytics system.

Now, that might seem like a non-controversial statement, but it isn't, for two reasons. First of all, there are some people who suggest that analytics and AI might become autonomous and therefore become responsible for their own actions. Now, I'm rejecting that in this discussion, although I don't want to presume to have closed the door on the broader debate about this issue.

However, I think that for the foreseeable future, responsibility for AI will still be attached to the people who are complicit in its development and deployment, not assigned exclusively to the AI on the basis that the AI is autonomous. I just don't think people will accept that. The second argument that pushes against this is the idea that some people, but not all people, are responsible for the outcome.

For example, if somebody designs and builds a vehicle and then the vehicle gets into an accident, they might argue that, well, it's the person who's driving the car who is responsible for the outcome, not the person who built the car. And a similar case might be made with respect to AI.

If you don't like the car example, an example where this principle actually holds is with respect to weapons: the manufacture of guns. If somebody shoots another person with a gun, under current law the person or companies that manufactured the gun are not held liable for the outcome of the use of that gun.

So we have two different views here on how to assign responsibility for AI. Here I'm taking the perspective of complicity: complicity means that the responsibility, and here I'm quoting from Zimmermann, "is shared by individuals involved in its development and deployment regardless of their particular intentions".

Now, it might be argued that, you know, nobody could predict the outcomes of the AI. And yes, AI is inscrutable; yes, AI is difficult to predict, because we don't have simple rules that we can appeal to in order to determine the outcome. But nonetheless, it's up to system creators and operators, quoting Joshua Kroll here, to "determine that the technologies they deploy are fit for certain uses".

So even though elements of the system are inscrutable, the overall application of the system is not. So why do I take such a wide perspective? Well, mostly for completeness. I could take a narrower perspective, and maybe even defend it, but for the purposes of talking as broadly as possible about the ethics of AI, such an approach would be self-defeating.

So I want to consider at least the possibility that everybody involved in AI will be responsible for AI, and in that way get at all of the different decision points that are going to be made with respect to the development and deployment of AI, so that we can understand what the ethics of each of those will be. Now, just to be clear:

I'm not going to look at the ethics of each and every single decision point. But I'm going to try to identify them and outline, in broad strokes, the ethical implications. Sometimes responsibility for an AI is rolled up into a single person, who might be called the chief AI officer.

This does not absolve other people of responsibility, you know, just as a command structure does not absolve employees, or members of the military, or whatever, from being responsible for their own actions. But the usefulness of this job description, of this rolling up of responsibilities, is that it touches on some of the major areas for which somebody responsible for AI at an executive level might be accountable.

For example: the AI roadmap; AI business models; project implementation processes and project management; the specific technologies and machine learning algorithms that are used; governance and ethics; people, such as hiring data scientists (we're going to come back to that); AI platform architecture and design; pipeline automation (just because it's automated doesn't mean you're absolved of responsibility for what happens during that automation); and infrastructure and deployment. All of these are areas of responsibility where ongoing day-to-day decisions need to be made.

If you think about that, think about all of the ethical issues that we've talked about, all of the ethical approaches that we've talked about, including most recently the ethics of care and similar sorts of theories. These all mesh with all of these different responsibilities.

Each of those ethical considerations creates an interaction with each one of these decision points. And that's why I'm not going to go through them all; we're already talking about 10 separate sets of ethical decisions that need to be made. That's why it's so hard to come up with just, you know, a single "here's our ethical theory for AI".

Well, how does that ethical theory apply to the roadmap? How does it apply to pipeline automation, etc.? Looking at the stakeholders in AI: Mohammad Khalil and Martin Ebner, in 2015, created this map of stakeholders, applied specifically to the learning analytics context, and they identify four major groups of stakeholders by their role in the system.

Specifically: learners, instructors, researchers and educational institutions. I think it's relevant to identify those four groups, because their objectives, and the role that they play in the design of AI and analytics, are going to be very different. For example, learners want to, say, enhance their performance or get course recommendations. Instructors want to improve teaching and maybe provide feedback to students. Researchers evaluate courses and develop course models.

They develop new instructional technology. And institutions have their own whole set of goals: their business models, their institutional criteria for development and growth, and learning outcomes. So we can see here that, looking at the people who are involved in AI, each set of stakeholders is going to bring into the picture a different set of objectives.

Well, that's an important consideration. Now: data.

Different stakeholder groups can also be defined with respect to their relation to the data collection process. For example, some authors have written about data subjects, being a group of learners, and data clients, being teachers, tutors, discussion moderators, etc. So you have the research group on the one hand, the people you're studying, and then the clients looking at the outcome of the analytics, to whom you're giving those results in order, presumably, to improve education. GDPR breaks this into three groups.

That's the General Data Protection Regulation, from Europe. It identifies data subjects, the people GDPR was written to protect, usually a group of learners, in other words the same sort of data subjects as in the previous distinction; data controllers, those who actually make the decisions about personal data processing; and then, finally, data processors, who may have been outsourced to by the data controllers.

So, for example, an educational institution might have an analyst who's making decisions about data collection and analysis, but they may have hired a company to do the actual collection and storage and to run the algorithms. Again, these are all subject to their own distinct responsibilities for the outcome of AI and analytics processes.
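To make those three roles concrete, here is a minimal sketch, in Python, of the taxonomy as a data structure. The class names and example parties are illustrative only, not drawn from the text of the regulation.

```python
from dataclasses import dataclass
from enum import Enum

# A minimal sketch of the GDPR role taxonomy described above.
# The names and example parties are illustrative only.
class Role(Enum):
    DATA_SUBJECT = "person the data is about (e.g., a learner)"
    DATA_CONTROLLER = "decides why and how personal data is processed"
    DATA_PROCESSOR = "processes the data on the controller's behalf"

@dataclass
class Party:
    name: str
    role: Role

parties = [
    Party("enrolled student", Role.DATA_SUBJECT),
    Party("university analytics office", Role.DATA_CONTROLLER),
    Party("outsourced analytics vendor", Role.DATA_PROCESSOR),
]

for p in parties:
    print(f"{p.name}: {p.role.value}")
```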

Why is this important? Well, if we prioritize one group of stakeholders over another, we might get bad results. For example, if we prioritize institutional stakeholders, then we might get the situation that Zeide wrote about, where, and I quote, "the president of Mount Saint Mary's University administered a predictive analytics test to see which students were most at risk of failing".

The idea was to encourage them to drop out before the university was required to report its enrollment numbers to the federal government, thereby creating better retention numbers and improving its rankings. Now, I hope we can see why that would be bad. Although, you know, we've looked at a variety of ethical theories, and there may be some theories according to which that's not bad, maybe a Machiavellian sort of theory.

But nonetheless, thinking in the broader context, and certainly in the context of this discussion, we could say that focusing on the institutional stakeholder would lead to results that are not based on the interests of either the educators or, especially, the data subjects, which is to say the learners in question.

And here we have a case whereby, by participating in the predictive analytics, the learners could be harming themselves, because they would be encouraged to drop out, which is an undesirable result. So here we have a clear case where identifying the stakeholders, and making sure that all are involved in the decision-making process, has direct ethical implications.

And that's generally what's encouraged: something like "a requirement or a mechanism for designers and users of AI systems to consult relevant stakeholder groups". That's what Fjeld says, anyways. This definition of stakeholders might be tool-specific, or it might be a wider general policy.

It might include policymakers, customers, suppliers, shareholders (if it's a private institution), funders, etc. And of course it includes owners, managers, employees and, of course, students. There's a process for stakeholder involvement; in fact, you know, there are many descriptions of processes for stakeholder involvement. I've put up a simple one here: planning, process, presentation, promise.

But again: how you go about doing stakeholder consultation, who you think is a stakeholder, how much you weigh their interests, how you go through the process of doing the actual stakeholder consultation, and then how you weigh the results and prioritize, or don't prioritize, specific individual concerns. These are all ethical decisions.

In one case we may focus on the consequences, however those are defined. In another case, and this is the view that I think is more currently in vogue, at least in some quarters, we would focus first on the most vulnerable stakeholders and look for their expressed needs.

That's what a theory of care would tell us. Or we might follow general principles of stakeholder consultation and let the chips fall where they may. In each case, the different ethical considerations cash out quite differently. And so deciding on a mechanism for a stakeholder consultation process is an ethical decision.

It has a direct bearing on the outcome of learning analytics, and on the ethics of learning analytics. There are also levels of analysis, and this is a different way, again, of classifying the different stakeholder groups. And I'm sorry about this slide: it should say Shum and not Shim.

Sorry, Simon; I thought I had corrected that. So, obviously, a little lack of perfection in my presentation here. But, you know, there are stakeholders at all of these levels, right? At the macro level, especially for public institutions, you're going to have regional, state-level, provincial-level, national and international interest groups. International, you say? Not just the United Nations, although they may be interested, but also international research societies, the international scientific community, etc.

Meso is the institution-wide level of stakeholder involvement, and it includes administrators, teachers and support staff, including computer science staff, and so on. And then, of course, micro is the individual user: the teacher, the student, or any others that might be directly involved in specific educational practices.

All of these are different ways of framing the stakeholder analysis. Here's another way of grouping stakeholders: this is Lust's four groups of users. And this is important because it shows that we can't even consider a natural group of stakeholders, like students, to be a single undifferentiated whole. There are different kinds of stakeholders even within the student community.

So, Lust and colleagues look at, for example: the no-users or low-level adopters of tools; the intensive active learners, who use everything suggested by the course design and actively use it; the selective users, who just pick out certain tools to use; and then the intensive superficial users, who use all the tools but spend more time reading and on other predominantly cognitively passive activities. I kind of garbled that quote, but you get the idea.
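As an illustration, here is a minimal sketch of how learners might be sorted into these four groups from tool-use logs. The thresholds and field names are hypothetical, not taken from Lust and colleagues.

```python
# Hypothetical classifier for Lust's four user groups, based on how many
# of the course tools a learner uses and how actively they use them.
def classify(tools_used: int, tools_total: int, active_share: float) -> str:
    """active_share: fraction of tool time spent on active (vs. passive) use."""
    breadth = tools_used / tools_total
    if breadth < 0.25:
        return "no-user / low-level adopter"
    if breadth < 0.75:
        return "selective user"
    # Uses (nearly) everything: active vs. superficial depends on behaviour.
    return ("intensive active user" if active_share >= 0.5
            else "intensive superficial user")

print(classify(1, 8, 0.2))   # no-user / low-level adopter
print(classify(4, 8, 0.6))   # selective user
print(classify(8, 8, 0.3))   # intensive superficial user
```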

In this course, and indeed in all of the MOOCs I've taught, I've seen examples of all four of those. We have people in this course, I know, because they've contacted me, who are intensive superficial users: they're reading everything, but they're not participating in the discussions. Some people are just using some of the tools, but not everything.

Some people are doing everything, and some people are just passive; they get the newsletter, and that's about it. Each of those is a different group of stakeholders. Each of those is going to have a different perspective regarding what this course is about and how the ethics of learning analytics applies to them. Now, as I stated before, I'm not doing any measurements, I'm not doing any gathering of information, and it could quite legitimately be asked whether I am serving the best interests of all four of those groups of students by taking this approach. I'm serving my own particular ethical objective, but maybe I'm not considering the wider group of stakeholders sufficiently.

Certainly, that's been suggested to me in the past. Stakeholder formation is also subject to ethical considerations. So it's not simply that stakeholders define what the ethics are; at the same time, ethics define what the stakeholders are. And a good example of that is learning design teams: a learning design team in an AI- or analytics-supported course is going to involve an AI team.

Now, one of the problems that's been observed with AI, and we've touched on this in the past, is where everybody involved in the development and deployment of the AI comes from a specific demographic, specifically tech bros. This is argued to be a bad thing.

And I think for good reasons: certainly it does not respect the consideration of diversity, which we've touched on in previous discussions. So, you know, it's been argued that ethical and rights-respecting AI requires more diverse participation in the development process, and the first and most common interpretation of this calls for diverse AI design teams. Extending that, we could say it also calls for diverse learning design teams.

So we see the interactions that happen between members of a learning design team; I've just borrowed a diagram from Cathy Moore here to show how different people with different objectives and different information will interact with each other. And the idea here is that this be kept diverse. So the requirement of diversity is having a direct impact on who becomes a member of our design teams, and then the constitution of our design team feeds back into the ethics of the course.

So it's not, you know, "do this first, then do that". It's a back-and-forth, constant negotiation between the ethics and the people who are involved. Why should they be involved? Is it ethical to consider their specific needs? Are we considering everybody in an ethical fashion? Okay, that's the stakeholders.

That's plenty of questions right there, plenty of ethical decisions to make. Now we look at the objectives. And here I'm thinking not of the course objectives or the learning objectives; I'm thinking of the objectives of the analytics or AI process that we're involved in. What are we trying to do with our learning analytics?

So we could look at a bunch of things. Here's a slide where I've actually named Simon Buckingham Shum properly; there you go. Although, right, I had to paste an image of the slide, because SlideShare, as I always predicted, has clamped down on downloads, and now you can't download without signing up for a two-month trial membership.

So boo, hiss on SlideShare. But anyhow, some broad objectives might be: to increase efficiency, to ensure our education systems produce the greatest return on investment possible; to improve system performance, for example system-wide management and evaluation decisions; or perhaps to increase transparency, so that people can see how the system works, what's working and what isn't.

Or to improve student achievement: you know, to inform everybody involved in the process so that they can make the best decisions to help each student's achievement grow. A lot of the time, when we look at learning analytics and AI, we just think about, well, you know, how does this correspond to test scores?

But as you can see, analytics can serve a much wider range of overall objectives than just that. It's not just course recommenders, folks. How do we set these objectives? Well, there's a model out there called SMART, and I'm not sure I totally, 100% agree with it, but the idea here is that there are ways to design the process you use in order to set the objectives for your learning analytics or AI process.

SMART is an acronym: it stands for Specific, Measurable, Actionable, Relevant and Time-bound. So basically the idea is that if you're setting goals, if you're setting objectives, then you need to be able to say clearly what they are, obviously, but also, really importantly, to know whether or not they have been achieved.

If you think about it, that makes sense, right? If you just say, you know, "make learning better", that's what you want to do with learning analytics, well, that's not specific enough; "making learning better" could be any of a number of different things. And it's not measurable enough: how do you know you've made something better? What specific things are you looking at?
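As an illustration, here is a minimal sketch of a SMART objective expressed as a checkable record; the fields, values and threshold are hypothetical.

```python
from dataclasses import dataclass

# A SMART objective as a record: if you can't fill in these fields,
# the objective probably isn't specific or measurable enough.
@dataclass
class Objective:
    specific: str     # what exactly should change
    metric: str       # how it is measured
    target: float     # measurable threshold to reach
    relevant_to: str  # which stakeholder goal it serves
    deadline: str     # time-bound: when it must be achieved

    def achieved(self, observed: float) -> bool:
        """An objective is only measurable if this check is possible."""
        return observed >= self.target

goal = Objective(
    specific="raise first-assignment submission rate",
    metric="submissions / enrolments",
    target=0.80,
    relevant_to="learners completing early milestones",
    deadline="end of week 3",
)
print(goal.achieved(0.84))  # True: the observed rate met the target
```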

Out there in the business world, they talk about KPIs, or key performance indicators. Now, that might be getting, perhaps, too analytical for our purposes, but in a lot of actual practical applications, analytics people will be talking in terms of KPIs, and they'll be talking about specific indicators of specific performance that add up to a way of describing these objectives and determining whether or not we've met them.

In any environment where there's a lot of management happening, so any environment that involves public money or shareholder money, for example, there's going to be quite a bit of emphasis on being able to determine what objectives have been set and whether they have been achieved. So you can see now some of the ethical implications of the objective-setting process. Should it all be based on financial objectives?

If not, what are the other objectives that we can be looking at? How do you quantify, or even should you quantify, these objectives? Or maybe they're objectives of the type "I know them when I see them". Each of these kinds of decisions that we make is going to have an ethical impact.

And again, you know, it comes back to this: it's really hard to see how a simple rule, or even a simple process, is going to lead you to being able to determine what the right answer is in each one of these cases. Objectives are measured by metrics, so a key performance indicator is typically a type of metric.

There are numerous metrics that learning analytics can draw from; some of these are relevant, some of them are not. This is a set of metrics for corporate training, and these metrics are a bit different, perhaps, than we might see in a public institution, a public school or something like that, but they'll come up one way or another.

The training cost per person, the amount of learning engagement, the satisfaction with the training experience, course completion, enrollment data, etc.: all of these are things that could be measured. And what has to happen is that an analytics process needs to have some comprehension of how these metrics map to the performance indicators, and how the performance indicators map to the objectives.

This is called a logic model, and the logic model tells you how you get from measurements of data to statements about, you know, whether an objective has been met or not. All of this is happening before we've done any of the learning analytics; the idea here is that we're getting in our head some idea of what the analytics or AI process needs to accomplish.
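Here is a minimal sketch of that idea: raw metrics roll up into KPIs, and the KPIs roll up into a statement about whether an objective has been met. All the names, numbers and thresholds here are hypothetical.

```python
# Hypothetical measurements gathered from a course.
metrics = {"logins_per_week": 4.2, "completion_rate": 0.55, "avg_satisfaction": 4.1}

def kpi_engagement(m: dict) -> bool:
    return m["logins_per_week"] >= 3.0   # KPI: learners are engaging

def kpi_outcomes(m: dict) -> bool:
    return m["completion_rate"] >= 0.50  # KPI: learners are finishing

def objective_met(m: dict) -> bool:
    # The logic model is exactly this mapping: from measurements,
    # through KPIs, to a claim about the objective.
    return kpi_engagement(m) and kpi_outcomes(m)

print(objective_met(metrics))  # True under these hypothetical numbers
```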

Perhaps the analytics might be applied to some steps in the logic model. Perhaps, if it's a really good analytics or AI solution, it can handle the whole logic model for you, and you don't need to do it.

But then you have the problem of figuring out how it's drawing the conclusions it's drawing. I want to point to one metric, which is course completion, because it's one that's been talked about a lot over the years. It originally came up in the big University of Pennsylvania study.

That came out, I don't know, around 2012 or 2013, somewhere around there, where the reported completion rates for MOOCs were terrible, and so everybody started going, "oh no". Course completion has been used, arguably, as a proxy for learning. What's happened in a lot of media, and I'm speaking more here of the popular media, not so much researchers, is that they come up with a course completion number and they use that as all of the metric, and all of the KPI, and as the only objective of the course. So we talk about the success or failure of MOOCs specifically, and only, in terms of course completion.

Now, based on what I've just said, that should be obviously not a very good idea, because there are so many other things to take into account. If we go back to institutional objectives, for example: a lot of institutions have used MOOCs as marketing tools, as publicity tools, as ways to help students get a sense of what it's like to study at a university level. None of these require course completion in order to be successful, and they would certainly meet the university's objective even if the course completion rate was very low.

On the other hand, if your objective is to have people go through a course, well, then, yeah, that's your metric. Where should the objectives start? You know, we talked about consulting the stakeholders and all of that, but arguably, even beyond that, there's a case to be made for starting with the public good generally. If you look at the lower right-hand side of the slide, I've put in the United Nations' 17 Sustainable Development Goals.

There's a good argument to be made here that, overall, there is a social objective that should be fulfilled by educational processes, including learning analytics, such that the outcomes of the deployment of technology in education serve, in some way or another, to support those 17 goals. Now, that's going to bring into ethical consideration things like environmental sustainability, for example, and education,

of course, health care, etc.: the full range of goals. I'm trying to read it there, but the text is too small for me to read; maybe it's bigger on your screen, at least, because I can't make it bigger on my screen without messing it up. But you can see how, you know, some things that almost seem unrelated to education can play a role in the ethical evaluation of online courses.

So how does this play out in determining objectives? Well, Drew writes, design "usually starts with a discovery period of qualitative research into people's lived experiences". So, in other words: how is education actually going to affect people? How is the learning solution, or the learning institution, actually impacting people in the community?

Even Alex Usher, in a column today, wrote about the mutual growth interaction between the American university system and the German university system, and made it very clear that, historically, there has always been a back-and-forth between universities and society, such that universities are thought of as providing for the public good, in return for which the public provides them with funding, autonomy,

etc. So any educational institution, any educational intervention, may play a role in the public good. Similarly, data projects, or analytics processes specifically, can, and often should, aim for the social good. But Drew points out that's often not what happens. Often the process just starts with the data.

They have some data and they sort of go, well, what can we do with this? And as he says, at worst "the process might simply involve playing with a data set for the sake of playing with the data set", or at best "the process might be motivated by a clear public benefit". Knowing what you're working toward is an important part of knowing what you want to do with data, and with AI and learning analytics.

I shouldn't have to say this, but I want to be complete in this course, and this is part of being complete. That takes us to theory. Now, here I'm talking mostly about educational theory, and the thinking here is that what you think about the nature of teaching and learning has a lot of impact on how you approach artificial intelligence and analytics.

Now, I'm not going to be able to cover all of educational theory in this presentation; I'm barely going to touch on it. But I hope to say enough to convince you that there is an interplay between theory and AI, and therefore, since there is an interplay, there are decisions to be made, and these decisions will have ethical implications.

So right off the bat, let's go back to Gašević, Dawson and Siemens and their paper on learning analytics. Looking at pedagogical theory, they note that a lot of learning analytics is aimed at teaching for memory, right? Get the person to remember the subject, you know?

Did they remember specific pieces of information? If so, then the intervention was positive; if not, then the intervention was negative. And, by analogy, they argue that this is like teaching to the test rather than teaching to improve understanding. And they say, and I quote, "learning analytics that do not promote effective learning and teaching are susceptible to use of trivial measures", such as increased number of logins into an LMS.

Or as well, and I'm paraphrasing here: many things are counted, but few have any bearing on theory or practice. So there needs to be a relation between what you're measuring and what your learning objectives are, whatever those may happen to be. So this is kind of like the logic model discussion we just had, but it's a little more specific, now narrowing down on the specific objectives of learning: learning outcomes, etc.

And in this case, the logic model is, at least to some degree, replaced by the educational theory in question. So take a typical theory, the socio-constructivist perspective. That is going to give us a model, and here it is on the left-hand side: we have something called enactive learning and observational learning.

Observational learning proceeds by reinforcement, either vicarious, direct or self, and there's modeling and vicarious learning. That's a crappy explanation of social cognitive theory, sorry I said it that way, but it's true nonetheless. What this is telling us here is, we would hope, what we're looking for, what we're trying to achieve.

And, to some degree, what things we'll be looking for in the data as indications of whether or not we've achieved that. So, according to a social constructivist perspective, a couple of things: active participants in a discussion show better learning outcomes, so social network analyses of students discussing in a forum are conducted in order to discover effective ways of supporting participatory online learning.

Or, to rephrase that: we think better learning happens when students have discussions and are actually engaged and involved in the discussions, so what we want our learning analytics to do is find ways that help us understand what makes them participate more actively in discussions.
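To illustrate, here is a minimal sketch of that kind of social network analysis, using the networkx library on made-up forum data; treating degree centrality as a participation indicator is an assumption made for the example.

```python
import networkx as nx  # assumes the networkx library is installed

# Made-up forum data: (author, person they replied to).
replies = [("ana", "raj"), ("raj", "ana"), ("li", "ana"),
           ("ana", "li"), ("sam", "raj")]

G = nx.DiGraph()
G.add_edges_from(replies)

# Degree centrality is one simple indicator of active participation.
centrality = nx.degree_centrality(G)
for student, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{student}: {score:.2f}")
```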

Do you see the role of the theory here? There's no point measuring or evaluating their participation in discussions if it's completely incidental to their learning. On an instructivist theory, for example, the discussions are peripheral; in fact, they add to cognitive load and they actually interfere with teaching, so we're not interested in promoting them, and so we wouldn't measure for things that promote them.

But if we accept socio-constructivist perspectives, the discussions are an important component that leads to better learning, so we want to be looking at them. So the selection of theory matters in this case. Now, does the selection of theory have an ethical impact? Well, it might, right? For example, if you choose the wrong theory, and if, as a result, the learning that you're offering does not provide the sort of learning benefit that your students desire, then you're wasting their time and their money, and that is arguably ethically bad, either from a consequentialist position or, perhaps, from an ethics of care position.

On the other hand, you might be wasting your time and money by engaging them in discussions, which they could easily do on their own outside school, instead of providing them with direct instruction. We need to have these conversations in order to understand the ethical impact of learning theory.

Just as an aside: I've been involved in this field for a long time, and maybe I'm just looking in the wrong places, but I can't think offhand of discussions involving the ethics of choosing one theory over another. Except for behaviorism; everybody thinks behaviorism is morally evil. But besides that, right, choosing between, say, constructivism and connectivism?

Is there an ethical impact to that? Maybe there is; we'd have to think about that. So, the sort of theory that I just described is based on a theory about how we construct knowledge, the different processes that are involved. And again, this assumes a constructivist approach, but here we have a quote from the same authors again.

The model "builds on conditions, operations, products, evaluations and standards learners adopt" in order to explain how they construct knowledge. So, in essence, learners construct knowledge by using tools to perform operations on raw information in order to create products of learning. Well, that's something that we can analyze and assess, right?

So we're looking for these five elements of the COPES model, and here we find them in this diagram. So we can look at, say, for example, task conditions, including resources, instructional cues, time, social context. All of these define data points, and then these lines represent either interactions, associations, cause, or influence on other data points.

I think these arrows are theoretical, and are the sorts of things that would need to be empirically observed or defined. But it might also be that these arrows represent connections that we would hope to form in a viable AI system. And again, your perspective is going to shape what you think counts as success on one of these models.

So here's another way of looking at something similar: Winne's axioms for self-regulated learning. Now, I'm calling them Winne's axioms, but I'm not sure that title is used widely, if at all, in the field, though it certainly seems apt. The axioms are pretty simple.

Learners construct knowledge. Learners are agents; that's important, because they're going to do things you don't expect. Data includes randomness. And there are differences in instructional conditions, internal conditions and external conditions. Now, part of the problem, to my mind, with a model like this is that it's hard to see how you get from it to an effective design of an AI or analytics project.

You know, I look at it: okay, I see concentric circles, right? So at the base we have student self-regulated learning; surrounding that are teachers' beliefs; surrounding that are the pedagogical processes, which is a wheel of activities; then around that is the enactment in classrooms; around that are the resources, curriculum and school culture that you might find.

And then there's the exosystem of community, society and culture. Okay. So these might suggest things that we could measure or detect as data, but society, culture, community and home are really broad data points, right? We can't use those to design an analytics engine.

The sorts of things we look at closer to the center of the circle, perhaps at the pedagogical processes level, the most definitive of these levels, are things like relational support, instrumental support, strategic support, etc. And there may be indicators of those. But again, we'd need to draw some kind of model connecting the specific indicators with these general terms.

And so, you know, the model that you choose is going to help you, but only to a certain degree, with respect to the learning analytics that you undertake. And in most circumstances, I think, when an analytics or AI project is developed, it may start with one of these models, but the people actually designing the technology are building in large elements of the model as they develop the AI.

Just as an aside, I've found that happening in my own practice, and not even doing AI or analytics. As I write technology for my MOOCs, including this MOOC, I find I might start from a theory, but each line of code that I write is a specific elaboration on that theory.

It's an elaboration that wasn't considered at the time the theory was written, because there wasn't the practical requirement to actually implement it specifically in practice. You know, the more you apply these theories, the more you need to develop, refine, shape and specify exactly what you're doing, exactly what you want your technology to do, or your teachers to do, and exactly what the outcome should be.

Again, in that model we had the conditions for learning, and these again are mentioned by Gašević, Dawson and Siemens. They include the instructional conditions, which include the instructors, the models and the technology choices; and external conditions, such as instructional design, social contexts, etc. All of these are data points.

All of these interoperate differently depending on your model or theory. And then there are the internal conditions. These would be internal to the individual, such as achievement goal orientation, cognitive load, or epistemic beliefs. And they say, I think very diplomatically, that these are "yet to be fully understood in relation with their collection and measurement".

Take something like cognitive load. This is something that, to my mind, is a completely theoretical construction, based on a model of cognition that uses computational processes as a metaphor or an analogy. So, just as a computer has an input buffer, cognitive load theory thinks that a human has an input buffer, and therefore a certain size limit to the information they can process, and they use that to justify all kinds of practices that eliminate extraneous cognitive load.

Now, I have all kinds of problems with that, but the main problem is that it is not at all clear to me that a thing called cognitive load actually exists. And so if your pedagogical model includes things like cognitive load, or any other of these theoretical terms, these need to be rendered in terms of actual observables.

What is it that you are talking about when you talk about cognitive load? Now, I know that it might be a theoretical entity, and I know there can be theoretical entities; mass, for example, is a theoretical entity. We can't get at mass directly, but we know how to measure for mass: you measure for it using a scale and an understanding of what the force of gravity is in the environment where you're using the scale.

That allows you to determine mass. Or you can count the molecules in a certain body, and then, knowing the mass of each molecule, that allows you to work with mass as well. If you don't have something like that for cognitive load, then you can't implement a design or a test for cognitive load in your analytics and AI system.
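To make the mass example concrete, here is the measurement as a one-line calculation; the function name is illustrative.

```python
# We never observe mass directly: we observe weight on a scale and
# divide by the local gravitational acceleration, m = W / g.
STANDARD_GRAVITY = 9.81  # m/s^2, approximate value at Earth's surface

def mass_from_weight(weight_newtons: float, g: float = STANDARD_GRAVITY) -> float:
    """Infer mass (kg) from an observable weight (N)."""
    return weight_newtons / g

print(mass_from_weight(98.1))  # 10.0 kg
```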

Things to consider, again from the same authors, when you're working with your pedagogical models: consider qualitative research methods versus simple quantification. As they say, the primary emphasis in the learning analytics field has been memory recall, but there's more to learning than memory. And I've often said: learning is not memory, learning is not remembering; learning is, in important ways, becoming something.

And so what you want to measure is not simply the recall of specific facts but, instead, indicators of having become a certain type of person: for example, being able to perform a certain operation, or using a word correctly in new contexts, things like that. Secondly, it's important to think about how to design effective visualizations and dashboards.

The thing about learning theory is that it's, to a large degree, pretty opaque to most people, for good reasons, and you want to be able to translate it into something that people can access and use and understand. There's no point building a learning analytics or AI system at all if people can't use it.

So this is a very practical consideration: taking that data, taking that theory, combining them, and putting the result into a presentation that people can understand. And then, third, there's the development of a learning analytics culture and policy. Generally this involves reshaping, to a certain degree, the learning theories that people might already have.

If, for example, a certain percentage of your instructional staff thinks of the outcomes of learning as being memory recall, then there's a disconnect between that perception that they have and the application of learning analytics in the institution. And so there needs to be a mechanism involving those people in this project to create a more comprehensive culture, so that the different parts of it, even if they're not all working toward a common goal, and many times they won't be, are at least aware of the existence of the other, and at least have mechanisms to consider input and communication from the other.

And, you know, a learning analytics culture and learning analytics policy doesn't mean everybody gets on board with the learning analytics training; it means that learning analytics exists in this environment, and it's one of the things that characterizes this environment, much like walls, ceilings, a power system, plumbing and the rest.

Finally, let's look at some of the constraints. We can divide constraints into external constraints and internal constraints. External constraints will involve many of the constraints that we've talked about with respect to ethical issues. Now, these emerge as constraints not as ethical issues specifically, not as matters of ethics specifically, but rather in terms of policy and procedure environments. For example, privacy. Privacy is regulated in many environments.

It's regulated in Europe; when I was in Alberta, there was the Freedom of Information and Protection of Privacy act, or FOIP; and there are also privacy considerations here in Ontario. Contrast that with access-to-information protocols, which require the disclosure of information; that's what we work with here in the government environment.

There are other ethical measures as well. You know, the Treasury Board, for example, has come out with ethical guidelines for the use of artificial intelligence. Your school board or your provincial government may also have ethical guidelines on the use of AI. Certainly, there will be more sweeping research ethics guidelines.

I sit on a research ethics board, for example. There are specific policies and procedures that we expect research projects in general, including AI and analytics research projects, to follow, having to do with protecting the subjects, making sure there isn't harm, making sure there's sufficient scientific merit, etc.

There are also norms, things you might not think about: for example, intellectual property rights, legal data protections, etc. So a lot of law will come into play when you're working with AI and analytics. And then there's even the time scale: how quickly are you trying to do something? Are you doing just-in-time AI in order to provide real-time content recommendations for students? That really impacts what you can do.

Now, all of these are constraints; all of these are the sorts of things that, although they're not directly related to ethics, will impact what you can do, and therefore impact the decisions that you can make in an AI or analytics context. In Europe, of course, they have the GDPR, and the GDPR has a direct impact on the application of analytics and AI.

Here's a rough eight-page summary of the GDPR. Of course it's a huge regulation, but it includes things like the right to be informed, the right of access, the right to rectification, the right to object to processing, the right to restrict processing, the right to data portability, the right to be forgotten, and rights in relation to automated decision-making and profiling.

These are legislative imperatives that need to be followed, which means that some kinds of AI can be performed and some kinds of AI can't be performed. Take something like rectification: has the analytics project been designed in such a way that the withdrawal of data has an impact on the conclusions that result from the AI?

Suppose you're using data to train a model, and data has been withdrawn such that the new set of data would train the model slightly differently. Do people see that as an obligation to retrain the model with the new data? That's important, because the model will be used to make predictions. And if the model currently in use was trained using data that has now been withdrawn from circulation, the question comes up: should you be using that model?
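As an illustration of the rectification scenario, here is a minimal sketch, on synthetic data, of the naive remedy: drop the withdrawn subjects' rows and retrain from scratch. The data and the choice of a scikit-learn logistic regression are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # assumes scikit-learn

# Synthetic training data for 100 learners.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + rng.normal(size=100) > 0).astype(int)
subject_ids = np.arange(100)

withdrawn = {7, 42}  # learners who withdrew consent for their data
keep = np.array([i not in withdrawn for i in subject_ids])

# Rectification by retraining: fit a fresh model on the remaining data
# rather than keep serving predictions from the old one.
model = LogisticRegression().fit(X[keep], y[keep])
print(model.predict(X[:5]))
```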

I think that's the sort of question that needs to be asked when considering external constraints. Internal constraints, meanwhile, have to do with the capacity of the people working with the data analytics to use the data analytics effectively.

There are two areas that are drawn out in this paper here. First of all, with respect to interpretation, and I've talked about this before: any set of data, any set of evidence, needs to be interpreted, that is to say, given a significance, a meaning, a purpose, whatever, in order to be applied. Simply knowing that 49 people finished the course and 48 people dropped out is not enough.

That raw data needs to be interpreted. For example, is that a significant number of dropouts? Were dropouts important in the design of the course? Things like that, right? So you need to be able to take the results of the analytics and then relate them back, in some way, to the work that you're doing or the environment you're working in, and that's interpretation.

And if you don't have that skill, it's very hard to use the results of analytics and AI effectively. Secondly, critical thinking. You know, there's the story about people who uncritically use Google Maps to get directions when they're driving and end up driving into the river. They have uncritically used Google Maps.

They did not employ critical thinking. "Probably don't drive into the river" is pretty simple critical thinking, for example. But the same sort of thinking is going to be required in understanding and applying AI: if the results of an AI are unreasonable, then you shouldn't follow them.

You shouldn't follow them. I have any my car and one of the things I have is adaptive crews control it, beautiful system, I love it. What it does is when I set my car on cruise control, it'll try to keep the speed but it's also watching me environment in front of me and we'll set a you know it won't go true.

Well, I still have to use that critically. Sometimes it stops or slows me down when there's nothing there, it really does, and so I need to press on the gas to correct it. Other times it doesn't slow down even though there is something there, and that's important, because if I don't do anything I'll just crash into the thing, and I can't turn around and say, well, the AI failed me.

Well, the AI does what the AI does, but it's up to me as a critical user to recognize that it failed to detect the car in front of me; now it's up to me to act. And that's what we mean by critical thinking with respect to AI. Now, I know we haven't talked about any particular aspect of the traditional AI workflow in this presentation; we'll talk about that beginning with the next module, talking specifically about data: what data is, how it's collected, and all of that.

But I hope that this presentation has given you a sense of the range of decisions, the types of decisions, and the importance of the decisions that people need to make with respect to the specific learning context in the process of applying artificial intelligence and analytics. Now, I've put the two most important papers into today's newsletter to accompany this presentation.

I do recommend that you read them. But even if you don't, I hope that this has convinced you that, at every point, all of these decisions (how many decisions have I talked about? a hundred? two hundred? a thousand?), every one of these decisions is going to have an ethical impact.

And this is again why I have said in the past, and I will continue to say: the environment that we're working in now is far too complex to depend on simple ethical theory and simple ethical approaches. And it really does concern me that most of the discussion about ethics and analytics

and AI in learning is focused on things like memory retention, course completion, and bias in the training data. Yeah, they're all important, but they're one small part of a much broader picture, and we need to understand this broader picture if we're going to do ethics in learning analytics correctly. So that's it for this presentation.

I'm Stephen Downes, and I'll be back not too long from now with a discussion about data and the use of data in AI and analytics. Bye for now.

 

--------------

left out:

 

Generative Design

 I'm going to refer to generative design as a three-stage process where (1) designers define the project's goals, (2) algorithms produce a range of solutions, and (3) then designers pick the best result. https://www.danieldavis.com/generative-design-doomed-to-fail/ 
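A minimal sketch of that three-stage loop, with a toy objective; the function names are illustrative, not from the cited article.

```python
import random

def goal(x: float) -> float:                    # (1) designers define the goal
    return -(x - 3.0) ** 2                      # toy objective: best value at x = 3

def generate_solutions(n: int) -> list[float]:  # (2) algorithms produce options
    return [random.uniform(0.0, 10.0) for _ in range(n)]

candidates = generate_solutions(100)
best = max(candidates, key=goal)                # (3) designers pick the best result
print(f"best candidate: {best:.2f}")
```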

 
