Agency - Part One


Unedited audio transcription

Hello, everyone. Welcome back to Ethics, Analytics and the Duty of Care. I'm Stephen Downes. We're in module eight, ethical practices in learning analytics, and today what we're going to be talking about is agency. This will be the first of two parts; I stopped it there before I got carried away.

Talking about agency, probably a good idea.

Agency is part of the discussion of the cultural aspects of ethical practices. And, I thought, there's been a little bit of a gap for me before doing this particular presentation: it's now 2022, the Christmas holidays have passed, and I spent a lot of time working on these slides, probably too much time.

You know, again, this is one of those subjects where we could do an entire course on this one thing. But as you can see here, this is one part of one section of one module in the course. And so again, with all due apologies to those who are experts in this particular area of inquiry, we can't go into all of the detail that we'd want to go into.

But nonetheless, ethical practice as agency follows from what we've been looking at so far. We began by thinking about ethical practice as culture, ethical practice as citizenship, perhaps as democracy. Now we get down to agency, and as you can see in the overall scheme of things, we'll eventually get down to talking about ethical practice with respect to ourselves, and that's going to be the videos that wrap up this course. They'll finish this module, they'll finish this course. There's a bunch of stuff to talk about there, but let's not get ahead of ourselves. So to begin, then, we ought to think about two overall roles agency plays in ethics.

This is more of a macro way of thinking about agency, rather than thinking about agency as a type of cultural practice. In fact, it takes us all the way back to the module on ethical theories, which is why I put up a photo, or I guess a drawing, of Immanuel Kant to go along with this.

First of all, there's the idea that ethics requires agency, and this is the idea that 'ought' entails 'can'. That is to say, if there's an ethical obligation to do something, then it must be possible for a person to meet that obligation. Now, there's a lot of discussion of this principle.

You can find exceptions to it, for example, where you have an obligation and then, due to some contrivance of your own, you make it impossible to meet that obligation. But, you know, arguably that doesn't discharge you of the obligation. You know, you were supposed to meet somebody for their birthday, but you decided to go to Jamaica instead.

Well, you're in Jamaica; obviously you can't meet them for their birthday, but that's not the sense that's meant by 'ought entails can'. Clearly you could have done something other than go to Jamaica. The other side of this is that agency requires ethics, and this is a bit of a tougher one to get our minds around.

This is the idea that if we have agency, if we have some sort of capacity, for example, to think, to reason, to act even, that creates in us the obligation to think, reason, and act correctly. You know, so the argument goes, it's not a case of anything goes.

If, for example, there's a wrong in the world and we have the capacity to correct that wrong, then arguably we have an obligation to correct that wrong. Now, what we mean by capacity here is going to be very subjective, right? I have the capacity to give all of my money to poor people, thus making myself one of the poor people.

It's not clear that it follows that I have the obligation to do that even though making all those poor people a little bit wealthier would be a good thing. Nonetheless, the fact that I can distinguish between right and wrong seems in some sense to create an obligation that I act right rather than wrong.

And the clearest case of this argument in principle is advanced by Immanuel Kant. Here I'm just quoting from Wikipedia, because it said it well: Kant believed that the shared ability of humans to reason should be the basis of morality, and that it is the ability to reason that makes humans morally significant.

And that's an important point, because it also gets at the idea of humans as ends in themselves, humans as having moral worth. Why do they have moral worth? Because they can act morally. That's the Kantian idea here. Anyways, let's keep those things in mind as we proceed to talk about what we mean by agency and how agency is going to play out in our discussion of ethics and AI.

So what is agency? You know, I gave you some examples, right? To act, or to reason. But where does that come from? Well, in what we might call folk psychology, there's a widespread commitment to a view of agency which could be called the belief-desire-intention (BDI) model of agency.

This characterization comes from the Stanford Encyclopedia of Philosophy and also elsewhere; I've got some references to it. And the idea here is that agency first comes from the desire, or perhaps the will; it's based on beliefs about the state of the world; and these come together to allow us to form an intention, or a purpose, or an objective, or a goal.

And then it's the capacity to fulfill that goal, based on those desires or that will, and based on our state of knowledge of the world, that creates agency. And it's interesting, because this idea carries over to software design as well, in what we might call a belief-desire-intention software model, where we create in software a representation of the world, and then an obligation to fulfill a command.

And then there's the capacity of that software to carry out that command. In more advanced software, that command or that intention wouldn't be explicitly coded. To give an example: a car that stays between the lines. So we have this overall representation of the state of affairs of the world, which consists of a road with two edges, and the objective of staying within those two edges.

But the particular intentions of that car are just 'steer left' or 'steer right', in order to stay between the lines. Those intentions are formed automatically, based on the representation of the state of affairs of the world. Okay, so that's one way of looking at agency, but could we do this without mental representations?
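To make the belief-desire-intention software model above concrete, here is a minimal sketch of a lane-keeping agent. This is a toy illustration only; the function name, thresholds, and representation are invented for the example, not taken from any real autonomous-driving system.

```python
# Toy belief-desire-intention (BDI) agent: a car that stays between two lane edges.
# All names and numbers here are illustrative, not a real autonomous-driving API.

def lane_keeping_agent(left_edge: float, right_edge: float, position: float) -> str:
    # Beliefs: the agent's representation of the state of the world.
    beliefs = {"left": left_edge, "right": right_edge, "position": position}

    # Desire: the standing goal of staying centered between the edges.
    desired_position = (beliefs["left"] + beliefs["right"]) / 2

    # Intention: a concrete action derived from the beliefs and the desire.
    if beliefs["position"] < desired_position - 0.1:
        return "steer right"
    elif beliefs["position"] > desired_position + 0.1:
        return "steer left"
    return "hold course"

print(lane_keeping_agent(0.0, 4.0, 0.5))   # drifting toward the left edge -> "steer right"
print(lane_keeping_agent(0.0, 4.0, 2.0))   # centered -> "hold course"
```

The point of the sketch is that the 'steer left' and 'steer right' intentions are never enumerated case by case; they fall out of the beliefs and the standing desire, which is what is meant above by intentions being formed automatically from the representation.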

Because, you know, these BDI theories typically explain agency in terms of this internal mental picture that we have, or, more accurately, in terms of intentional mental states and events that have representational contents. And by that we usually mean propositional contents; that is, we usually mean sentences, or, you know, the digital equivalent of sentences.

So it would be like the semantic web: there would be subjects or objects, these subjects or objects would have properties, and they might be related to each other. You know, a representation of the world much along the lines of Rudolf Carnap's model of a representation of all possible states of affairs of the world.

But it's arguable that there are beings capable of genuine agency without representational states. That is, they can take actions for themselves without first forming a picture of the world, and this is the basis for what might be called an embodied, enactive approach to mind.

The idea here is that the mind doesn't work in terms of cognitive representational states, but rather in terms of related physical states, and it's the physical states that create the agency.

Francisco Varela talks about this in his book Principles of Biological Autonomy, where, for example, a cell, which doesn't have an internal state or any internal representation of state, nonetheless does things that could be thought of as things an agent might do. For example, it protects its integrity, it reproduces, it consumes.

These are the actions of something with agency, but it's a non-cognitive sort of agency; it's an agency without mental representations. This is important because it allows us to think of agency, and therefore of cognition, or whatever it is that is behind agency, more broadly. If we think that only things that can form internal mental representations can have agency, then our list of things that can have agency is very small, and it doesn't include some things that we do think have agency.

One way in which we can enlarge our thinking about agency is through what might be called the extended mind thesis, which I would characterize as, well, you know, connectivism, George Siemens style, where our neural network extends beyond the bounds of our brain and therefore obviously has non-representational characteristics to it.

So we can think of cognition as extended in several ways, and there's a little chart here from an article called 'Proposing an Extremely Embedded Mind'. If we take a look at the chart, we can think of cognition as, first of all, enactive, where, you know, the mind is a living system, right, like the cell that I just described. Or it could be extended: the cognition could go beyond the skin.

It could include the artifacts around us. Look at my office here: I've got papers and notes and books and stuff, and all of that could be thought of as part of who I am as an agent, right? Cognition could also be thought of as embodied, the body in the mind.

You know, our status as agents includes not just what we think about the world but how we feel about the world: our sensations, our immune system, which is what Varela in particular would talk about, and so on. Or cognition as situated in action, where our agency includes the environment around us, and we see our agency as situated, as part of that environment.

Cognition as distributed beyond the individual, or mediated, or socially embedded, or, as people might say these days, socially constructed: these are also ways of describing how our agency could extend beyond that small representational component in our brain. In fact, we can think of a couple of ways of expressing the thesis, right?

We can express the thesis as: there is no representation at all required for agency. Or we can say representation, at least in part, is allowed in a story of agency, and that might turn out to be more correct, although I have reasons for skepticism. Or we might say representation, at least in part, is required for agency. Or, finally, we might say that the only thing that can have agency is the thing that has representational content, and everything else is incidental: the body, the community, the books, etc., they're all incidental.

On that last view, the only thing that actually has agency is the representational component. I'm inclined to think of the first two as being more likely to be correct; I'm much less inclined to think that representation is required for agency. But let's look at embodied and enacted cognition a bit.

And this is again the idea that mental processes are embodied, which means they involve more than just the brain. They include a more general involvement of bodily structures and processes: your stomach aches, your muscle aches, your fatigue, your emotions, your immune system, your reactions to the world. All of that also constitutes part of you, the entire you, that is an agent. We can also think of cognition as embedded, in the sense that we can think of ourselves as functioning only with respect to that external environment. If it weren't for this video, if it weren't for this room, it wouldn't make sense to talk about me having agency at all.

In order to push, you need something that you're pushing against. So we could think of cognition and agency not as only involving thought, but rather talk about them in terms of what the organism does. This is just sort of a way of looking at agency, taking the idea that there are internal representations as theoretical and superfluous to any actual description of agency that we may wish to undertake. This can cash out in a number of different ways.

It could cash out in pure behaviorism, right, where the only account of agency that we can really talk about is the account that describes your actual behavior in the actual real world, and anything else is theoretical. I would extend that to talking about your actual behavior where your behavior includes all your internal processes: all the signals that your cells send each other, the way the blood flows through your brain, the way your immune system works, all of that.

And then, finally, of course, there's the idea of this behavior being extended into the environment. The most obvious way I can extend my behavior into the environment is through the use of a tool. I'm looking around here for tools that I can demonstrate, like this tool.

For example, this tool can extend my agency. I can use it to make marks on paper. I can use a hammer to make dents in things, and so on. But of course we're in the age of the computer now, where the tools that we use can be complex, can extend our agency in new and unexpected ways, can send signals on our behalf.

They can be robots with weapons on them, etc. Which leads us to the question: can there be agency in non-human entities? Let's not start with robots for the moment; let's think about other non-human entities. One that we see a lot is the stock market. You know, you watch the business news, and they'll say things like 'the stock market reacted to the war that broke out today' or 'the stock market wants the unemployment rate to go up', things like that.

Right. Now, there's a sense in which that's a metaphor. But there's also a sense in which the person talking about the stock market really thinks that it is a thing that has its own agency, and, you know, depending on precisely how we define agency, it's possible. Now, a stock market doesn't have the same cognitive apparatus as a human does.

Arguably, only humans have that cognitive apparatus, but the stock market does seem to have an agency of its own: not necessarily the belief-desire-intention kind of agency, but an agency where we can say it wants to do something, it is doing something, it is dropping steeply because of some reason, etc.

And if we think of the stock market, we can think of analogies along the lines of, for example, Adam Smith's invisible hand of the marketplace, and we say, you know, the marketplace wants what it wants. We're not sure what it wants, we're not sure how it comes to want what it wants; we have theories, but, you know, they're not representational theories, they're not cognitive theories.

But nonetheless, there are enough moving parts and enough interrelated entities that we can describe it as having agency. Eduardo Kohn wrote, in How Forests Think, about how plants, and especially assemblages of plants, can have agency. And it's interesting: the more we read about forests, the more we understand that the forest, taken as a whole, could be thought of as a living, breathing, complex entity with multiple parts and interactions among those parts. I've read examples of cases where one tree will take care of and provide nutrition to other trees.

Obviously there's the whole symbiosis of the ecology, with different individual entities occupying individual niches. It's not coordinated or organized in any way; it's not a representational system. A forest does not have a theory of what the world is like. But when you look at a forest, you can see that it grows, it thrives, maybe it expands, and it perpetuates itself.

It sometimes protects itself. It might, I think, be said to have agency, and again, I'm not totally sure. Along the same lines, we have Anna Tsing talking about the global trade in matsutake mushrooms as the sort of thing that has agency: there are patterns of unintentional coordination between multiple actors, right?

This takes you back to that old saying, right? You don't have to have an organization to have a conspiracy. People can act together in a network or in an environment without coordinating with each other, without forming actual beliefs, desires, and intentions, but in such a way that they protect themselves, they advance their interests, etc.

And we can see patterns of this organization as a whole. Look at Hamilton and Mitchell: sheep, humans, and dogs are embedded in a complex web of relations, markets, and terrains, especially when you think about the dogs interacting with the sheep. Clearly there's agency there. But does a dog have a representation of the world? Does the dog even have intentions? Maybe it does. But the system of human, dog, and sheep probably doesn't have intentions, and still we can describe it in terms of agency. There's a chart here of different kinds of agencies, and of the different sorts of things that can be agents.

So, I've hinted at some of them already. Here are some agencies, some ways we recognize that things are agents: they can produce effects; they can act according to their own biological needs; they can act according to their own cultural needs; they can realize the intentions of others, like other human beings.

So natural things can produce effects, but can't do the rest. Cultural things, you know, like organizations or nationalities, can produce effects and realize the intentions of other human beings. Non-human living beings have two types of agency. Non-human living beings that are cultural, like, say, Dolly the sheep or Bourbon roses, can have three types of agency.

Humans can have four types of agency, and social entities, like, say, Doctors Without Borders, can also have three types of agency. In fact, the only type of agency Doctors Without Borders does not have is the ability to act according to its own biological needs.

And that's because it doesn't have any biological needs; it's an organization. So it seems clear from this chart that when we talk about agency, we're not talking about something simple, like 'I think I want something' or 'it's an expression of the will'. When we expand our definition of agency to include things like producing effects or acting according to our biological needs, our understanding of agency becomes wider, and it now begins to allow for non-human entities to have agency.

And it follows, does it not, that an artificial intelligence could have agency? Or perhaps we might say that a system including a human and an artificial intelligence could have agency. Or we could also say that a network of interacting artificial intelligences could have agency. What does that mean? What follows from that?

Well, here in this next section (just checking the time: half an hour) I'm going to look at some concepts of agency, some ideas about agency that will play into, or maybe play with, our intuitions about the role of agency with respect to the ethics of artificial intelligence.

So, first of all, agency and power. We think of agency as having power, and lots of things can have power. What is power? Work over time? Something like that, right? There's the old ethical adage, might makes right, and that's, you know, basically a reduction of ethics to agency: ethics becomes whatever you can get away with doing.

And that actually is an ethical theory, but, you know, not a lot of people are happy with that. A lot of people aren't happy with the existing power structures or the existing mechanisms of agency in the world. And so we have people like the author quoted here, writing in his blog: there is a need to revise and redefine existing power structures, while advocating for ethics and empathy in digital and hybrid spaces; members of the community need to problematize the complexities of these interactions and prepare all children to participate in complex democratic discourses using diverse digital tools. There's an awful lot wrapped up in that, but it's a representation of agency as power, and of different levels of agency as different types of power. Now, we can see something from our previous discussion, right?

This is a kind of representation of agency in terms of behavior, or perhaps more accurately in terms of effect. We might call it, thinking back to some of our previous discussions, a consequentialist model of agency. And so what's being argued for here is that we need to modify how agency is produced and expressed in the world, in order to moderate the undesirable consequences which have resulted from existing power structures.

And so, first of all, we need to analyze and understand just what the consequences of these existing power structures are. And then, secondly, as an educational task, we need to prepare children, and presumably prepare all people, to engage, and to be able to express power in their own right. I find this a reduction of agency to power, and therefore a reduction of ethics to power.

You know, there's a value to this sort of discussion; there's a role that it can play. But clearly it's a one-dimensional representation of agency, and as we've seen already, we can think of agency in other ways. The ethics of power is a theme, though, that will sound familiar to educators.

You know, we have Gry Hasselbalch writing in Data Ethics of Power: data ethics is not only about power, it also is power. Power for governments, companies, self-proclaimed experts and advisors, and even academic disciplines; think of all these things here that have agency: to point out the problems and their solutions, to set the priorities of what role data technologies should play in our human lives and society.

Right? So this position basically says: don't ask if artificial intelligence is good or fair, ask how it shifts power. Or, another way of putting it: those who could be exploited by AI should be shaping its products. So we need to think about, just like before, what the effects of AI are in terms of power, and then manage those effects to mitigate the harmful consequences. And arguably a way to mitigate those consequences is to have the people impacted by AI shape how AI will impact them, you know?

And that takes us back to the ethics of care, and, you know, the whole idea, even, of 'nothing about me without me': the idea that the people who are most vulnerable should be the first to be talking about how to mitigate the power that AI produces, in order to prevent further harmful consequences to themselves, and perhaps even to be able to reap for themselves some of the benefits which otherwise would only accrue to the people who already have power.

This is a useful discussion. It's not the entire discussion by any means, but as a practice, understanding AI through, or from the perspective of, agency allows us to think about AI as conferring power, or at the very least as reshaping the relations of power. And that creates for us an obligation, perhaps, to first of all understand how power relationships are being changed by AI, and then secondly to, you know, put into place practices that would mitigate the harmful effects of that, whatever they might be.

Another way of looking at power is looking at it as limitation, and I think this is an interesting way of looking at it. We normally just think of power in terms of making things happen; you know, power is push, right? But as Jaron Lanier points out in 'Was the Internet a Horrible Mistake?': the problem is not the internet or social media in a broad sense, but rather, specifically, the use of the algorithms, people being directed rather than exploring, and that makes the world small.

And let me pull this out for you in an intuitive sense, at least in an intuitive sense that works for me. I go to YouTube, and let's look at YouTube as I see it, why not? So let me open that up. Okay, so this is YouTube as I see it.

Okay. So this is not actually live; well, this is my current broadcast, maybe. Now, I looked at this once yesterday, and now I'm going to see it for a long time. I watch Colbert every day at noon; that's why it's third. This mix, 'In the Air Tonight', is here, and it won't go away.

I've never watched it; I've never watched anything remotely close to it. But for some reason this mix, Chase Eagle, Sierra Egos, Jonathan Roy and more, is being promoted to me, and it will stay here until I watch it. This, the Asus launch event: I have no idea why it's here. And I could go on, right?

But these choices that I have, they stay pretty static, day after day after day after day. There are some changes around the edges, one or two new items, but mostly this is what was here on my list yesterday, and the day before, and that's the problem.

All right, I'm not exploring the internet anymore; I'm not seeing what's out there. Rather, the algorithms are basically limiting what I see, limiting, indeed, even what I can see. And the same is true when we look at social media, whether it's the Facebook feed, or Twitter's 'most relevant' (as opposed to 'latest'), or the For You page on TikTok. These are all limiting what we can see in the world. So yeah, that's why I say 'small' is a really good word for it. I can feel the walls of the algorithm closing in on me. And there's a lot of discussion about this in the literature, about what too much choice does to us.

You know, there's what might be called a cognitive debt, or a cognitive load even, that prevents us from attending immediately to the content, because we have to decide first which content to look at, and sometimes that overwhelms us. And so, of course, the idea here is that we'll have mechanisms that reduce the range of choice for us. But then it's the AI reducing this range of choice.
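A minimal simulation can illustrate the feedback loop being described: if the recommender only surfaces items similar to what was already watched, the set of things the user can even see never grows. The catalog, categories, and selection rule here are all invented for illustration; this is not how YouTube's actual algorithm works.

```python
import random

# Toy recommender feedback loop: recommend only from categories already watched.
random.seed(42)

catalog = {f"video{i}": random.choice(["news", "music", "gaming", "cooking", "science"])
           for i in range(100)}
catalog["video0"] = "news"  # ensure at least one item in the starting category

watched_categories = {"news"}          # the user starts with one interest
for day in range(5):
    # The algorithm only surfaces videos from categories already watched...
    recommendations = [v for v, cat in catalog.items() if cat in watched_categories]
    # ...and the user can only choose from what is recommended.
    choice = random.choice(recommendations)
    watched_categories.add(catalog[choice])

# The visible world never grows: still one category out of five.
visible = {cat for v, cat in catalog.items() if cat in watched_categories}
print(visible)
```

The design choice doing the work is that recommendation is conditioned only on past watching, so novelty can never enter; a real system is more complicated, but the narrowing dynamic is the same one being described here.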

And as Jaron Lanier says, you know, it's this external force that's making the world smaller and smaller. That's kind of like television back in the days when there were three channels; there were only three ways of seeing the world back then. Now there are a few hundred, but still, there are only a few hundred ways of looking at the world. With the internet, broadly conceived, there are a million ways of looking at the world. Maybe that's too many, but going back to only a few hundred ways of looking at the world probably isn't the best idea.

Another way of talking about agency is to talk about resilience. This is a word that we hear a lot; we've especially heard it a lot during these days of the pandemic. Resilience in the sense of,

well, as UNICEF says: resilient children, those equipped with skills in areas such as communication, conflict resolution, and self-efficacy, are more likely to make appropriate choices. And then they talk about how to foster resilience

by supporting protective factors in three major categories: caring and supportive relationships; positive and high expectations; opportunities for meaningful participation. So there are two things here: first of all, the representation of agency as resilience, this set of capacities; and then, secondly, the means that we undertake in order to support it. We can actually treat those separately, because we don't necessarily need the theory to support the practice. The theory gives us a justification for the practice, but the practice might be worthwhile even without this particular justification. Look at the practice: caring and supportive relationships, positive and high expectations, opportunities for meaningful participation. There are arguments for those things that could be made without ever mentioning the word resilience.

This is actually a really common phenomenon, especially in the literature in education, where people advance a theory, in this case a theory of resilience, and then derive from it recommended behavior, behavior that everybody already agrees is good. And the way this works, in some people's minds, is that it acts as confirmation for the theory: it must be a good theory, because it's recommending things I already agree with. But of course that doesn't follow at all. Is there a property called resilience? Is that a type of agency? Is resilience, properly so-called, worth promoting in and of itself? I wish that train would stop honking; I don't know why he's honking it so much, anyhow.

And now, we've already looked at what we mean by agency. The horn stopped honking; I have agency over the horn. Oh, there it goes again. Geez, I don't know if you can hear it, but I can sure hear it. It's still honking; that's really strange. Anyway. Is agency best expressed in terms of communication, conflict resolution, and self-efficacy, in other words, these skills? Is resilience best described as skills? Are these skills ethically or morally relevant? Is it better to have these skills than to not have them? And are these skills fostered through these protective factors? I think all of these are questions that need to be asked.

I think there's a lot of room for doubt about these assertions. Here he comes again. Well, they are testing new trains, and this might be part of the testing that they're doing. It seems to me there's an awful lot of train traffic just as I'm doing my video.

If we look at the research, this seems to me to be pretty important: correlates of resilient outcomes are generally so modest that it is not possible to accurately identify who will be resilient to potential trauma and who will not.

What this tells me is that if agency, in this case, is being resilient to potential trauma, we need to do more to understand exactly what agency means in that context, because the properties that we've described here don't help us predict who will be resilient and who won't be. And if they were genuine elements of resilience, then presumably they would help us predict who is going to be resilient and who is not.

There's a whole set of approaches, under the heading of self-determination theory or under the heading of basic psychological needs, that could also be thought of under the heading of agency. Again, these tend to break down into sets of skills, for example, competence... well, actually, no, let me back up. These, in this case, are described in terms of feelings or internal sensations: competence, feeling one is effective at meeting environmental demands; autonomy, feeling authentic, acting with volition, having input; relatedness, feeling connected with and cared for by significant others. This is an interesting aspect of agency, because it's depicting agency not just in terms of behavior, not just in terms of skills or capacities, but also in terms of our internal sensation of, shall we say, self-determination. It's thinking of agency as this psychological state that we have of, if you will, feeling in control of things.

Interesting, because this is the sort of thing that you can fake. It's not on the slide here or anything, but you can convince yourself, or maybe convince someone else, that you're in control even though you're not. I've seen this a lot in video games, where you're playing the video game and you really feel like you're controlling the outcome, but the algorithm is designed to give you that illusion while it is definitely moving you toward a predetermined outcome. Leveled games are like this, where you feel like you're mastering the level, but the whole point of the level is to get you to the next level. So there's a distinction between actually being in control and feeling that you're in control.
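That distinction can be sketched in a toy game loop, where the player's input produces responsive-feeling feedback but the outcome is fixed in advance. The function and its 'three turns to complete' rule are invented for illustration; no real game engine is being described.

```python
# Toy game loop with an illusion of control: input is echoed back as feedback,
# but the level advances on a fixed schedule regardless of what the player does.

def play_level(player_inputs: list) -> tuple:
    feedback = []
    progress = 0
    for turn, action in enumerate(player_inputs):
        feedback.append(f"nice {action}!")   # responsive-feeling feedback
        progress = turn + 1                  # advances by schedule, not by skill
    completed = progress >= 3                # level "mastered" after 3 turns, always
    return completed, feedback

# Two very different players, identical outcome:
print(play_level(["jump", "dodge", "attack"])[0])   # True
print(play_level(["idle", "idle", "idle"])[0])      # True
```

The feedback list is what the player experiences; the completion rule is what actually determines the outcome, and the two are entirely decoupled, which is the distinction between feeling in control and being in control.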

I think that some sports and exercise psychology is based on this, based on generating the feeling of self-control or self-determination before the result is actually attained. There's a lot of literature on this, the idea that first comes the feeling of something and then the actuality. It goes back to Søren Kierkegaard and the idea of the leap of faith, where first you cultivate in yourself this faith, and over time you actually come to have that faith. Or, you know, on the news there was a show with a segment about a gym, and there was a slogan on the gym.

It was something along the lines of: your own sense of self-confidence is the most attractive trait. Right? If you feel attractive, then you will be attractive. I'm told, although I have not read it, that this is the basis for the book The Secret, right? Thoughts become things. First you develop the feeling, and then that generates the reality. It's not 100% false.

It is kind of an example of this belief-desire-intention approach, where you're actually cultivating the belief and the desire, and then the actuality follows. But I think the main thing here is that there is a sense of agency that has to do with one's own perspective on oneself.

And therefore, when we're talking about agency in other people, or agency as a cultural value, cultivating feelings of competence, autonomy and relatedness in society at large is tantamount to cultivating a sense of self-determination in a population at large. And indeed, we can turn it around and talk about it.

The other way: in a culture where self-determination is valued, then things like competence, autonomy and relatedness might in turn also be valued. These might become ethical values: moral worth, or the basis for moral worth. This leads us again to the idea of self-efficacy, and to quote Bandura, self-efficacy refers to "an individual's belief in his or her capacity to execute behaviors necessary to produce specific performance attainments."

So, squarely in the belief-desire-intention framework, right? And the focus here is on producing the sorts of experiences that will produce the sorts of beliefs that lead to these feelings that produce actual outcomes. So, for example, some of these experiences: mastery experiences, vicarious experiences, verbal persuasion and feedback, physical signals. These are all listed by Laura Ritchie in her book on self-efficacy.

And so here they are, worded slightly differently in this diagram from a consulting firm, but still the same sort of thing, right? Performance accomplishments, vicarious learning, social persuasion, emotional arousal (and that's how you can tell it's from a consultant), leading to the perception, or the belief, or the feeling, of self-efficacy, which in turn leads to the outcomes of persistence, performance, and approach versus avoidance. Again, we could reverse this and say these possible outcomes are cultural values,

moral values that lead us in turn to favour practices that lead to these outcomes, practices such as the mastery experiences, etc. Again, it depends on your perspective. Now, again, it's possible, isn't it, that you don't need these intermediate stages? It's possible that mastery experiences can lead directly to persistence without the whole mechanism of self-efficacy in the middle.

This perceived self-efficacy is what we might call a representational step. But we've seen that persistence can be developed through, well, simple reward, and it doesn't matter what the actual representation is. It is certainly a theoretical possibility that we could go from these experiences to these outcomes without this intermediate step.
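Just to make that concrete, here's a toy sketch in Python. The weights, thresholds and function names are entirely made up for illustration; this isn't anyone's actual model, just the shape of the two routes, mediated versus direct:

```python
# Toy contrast between two routes to persistence.
# All weights and thresholds are invented for illustration.

def perceived_self_efficacy(experiences):
    """Mediated step: experiences produce a belief (a number 0..1)."""
    weights = {"mastery": 0.4, "vicarious": 0.2,
               "persuasion": 0.2, "arousal": 0.2}
    return sum(weights[kind] * strength
               for kind, strength in experiences.items())

def persistence_via_belief(experiences):
    """Outcome driven by the intermediate representational step."""
    return perceived_self_efficacy(experiences) > 0.5

def persistence_via_reward(reward_history):
    """Direct route: persistence shaped by simple reward,
    with no representational step in the middle."""
    return sum(reward_history) / len(reward_history) > 0.5
```

On this sketch the mediated route and the direct route can produce the same behaviour, which is exactly why the intermediate step is a theoretical posit rather than something we observe directly.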

You know, there's different ways of talking about agency, different ways of representing agency. Eric Sheninger says there are many strategies educators were implementing well before the pandemic that hold more value now. Regardless of the terminology used, these represent more personalized pathways that focus on student agency, leading to empowerment and more ownership of the learning experience.

I haven't talked about agency as ownership, although I have in the sense of agency as control. But again, that's another way we could think of agency: as ownership or possession, in other words, as a relation between oneself and either objects in the world or states of affairs in the world.

So it's a lot like being able to take credit for quality work that's been done. And, you know, there are cases where people aren't able to accept praise, for example. I was one such person, and for me it was a valuable lesson to be told: look, when somebody says something nice to you,

just say thank you; take ownership of it, right? And that is certainly, arguably, a kind of agency. So here's a chart that Sheninger displayed on his blog, created by Rigor Relevance. These are the ways that a learner can demonstrate agency over the learning environment: through choice, pace, place, path and voice.

It's kind of a two-dimensional thing, you know: it's taking something like agency and categorizing it, and that's it. But it still gives us a sense of the different sorts of agency that we can think of when we're thinking of agency with respect to the ethics of artificial intelligence and analytics.

I think it's a good question to ask where this comes from. I think it's a good question to ask what is required in order for something to have agency, understanding that there are different ways we can talk about agency, different ways we can define agency, and mindful of the fact that the concept of agency is in many ways a constructed concept, right?

It isn't this thing out there in the world that we can study. It's a categorization that we have, where we say some things have agency, or some things are agents, and other things don't. And what are the properties by which we will recognize the things that have agency? Their behaviour, their feelings, their capacities,

their ownership or possessions: those are four things I can think of right off the bat, and we'd probably extend that list. And then, what leads to each of those things?

For my own part, I've considered agency to be fairly central to how best to conduct learning. That's maybe not the best way to say that, but you know what I mean. And I drew a comparison between what I call free learning and control learning. This was first in response to Kirschner, Sweller and Clark's paper

arguing against all forms of discovery learning, constructivism and progressive learning in favour of some kind of instructivist approach. And I argued that it's preferable to take what might be called the progressive approach and use that to define learning. But the main thing is, there are two very different ways of looking at learning here, which I called free learning and control learning, or in another sense, personal learning versus personalized learning.

So in personal learning, or free learning, the idea is that you do something for yourself. It begins with a desired state, so there is a sense of BDI in it, but it also begins with practice. It's based on what you do and, importantly, on what you can do; it's situated.

It's in an environment, and that produces a result, content of some sort: a project, a paper, a piece of art, an action, something. And the question is how closely that content produces, or helps you reach, the desired state. If you reach the desired state, great, you're done. The desired state might be to win the game, to achieve

a certification, to write a piece of software that does such and such, to balance the books, do your taxes, whatever, right? More often than not, and I know this from my own experience, we don't achieve that result right away, so the content becomes an iteration; we need to try again. And this is where education comes in.

The educator is the person who helps you, who helps you treat this as an opportunity to do it again. They may correct you, they may give you suggestions, they may give examples, they may give you coaching, etc. But notice that here, you, the learner, are in control of this process throughout, and the role of the person, the role of the educator, is to help you. Contrast that with control learning, where we begin with an ideal state of some sort, where we should be.
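Before moving on to control learning, the free learning loop described above lends itself to a sketch. Here's a minimal bit of Python; to be clear, the function names (practice, close_enough, coach) and the loop structure are my own illustrative placeholders, not an actual implementation of anything:

```python
# A sketch of the free-learning loop: the learner sets the goal,
# practices, produces content, and iterates with help from an
# educator. Function names are illustrative placeholders.

def free_learning(desired_state, practice, close_enough, coach,
                  max_tries=10):
    """The learner stays in control throughout; the educator
    (coach) only helps refine each attempt."""
    attempt = practice(desired_state)           # situated practice
    for _ in range(max_tries):
        if close_enough(attempt, desired_state):
            return attempt                      # learner decides: done
        attempt = coach(attempt)                # suggestions, examples
    return attempt
```

The key point is who supplies each piece: the learner chooses the desired state and decides what counts as close enough; the educator appears only inside the loop, as help.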

Usually the ideal state is defined as having a particular piece of mental content, or, as I would characterize it, being in a particular representational state. It's defined cognitively as being in possession of some content. And then what we do is engage in a practice of some sort.

So first we try to acquire the content, then we practice. Typically the practice is in the form of a test of some sort based on requirements. These requirements are independent of you, typically, and based pretty much solely on the nature of the content. So the requirements are tests of elements of the content.

The theory is, if you test for certain elements of the content, you can infer that you've mastered all of the content. Typically there's a gap between how you present the content and what the content actually is, and so you are corrected and sent back to learn the content again.

Again, the role of the educator is partially to define this ideal state, but mostly to function in the role of the evaluator, the person who tests. Here, the person in charge of this process is not you; it's the people defining the ideal state and the people testing you for conformance to that ideal state.
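Control learning can be sketched the same way, again with invented placeholder functions. Notice who supplies each piece: the ideal state, the test, and the correction all come from the system, not from the learner:

```python
# A sketch of the control-learning loop: the system defines the
# ideal state, tests the learner against it, and corrects any
# deviation. The learner never decides when learning is done.

def control_learning(learner_state, ideal_state, test, correct,
                     max_tries=10):
    """The system, not the learner, is in charge: it evaluates
    conformance to the ideal state and sends the learner back."""
    for _ in range(max_tries):
        if test(learner_state, ideal_state):    # requirements come
            return learner_state                # from the content
        learner_state = correct(learner_state)  # sent back to relearn
    return learner_state
```

The loop body is almost identical to free learning; what differs is who supplies each function, the learner or the system, and that is exactly the difference in agency.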

And I think that this gets at some of the disquiet that, say, Jaron Lanier feels about artificial intelligence. The concern is, and I think it's a pretty legitimate concern, that artificial intelligence and analytics will produce an educational system that looks like this, where it's the AI defining what we need to be, what we want to be, what we ought to know,

and therefore the AI actually limiting our view of the world to a view based specifically on that particular representational state, that particular content, such that any deviance outside that is something that needs to be corrected. And of course, we would be corrected by the machine. My view of AI and analytics is based on this other view, where the AI is

something that helps us, but where we are the ones deciding for ourselves what it is we would like to do, and the learning is a mechanism that helps us get there, and the AI is a mechanism that helps that mechanism. Two very different views, I think, of the role of learning technology, and therefore of the role of analytics. On the free learning model, you have agency; on the control learning model, to a large degree,

you don't. And therein we see some of the dilemmas that are created. Let me actually rephrase this ever so slightly: in free learning, you have agency; in control learning, the system, whatever it is, has agency. Now, let's go back to the beginning, to the requirement of agency for morality, for ethics, right?

First of all, ought implies can, but also can implies ought. The moral obligations, or the ethical obligations, are within yourself in free learning; they are within the system in control learning. And that's why people are so concerned, at least from my perspective, about the ethics of AI and analytics:

because they're working with a model where the ethical responsibility is almost completely in the hands of the system, whether it's a person or a machine doing the teaching, and that makes it really critical that they get the ethics right. But that's much less the case when people are responsible for their own learning.

And yes, we do reach the question of, well, how can we be sure that the people themselves will be ethical in this sort of environment? That's the kind of argument we hear a lot: give people free learning, and well, what if they don't actually go and learn things, or what if they do the wrong things, what if they just fritter away their time?

Things like that, right? That's the ethical question that gets asked here, and yeah, maybe the individual will make ethically bad choices. That's a question we can address. But in control learning, the choice is taken out of their hands; the machine does the work, or the system does the work, and here the misapplication of AI and analytics, whether it's in terms of power, in terms of capacities, in terms of feeling, in terms of ownership, can produce ethical issues around agency.

What capacities does a person have to have in control learning as compared to free learning? We talked about resilience and that sort of thing; in control learning, it's not resilience so much as, what, obedience? Showing up on time, being respectful, following instructions. In terms of power, who has the power?

Well, in free learning we have the power; in control learning the system has the power. In terms of feeling, if you're working on, or doing, things based on your own needs, you're more likely to feel in control. Perhaps the question of agency here is: if the AI leaves you feeling out of control, who owns the result?

In free learning, you're doing something for yourself; in control learning, you're doing something for your country, your company, the educational system. The people that the educators report to are not the learners; they're, you know, the funders. Ben Werdmuller, just today, touched on this, where he says, you know, I didn't realize at first the importance of this, the importance of where people need to, in some way,

show that they've learned something rather than actually learn something. And the people that need to do this showing are the companies or the institutions, and that's their objective, that's their primary motivation. So a company needs to demonstrate compliance to a regulator, or a school needs to demonstrate a pass rate to the registration authority, and so on.

So agency comes into play in ethics and AI differently depending on the model of learning that you're working with, and it's a much more significant thing when we're working with a model of control learning, where, because of the power, because of the capacity, because of our own agency to impact the outcome as educators, we assume much more of the ethical responsibility for what that outcome is.

Let me end part one on that note. When I pick it up again, just in a few minutes for me, and who knows how long for you, we'll talk more about where agency comes from, what we think it actually is, and what lessons in terms of ethical practices we can draw from that.

So, that's it for now. I'm Stephen Downes and I'll be back shortly.
