Unedited audio transcript from Google Recorder
Hi everyone, welcome back to Ethics, Analytics and the Duty of Care. I'm Stephen Downes, and I want to wrap up this module, module six, the duty of care, with the last of the videos for the module, going back to the roots of care theories, like those of Carol Gilligan, to explore the concept of moral sentiment.
And here I'm going to be revisiting the ideas of people like David Hume and Adam Smith, writing during the Scottish Enlightenment, about the idea of a moral sense, and particularly the motivations for such an approach as opposed to moral reasoning; talking a bit about methods of ethics and experience; talking about moral sentiments as opposed to moral judgments.
And then some reflections on learning ethics. This isn't intended to be the final word on any of these subjects, and we'll explore some of them in more detail in module 8. But, you know, it seems like a natural way to wrap up the discussion on the ethics of care and the duty of care, because this idea of moral sentiment underlies a lot of it.
And I find that it's fairly easy to get... I don't want to say bogged down, because that's not quite the right word, but to focus perhaps too much on the details of an ethics of care. I'm not saying that the details aren't important, but we want to understand why these details become what they are.
What is it that makes them what they are? And so looking at this underlying idea that informs people like Carol Gilligan, Nel Noddings, Joan Tronto and others, I think, brings us to a point where we have a concrete starting point when we look at ethical practices in module eight. So that's the plan.
I'm going to try again to keep this brief, but, you know, my track record on that hasn't been so good, so we'll do what we can here. It's interesting, because in working out how to present this topic, the obvious contrast comes up between a moral sense and moral judgments.
And I actually have a slide later on called 'moral sense versus moral judgments'. A lot of ethical thinking is presented as a fairly complex set of moral judgments, and so it's fairly common to see, especially in the non-philosophical literature, frameworks developed in order to describe how people actually go through a process of moral judgment.
Now, what I found really interesting is that the starting point for that sort of discussion, which is the work of Kohlberg, is also the starting point for the ethics of care discussion. And what we find is the contrast that can be drawn, both between the ethics of care and descriptions like Kohlberg's, and between a moral sentiment approach and descriptions like Kohlberg's. Because, you know, when you look at it, we have a six-step process. It's a very Piaget-influenced kind of model; from the perspective of education, we'd say it's a very social-constructivist-informed model. But also, when you look at it, it involves a lot of presumptions about what might be called the physical symbol system hypothesis, or the idea that the brain is in many important ways like a computer, or at the very least a text-based processing system.
Look at the steps: one, encoding of cues; two, interpretation of cues; three, clarification of goals; four, response access or construction; five, moral response decision; and finally, behavior enactment. And these occur in an environment based on brain development (so there's the Piaget element), emotion processes (because morality does involve emotion), and also social factors (because morality involves social factors).
But all those combine to create a database, a memory store, of moral and social schemas, rules, principles, etc., and you get the position — again, it's Kohlberg's — that moral principles are necessary even if they're not sufficient for moral behavior. And I think that Gilligan challenges this directly, and I know that David Hume challenges this directly.
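Just to make the 'physical symbol system' flavour of that picture concrete, here's a toy caricature of the six-step pipeline in code. Everything in it — the cue words, the rule table, the helper names — is hypothetical, invented for illustration; it is not the model from the literature, just a sketch of what a symbolic, rule-driven account of moral judgment looks like:

```python
# A toy caricature of the symbolic, rule-based picture: moral judgment as a
# step-by-step pipeline over a stored database of rules. All names and rules
# here are made up for illustration.

RULES = {"harm": "disapprove", "help": "approve"}  # the "memory store"

def encode_cues(situation):          # 1. encoding of cues
    return situation.lower().split()

def interpret(cues):                 # 2. interpretation of cues
    return "harm" if "hurt" in cues else "help"

def clarify_goal(meaning):           # 3. clarification of goals
    return RULES[meaning]            #    look up the stored rule

def construct_response(goal):        # 4. response access or construction
    return f"I {goal} of this"

def decide(response):                # 5. moral response decision
    return response

def enact(decision):                 # 6. behavior enactment
    return decision

def moral_judgment(situation):
    return enact(decide(construct_response(
        clarify_goal(interpret(encode_cues(situation))))))

print(moral_judgment("He hurt her"))  # -> "I disapprove of this"
```

The point of the sketch is precisely what the lecture goes on to question: every step is explicit, text-based, and rule-driven — which is exactly the presumption a moral-sentiment account rejects.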
So where we come at this from is the perspective of what we might call personal knowledge. Now, when I see the phrase 'personal knowledge', I think about Michael Polanyi and his book Personal Knowledge, which talks about tacit knowledge as opposed to explicit knowledge. And one of the big differences between tacit knowledge and explicit knowledge is that tacit knowledge is ineffable: it has the property of not being able to be expressed in words. It is also, therefore, not a system of rules, or any kind of physical symbol system of the sort we would see in a Kohlberg-type theory. The same thing is happening, with different language, different backgrounds and different framing.
I think, in feminist theory — it's interesting, again — we have Craig Dunn and Brian Burton writing in Encyclopedia Britannica, two guys trying to deal with this theory, and they write 'feminine moral theory' instead of 'feminist moral theory', which is kind of interesting. And, you know, they're putting it as opposed to exclusively rational systems, with the gender-biased nature of knowledge construction. And again, I think that's all trying to interpret the ethics of care through this rationalist, rule-based, symbol-based kind of perspective.
But anyhow, what they say — and I quote — is that feminine moral theory "deals a blow to the exclusively rational systems of thought (or any rational system of thought), which may have as their grounding an inherent disregard for the inherently personal, and sometimes gender-biased, nature of knowledge construction."
So that's a terrible way of representing it, but the essential statement is true: feminist ethical theory does deal a blow to reason-based and rationalist and rule-based ethical theory, and I think that's really important. And what that means is that it moves ethical knowledge from the realm of explicit knowledge, as Polanyi would describe it, to the realm of tacit knowledge.
That doesn't mean that it's no longer knowledge. It just means it's different: it's a lot more complex, and it's a lot harder to get at. And, you know, I'm reminded of all of those information management and knowledge management systems from the late 90s and early 2000s designed to mine tacit knowledge and make it explicit.
You know, I remember IBM had a thing for mining emails, to pull out the tacit knowledge in emails, and all of that. But the thing that none of these systems got, I think, is that if you take tacit knowledge and you make it explicit, you've actually broken the tacit knowledge. The explicit version of tacit knowledge isn't the same as the tacit version of tacit knowledge; they are actually different pieces of knowledge. So all of this is motivated by what might be called Humean skepticism. Now, Hume has a variety of skeptical arguments. He's skeptical about causation: you know, there's no necessary connection between one event in the world and the next event in the world.
The key word there is 'necessary'. It doesn't mean there's no cause. What it does mean is that a causal relation isn't the same as a deductive relation: simply because A causes B, that does not give us warrant to deduce B from A. And similar sorts of things are happening with respect to moral knowledge.
So in the summary by Sam Rayner, we read: "We cannot be motivated to act morally through reason alone, because reason is only concerned with determining truths about objects already existing in the world." This is in reference to the principle that one cannot derive an 'ought' from an 'is'. You know, it's the idea that these are two different domains of knowledge. It's similar to the argument regarding causation: it doesn't mean you can't infer from the way things are to the way things should be. There are ways of doing that, but it's not a reason-based inference; it's not a logical deduction.
The other thing about Hume's skepticism is that he's very skeptical about ethics and morality as it transpires in the world. And it's interesting, you know: we look at the moral statements that people have drawn, and we ask, where did they come from? And Hume's point here is that there is no view of human life — and I'm quoting — "or of the condition of mankind, from which, without the greatest violence, we can infer the moral attributes."
We have all of these moral statements, but, again, these cannot be deduced from, or inferred from, the actual state of affairs in the world. That doesn't stop us from drawing conclusions about them, just like we still draw conclusions about cause and effect. But let's understand the status of these conclusions, right?
They're not deductions, they're not inferences. So what are they? Well, one way of getting at this — at getting at Hume's sort of view — is to think about what Jack Marshall calls 'ethics alarms'. Quoting from the about page of his website, Ethics Alarms: ethics alarms "are the feelings in your gut, the twinges in your conscience, and the sense of caution in your brain when situations involving choices of right and wrong are beginning to develop, fast and unavoidable." It's not the same as disgust or an 'ew' response, right? But it is the sort of thing that is more characteristic of a feeling than of an inference like 'two plus two equals four'. You know, the physical feel of the two is quite different.
Your reaction to a moral conclusion — your feeling on arriving at one — is very different from your feeling on arriving at a conclusion based completely on the abstract manipulation of ideas. And so it's more like a sensation than a type of cognition, and that's core to Hume's view. We can call this a moral sense.
Now, I think it's important here to be clear that this is different from moral intuition. And, just to speak to the ethics of care: a lot of people equate what they're talking about with intuition, as in 'women's intuition' or whatever, and I don't think that that's what's intended there either. Rather, it's more like a sentiment or a feeling. You know, it's more equivalent to your sense of balance, and the feeling you get when you're off balance.
But what's important about it is that we're not appealing to some abstract external, or even concrete external, reality here. It's just a sensation, right? I might feel off balance for any number of reasons, which may include the fact that I'm off balance, but the fact that I feel off balance does not entail that I am off balance, and it never would, which is why we have skepticism about that.
So now let's draw this out into some sort of story about moral sense, and I'll quote from Elizabeth Radcliffe here: our moral distinctions depend on our experienced sentiments or feelings; "we do not rely exclusively on the employment of reason to make our moral discernments." To a large degree we do not rely at all on the employment of reason, I would argue, but we'll go with what Radcliffe is saying. Note as well that this is not a theory of innateness or natural morality.
We're not saying that we have an inborn awareness of what morality is. You know, it's not a Cartesian 'I think, therefore I am; I am, therefore I am moral', or anything like that. But it is the idea that we can learn ethics — that we learn ethics in such a way that we feel or experience a moral sense, rather than fully formed general principles. And you might be wondering: how can you learn a sense?
Think about training your taste buds. A sommelier, for example — a taster of wine — will, over time, learn how to distinguish different types of wines. Similarly, someone who is a coffee aficionado, like me, will learn to distinguish different types of coffee. I could tell you, if I taste a coffee, whether it probably came from South America, or East Africa, or Hawaii.
And, you know, to me they're very distinct, right? Or whether it came from Tim Hortons — the old Tim Hortons, not the new awful Tim Hortons. So they're sensations, but we can augment our capacity to experience these sensations. Okay, so what sort of sensations are they? Well, the reference here — and we can go to Adam Smith, in his Theory of Moral Sentiments — is to call them a sentiment. Now, by 'sentiment' here we don't mean fond reflections of times past, but what we do mean is a feeling similar to fond reflections of times past. You know, any sort of affective feeling that we might have. It's not emotion in the sense of anger, or fear, or hope, or desire; it's actually a much more gentle and subtle kind of feeling, but it is a feeling.
So here, to quote Smith: "to be amiable and to be meritorious, that is, to deserve love and to deserve reward, are the great characters of virtue, and to be odious and punishable, of vice." But all these characters, he writes, "have an immediate reference to the sentiments of others." In other words, the idea of being amiable is being perceived by others as being amiable. You know, when other people interact with you, they have a sensation that we would describe as something like 'that person is amiable' — just like love, right? When we interact with a person, we may experience a sensation that we, after the fact, call love.
Now, I think love is a good example here, because amiability, love, odiousness — these are all experienced in different ways, to different degrees, by different people. And there's no presumption — and this is why we say it's not a kind of naturalism, or rather, a kind of innatism — there's no presumption that everybody experiences all of these in the same way.
In fact, the words that we are using to characterize these sentiments are rough approximations or categorizations at best. They're what we can do with the tools that we have. But there is this sense that we have of this classification of feelings, that we'll call sentiments. And these sentiments are what constitute our inclinations to call something meritorious, or to call something odious and punishable.
Now, that is a very different model from, I think, most people's models of what morality is. Most people, I think, would follow a rationalist model, where they think about what is right and what is wrong, they make judgments about the actions of people in the world, and then they feel the emotion — you know, so they decide: oh, stealing that bread was wrong, you should be punished.
Sometimes we have a dual-process model, which is a combination of reason and emotion to render a judgment. Sometimes we have an intuitionist model, which begins with the emotion but actually plays out as a judgment which is then applied through reason. But the sentimentalist model is more about what our emotions are.
And sometimes the interaction between that emotion and reason. Jonathan Haidt talks about this most recently, talking about the idea that moral judgments are for the most part intuitions, proximately caused by gut reactions — quick and automatic flashes of affect. And why do we say this? Well, a couple of things. People, he says, are easily dumbfounded when challenged on their moral views.
I mean, the 'you're not sure why you think murder is wrong, but everybody thinks murder is wrong' sort of reaction, right? And when you press them, says Haidt, they can't really give reasons for why they disapprove of a moral action. Now, that could be questioned, right? Because people do give reasons. But he says that's just rationalization after the fact, and it may well be. Again, though, the idea is that morality originates in sentiment or emotion — though 'emotion' really is the wrong word here.
I really prefer to use a word like 'sentiment' rather than 'emotion', because I think 'emotion' refers to one class of feelings and 'sentiment' refers to another class of feelings. There's some overlap, but not all emotions are sentiments and not all sentiments are emotions. And I think 'sentiment' is more descriptive of the feelings that result in, you know, a feeling of morality, than 'emotion' is.
So, okay, we have Hume's position named: it's a type of sentimentalism, because he believes that morality arises from human sentiments, and it is something that he repeats many times in different places in his work. He says very explicitly: when you pronounce any action or character to be vicious, you mean nothing "but that from the constitution of your nature," whatever it happens to be, "you have a feeling or sentiment of blame from the contemplation of it."
Now, here, when we're talking about the constitution of your nature, he's not talking about one's essential human nature, or anything like that. The constitution of your nature is the physical state of affairs of your body and your brain at this particular time. This is sometimes called naturalism, and Hume's approach is sometimes called a naturalist approach to ethics. But it's a bit different from the naturalism that can be thought of as 'whatever is in nature, that must be the case, and that's where we get our inferences about moral judgments from'. That's not quite it.
I think it's better to think of Hume's naturalism as an explanation of where we get our moral sentiments from, rather than an argument or reasoning for the viability or the soundness of our moral judgments. We can't argue for the viability or soundness of our moral judgments, but we can say that I have this moral judgment because this is how I feel.
And that is actually a line of reasoning that makes a lot of sense to a lot of people when you put it to them that way, right? Why is murder wrong? Because I feel that it's wrong. Is there any argument that would overrule your feeling that murder is wrong?
Well, maybe not, right? Maybe evidence or experiences, but not some kind of rationalist argument. Ethical sentimentalism promises a conception of morality that is grounded in a realistic account of human psychology — that's the explanation part — which correspondingly acknowledges the central place of emotion in our moral lives. Or, as Hume would say, reason is, and always must be, a slave of the passions.
So this leads to something like what my perspective on ethics would be. An ethics, or a morality, for a person is something that is learned through experience. So, over time, through our development and growth in life, and our interactions with the community and other people — and maybe things like volcanoes and tigers — we develop an ethical perspective. Not in the sense that we develop a set of rules, but in the sense that we develop a nature such that, when we're presented with a state of affairs, we have a certain experience. And this is very similar to Hume's moral sentiment. I think of this as something that happens at a sub-symbolic level.
In other words, it's a type of tacit knowledge, a type of personal knowledge. It's ineffable. It's not a matter of rationality, as Hume would say, but rather one of sympathy, and we'll talk about that in a few moments. I wrote about that a while back in a post called 'The Failure of Reason'.
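As a toy illustration of what learning a sense sub-symbolically, from experience rather than rules, might look like: the little perceptron-style learner below is never given a rule; it only sees examples paired with reactions, and adjusts connection weights until its 'felt' response matches. Everything here — the three feature names, the numbers, the learning rule — is an assumption made for illustration, not a claim about how brains actually work:

```python
# A minimal sketch of "learning a sense" from examples, with no rules.
# Situations are described by three hypothetical numeric features:
# (harm caused, consent present, benefit produced). Reactions are
# +1 (approval) or -1 (disapproval).

def train(examples, epochs=50, lr=0.1):
    """Perceptron-style learning: the weights encode the 'sense'."""
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for features, reaction in examples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, features)) + b > 0 else -1
            if out != reaction:  # adjust only when the reaction disagrees
                w = [wi + lr * reaction * xi for wi, xi in zip(w, features)]
                b += lr * reaction
    return w, b

def sense(model, features):
    """The trained 'sense': a reaction, not a derivation from principles."""
    w, b = model
    return "approve" if sum(wi * xi for wi, xi in zip(w, features)) + b > 0 else "disapprove"

# A lifetime of (hypothetical) experiences and the reactions felt around them.
experiences = [
    ((0.9, 0.0, 0.1), -1),  # great harm, no consent: disapproval
    ((0.8, 0.1, 0.3), -1),
    ((0.1, 1.0, 0.8), +1),  # little harm, consent, benefit: approval
    ((0.2, 0.9, 0.9), +1),
]

model = train(experiences)
print(sense(model, (0.85, 0.05, 0.2)))  # a new situation; no rule is consulted -> disapprove
```

Notice that nowhere in the trained model is there a statement like 'harm is wrong'; the 'knowledge' lives only in the weights, which is one (very loose) way to picture tacit, ineffable, sub-symbolic knowledge.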
And so, how we react in a particular case — because we really can't generalize on this; well, we might find patterns, regularities, but really we need to look at the particular case — how we react in a particular case depends on our ethical background. That is to say, all of those experiences that we've had. And it is the result of multiple simultaneous factors, not large-print key statements like 'thou shalt not kill'.
So, sympathy. Again, when we talk about sympathy, we're probably talking more about an explanation for the feelings that we have, as opposed to a justification or argument as to why we should have them. So Hume — and here we're quoting the article we've been quoting all along — sees what he calls sympathy as the underlying foundation of the interpersonal nature of human morality. By sympathy, Hume is referring to the human ability to convey our moral sentiments to one another: upon observing the outward effects of someone else's internal moral sentiments, our ability to actually feel those sentiments as though they were our own. So maybe 'empathy' might be a better word than 'sympathy', but Hume is using the word sympathy.
Now, in modern usage, sympathy means, you know, pity, or a desire to console someone, and empathy means feeling what they're feeling, more or less, right? So Hume is talking about feeling what they're feeling. And there are different ways of feeling what they're feeling. Mostly it's through their outward expression of whatever it is — you know, they say 'ouch', meaning pain, or something like that.
That's very similar to the concept of expressed needs that we talk about in the ethics of care. But also, sometimes, on observing an event we, as it were, naturally have this mirroring sensation in ourselves, corresponding to what we think they must be feeling. Not what we think they must be feeling, but corresponding to what we (infer? come to conclude? — none of those words are right, because it's not a cognitive mental process)... But, you know, you see somebody being eaten by a tiger, and you go, 'oh, no', right? You haven't done any reasoning or cognitive projection or anything like that. You're just feeling what they're feeling — not exactly what they're feeling, because, of course, they're being eaten by a tiger and you're not, but there's some sympathy there, right? Some awareness of what they must be feeling. These days this is attributed to things like mirror neurons, and so there may be an explanation, right, at the neural level, for that.
There are probably also other factors involved as well; I wouldn't say that it's going to be simply mirror neurons. There's going to be a whole host of things involved. Nonetheless, there is this mechanism we have — and this is the important part here — of communicating what our sentiments are to other people, either directly through expression, or indirectly through the other person's sympathy with us. And that allows ethics, a morality, to be not just a personal thing, but a community thing. The expression of our moral sentiments can be talked about in the language of moral judgments. You see the distinction here, right? To have a moral sentiment is to have a feeling or emotion, which is our personal reaction — ineffable and all of that — to a particular situation.
A moral judgment is when we actually say something about that. Our moral judgment may arise from actual sympathy, but we express it, and then, as Glossop says, by correcting the sentiments, through reflecting on them with an imagined impartiality, we can attempt to make moral judgments by adopting the sentiments.
So if you see somebody push somebody toward a tiger, and you see the person being eaten by the tiger, you may feel that what that person did was wrong, that they should not have pushed the person toward the tiger. If you say 'you should not have pushed that person toward the tiger', you've taken your sentiment and converted it into a moral judgment.
And then you can begin to gradually abstract on that, right? 'People shouldn't push other people toward tigers.' Now, that's probably not a good basis for a morality — but what is? You can see, though, how a community-based morality would begin to get developed through a set of these interactions.
And so there's been a lot of work, especially on the sociological and ethnographic side of the literature, on just how society and these interactions work together to create some kind of community sense of morality. And I just want to put in a bracketed remark here, because I had an argument — and I think I referred to this earlier in the course — with somebody on Twitter about what appropriate behavior was on Twitter. What's happening is that this person believes that they're creating some kind of community morality by expressing moral judgments about behavior on Twitter. I don't think it works that way, so easily and explicitly. There are many other ways in which these moral sentiments are communicated from one person to another, and explicit moral judgments are just one of them — and probably the least important of them all. And certainly, at least in my experience, the least effective of them all. Because there's this idea that, oh, you just get together in a room, you argue it out, and then you get morality. It's just... no.
This is the whole point: the idea that morality and ethics are not things you arrive at through reasoned argument. You can't take a bunch of moral judgments and make an ethical system out of them. That's not how it works.
Because if it did work, then we would have a very strange kind of ontology that we were working with. And — this is a hard quote; it's from Pink, just last year — "take away the very concept of power, of a capacity to produce or prevent outcomes, and there's nothing left on which to base a distinctively moral responsibility."
Okay, so if the other person can just argue and force me to accept a particular perspective, then where is my moral responsibility in that? Continuing the quote: "but nor is there anything left of something very much part of our conception of rationality: a power of justifications to move us."
But I think that, as I said with the Twitter example, the 'justifications' do not have that power to move us. That's not what produces the moral sentiments, which are the things that move us. Sure, any given moral judgment may contribute to the overall set of experiences in a lifetime that produce moral sentiments, but you don't go from one judgment to one moral sentiment to compliant behavior. The causal chain just isn't that neat, and I've never seen it work. Hey, find counterexamples if you want. I mean, this is an empirical thing, right? It's something we can test. So, how do we learn ethics?
Because we do learn ethics. Just like, you know, we've said in the past: people aren't born racist, they learn to be racist; people aren't born ethical, they learn to be ethical — Rousseau to the contrary perhaps notwithstanding. And here's how I think it works, more or less. We'll begin with Jacqueline Taylor, summarizing Hume. She says that our sense of humanity allows us to form general views about the useful and agreeable, to which the relativist does not subscribe, and that we do so on the basis of conversations and debates in which we must make ourselves mutually intelligible to one another.
Now, I'm not really sure that that's a good interpretation of Hume. I think that's kind of a conventional interpretation. It's almost — I don't want to say a liberal interpretation of Hume, 'liberal' small-l, in the sense of, you know, the liberal society, where you get a bunch of educated people in a room and they talk very agreeably and have conversations and all eventually get together to find what morality must be.
I really don't think that that's Hume at all. But I do think that if you take that description and remove all the specificity out of it — like 'general views', or even 'usefulness' or 'agreeableness' or 'mutual intelligibility'; take all of that out of it, because all of that is imposing a structure on Hume which really doesn't deserve to be imposed.
And then we might say: it takes a community to learn ethics. All we do, all we experience — everything — is the data from which a person develops an ethical sense, just like any other part of our knowledge. All of our experiences of grandma, including seeing pictures of grandma in the photo album, are what enable us to recognize grandma when she comes through the train station.
All of our experiences of good behavior, bad behavior and indifferent behavior are what lead us to develop an ethical sense about what's happening in front of us now, where we recognize good behavior, where we recognize vice. And all of this is happening not on the basis of rules or anything like that, but sub-symbolically, in the inner workings of our brain, our neurons connecting with each other. So, you know, that makes it hard to model ethics. Here's a paper which is interesting, but it sort of makes my point and sort of doesn't, because earlier we talked about modeling as being part of the learning process.
This paper looked at modeling ethics specifically. It compared two things, leader ethical role modeling and leader safety role modeling, and it found that the safety modeling worked, but the ethical modeling didn't. However, it also found — and I quote — "the mediation of moral sympathy, moral contempt and moral anger and disgust in the relationship between leader ethical role modeling and morally courageous behaviors."
So there's no direct link between the role modeling of ethical behavior and morally courageous behavior among employees, but these moral emotions do participate as mediating factors. So what does that tell me? Here's what it tells me: you don't have a nice one-to-one relationship between a leader modeling ethical behavior and employees emulating that ethical behavior.
And why would you? All kinds of things impact employees, especially with regard to morality. With safety and safety processes, there's not nearly as much exposure to those sorts of specific recommendations just in the day-to-day world, so you might expect that somebody explicitly modeling safety behavior would have more of an impact. But everybody exhibits moral judgments and ethical behavior — good, bad and indifferent — and so a particular example is probably going to be less impactful on an employee. But what really bothered me about this paper was the way it set up a relationship between leader and staff. With respect to ethics, I don't think that relation necessarily holds.
I don't think that people, simply by virtue of being a leader, thereby become an ethical role model. At least, I hope not, because our leaders have been disappointing us since the beginning of time with respect to being ethical role models. And I think we learn as much from each other as we do from our leaders. Our leaders call that peer pressure, but we might just call it community. And that's what it takes, right?
It takes a community, as an entire system, rather than one individual making a decision. Any given individual may experience a moral sentiment, may express a moral judgment, and that's fine, but all the people that we're connected to have an influence on what our moral sense develops into. And we need to keep in mind how we're connected, what the nature of this connection is, and what's influencing those connections, to understand how our own personal sense of morality and ethics is forming. You know, what's important to ethics is how we learn to be ethical in the first place. It's not about what the rules are, or what counts as ethical or non-ethical.
That doesn't matter at all. What matters is how we learn to be ethical, because that is going to be what ethics is in our own mind, whatever it is, for good or bad. And we know that communities can organize themselves in biased ways, unfair ways, in ways that disadvantage communities, in ways that assign disproportional influence to some members, etc.
That's what that whole manager-employee thing earlier was trying to do: to set up one of these imbalanced relationships. So it takes a community, but the community isn't perfect. There's no inference from 'the community believes P, therefore P'. That would be a fallacy. So where do we conclude?
We have a thing that we can call the ethical mind. The ethical mind is that part of ourselves that produces moral sentiments. A person's mind, or a person's brain — because, you know, I mean basically the same thing when I say the word 'mind' or 'brain' — is at least as complex as the sort of network we've been talking about.
So just as we have a moral community out there in the community, we have a moral community in here, in our head. What that mind learns is going to be based on the data; it's going to be based on the totality of the input. That's also going to be true of a computer system, of any network. And this isn't something that can be corrected simply by rote or by rules, right? You can't just argue for a certain type of morality. You can't, by a rule or principle or fiat, convince a person that their sense of morality is wrong, etc. Our moral sense is what it is.
And the only way to change our moral sense is to go through more experiences. So if we want to develop an ethical mind, we need to do something like provide an ethical culture, because that's what creates the ethical mind. What does that mean? Well, that's where a lot of the feminist ethics and the duty of care comes in and is really useful for our purposes.
For example: developing a diversity of perspectives to create a wider sense of community; or, for example, the encouragement of openness and interaction — art, drama, whatever — to develop empathy and the capacity to see from the perspective of others. Now, in the next module we're going to be looking at the data flow, the workflow, involved in machine learning, artificial intelligence and analytics. But we want to keep these things in mind, because at every step of the process we're making decisions about how we're training our analytics engines and how we're developing our models. And we need to be aware that, instead of trying to train our models with ethical principles, we need to be thinking about the overall community.
In other words, the overall body of data that they're exposed to as a result of their training, even including elements of that data which we might not think have an ethical import. And that's going to lead us to some kind of description of what we think ethical practices are, overall — how we develop this ethical community, which would result in the ethical mind, either of the human kind or the artificial kind.
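The point that what a mind (or a model) learns depends wholly on the body of experience it's trained on can be sketched in a toy example. The features, the two 'communities' and the nearest-neighbour 'sense' below are all hypothetical, invented for illustration — nothing here comes from the course materials:

```python
# A toy sketch: two "minds" trained on different bodies of experience
# render different reactions to the very same new case.
import math

def nearest_reaction(experiences, case):
    """A 1-nearest-neighbour 'sense': the reaction to a new case is just the
    reaction attached to the most similar past experience."""
    features, reaction = min(experiences, key=lambda e: math.dist(e[0], case))
    return reaction

# Two communities expose their members to different experiences.
# Each experience: ((hypothetical feature values), community's reaction).
community_a = [((0.2, 0.8), "approve"), ((0.9, 0.1), "disapprove")]
community_b = [((0.2, 0.8), "disapprove"), ((0.9, 0.1), "disapprove")]

case = (0.3, 0.7)  # the same new situation presented to both
print(nearest_reaction(community_a, case))  # -> approve
print(nearest_reaction(community_b, case))  # -> disapprove
```

No rule or principle was changed between the two runs; only the data differed. That's the sense in which 'providing an ethical culture' — curating the whole body of experience — is the lever, for human minds and for trained models alike.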
So that's it for now. Thanks for joining me. I'm Stephen Downes, and this is the end of module six. See you again.