Auto transcribed by Google, three different speakers (who are not distinguished in the text below).
There you go. So once again, we have one person in the live discussion, but it's a different person each time. Or taking turns. Yeah, you're taking turns; it's almost like it's organized. So we might have more people join us, who knows? I only just put out the tweet. I mean, I mentioned it earlier, and of course it's in the course description and all, and you found it.
But a lot of times people wait and don't join until they see something at the last minute that suggests that they could join. So anyhow, welcome to the course, and welcome to this particular part of the discussion. This is the module two introductory discussion, but this is your first time joining us.
So why don't I start by seeing if you have any reactions to the course so far?
I do have reactions, and the most important reaction, I think, for me is that I keep chewing on the word ethics. Not necessarily relating to learning analytics, maybe in a broader sense of analytics. But I keep chewing on the word ethics, and I think you either said this or alluded to this: that it's not a thing, right?
It's almost like a process, and it's almost like it changes depending on context and time. My background, my first career, was in health care, so I will always go back to "do no harm." My second career was as an academic, and I did do research, and I found that ethics always changed depending on the type of research that I did.
If I were doing research with community groups, or especially marginalized groups, I really felt that research ethics was a negotiation, that it was a conversation that you had at the beginning but that kept going all the way through. And an interesting part of it was to look at the data that you collected and who it belonged to, which really left an impression on me in terms of ethics.
How could I guarantee, let's say, that I would do no harm? Mm-hmm. When ultimately I don't know what the result of the research is when I start it. So how can I guarantee that, right? I can't. So when I have something, I have to share it with the people that I do research with. And what if they say that, you know, they will be harmed
if, let's say, I published this? Do I not publish? It's a good question. It's this conversation that happens over and over and over, you know, and that's what I think about ethics. And I think it's the same thing in many ways in everyday life.
It keeps moving; it's a moving target. It's something that emerges. Mm-hmm. So one of the things that you said that quite interested me was to look at it from, let's say, the lens or the view of a mesh, right? And I'm trying to grapple with ethics and mesh. Those, and what I just said, are my beginning thoughts on it, and then I'm trying to look at it in terms of learning analytics, because learning analytics,
I mean, learning analytics and digital analytics in general, add a whole new dimension to this, certainly in terms of the scale. For example, how do you negotiate with a hundred million people? Exactly, exactly. And I did research, you know, some time ago, and I did observations on Usenet.
I'm sure you remember Usenet. Oh yeah. And how do you get permission? How do you, you know, etc., in terms of a public open online discussion forum? How about Twitter? Good example. Yes. Yeah, so, you know, anyway, that's where I'm at, and I look forward to, you know, the discussion, blogs, etc., because I think it'll probably get me closer to this amorphous mesh that I'm trying to put together.
Yeah, it's interesting, and I'm glad I have the live transcription running so that I can capture and steal your thoughts, because I have no shame. But you know, a lot of what you said about the process and the conversation actually anticipates where this course is going to be by the time we get to module eight.
So I find that quite interesting. But to me, the huge question that digital analytics brings up is, as Carl Sagan once said, who speaks for us? And, you know, in a number of ethical codes (preparing for this, I read dozens and dozens of them; we have a whole module on them later),
a number of them say, well, there's no practical way to get permission from people, so we just assume we have it.
Which struck me as pretty convenient and maybe a little bit self-serving. And it's interesting, you also raised the question of ownership of data, and, you know, we see again, you know, companies out there saying, yeah, well, there's no practical way that people could own their own data, so we own it. And again, that's pretty convenient. Or at the
very least, you know, companies are saying, yeah, we have a perpetual, no-limit, non-exclusive, or in some cases exclusive, right to use this data. It's yours, but we can do anything we want with it, which ultimately ends up including transferring ownership of it. Yeah, and I guess the presumption is that's unethical, but on what basis, you know? Because we've never had this question come up before in society. It just hasn't come up. It's new, and that's what's really interesting to me.
So one of the things I wanted to do at the beginning of module 2, which is now, was to see if you're using artificial intelligence in your daily life now in any way. So I wonder about that. Can you think of any uses that you're currently making of artificial intelligence? Yeah.
That I'm using, or that is being used on me? I'm thinking more specifically of you using them. I mean, we can imagine the other case pretty easily, but yeah.
Not a lot. Not at the moment. Yeah, no. Certainly in the past. I've taught online, so, right, you know, there were various kinds of dashboards where you could see, you know, when students participated or didn't participate. Yeah. Or how many likes they gave something or not.
So that would probably be the most recent. Yeah, I'd classify all of that under the heading of descriptive analytics. Yeah. And yeah, indications of how many visits you've had, how many tweets there have been perhaps, even scores people got on their tests. Yeah, which you can see in a dashboard or presentation or a nice pie chart or whatever.
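A descriptive dashboard of the kind mentioned here is, at bottom, just counting. Here is a minimal Python sketch; the student names and activity events are invented purely for illustration, not taken from any real system:

```python
# Toy descriptive analytics: summarize invented student activity logs,
# the kind of counts a course dashboard might display.
from collections import Counter

# Hypothetical activity log: (student, action) pairs
events = [
    ("ana", "post"), ("ana", "like"), ("ben", "post"),
    ("ana", "post"), ("cai", "like"), ("ben", "like"),
]

# Count posts and likes per student -- what happened, nothing more
posts_per_student = Counter(s for s, a in events if a == "post")
likes_per_student = Counter(s for s, a in events if a == "like")

print(posts_per_student)
print(likes_per_student)
```

Nothing here interprets or predicts anything; that is what makes it descriptive rather than diagnostic or predictive analytics.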
That's pretty common. I want to say that I did not necessarily use those. Yeah, that's interesting. Probably did, yeah. Yeah. And that creates a case of them using it on the students and it being used on them. Yeah. One area where I've used this quite a bit is in the area of physical activity, and as you can see, I need physical activity.
And I used to use an application called Runkeeper, and now I use an application called Strava. Not because I run, but I do do other activities: I do hiking and I do quite a bit of cycling. And Strava in particular, which is why I use it now, shows me my routes, it shows me my time,
it shows how much elevation I gained, and, you know, a bunch of related statistics, and I find that really an interesting use of analytics. It's not artificial intelligence per se. Yeah, but it's definitely in the realm of analytics. I would think so. Actually, I do use things like that.
So, Fitbit. Yeah, yeah, yeah. So there's something. What would you think if, and here I'm just thinking out loud, this isn't part of a plan, one of these applications chirped or whatever and said, okay, go out and exercise now? Actually, it does chirp at me when I sit for too long.
Okay, but I can ignore it really well. Yeah. That's probably a good thing, having that choice and not having to. Yeah. And on my power bill, I'm based in Ontario here, and they've got analytics. I don't get this anymore because I told them to stop, but they've got analytics that actually break down how much power I'm using on heating,
how much power I'm using for the refrigerator, how much power I'm using for what they called "always on" applications. Not sure what they meant, but probably like the computer. And that can be useful, although I thought that was kind of invasive. Mm-hmm. Yes. Yeah. Wherever you're based, you're in Ontario.
Okay. So yeah, I get my power from one company, so obviously you're getting yours from a different company, probably Ontario Hydro or something. Yes. Yeah, it doesn't break it down like that; it's more a comparison to last year. Okay? Well, it's a minimal kind of analytics, I suppose, you know. It is a comparison.
Then you can draw your own conclusions: I bet I can beat that for next year. The power company is trying to game me. Yeah, well, yeah, I'm sure they are, but I just don't know which direction they're trying to game you in, right? More consumption or less. Yeah. So if they're one of these companies that loses money with each unit sold, then they probably want you to use less, so they lose less money.
Yeah. So we're using analytics right here, actually, in this session, because I have the live transcription turned on, which means that there are analytics interpreting my voice and turning that into text, which I still think of as a miracle. And I've been testing different types of this over the last number of weeks and months.
So the best one I found was something called Otter.ai, which did a really nice, you know, nice conversion of speech to text, but it's a private company and you have to pay them money. So yeah, I got to try it three times, and now it wants money from me, and I'm too cheap to give money for the mere convenience of turning audio into text, even though it's a miracle.
That's funny. Okay, I just saw the transcription here, and it wasn't very good. Yeah. And so these Zoom meeting transcriptions, generally, they haven't been bad. It's interesting to watch it correct itself as I'm speaking. I don't know if you're seeing the transcription right in front of you.
No, I'm not able to see it. Maybe. Do you see the... do you have a CC button?
That is showing up. Oh, wait a second, I do. Try clicking that. Okay, "beautiful transcription." I see it. There we go. But it said "beautiful transcription" rather than "view full transcription." Yeah. So maybe it has an attitude. Yeah. So, and on my phone here... that's my own Oh Henry! bar stuck to my phone. This was lunch.
I foolishly scheduled these at noon; I don't know what I was thinking. So what have we got here? Whoops, up higher. Okay. Yeah, this is on my Google Pixel 4 phone, and it does a live transcription as well. I don't know if you're seeing it in real time or... yeah.
I can't really read it now, it just blurred. Yeah, it just blurs. That's too bad. Yeah, too bad I don't have a better camera, but that's the trade-off. If I had a better camera, then I'd be using more bandwidth, and then I might get stuttering images. So it's recording the actual audio
and in real time transcribing it to text, and it's pretty good. It's not bad. It's not quite as good as Otter, but it's better than the other types of... Oh, we've got a person. We have Tim Topper coming in and joining us. So it's better than the other types
I've tried. I've also tried the transcription in Microsoft Teams and also Microsoft Word. Word is nice because you can just click a button on the ribbon bar that says "dictate." And then, if you're using the online version... I don't think it works on the... yeah,
it doesn't work on the desktop. Oh no. So anyhow, it's also in PowerPoint, and I assume it's in the online version but not on the desktop. Oh, and we've got someone else coming in. And this looks like Mark. Yep. Maybe. Well, I don't know,
because he's just an image now, so there's no name. And I think we've lost Tim. I guess I should have been more welcoming. I didn't want to interrupt what we were doing to welcome him, and I just thought, well, we'd bring him in on the fly. And we lost him connecting to audio.
Okay, so we're getting there. I'm in. He's in. Okay, he's in with his LA hat, but I don't know that he's wearing it right now. He's probably not feeling great at the moment. But anyhow, the nice thing about the Windows one, or sorry, the Microsoft Word
one, is I can just import an audio file, and then, it takes a few moments, but it'll convert that. It's not bad, not quite as good as Google, but pretty good. And I've used that as a transcription source as well. So I think those are the main ones that I've used.
I don't do anything with Apple. I have no idea whether Apple has transcription of audio to text or not. I'm not sure either; I don't use Apple very much. Yeah, actually, I don't even know how to use it, to speak for myself. Yeah, I swore off all Apple products, and it's a while ago now, I forget how long it was,
just because they were so concerned about locking you into this single product ecosystem. And yeah, that's basically why, you know, I have not really engaged with them. Yeah. And plus, you know, they lock everything down. Yeah. And that irritates me, because they take away my choice.
I don't like that. Exactly. Well, we come back to choice again. Mark, we were asking, or I was asking, whether you're using artificial intelligence in your daily life. So I wonder if you can think of any ways you might be. Not just artificial intelligence, but even analytics in general.
Other than being in the Google ecosystem, not consciously using it, no. You may actually object to the term artificial intelligence. I wish they'd picked a different term. But anyway. It's probably not the best term. Well, you know, it's to do with whether you believe there's a universe or a multiverse, and I'm one of those who think it's all one big thing, and so intelligence is everywhere, and none of it is artificial.
Okay, yeah. But analytics, I guess, would be the term I'd reach for. I want to learn to use analytics consciously and not indiscriminately, you know, use it with a plan. So that's where I'm at. I'd like to use it, but not yet. I'm watching the automated transcription happening here in Zoom, and it interpreted what you said as "not from Laura Lee."
So you should be able to see the transcription on the screen. There's a live transcript, CC or closed caption, button in Zoom. It's on the bottom, a little bit to the right. Actually, mine's loading up on the top, huh? Okay. Or maybe that's just another notification.
Yeah, maybe it's just telling you. It's right at the bottom, where there's chat, share screen. Yeah. Okay. Yeah, that was another notification floating on the top. Now, right. So there you go. Now you're using artificial intelligence, or analytics. It's funny, when you said you didn't like the term artificial intelligence,
I thought your complaint would be with the term "intelligence" and not with the term "artificial." It's quite interesting. I could have reached for that too. Yeah, yeah, the search for intelligence continues. Yeah. But it's an interesting point, and I guess it is a theoretical perspective that you might or might not take, as to whether intelligence is something that is limited to humans,
or at the very least to life forms, as opposed to machines, or whether any system, properly constructed, could have intelligence. And the same sort of question gets raised a lot with respect to other things. For example, consciousness: could there be such a thing as machine consciousness? Perception? Feeling? Sensation? There's a whole list of attributes of thought that humans have, that we've categorized over time, and that we all have experience having.
And some people think that they're unique to humans, and others not so much. And actually, I fall in the "not so much" camp myself. You know, except perhaps in the sense of perceptual feel, like what Thomas Nagel once wrote about. Not a book; rather, he wrote an article called "What Is It Like to Be a Bat?"
And it asks the question, you know, what does it feel like to perceive the world the way a bat does? And the answer essentially is, well, we can't know. We're not bats, and you'd actually have to be a bat to know what it's like to be a bat. And so perhaps, you know, what it is like to be a robot is also something that we can't know, and what it is like to be a human is
something a machine can't know. But that's not what intelligence is, and it's not even what analytics is. And yeah, the term intelligence is too broad for what we're doing now. Absolutely. I would say, in the synopsis I broke it down into six categories. I stole from Gartner, again because I have no shame.
There's a Gartner categorization, which is, let me see if I can remember it because it's not right in front of me: descriptive analytics, which is where we started, looking at, you know, the systems that make pretty dashboards for us. There's diagnostic analytics (I see that Zoom spelled both pronunciations the same way; that's pretty good),
where we're doing some sort of interpretation or inference, for example, maybe clustering, regression, classification. Then there's predictive analytics, which, as the name suggests, is about predicting what's going to happen. And then prescriptive analytics, which addresses something like: how could we make something happen? And that makes sense.
But I didn't think it was sufficient coverage, because I went through and, you know, I'm one of these completionists, so I just tried to read everything. You can't read everything, but that doesn't stop me from trying. And I ended up with two more categories. One, which I called generative analytics, which is, you know, the use of AI systems, and especially neural nets, to create new content.
You may have seen, you know, "this person does not exist," images of people that are artificially generated. Or there's an application, GPT-3, which finishes poems for you, writes songs. Somewhere out there there's a 24/7 fully automated death metal generator, which actually isn't bad, but, you know, you get tired of it after a little while.
So I added that category, and then I also added a category that I called deontic analytics, and that is analytics that tell us what things should be. For example, analytics that tell us what's fair, analytics that tell us what's ethical, and analytics that tell us, you know, what principles we should use in order to divide resources, etc. And there's a bunch of different applications that I found.
So that's the categorization I used. It's purely arbitrary in a certain sense. I was thinking about it today, and I was thinking, what is the basis for that? And it's actually linguistically based, right? There's present tense or future tense; there are different modalities, like what's possible, what's probable; and then you go into what should be.
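As a rough illustration (not from the course materials, and with invented numbers), the first four of those categories can be told apart by the question each one asks of the same data. Here is a minimal Python sketch using made-up study-hours and test-score figures:

```python
# Minimal sketch of four analytics categories on invented data:
# hours studied (xs) vs. test score (ys). All numbers are made up.
xs = [1, 2, 3, 4, 5]
ys = [52, 58, 61, 68, 71]

# Descriptive: what happened? (a dashboard-style summary statistic)
mean_score = sum(ys) / len(ys)

# Diagnostic: why did it happen? (a least-squares slope standing in
# for regression/correlation analysis)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Predictive: what will happen? (extrapolate to 6 hours of study)
predicted = slope * 6 + intercept

# Prescriptive: how do we make something happen? (naive search for
# the least study time whose predicted score clears a target of 75)
target = 75
hours_needed = next(h for h in range(1, 20)
                    if slope * h + intercept >= target)

print(mean_score, slope, predicted, hours_needed)
```

Generative and deontic analytics don't reduce to a one-liner like this; the point is only that the first four categories differ in the question they answer about the same data.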
So maybe, I don't know. What do you think of that? How would you categorize, if you had the option? Or what's missing?
Hard question. Yes.
Well, and you know, this is certainly not a field I'm that familiar with. These cover from the past into the future, as you pointed out. So, I guess, by defining these
categories of analytics, what it points to for me, what it then points to, is: what's the human piece, then, in relation to me? So given that these are different ways of looking, you know, non-human, machine-based, or if you want to say it, silicon-based ways of looking at things,
then to me it points back at me to say, okay, how do I operate differently? Because... I have a friend, a well-known regional poet here in Los Angeles, and I've watched him, you know, off and on, basically his whole life, struggle to string words together. And now we have these machines.
Yeah. And so, okay, not being a linguist or a professional poet, I could see where they look very similar. The results look very similar. Yeah, one is from a lifetime of trying to communicate between humans, and one is a new and shiny random generator. Not random. Yes. Yeah,
not random. Yeah, a machine generator. Yeah. And to me, they look very similar. And yet I have a feeling, a suspicion, that they are a very different product. But I've been wrong before.
That's an interesting question. So I'll pose this to Sherida: can you imagine a machine doing the work that you've done over the years?
Some of the work that I've done, yes.
Certainly, you know, obviously, work that's repetitive.
I've engaged in some forms of therapy in my first career, and certainly I've done counselling with students in my second career, and I'm wondering if
something with some form of AI could have done those things. At least the, you know, when you're doing therapy, you know, the "well, tell me more about that," right? Or, you know, that kind of thing. And I remember playing around with that in the past. Mm-hmm. Right.
So I think some of the stuff that I did, yes, it could be done. But then I go back to where I come from as well. It's not just language, and not just inserting the right word here and there; I also operate on the experience I've accumulated over, I don't know, 60 years.
Sure, etc. And how does a machine do that? Because I will use that experience contextually, in different ways, depending on how I'm feeling, how I perceive a student is interacting with me. Those serendipitous types of things. I wonder if a machine can do that. Maybe they can, I don't know. Let's say not now.
But in the future, we can imagine it, can't we? I mean, because you're talking about experiences that you've had that lead to your ability to do this. Well, we can give machines experiences. Yeah. And we can actually give them to them a lot faster. Yeah. That's the thing.
Yeah. They don't have to take 60 years to do it. Yeah.
So, let's wonder: can we create artificial empathy?
Does empathy depend on the person who is giving it, or on the person receiving it, who interprets what you say as empathy? That's a good question. Yes. So, you know, I may be saying something that I feel, you know, has a lot of empathy, but the other person may say, God, she's full of it.
Yeah. So, although, it would be sort of weird to be in a counselling session with a machine, an artificial intelligence, and you say something, you know, like "I miss my cat," and the machine says, "yeah, I can relate." I wouldn't feel right with it. No, not to us.
Yeah. Yeah, but let's say you're a child, a young child. Yeah. And you have your little pet gadget, and you say something to your pet gadget, and the gadget says it back to you, or, you know, says something to you. Yeah, you might feel completely comfortable with that.
Yeah. Now, that may be unfortunate, but I think with some kids, like, you know, they have their companions. But it brings to mind the cartoon strip Calvin and Hobbes, and Calvin, of course, is completely comfortable conversing with his stuffed tiger. Yes. And I was probably completely comfortable communicating with my teddy bear
when I was a child, I thought, you know. But then again, that's a certain type of thinking. Yeah, developmental. Yeah. So, machines that do these kinds of things: do we put them through developmental phases, so they respond appropriately within context?
Are you familiar with the book by William Gibson where the cybernaut, or whatever term you want to call it, the fully machine-integrated human, falls in love with an artificial intelligence and marries the artificial intelligence? No, I haven't read that book, and it's, you know, it's 20 years old,
maybe older. But I think you're right, Sherida, that if you grow up in a world full of talking tablets, which is what's happening, right? So you might well see these kids, you know, at two, reaching for the iPhone. Or families... Yeah. She's in Santa Clara, California, completely surrounded by this stuff; you know, the town, the whole valley, is normal that way.
That's where she's growing up, and yeah, it's completely normal for her to just talk to gadgets. And they're not particularly rich, so her friends are literally talking to the refrigerators when they're downstairs. Yes, you know. So yeah, it's easy to imagine that world. But then, you know, again, it makes me wonder about the human component, you know.
Is there something unique about humans? Not unique in the sense of, you know, American exceptionalism or Christian exceptionalism or anything like that, but just unique in a world full of gadgets. Yeah. How do we differentiate? And Jaron Lanier wrote "You Are Not a Gadget," and I think the overwhelming question we have to ask here is, how does he know?
Yeah. Excuse me, my dog is saying something. Yeah, just for the record, pets are always welcome in these, yeah, conferences. And as they should be, because maybe pets were our original gadgets. So it's pretty easy to imagine an AI that's smarter than a cat, and especially a dog.
I'm a cat person, so yes. And I'm a dog person. Well, so, yes, I'm agnostic. But that brings up the robo-whatever dogs that are being promoted as friendly little pets. Yeah. At the same time, they're being sold as weapons platforms that are fully armable. Yes, yeah. I saw that just within the past month, really?
Yeah, they're selling them at, you know, the international arms bazaars as a robot that can be armed. And then the question is, can they fire independently? Yeah. And we're there. This isn't speculation. We're there. They can be trained to target and fire, yeah, without you. That's a world
I don't want to live in. No, I don't think that's a world that we want to live in. But think about the process that you've described, and what we've hinted at already in this discussion, where younger people become habituated to these artificial forms of life. Maybe that's too strong a statement,
but these artificial responsive devices that they can develop a rapport with. And then, having been raised under that sort of condition, then living in a world where there are these robot dogs with guns: it's in a certain sense not really different to them than police with guns. Humans with guns, robots with guns, it's all the same to them.
And in a certain sense, they're not wrong. You know, especially if you think of the police as, you know, a part of society from which we, the rest of us, have become more and more alienated over time. So they really are an other, and in the same sort of way a machine could be an other. Then it's not so outlandish to think,
well, maybe it's not wrong to have robot dogs with guns. It's safer. I mean, we're always talking about the sacrifices that police and the military have to make to protect us. Maybe they wouldn't have to make these sacrifices anymore. And, you know, the issue is, well, they might shoot someone who's innocent. But, you know, that's certainly a problem now.
And maybe they might be better at distinguishing people they should shoot from people they shouldn't shoot. And imagine that was the case. Imagine the track record of robot dogs with guns was better than existing police forces. And we have an analogy to draw from, and that's the automated marking of tests, you know, not just multiple choice but essay-style tests, where the research shows that the AI marks more consistently and more fairly.
Yes, but we come from a tradition. Hmm, I'm guessing the three of us, white people who speak English, come from a tradition that delineated the authority of God from the authority of peers. Yeah, and before being executed, we would get to make our case before peers, sort of crowdsourcing the judgment, as opposed to automating the judgment.
And so this is quite a diversion, I would say, from our cultural backgrounds. And then I immediately reach for... and I think we should stop calling them dogs. Yeah, the cutifying is part of the propaganda, yeah, the presentation. But, you know, there are two- and four-legged machines, and others; there are six- and eight-legged kinds, and flying machines are here, and you know.
So, anyway, machines. Yeah. So, as a worker, and I want to circle back to this, my job had been automated. But take me not too far into the future, once the automation of these machines is fully underway, and I think, you know, we're right around the corner from that, you know, two years
or something. Yeah. Where these things are going to be mass-produced. To me, it's just... we're across the line into authoritarianism, because for the first time there will be a counter to the masses, the people, that can be controlled by the central authority. And, you know, even today there can be mass uprisings that change the course of history by conflicting with what the authorities want, and we are just one step away from that.
And when those with power have four billion robots that will kill on command, then our 8 billion... where can we bet? Yeah, 8 billion. Once they have 8 billion robots, then the mass of humanity can be negated. So, to somehow, in a way, bring it a little bit closer to ethics:
the people that will build those robots, that will provide the information for the machine learning, etc., they're the ones that will bring their beliefs or biases into the development of those machines. So do we have to go back to: what are the ethics involved with doing this?
And how would we, as a group of humans, inform those in authority that they can't do that, that they're too biased? I'm not sure if I'm making myself clear, but I keep having to, you know, go back to that: am I programming, or whatever, this machine in terms of the learning, you know, etc.?
So it's my values that are going in there. And, you know, it does raise the general question of responsibility. You know, one of the things that I noticed when I was looking at a lot of these applications of artificial intelligence, I was looking for the benefits or the value produced, because I thought that's also a way of thinking about categorizing them. And the description of the benefits or the value
produced is very often from the perspective of the manager, or the institution, or the funder, as opposed to, at least in our domain, the students, or the society, or the end user, the citizen. Yeah. And it's too bad Matthias Melcher isn't here. One of the things that I've said over the years is that we need to relocate the locus of the benefit from the institutional authority to the individual user. And it raises the question,
well, how could you do that? How could you make that possible? But surely you hint at that when you talk about, you know, the people who design them, the people who give them the data, and all of that. So maybe there's a route there to finding ethical uses of these applications,
by looking at how we attribute the benefits of these applications. Thinking out loud here, something like that. Yeah, but then you would also need to look at short-term benefit versus long-term. Many of the applications or platforms, you know, I'm thinking Facebook, etc., that at first we welcomed as, you know, a more democratic way of everybody
Communicating, etc. Yeah, we now know that there is harm and it's not long a period of time. Yeah. Yeah, it's funny. It was like five six, maybe seven years ago. People are saying oh yeah the role of Twitter in promoting something like the Arab Spring and now today the role of Twitter in promoting you know, radically dangerous and yeah.
Yeah, you know, there's that human tendency to weaponize. Yeah, one of my favorite examples, of course, is abstract art and the Cold War. So here was a group of artists just reaching for new means of expression, with no intention of cultural domination or, you know, anything like that, and yet their art was culturally weaponized.
Yeah. I don't know if you're familiar with that story. Sure. The CIA promoted abstract art during the Cold War. Yeah. So, you know, I use that as an extreme example. But here, I mean, literally paint on canvas can be weaponized, not literally weaponized, but culturally.
Yeah, yeah. And the same thing, you know, with these digital platforms. Zuboff's Surveillance Capitalism describes how they were weaponized. They were originally a way for college students to find each other across campuses, and then a decade later, or however long, they're now weaponized to support the rise of white nationalism and other kinds of political parties. We're probably getting off the topic, but the brilliance of something like Facebook, Twitter, etc., is based on social network theory, which literally, you know, encourages these bubbles of people reinforcing their own ideas, literally.
That's what happens when you look at it in terms of social networks. But I mean, you know, we were all happy with it 10 years ago or so; we didn't always see the bad points. Now we do. So, how can we tell? How can you tell now that something that you think, you know, represents your values, and you think is ethically correct, etc., how can you predict that 15 years from now it won't be detrimental to what you think of as society? It's really hard. Yeah, here I go with my, you know, my ethics. Yeah. Well, that is the course. So we've... I was... go ahead.
Okay, just quickly, I see we're almost out of time. I've always been interested in, if not fascinated by, traditional cultures that seem to hold in their culture this idea that progress, as we say in Canada, is easily weaponized. And so by holding on to the original culture, and not letting it evolve or change, it avoids some of these problems of more modern weaponization. It makes them look primitive, or backwards, or whatever, from our point of view. And yet they typically don't have the means to commit genocide. They may prey on each other, or war on each other, whatever, on some very limited level, but they never get to that mass level that we're at. And now we're ready to step into this automated genocide future. That's... yeah, I'm just flabbergasted.
Okay. Well, I think we'll call it there, if I'm not careful. Not that careful. No. Well, I mean, this is the issue though, and this is why I wanted to begin with looking at the applications of analytics and artificial intelligence, and to look at the benefits and see, you know, what we're getting from all of this. We can easily imagine, and next week we will imagine in great detail, all the ways it can turn dystopian, and that'll be fun. But, you know, seeing what we're trying to use this stuff for now, I think, is probably a good place to start, and that's where we're starting. So we'll come back together on Friday at noon for another Zoom chat.
So I hope you'll both be able to join me. And of course, one of the things I'm doing is producing videos, and I've got more videos coming. There'll be a task having to do with identifying uses or applications of analytics and AI and, hopefully, if I can get my software working, a kind of classification task, so it's not all writing blog posts. I want to increase the variety of tasks, and I think classification would be fun. And it's interesting because that's one of the things that machines do, and I wonder if humans would do it differently, but that's a separate question. So I'll wrap it up here. Thanks a lot for joining me, and I'm sure there were people watching on YouTube, because we did stream live on YouTube, or they'll watch this recording in the future. And I know that they'll benefit from your contributions to this. Whoops, there we go. I don't know what happened. Yeah. Oh, how weird. I'm not sure what happened there, but I think it was the world telling me it's time to end this puppy. So, all right. Thanks a lot, and see you next time.