The Search for the Social Algorithm

(Transcribed by MS Office 360, edited to correct errors and make it readable)


Hello and welcome to the first live session in the first week of Ethics, Analytics and the Duty of Care. We're no longer on the "how to take this course" segment; we're actually onto the real content.

With me I've got Bernie here in the Zoom chat, an unknown number of people watching the live YouTube stream either on YouTube directly or in the course activity center, and any number of people following either the video, the audio or the text transcript, all of which are being made available as part of this. With just the one of you here, Bernie, you should feel free to jump in anytime you want.



So because why not, right?


Well, I can tell you that. At our meeting last week, you reminded me of the MOOC I took from Emma, and I went back and looked at that. It was even you who directed me to that, through a Facebook post, and it turned out to be a really effective course for me. A lot of it was based on when I first saw you a number of years ago and you talked about, you know, going out, finding some information or searching something, and then digesting it or making meaning of it.

And that's what I've been trying to do ever since, and it's been good, and I'm looking forward to being in this course with you. And of course, we're busy doing what we do, and I've already read one person's blog post and it looks like it's going to be good. It's starting for me, so I appreciate the opportunity to be here with you, and I'm committed here, 'cause I'm working on it.


Yeah, it's been easy so far. It's going to get a lot tougher; there's a lot of content in this course. But you know it's a MOOC, so pick and choose. I mean, the idea isn't to remember it all. The idea is to, you know, change the way you see the world, I guess that's one way of putting it. Or maybe inform the way you see the world.

What I want to do with this session is introduce not just this week, but this course as a whole. So if you were thinking of this whole course as a book, which you should because I am recording all the transcripts (so think about that), then you should think of this session as the foreword. So it's not the actual content, but it's what comes before the actual content, and what I want to do is set up the course and the topic and put it into context.

And I have slides for it with a provocative title, the scandalous title... not really scandalous, but you know this could be a bit controversial if we think about it: "The Search for a Social Algorithm." And if you're wondering, yes, that is me in the picture, and I'm at Occupy Wall Street almost exactly 10 years ago today (actually October 29, 2011 - ed). So you might not think it, but this course actually does have a genesis in Occupy Wall Street. Certainly a lot of the thinking that I've put into the course starts with the thinking around Occupy Wall Street. And the people who were involved in that should know their activism had a wider and a longer influence, of which this is just one of many, many outputs.

So here we are, 10 years later, and we've reached a point in history where we don't know how to govern ourselves. Look at what's happening in the US, look at what's happening in Europe, even China, Japan; I can go around the world and point to examples. We're struggling. We're struggling individually with fake news, disinformation, too much information, information that's triggering, information that is oppressive, things we can't say anymore, things we should say now.

And all of that in a world that's getting increasingly difficult to live in, to thrive and survive in: you know, simple things like the way wages have not kept up with inflation, let alone productivity, the arguments and fights over minimum wage, and parts of communities who are living in a world of me-too and black-lives-matter and similar movements around the world.

And also at a time with refugees coming in from conflict zones around the world, and as a society we're struggling with issues of power, of disinformation, propaganda campaigns, and with the global crisis of global warming, the global supply chain breakdown, and of course, everybody's recent topic, the pandemic. And that's just three things, there's much more than that.

It's a time of complexity and chaos, right? It's a time of rapid change, events piling up on each other. You know, it's like the hurricanes: we're done with one and we've got the next one coming down the Atlantic freeway. Information literally moving at the speed of light. When an event happens in Turkmenistan, we know about it right away, or virtually right away.

We're in a world of globalization. I mentioned the supply chains earlier. Global information networks. People use the term "context collapse" to describe it. What we say isn't just heard in our own communities anymore. It's heard around the world by people we never intended the message to go to. We're seeing division and polarization. You know, left and right. Environmentalists and the oil industry. Vax, non-vax.

Every society, every country around the world, is doing this in its own different way, facing the breakdown of communities and institutions. Look at the struggles the university has faced over the last two years with the rapid transformation to remote learning. How do we cope in that sort of environment? But that's nothing, I think, compared to what's coming over the next two or three years, after we recover from the pandemic and start to figure out as a society how we're going to pay for it all.

Then there's the mismanagement of complex events. In the Guardian, either yesterday or today, I'm not sure which, they're talking about how the mismanagement of the pandemic in the early days in the UK cost thousands of lives. Of course, in the United States, 600,000 people are dead, again arguably because of mismanagement.

So there's a challenge, and it's within this wide context of challenge that this course takes place. You know the topic is "ethics, analytics and the duty of care." But let's not for a minute think that that's all we're talking about. Indeed, as a community - by 'community' now I'm talking about the online learning community, the learning analytics community, etc. - our response has been far too limited. That's why I call it the paucity of our response. The poverty of our response.

Our understanding as a community of, shall we say, analytics needs to be expanded much more than it is right now. We're looking at analytics as a way of looking at how students are progressing through courses in order to predict outcomes. But we need to think about this much more broadly: using data about students and their activities not only to understand and improve educational processes, but to support learning itself.

And you know, I've done a study of the applications of learning analytics and artificial intelligence over the last two years, three years. We'll see that in the second module. There's a huge range of applications that people don't even touch when they're talking about this, and we're beginning to see in some sources now the suggestion, at least, that we need to think more broadly in terms of what we mean by learning analytics, and what we mean by artificial intelligence in education.

It's not all bad. It's not automatically wrong. This wouldn't even be an issue for any of us if there weren't a huge upside to using this technology and using it precisely to address some of the problems I've just pointed out.

And we haven't as a community come to grips with the concept of ethics. We're presenting them simply as rules and principles. We're focused on a few issues, such as diversity, equity, inclusion, which to be sure are important issues, but do not constitute the broad sweep of ethics.

And we aren't even, I would argue, coming to terms with the changes that have happened in our understanding of ethics. Generally it's no longer simply teaching about rules and principles, despite what we might see in the academic response. Sternberg, who I quote here, says we should be teaching ethical reasoning rather than just ethical principles. But what does that mean? I mean, people can't even agree on what constitutes critical thinking, much less ethical reasoning. How do we decide? Or do we decide what's right and wrong? Are right and wrong even the right concepts that we ought to be applying here?

You know, we think of ethics in, shall we say, the old-fashioned way: a set of principles for deciding, using reason, what constitutes a right action and a wrong action. Well, that's a definition that doesn't work anymore, precisely because we live in a rapidly changing, dynamic, complex world. And in fact the breakdown of the institutions and the social structures that I've described is precisely because that kind of reasoning doesn't work anymore. So what do we do?

And all of this arguably - and and I will argue shortly - is happening in a climate of change, huge sweeping social change that we don't yet fully grasp. Now, it's not just, "hey, we've introduced artificial intelligence, now the world changes." It's much more than that. If I had to characterize it in slogan form, I'd say that society is transforming from a tree to a mesh.

And Occupy Wall Street, in its beginning, was pointing out what was wrong with the tree, what was wrong with the traditional structure and organization of society. We can represent it here with this model of a traditional social network, and you can see what really is a fairly familiar hub-and-spoke kind of construction. And we can see that reflected in society as a whole, whether it's business and industry - you know how Apple, Facebook, Microsoft, Amazon might be these hubs? Or we might think of it in terms of websites; I guess we'd list the same list of websites. Or in other industries, other major companies. Or perhaps global social structures, with Russia, the United States, China and all their vassal nations.

And the individuals who are in this network are profiting disproportionately. We see in the lower right-hand side of that slide a characteristic power law of the distribution of influence, and therefore also the distribution of wealth, in the society. And when you have this kind of structure, that's the kind of distribution that you get. Also, when you have this kind of structure, it's much more vulnerable to disruption: for example, disruption by pandemic (it's not something that Occupy Wall Street was talking about, but it was still there as a possibility), disruption of supply chains, disruption by war and conflict.
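(An illustration added in editing: the vulnerability of a hub-and-spoke structure can be sketched in a few lines of Python. The two 12-node networks below are toy examples - a star that routes everything through one hub, and a mesh ring with extra chords - not real data. -ed)

```python
from collections import deque

def reachable_pairs(adj):
    """Count ordered pairs (u, v) where a path leads from u to v."""
    total = 0
    for start in adj:
        seen = {start}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        total += len(seen) - 1
    return total

def remove_node(adj, x):
    """Delete node x and every connection to it."""
    return {u: {v for v in nbrs if v != x} for u, nbrs in adj.items() if u != x}

n = 12
# Hub-and-spoke: every node connects only through node 0.
star = {0: set(range(1, n)), **{i: {0} for i in range(1, n)}}
# Mesh: a ring where each node also links three steps away; no central hub.
mesh = {i: {(i - 1) % n, (i + 1) % n, (i + 3) % n, (i - 3) % n} for i in range(n)}

print(reachable_pairs(remove_node(star, 0)))  # 0: losing the hub isolates everyone
print(reachable_pairs(remove_node(mesh, 0)))  # 110: the mesh stays fully connected
```

Targeting the single hub disconnects the star completely, while the mesh barely notices; that asymmetry is the point being made here.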

Target the nodes and you can break down society. Control the nodes and you control society. And that's why everybody is going after Facebook. I heard someone say, on one of Leo Laporte's podcasts on the TWiT network, "People aren't trying to stop Facebook, politicians aren't trying to stop Facebook, they're trying to control it." I think that's true. They're working within this structure, and if you can control the node, you can control society to your own benefit. That's the way it works. That's why people were protesting.

The alternative toward which we are inevitably moving is a mesh structure. A mesh structure is the sort of structure that characterizes road networks, email networks, anything peer to peer, anything place to place, anything where you don't have to go through the hub to get from one place to another place. It's more distributed. It resembles discussion more than a lecture. It's more balanced in terms of power. And arguably, it's more reflective and more democratic.

I've made this argument in the past and I'll continue to make this argument. And if we look at or analyze power, wealth and influence in a mesh structure, we no longer have the power law. We have a distribution which is much more along the lines of what people, when polled, think is appropriate. Not absolute equality - nobody argues for that - but a reasonable range of influence from the most influential to the least influential, where instead of one person having millions of times more power or influence than another, they might have 10 times or even 100 times. People are actually pretty comfortable with that, especially when we see the other lines represented on the chart here, especially when people who are in, shall we say, the long tail, or, shall we say, making minimum wage, aren't below the poverty line anymore, aren't struggling to make a living.
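(Another editing aside: the difference between a power-law distribution and the "reasonable range" described here can be quantified with a Gini coefficient. The distributions below are synthetic stand-ins - a Pareto sample for the power law, and a bounded 10-to-1 range - chosen only for illustration. -ed)

```python
import random

def gini(values):
    """Gini coefficient: 0 is perfect equality; values near 1 mean extreme concentration."""
    xs = sorted(values)
    n = len(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * sum(xs)) - (n + 1) / n

random.seed(42)
n = 10_000
# Power law: a few actors capture enormous shares of influence or wealth.
power_law = [random.paretovariate(1.2) for _ in range(n)]
# Bounded range: the most influential has roughly 10x the least influential.
bounded = [random.uniform(1, 10) for _ in range(n)]

print(round(gini(power_law), 2))  # high inequality
print(round(gini(bounded), 2))    # modest inequality
```

The bounded distribution lands in the range people report as fair in polls; the power-law sample concentrates most of the total in a handful of hands.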

So we're moving into that organization, but by fits and starts and not uniformly. Many of our technologies are already mesh structures - I mentioned the road system, I could talk about the power grid, etc. - but they're being managed by hierarchical structures, and therein lies the dissonance. Therein lies the clash of cultures within our system.

What I'm wondering, in this course and other work, is: what is it like to live in the mesh? We know what it's like to live in a hierarchy: you follow rules, you do what you're told, you rise up through the ranks. That's how it works.

In the mesh even our values, goals and objectives change. In the hierarchy these are pretty clearly defined: power, money, wealth, influence. But we're seeing more and more different values expressed by different people.

How do we know? In the hierarchy we're just told what to believe. In the mesh there are no authorities anymore, and you can't just go around picking authorities. In many ways, unless the authorities are in roughly the same position you are, they're going to misunderstand your perspective. That's how we get arguments about colonialism and cultural imperialism. But even more to the point, they'll lie, because they're in it for power, wealth, influence, etc.

What can we do? What are the practical steps we can take? What is it like to thrive and, shall we say, live ethically in a mesh? We're only beginning to learn that, and frankly, I am not going to be producing an answer to that question, despite looking deeply at it for 8 weeks. Anyway, that shouldn't be the output, that shouldn't be the outcome.

How do we learn what it's like to live in the mesh? There are two major approaches that I'm going to take as starting points.

One of them, as suggested by the "analytics" in the title, is the use of AI and neural networks. I'm going to characterize these as connected sets of entities with inputs and outputs, and therefore an input layer and an output layer. The study is of the algorithms that add, strengthen or weaken those connections, and of related topics such as activation functions, network topologies, and labeling. There's a whole bunch of factors that go into the design of a neural network.
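(An editing aside: here is a minimal sketch of what "algorithms that strengthen or weaken connections" means in practice - a single artificial neuron with a sigmoid activation function, trained by gradient descent to learn logical OR. The learning rate, epoch count, and task are arbitrary choices for illustration. -ed)

```python
import math
import random

def sigmoid(z):
    """Activation function: squashes any input into the range (0, 1)."""
    return 1 / (1 + math.exp(-z))

random.seed(0)
# Two input entities connected to one output entity; weights are connection strengths.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # labeled examples of OR
rate = 0.5

for _ in range(2000):
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # The learning algorithm: strengthen or weaken each connection
        # in proportion to its contribution to the error.
        grad = (out - target) * out * (1 - out)
        w = [wi - rate * grad * xi for wi, xi in zip(w, x)]
        b -= rate * grad

for x, target in data:
    out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    print(x, round(out, 2), "expected", target)
```

Nothing in the code says what OR "is"; the behavior emerges from repeatedly adjusting connection strengths toward the labeled results, which is the sense of "producing the best result" used below.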

And I'm understanding this endeavor as the intent to produce the set of algorithms that produce the best result. That's how they approach it. They'll take a challenge like "can you translate text from one language into another?" And you get the result and you're looking for the algorithms that produce the best translation, for example. That's how I'm going to look at it.

The other is the study of neural and social networks as they exist in the world. What's important here is that (to this point anyway) we don't have the liberty to just go in and start tweaking the algorithm. The algorithm is the algorithm, whatever it is. You know, the brain is the brain. Society is society. And so this is the study of these networks in the world.

It includes things like the identification of the entities. So we could talk about that; we probably will. Is the right identification of entities in society the individual, the community, the cultural group, the linguistic group? Or do we take an intersectional approach? And what does that mean for network analysis?

It also means the study of network topology, the growth and development of networks, how selective attraction, for example, gives people more power and more privilege in a network, how these hub-and-spoke networks develop, and why they develop. And the objective here, kind of, is to explain why things are the way they are.
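(An editing aside: "selective attraction" - usually called preferential attachment in the network science literature - can be simulated in a few lines. Each new node below links to an existing node with probability proportional to the connections it already has; the network size is an arbitrary illustration. -ed)

```python
import random

random.seed(1)
# Seed network: two nodes joined by one edge.
degrees = {0: 1, 1: 1}
endpoints = [0, 1]  # every edge contributes both of its endpoints to this list

for new in range(2, 2000):
    # Picking a random endpoint selects a node in proportion to its degree:
    # the already well-connected are the most likely to attract the newcomer.
    target = random.choice(endpoints)
    degrees[new] = 1
    degrees[target] += 1
    endpoints += [new, target]

ranked = sorted(degrees.values(), reverse=True)
print(ranked[:5])                              # a handful of heavily connected hubs
print(sum(d == 1 for d in degrees.values()))   # a long tail of barely connected nodes
```

No node is told to become a hub; the hub-and-spoke shape and its power-law degree distribution emerge from the attachment rule alone, which is the kind of explanation being described here.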

That may sound straightforward. And again, the work of creating the network or the series of algorithms that will produce the best result may seem fairly straightforward and fairly simple. But they're not, because there are no easy explanations. There are no easy prescriptions. Things are going to change from context to context. In a world where there are multiple simultaneous interacting variables, you can't just give a simple cause-and-effect explanation anymore.

So I'm structuring my work over the next year, not just in this course but overall, this way: I'm looking at the networks and I'm looking at the analysis - the two subjects that we've just talked about - and these resolve into work, on the one hand, about ethics, and on the other hand, about literacy. So I'm looking at what we value, what we want, what we desire, what's right, what's wrong, and then how we go about reasoning toward these things - how we manage, in this world of data, to come up with mechanisms that produce the best result, not only in computer systems, but also in ourselves and in society.

And is there a way to do that? I don't even know if there is a way to do that. I think we can approach one. I'm not sure if we can ever ultimately get there.

So the work that I've been doing over the last couple of years - and this is the current snapshot of what that looks like:

- the MOOC that we're looking at now;
- I've been working on a Government of Canada subcommittee on AI learning;
- I'm a member of the NRC Research Ethics Board, and all that that entails;
- I've been participating in the NRC Data Equity working group;
- I've been participating in things like the Creative Commons ethics of sharing report;
- and I've published on ethical codes and learning analytics.

So that's the one side of it. The other side of it, which will be next winter, February to March 2022, focuses on data literacy. And I construe that pretty widely to include data literacy, data management, etc. You know, again, it's an equally large topic.

- the Data Literacy MOOC;
- I've been involved with DRDC, which is Defence Research and Development Canada;
- I've been involved in something called the Fair's Fair book project, about findable, accessible, interoperable and reusable resources;
- a presentation on what it means to enroll in a course;
- even a series of presentations accompanying this course about how to build a MOOC;
- a thing called CovidEA, which addresses a lot of these topics;
- and even the work that I've been doing in blockchain and consensus.

All of these inputs are coming into these two courses. So let's look at that.

When I think of reasoning generally, I think in terms of critical literacies, and this is my background as a philosopher speaking here, not so much as an ethicist, but as someone who's studied how we learn, how we think, how we create. And I've drawn up (I don't want to say taxonomy, that's not the right word) a set of overall approaches which I'm grouping here into three categories: applications, values and practices. And we're going to look at all of these in some detail, though not necessarily under these headings, but this kind of thinking informs the background to a lot of what we're talking about.

The applications are simply the mechanics of how things work, and there are two sides of that. There's the syntax, which is the mechanisms that are being created by artificial intelligence theory and neural network theory. This is where AI is now: we have pattern recognizers, we have systems that spot regularities, systems that classify, et cetera. And then there are the current issues in AI, including things like value, meaning, goals, the ethics of AI, reference: what are we talking about? What kind of models of the world are we creating? All of that.

But moving beyond that, where we really need to be thinking for a topic like ethics, analytics and the duty of care - especially in a learning context, but really in any social context - are the values. First of all, how do we use these technologies? What kind of actions do we take? Do we persuade, do we interrogate each other or the environment? What about scientific method, propaganda, all of these things? And there is also the context in which these applications take place, and how we define that, and how we describe that.

And that leads us to the practices: how we take these things and bring them together to give us a story about how learning, inference and discovery happen in society. And so I can break these down arbitrarily into cognition and change. Cognition is about how we argue or explain things. Change is about how we recognize, and work toward progress and development in society (or just spin around in circles, whatever the case may be).

This course is basically here on this slide. It's a comprehensive study of what analytics actually are and how they're established in our field, and maybe generally. So we're looking at the applications, how we apply AI, and that'll be module 2. And then later on, in the second half of the course, we're going to be looking at what decisions we actually make when we apply artificial intelligence, analytics, neural networks, to any of the applications that we've been talking about.

Because we do make a series of decisions. People talk about, for example, the need to avoid bias in the selection of the population that we study. Quite so; I agree. But I'm looking at this from the perspective of: we are selecting a population to study - what are the decisions that we make when we do that? Because we're still in old-world thinking: we want "bias causes bad results," and we make simple explanations. But there's a range of decisions that we make when we're selecting a population for a study as input data for a neural network analysis. We need to know what they are. Then we apply the ethical dimension to all of this.

Module 3 looks at ethical issues. And you know I'm nothing if not dogged and comprehensive. Some people do literature surveys where they break down the list of papers into a small number of methodologically valuable studies. I just inhale everything like a vacuum cleaner. What's interesting to me is whether something exists, and if somebody raises an ethical issue, it doesn't matter what the context is. That issue exists. Now we can argue about whether it's salient or not, but the existence proof is simply the fact that there's a piece of writing or an infographic or a video and this issue is raised. So that's what I've done. I spent the last two years inhaling ethical issues.

Similarly with approaches to ethics: the discussions around ethics and learning analytics, and ethics and artificial intelligence generally, sort of skip over this step. They assume that the ethics have been solved - "we know what ethical uses of AI are, and we just shouldn't do what's not ethical" - but I'm going to argue, and more to the point, I think, pretty conclusively, that these issues are not solved, that the 2,500-year-long quest to find reasons for deciding what's right and what's wrong was ultimately a failure, and that we haven't been able to find reasons to make these determinations. We can certainly rationalize things after the fact, and we've done a lot of that, but the manner in which we actually determine what's right and what's wrong is not a rationalist project.

And that leads us to the duty of care. The duty of care is a feminist theory that has its origins in recent years in the writings of people like Carol Gilligan and Nel Noddings and others, approached from the perspective of practices, from the perspective of context, and especially cultural context, and from the perspective of putting the needs and the interests of, originally, the patient, but more generally the client, first.

And there's a whole discussion there. And this is not a rationalist case of "I reasoned out that this is the right thing to do in all cases." It's nothing like that. It's not universal. It's not argued for. It's based on - well, it's hard to say what it's based on. The caring intuition, the specifically female capacity and need to show care towards the young. I think there's reasonable argument there, and I don't think it's specifically a feminist argument. I think we all have the capacity to decide for ourselves, to make ethical decisions for ourselves, in a non-rationalist way. And this is a way of approaching that subject.

And that leads us to the practices. Ethical codes are what we do now, and so I study that practice closely. I've analyzed, I don't know, sixty, seventy, eighty different ethical codes, and they're still coming in, and I'm still looking at them. People say, "well, no, there are common things about the ethics here that we all agree to," but if you look at these ethical codes, you find very quickly that there is no such common definition of ethics across the different disciplines and different circumstances. There is some overlap - fairness is something that comes up a lot, for example - but what we think is fair varies a lot from one circumstance to another. Similarly with equity, diversity and other ethical values. Justice - you know, people think, "yeah, ethics should be about justice," but the understanding of justice is very different, not just from one society to the next but from one person to the next.

So that leads us to the question: if not ethical codes, what are the ethical practices? And that's the section that I'm going to use to finish off the course and take all that stuff that we looked at before, and think about it. How do we actually decide what's right and wrong? What are the processes of this? What do we actually do?

And looking at this from this mesh perspective that I talked about, we get an understanding of how we can move from ethics as determined for us by an authority or by an ethical code or by a set of rules to something that we can determine for ourselves as individuals and as a society. That's the objective.

It will be followed in February and March with a similar sort of approach. I've done an analysis of data literacy models, as well as an analysis of the elements of data literacy. I've done a needs analysis and looked at other needs analyses for data literacy and for things like digital literacy and other kinds of literacy in general: information literacy, computer literacy, even emotional literacy.

And then we look at the practices: first, how do we measure and assess literacy? And based on what we've seen so far, we know that it's not just going to be "Can you do this? Can you do that?" Literacy is not knowledge of a set of specific facts; it's something else. And in fact, if we think about ethics and we think about ethical literacy, the same model can be applied to literacy more generally, I think.

And then we talk about enhancing data literacy. How do we become a more data literate society? And that even loops back to how we become a more ethical society. But all of that is in 2022.

So that's the story. That's what I have in mind. That's the background. I probably shouldn't be doing this, but I can't help myself. I think the issues are as huge as they get. The need is as persistent as it gets. And I think there's something of unique value in this approach that's worth sharing.


I like the fact, Stephen, that you say you can't help yourself. I've noticed that you don't settle for the status quo in technology. You're constantly trying out new things and not settling for just what Google or somebody gives you. You'll use whatever tool, and if the tool isn't there, you'll make the tool.

One of the reasons I enjoy following you is 'cause you've got this sort of lifelong drive to keep going. And when I'm trying to do what I do with my students - you know, some of them are struggling; I got an email from one saying "I'm not feeling well, so I'm not going to connect today" - I'm hoping your approach is going to help me with those students, to somehow, through osmosis or some other way, catch this virus you have of constantly seeking stuff out.


I'm not going to be able to solve that particular problem, but I think we know what the story is that can be told here, and it's not a story of just you and the student. It's not even a story of what the students should be doing or shouldn't be doing, what you should be doing, what you shouldn't be doing.

You're both working in the hub-and-spoke kind of model for learning. But if we think about the perspective of the student more generally, they're not in a hub-and-spoke. They're in a community, they're in an environment, they're in a culture where calling in like this is appropriate behavior.

Now we know that because that's what they did, right? It's John Stuart Mill: you can judge what people think is good by what they do, by what they actually desire. You don't need to come up with a version of 'good' for them. They already have their own definition.

And that's why an intervention at your level is so hard. Because you're working against all of that. And maybe in a classroom you can intimidate them enough, but when you're online you lose that power.

And that's what's been happening in society as a whole. It used to be, and it still is the case in some societies, where "we'll just intimidate people and they'll do what we say." But this is working less and less. And there are good reasons for that: global connectivity, all of that. But also, you know, just this consciousness that people just don't want to take that anymore. I think it's a great thing, although it results in your student calling in sick when they're probably not even sick.

So that's why I say you can't come up with a solution to some of these problems, because there is no solution to some of these problems, and the very idea that you think there's a solution, that's the mistake. There are so many ed reform movements based on this sort of solutionism (I guess other people have talked about this as well) without realizing that.

In an environment of authority-based information and power transfer, it's something different. But how would you change - and here's the question, right? - how would you change, at least in part, that particular ethic that that particular kid has, knowing that that ethic is created by and fed by their entire community and cultural surround, of which you are a tiny fraction, and not even important on that child's scale of important things?

And the best answer I have, the only answer I have, is to model and demonstrate. Which is where this doggedness comes in, where this curiosity comes in. And my thinking is that people see that and the results that that produces, and over time more people emulate it. So the practical thing, if I had to offer a practical thing, the practical thing in this case is for that student to be exposed to models of good practice, ethical behavior, etc.

Which - as an aside - society is providing exactly the opposite of. And therein lies our problem. You know the sorts of activities that we think we should value: everything from hard work, curiosity, persistence, resilience, fairness, justice, equity? All the examples that this particular student who called into your class has are the opposite of that. Their politicians, their business leaders, maybe even their parents, their friends? Hopefully not their school, but who knows, right? School is not the most just and equitable place in the world.

Yeah, so that's my answer. I mean, it's the old Clinton thing: it takes a village. It does take a village. And that's the problem. The village right now isn't really up to the task. And we can't just will it, or give it a set of rules to follow. Change has to be more fundamental than that. That's why this is so hard. Fascinating, but hard.


Fascinating is a great word. I like it. Yeah, like they are fascinating.


So what am I missing? What am I overlooking?


What do you want me to do next? I'm supposed to start reading here. I gotta dig in. OK. I'm supposed to put a blog post together - at minimum a blog post there. Yeah. How do I do it?


OK, if you haven't done a blog post for the first part of the course yet, module minus one, then yeah, yeah, you want to do that. You know, get your blog being harvested. Submit your blog.

Write a blog post so that it could be included in the minus one module. I'll keep harvesting posts from every part of the course all the way through to the end of the course, so it doesn't matter how late you started.

There will be tasks for each part of the course, each module. They'll come out on Mondays, so there'll be one that comes out in your newsletter today, and it'll be of the form, "Write a blog post about your thoughts on ethics and analytics at this point in time. What questions do you have?" Like the example you gave me with the student who calls in - how does that apply, right? Well, that's the sort of question that should be talked about in the blog post. How can what we're doing address that?

Something like that. I haven't actually written the task yet, but it'll be something like that, and it'll come out in today's newsletter. (Update - it wasn't. Tomorrow. -ed)

That's the thing with this course too. Like, I'm building it as we go, with binder twine and cobbled-together code, you know. As I said, there are many moving parts and they don't always mesh. You know, yeah, sure, I could just use Moodle, but then it wouldn't be the kind of course I want. Because, like back in the early days of connectivism, the 2008 course, we created the course to model the kind of thinking that we are doing, and that still continues to this day. And I'll be getting people, hopefully, to do more than just write blog posts, but actually go out, find things, share things.

I plan - now, I don't know how much of this I can carry through - but my plan is to actually take all of these concepts and put them in a big graph, put them in a big network, and see what the relations actually are. In other words, to try to do a little bit of network analysis as we progress through the course, and then maybe, if I can possibly figure out how to do it, maybe even do a little AI as we go through this course.
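(An editing aside: the "little bit of network analysis" described here could start as simply as counting connections in a concept graph. The concepts and links below are hypothetical placeholders, not the course's actual graph. -ed)

```python
from collections import Counter

# Hypothetical concept graph: each pair is a link between two course topics.
edges = [
    ("ethics", "analytics"), ("ethics", "duty of care"),
    ("analytics", "neural networks"), ("analytics", "learning"),
    ("duty of care", "feminist theory"), ("ethics", "ethical codes"),
    ("ethical codes", "fairness"), ("analytics", "bias"),
]

# Degree centrality: a concept's importance measured by how many links touch it.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

for concept, links in degree.most_common(3):
    print(concept, links)
```

Even this crude count surfaces which concepts sit at the center of the mesh; richer measures (betweenness, clustering) would build on the same edge list.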

You know, tomorrow when I do my video - I actually do several video segments - one of them will be about some of the ways I'm already using AI for this course, or even just in general. So I'll probably try to get people to do some of that right now: instead of writing your blog post, speak your blog post and get it transcribed. I should do that!


OK. Catch up on your past activities and prepare for your future activities, and there'll be some readings, some videos and such in the newsletter as it comes out. OK, super.