Democracy



Hello and welcome to Ethics, Analytics and the Duty of Care. It's Christmas Eve in 2021, second year of the pandemic, eighth module of the course. We're almost at the end of the section on culture, which brings us almost to the end of module eight. I'm not going to finish the course today, but I have a plan for tomorrow to wrap it all up nicely.

But okay, we are where we are. This video is on democracy, and it follows from the video on culture and the video on citizenship. Where we've gotten to, if I had to characterize it in a couple of words, is this: culture is what we do; citizenship is how we participate and interact with each other.

Now, those are very broad, general definitions, and probably not very original at all; I'm sure they're not. And I'm sure there are ways we could object to them, but we're using this as a frame for thinking about ethics, analytics, and the duty of care. I haven't gotten to care so much in this discussion, but believe me, it hasn't left the back of my mind for a second. And where we left off in the citizenship discussion is that the questions of ethics for artificial intelligence and analytics are probably, ultimately, going to need to be talked about in terms of AI citizenship. Here I don't mean treating AI as a person or anything like that.

Oh, maybe one day. But yeah, that's not what this hinges on. The point is that the sorts of concepts and ideas we use to talk about citizenship are the sorts of concepts and ideas we need to use to talk about AI and ethics, because we're not going to be successful simply prescribing a set of values, or attributes, or virtues, or character traits, or consequences, or even defining some kind of social contract. Where we're going to be successful in talking about the ethics of AI and analytics is in talking about how we interact with these things.

Can we participate successfully in a culture or in a society? That question is addressed by the notion of democracy, which is the subject of this video, or, as a purist might say, the subject of this audio with moving pictures and slides.

And I want to be careful here; I mentioned this in the previous video. By democracy I don't mean everybody going to the polls once every four or five years and voting for our representatives. That's one form of democracy, one of the formal structures that constitutes democracy, but it isn't what democracy is. What democracy is, is much harder to define, and obviously, with 200-some-odd countries around the world, we have 200-some-odd definitions of democracy.

Minus a few obvious exceptions, Syria or North Korea, say, the rest of the world's countries strive to be, in their own way and according to their own values, in some way democratic. And that might mean life, liberty and the pursuit of happiness, as in the United States, or peace, order and good government, as in Canada, or even a Platonic ideal of everybody in their place, doing the thing they were designed to do. Different kinds of democracy for different people.

The one I'm after here, when I talk about democracy, is therefore something kind of fuzzy, but it's also a 'you know it when you see it' sort of thing. And, I forget who it was, Wittgenstein perhaps, who said that the truth of a thing is shown in its exceptions.

That's why I have the rising-fist kind of slide to lead this off. We know what democracy is because we know when to take exception to a violation of it. Let's use that as our starting point.

I'm going to begin, oddly perhaps, but I think relevantly, with the concept of corporate citizenship, so we're picking up right where we left off in the citizenship video. I think this is a good place to start simply because of the nature of the state of the art right now, and the state of the art is that most AI is produced by large companies, Google, Microsoft, Amazon, whomever, and by and large it represents the interests of these corporations. Or it may represent the interests of national governments, as in the case of AI produced by China. Six of one, half a dozen of the other. And the problem is that, by and large, corporations, and especially digital corporations, haven't been very good citizens. Now, I could spend an entire video talking about how corporations have been bad citizens. I'm not going to, so be thankful you've been spared that.

I could even have some slides on 'Facebook did this, Google did that,' but I don't need to, because we all know that corporations haven't been very good citizens. Some people analyze it in these terms: corporations have only one value in life, and that's to make money (we're back at Max Weber again), and they don't have any responsibilities beyond that. But as the diagram here shows, there's more than just the corporation and its customers: corporations interact with owners, employees, suppliers, communities. And there are all kinds of ways here where the relationship isn't just about making money.

But the thing is, they haven't been very good corporate citizens. They haven't been very good citizens, and that's why we fear that artificial intelligence and analytics engines won't be either. And let me be clear, it's not just corporations properly so-called. Indeed, it's all of these entities that seem to have taken on a life of their own and that we have somehow separated from the responsibility of individual people.

The University of Miami, for example, was the subject of a year-long investigation by student journalists who got utterly no cooperation from the board or administrators when it was discovered, and the students discovered this, that the university had been using facial recognition technology on the sly. The way they discovered it is that a bunch of students protested something. They were all masked, so none of the students knew who any of the other protesters were; they were all completely anonymous to each other. But the administration called them all in to account for their actions. And so it transpires: oh yeah, well, we just used facial recognition, and there you are; now you'll answer for protesting. That showed both a stealthy and unethical use of facial recognition technology.

And that's the sort of behavior that people fear. Maybe not facial recognition in particular, although as we've seen already that's a flash point, but this general idea of behaving like a bad citizen. So what would a good corporate citizen look like? Well, first let's look at how they are bad citizens.

I'm going to quote at length from Kent Greenfield, who wrote in 2015 in the Atlantic: "The power of corporations is frequently misused, usually to the advantage of the financial and managerial elite. Employees, communities, consumers, the environment, and the public interest in general are elbowed aside in corporate decision making, unless the corporation can make money by taking them into account. Corporations are managed aggressively to maximize shareholder return. As a result, the risks they run, whether of oil spills in the Gulf or a financial crisis erupting from Wall Street, are often unrecognized until too late. The executives who run American corporations do not generally think of themselves as having obligations to the public. The social contract of American corporations is pretty thin."

There's a bunch of things happening in that. The obvious one here is the suggestion that there is a social contract between all of us and corporations. And it's not clear that that's the case. They have a social contract perhaps among each other, but they don't consider individual people to be their equals.

So there's no social contract. The best we can do is force them into one, either through legislation or through activism such as union campaigns.

But the main thing is the separation of the interests of the corporation from the interests of everyone else, the isolation of corporations, or rather the isolation of the rest of us, from the corporate decision-making process. Now, I talked earlier, in one of the previous videos in this series, about the idea that an ethical framework requires some kind of democracy, but that the ethical frameworks we're being presented with are either legalistic, legislative formats, or else chaos and anarchy, with nothing in between. And it's in between that we're going to need to build our ethics; that's where we will find the ethics in our frameworks.

And I could observe just as easily here that corporations are basically a type of monarchy. With very few exceptions, the vast majority of people inside the corporation and outside the corporation have no say in how the corporation is run. And that's why and how they can simply elbow aside the interests of communities, consumers, the environment, etc.

Nothing I'm saying here is new; nothing I'm saying here is not generally known to be true. The question is how to get out of it. The concept that leads us out of it might be something like platform democracy. The idea here is to think of our technology as a whole as something that is run democratically, rather than by the corporations that created it.

We're talking about platform democracy because platforms are how we instantiate our technology right now. AI runs on a platform; analytics run on a platform; student courses run on a platform. Platforms are Facebook, Amazon Web Services, the Microsoft environment on which I'm creating this entire video.

So we use the term 'platforms,' but really what we mean is technology, technology infrastructure. And the idea here, now quoting from Aviv Ovadya's proposal for platform democracy, is that platforms, working with governments and civil society, can have experienced and neutral facilitators deploy these new processes for the toughest policy questions. "Policy decisions will then be made by the impacted populations and informed by key stakeholders, often leading to a strong public mandate, which may even help defend against partisan or authoritarian overreach."

Again, as always, there's a bunch of stuff in there. At the end of it is this libertarian concern about quote-unquote 'overreach,' usually thought of as government overreach, although these days overreach might mean something different. There's this idea of stakeholders, which we've commented on before; I made the remark that I might not be a stakeholder in what happens to the Rohingya people, but I certainly have an interest in it. And there's this idea that we can build processes that manage our solving of the toughest policy decisions; there's a schematic diagram of one such process on the slide.

It's a very simple one, and one that probably wouldn't work, because it's based on voting and approving proposals and things like that. It's a very high-level kind of thing for something where we're going to want very low-level kinds of interactions. Debating policy proposals isn't how we reach democracy; interacting with each other on a one-to-one basis is how we reach democracy, and it has always been that. But it's a start, right?

This idea of platform democracy is a start. So, enough on the details; let's think about the idea of managing our technology infrastructure as a democracy. I know, bold and radical, right? So what would it look like? Well, this is from Network Weaver, something called 'principles for ecosystem governance.' It's a pretty good framework, again the sort of thing we could talk about, and again it's the petri dish, not the culture, but it's a start, right?

The foundations are what they call societal and foundational values aligning with mission, and you know the problems I'm going to have with that right off the bat, but leave that aside. The pillars are along the lines of: be answerable to society, make accessible co-creation a habit, embed accountability, design for evolvability. I'd word those a bit differently, and I'd say something like: being answerable to society, supporting co-creation, embedding accountability, and designing for evolvability. In another place they say 'bias interactions to deliver impact,' and what they mean by that are things like: build capacity to amplify values, foster societal innovations, nurture relational management, etc., etc.

We can argue about this particular governance framework, and I would: I want something more distributed, more decentralized, not based on social and financial values per se. But it does contain these essential ingredients for an architecture of participation, as opposed to, say, an architecture of control.

You see the difference?

Which gives us this idea of design as participation. In education I've used the slogan: learning isn't something that we do for you; it's something that you do for yourself, maybe with our help if you need it. It's the same kind of thing here with technology, right? We are the architects of our own society.

And in some cases we're actually going to have to take control of the slide rule, as it were. But mostly it's a realization, as Kevin Slavin says, that you're not stuck in traffic, you are traffic. This thing that you think is a separate, independent entity, traffic, isn't; you, by your actions and your decisions, are making this thing. Joi Ito says: instead of trying to control or design or even understand systems, it's more important to design systems that participate as responsible, aware and robust elements of even more complex systems. Now, I want to be careful here, because Joi Ito is part of that crowd I can characterize by referring to people like Richard Stallman and Stewart Brand and others, that West Coast Silicon Valley libertarian ethos characterized by the Wired article on digital citizenship, and that is not what I mean here, right?

So it's not some kind of new libertarianism that I'm describing here. It is, rather, a mechanism that allows us to interact with each other instead of having that interaction managed for us. You see the distinction? Let's try to refine that a bit.

In the real world, as opposed to the world the libertarians live in, we have issues of access and inequality. I wrote in OLDaily, this is a while ago now, the following: we've been engaged recently in a project to define and assess digital or data literacy, and topics related to this were in my mind as I attended the AI policy community of practice discussion this afternoon. These topics overlap in the concept of data feminism, a quote, "way of thinking about data science and data ethics that is informed by the ideas of intersectional feminism." And then I referred to an open-access book by Martin Engebretsen and Helen Kennedy that contains a number of papers that talk about this. The difference between what we'll call, I guess, data libertarianism and data feminism is that in the latter, but not in the former, issues of access and inequality are important. And similarly, when we talk about design as participation, being able to actually participate is important.

And when we talk about platform governance, being able to govern is important. That's what comes up in the discussion of access and inequality.

A little while ago, Sasha Costanza-Chock wrote Design Justice, and the idea here is twofold, to my mind. First, as captured in this phrase: universalist design principles and practices erase certain groups of people, specifically those who are intersectionally disadvantaged or multiply burdened. Case in point: the Rohingya, among others, right?

And secondly, it proposes design led by marginalized communities to dismantle structural inequality and advance collective liberation and ecological survival. Well, collective liberation and ecological survival are admirable goals, but it really depends on what these marginalized communities actually want, as opposed to what we think they want, right? We can't say 'you are participating in this' and then tell them what the conclusions of their participation will be. That's not how it works. But again, we're talking here about ways of creating architectures, ways of creating technology infrastructure, where the people who use that infrastructure are the people who create it, where we take special care to ensure that everybody is able to use that infrastructure, and even extra special care to make sure that the needs of those who are most disadvantaged are addressed first.

Radical, right? What would that look like as, you know, global policy? Well, first you feed all the people who are starving, then you move to the next step. First you make sure everybody who doesn't have a home has a home, then you move to the next step. That's what it looks like.

And I'm working on the presumption here that the people who are starving and the people who are without a home would view those as priorities. But of course, to do this properly, you would involve the people who are starving and the people who are homeless in the process of providing food and shelter, so that you give them food that they can eat and shelter that they can survive in. Or maybe you don't give them food and shelter at all; rather, they participate in the creation of food that they can eat and in the creation of shelter that they can survive in. You see how this works, right?

It's moving away from this commerce-based, client-server sort of model of democracy, and concepts like data feminism inform this approach. Here's a simple quote that I've pulled out of the Data Feminism book by Catherine D'Ignazio and Lauren Klein: "the most complete knowledge comes from synthesizing multiple perspectives, with priority given to local, Indigenous, and experiential ways of knowing."

Well, again, local, Indigenous and experiential ways of knowing also happen to be those that are most at risk. And to me, the priority here is not to privilege certain groups of people simply because of what they are, but to privilege them because they are the most impacted by whatever it is that we're doing, and the most vulnerable to harm from whatever it is that we're doing. So we can look at things like Amy Collier's inclusive design; we can look at an article on KQED about care being taken not to create experiences that harm people. I should have, but don't have, a slide that references all of the work that Jutta Treviranus is doing on accessibility and accessible design, and a host of other initiatives that are working in that direction.

And what's interesting is that all of these initiatives are for the most part small, scattered, independent. They're precisely not what's being created and advanced by the corporations and the institutions. In fact, there's a significant backlash against them, most recently characterized, for example, by the backlash against teaching critical race theory, which is an outcome of this sort of approach.

Another way of representing this is in the concept of algorithmic justice. Now, here we're not going to presume that justice is some specific characterized set of duties or virtues or responsibilities or whatever. By algorithmic justice what we actually mean is to "give members of communities most impacted by algorithmic bias more direct democratic power over crucial decisions." By democratic power, I believe that Zimmermann and company mean actual, effective power. All right, and not merely after a system's deployment, but also at its design stage, with, and these are quotes, "meaningful opportunities for bottom-up democratic deliberation and democratic contestation of algorithmic tools," ideally before they're deployed.

The only thing I don't like about this is the language. Note that it's written very much in the form 'we will give them such and such,' right? And speaking for myself, I'm not in a position to do that. Most people aren't. Maybe the people writing this particular piece aren't in a position to do that, or maybe they think they're in a position to do that. But we need to get past, I think, this idea that this is something we give them. Because what we'll be seeing with these types of things, it's a hope, I guess, is that more and more it's something that they will take.

At that same time, in that same place where I wrote that the price of democracy is participation, I also wrote a thing called 'Zen and the Art of Autonomy for Student Newspapers.' It was quite fun, and maybe one day I'll actually publish it somewhere, or at the very least post it online. But the thrust of it was that you can't give a student newspaper autonomy. You can set up the structure and all of that, and you can, as I've been saying, build the petri dish, but autonomy for student newspapers only comes when the people who are running the student newspaper act autonomously. That's the only way.

And similarly, the only way for members of communities impacted by algorithmic and other biases to have more democratic power is for them to take democratic power. If they don't exercise that power, then it doesn't exist. It is a necessary condition that they exercise that power. It's not something that can be given to them; they have to do it. And that's the part of the equation that I think a lot of people miss.

I think a lot of people say, 'well, we'll just give them the opportunity,' and then, 'oh well, they didn't do it, I guess they don't care.' I don't think it really works that way.

You might be wondering about the image on this slide. It's a representation of the land that is controlled, either directly or indirectly, in New Brunswick by J.D. Irving. It's an indication of how democratic control over a large part of the province has been removed and put in the hands of a corporation that exists only for its own benefit. To properly manage those areas, the people, not just those living in those areas, but the people in New Brunswick generally, and in northern Maine, because it's equally affected, and even in areas of Nova Scotia, should actually take democratic power over those lands. And they've been trying. But you need the structure, you need the framework, and you also need the actions.

There's also algorithmic injustice that we need to think about. And this, again still quoting Zimmermann and colleagues, is that we need to first make it possible for society as a whole, not just tech industry employees, to ask the deeper ex ante questions. 'Ex ante' is just a fancy way of saying that we ask the questions before the thing happens, rather than after the thing happens. And I would say, again, it's not just asking the questions; it's knowing the answers, and then being able to change those answers so that people are not negatively affected. Zimmermann and colleagues continue: changing the democratic agenda "is a prerequisite to tackling algorithmic injustice, not just one policy goal among many."

Consider Highway 413 in Ontario: the benefit is that it can save you up to 30 minutes, but look at the size of the land area that's impacted, including a lot of land that was preserved as wildlife or non-urban land. And this is one of those cases where the people who are most affected are not going to be able to affect the impact or the outcome of this decision, at least as things stand now.

I once wrote a defense of the concept of direct democracy by quoting Spider Robinson, who was living in the Annapolis Valley of Nova Scotia. He wrote about the expensive consultation process that the government had with the people of the valley about where they should put the highway. And the people said: well, don't run it right through the middle of the valley; this is prime agricultural land, it's where people live; put it up on the hill where it won't bother people. And then the government came along and put the highway right down the middle of the valley. That's the sort of thing, again, that people are worried about with respect to artificial intelligence.

And again, it's not the specific values, it's not the specific ethics. It's just that people are separated from any real power or control over the outcome, positive or negative.

I've written about what needs to be in place to enable this sort of participatory democracy. I call it the semantic condition. When I first talked about it, it was in the context of describing successful networks, in a talk that I gave in Palermo, and there I described it as the democratic condition. The idea of the semantic condition is that this is what enables networks to express goals, values, desires, intentions, anything semantical; in other words, also meaning and truth.

And so I listed those under the heading of what I called networks, and contrasted them with what I called groups. Bad terminology, and I'm sorry I used it, but there you go. The organization of technology, and to a large degree society, today can be characterized as based on unity, on coordination, on being closed, with memberships and lock-in, and on being distributive in the sense that everything flows from the center, everything flows from the authority. And we've seen that in a lot of the characterizations of what an ethical citizen should be, where we talk about sharing a common vision, working toward common ends, working in collaboration, etc. But I think the framework or the structure that we need in order to be able to realize ethics in anything larger than an individual is going to be a network kind of structure.

And we've seen a lot of these values represented in some of the different materials that we've looked at as well. The four major properties of an effective semantical, and therefore ethical, network are diversity, autonomy, openness and interactivity. In fact, it was the last one, interactivity, that I keyed in on as one of the core elements of citizenship.

Right: the idea of participation, the idea that everybody, working together, defines the values, the conditions, the outcomes of a civic enterprise. But the other things are also important. Diversity: we've emphasized that from the start of the course. We can't get to unity on ethics, and it's not desirable that we do.

Diversity is a better way of doing it, because then we're able to have those conversations; we're not all saying the same thing. Autonomy, similarly: it's not that there is no identity of a group, no identity of a collective or anything like that; it's that this identity is freely chosen and freely participated in by the people in it.

People make their own decisions about whether or not to join something; they're not automatically lumped in. Openness is a virtue I've promoted for many years now. You don't have to agree with this particular characterization, but I do think that the requirement here is to rethink how it is we make decisions. How are the decisions being made about AI and analytics? Are they decisions that include everyone? Are they decisions that are freely undertaken? Are they decisions where all the communication is open? Are they the result of a genuine and meaningful discussion, especially including those who are the most impacted?

Here's what it comes down to, and this is something that I wrote, again not too long ago, in relation to the firing of Timnit Gebru, who was the ethics coordinator for Google until she wasn't. Here's what I wrote: the Algorithmic Fairness and Opacity Group (AFOG) at Berkeley has written an open letter to Google executives supporting fired ethics researcher Timnit Gebru, and the response of AI researchers inside Google. The telling point is this quote: "Ultimately, change requires that dominant groups cede power. Institutional commitment must be embodied in practices and processes to enact meaningful change." In this letter, dominant groups are defined in terms of position and race, but the same basic equation, this is me now, applies no matter how power is defined, whether it be by income, ethnicity, religion, language, or whatever. Change requires that dominant groups cede power; and even more particularly, change requires that dominant individuals cede power. What this means is that no single group or individual has ultimate power. It means moving from a hierarchy to something different. And I wrote: I'm not sure Google has the skills, capacities, or even the legal right to do this, but it's the only way to replace rapaciousness with ethical behavior. And I think that's true.

We have talked a lot about what ethics is for artificial intelligence and analytics, maybe with the idea that if we defined it clearly enough and gave a good enough argument for it, the powers that be would make it so.

But we're never going to define it well enough to do that, and it's never going to happen until the powers that be let go of that power. In the final part of this course, I'm going to say something along the lines of: what is ethical in a society is what we all do, right?

The ethics of a society is how the society behaves, and there really isn't any difference between the two. A personal, individual ethic might be different from the ethics of the society, but then, a personal, individual ethic isn't separated from what we do as persons or individuals either. And given that that's true, the only mechanism to create ethics in artificial intelligence and in analytics is for them to become products of the society as a whole, and not of self-interested individuals or companies within that society.

There's no way around that.

There's no way to have an ethical AI or analytics system without having ethical organizations that create it, manage it, run it, and deploy it. And the only way to have those is to have organizations that are, if you will, socially run in some way or another, to have some kind of platform democracy, as we characterized it earlier in the video.

You don't get one without the other. That's what I believe, anyways. I could be wrong, right? Maybe there is a CEO out there somewhere, a Bill Gates, a Steve Jobs, an Elon Musk, who has an insight into ethics that I don't have and the rest of society doesn't have. Perhaps that's the case. But to this point, there's no evidence that it is so. And to me, that's the most telling point of all: if we want platform democracy, those in power will have to give up power. How that happens is probably the most important question of our lifetimes.

That's it for this video. Tomorrow, Christmas Day, I'll do a little bit more on personal agency and an ethics of harmony, and then I'll wrap up the course. Thank you for all your patience. If you're still here, you've been really patient, and I really appreciate it. Talk to you soon.

I'm Stephen Downes.
