How AI Works


Audio transcript from Google Recorder.

Hi everyone, it's Stephen Downes here, again with another episode in Ethics, Analytics and the Duty of Care. We're into module seven, on the decisions we make, and this presentation is called How AI Works. Now, we're not going to explain how AI works in this presentation, so let me be clear about that right off the bat. But by the same token, it's important for the purposes of our subject to have a sense of what's going on in a typical artificial intelligence or analytics application.

Because a lot of the discussion gets really hand-wavy. What I like about artificial intelligence, and neural networks especially, when it comes to talking about cognitive phenomena, learning, training, development and all of that, is that we can actually describe the process.

We can describe it mathematically. We can describe it physically. We can even describe it sociologically, in terms of, say, messages or communications between entities. Now, people might say, well, you're just being reductivist, or you're displaying your quantitative bias, and my reaction is: whatever. If you have an alternative theory of how these things work, tell me what it is.

But tell me in such a way that I could model it, or reproduce it, or use it to actually make predictions, or any of the other things that we expect science to do. Now, I've been kind of pulling my hair out today getting this ready, and I actually ran one of the videos in the discussion session.

I want to say there are many good introductory videos out there for this subject, and I want to say these are better than mine. This one here, the top one, is the one that we showed during the discussion today. So I'm not sure if YouTube allows me to run a video containing another video.

We'll find out, I suppose. The last one here is another really good, in-depth discussion of how a neural network works. So if you're really serious about it, watch these other videos and maybe follow up in more detail. The purpose of this particular video is a bit different.

I want to talk about how it works, yes, but I want to do so in a way that is focused on understanding the concepts, understanding the terminology, and especially getting a sense of the sorts of decisions, and the sorts of decision points, that will come up in the making of artificial intelligence and analytics software. It's not going to cover everything by any means.

This is super introductory, and even then, there's no doubt that some of the people who actually know what they're doing in the field will complain about this presentation. Still, let's go with it. I'll put it up for correction, and we'll be corrected later if we're wrong. And I think most of this is pretty solid.

I feel good enough about it that I'm actually committing it to video, so that should say something, because nobody wants to look bad, right?

Okay. What does artificial intelligence do? And, by implication, what do analytics engines, or even learning analytics, do? Well, ultimately, they're nothing more than a statistical function; what makes them different is that they're very big statistical functions. You know, a simple system that takes an image of a number and determines what number it is can have 50,000 different input variables. And designing an AI function is basically a mechanism of tweaking these variables. Some of the tweaking we do manually, and some of the tweaking we do using algorithms like back propagation, which I'll talk about. But ultimately, these functions are statistical associations between large numbers of individual data points. That's why they sometimes call it big data.

You know, we're not depending, like we might in ordinary human calculations, on three or four different pieces of data. We're depending on tens of thousands, or in some cases millions, of pieces of data. That's really important to understanding what's going on in AI, because at their core the concepts are pretty simple.

It's when you scale them up and start messing around with the structures that you get these big results. So I'd like to talk about what AI does, keeping in mind that it's a statistical sort of thing, in terms of four major categories. Again, this is just me interpreting what AI does: regression, in other words finding lines or patterns in data; feature detection, that is, identifying bits of things (we'll look at an example of that in this presentation); clustering, which I only briefly refer to on one slide, but that's organizing data into categories; and finally prediction, which is, as the word suggests, making predictions: what will happen next time we put new data into the system? What will come out the other end?

So, the core of today's artificial intelligence engines is something called the perceptron, or the artificial neuron. I want to be careful here, because this is one of the key points where people actually get confused about AI and software generally. You always hear people say things like, you know, artificial intelligence can only do what you programmed it to do, or artificial intelligence depends on the rules that you set for it, things like that.

Now, there was an old type of artificial intelligence called expert systems, which were based on series of rules. You would apply the rules to data, or to situations, or whatever, medical symptoms say, and it would pop out the answer based on the rules. For various reasons, which I won't get into, that failed, and virtually all artificial intelligence today is done using the mechanism of artificial neural networks.

So, as you can guess, the artificial neuron is modeled on the human neuron. I've got a side-by-side here, and again, this is from one of these videos; all of these images are from the videos, because why not? So you see the comparison. On the right-hand side I have a neuron, and on the neuron we have some inputs, the dendrites, as they're called, then the neuron itself, and then an output, the axon.

And they're connected to other neurons. Similarly, in an artificial neuron, called a perceptron, we have some inputs, labeled here a, b and c (it's the last time we'll see this labeling, but we'll leave that aside), then the neuron itself, which is labeled 'processing', which is a bit inaccurate, and then the output, the signal that it sends to one or more other neurons.

So, that's the basic idea. Let's look at this concept of a perceptron in more detail, and now we'll introduce the terminology that we'll be using for the rest of this presentation. We have a set of one or more input values; we'll call these x. So we have x1, x2, x3, and so on. In a simple example, each x can be either 0 or 1, but as we'll see later on, x could be anything between 0 and 1: 0.5, 0.634, whatever. Then the output of the neuron, and we'll sometimes talk about that as the value of the neuron, or the activation value of the neuron, is y. And y is related to, that's what this tilde means here, related to or proportional to, the sum, that's what the sigma means, of all of the x's. So if you have i, and in this case i goes up to three, you have the sum of three x's: y ~ x1 + x2 + x3. Okay. So basically, you take your inputs and put them together somehow, and that's how you get your output. Pretty simple.

So here's the very simplest neuron. Take the x values; let's say they're two, three, and five. Then y could just be adding them up: that's 10, and that's the output. Of course, that's not very helpful, so let's modify this now to make it do some work for us.

So what we're going to do is, instead of just adding up the inputs and passing that on as output, we'll say that y, that's the output value, remember, has to be either a zero or a one. All right? So remember, each input is either a zero or a one, and now the output also has to be a zero or a one.

So how are we going to do that? Well, we'll define a threshold value, so that if the sum of the inputs is greater than or equal to that threshold, the output will be one. Alternatively, if the sum of the inputs is less than the threshold, y will be zero.

So remember, before, the sum was 10: 2 and 3 and 5 added up to 10. So if I set my threshold to five, then 10 is greater than five, and I get a one. But if I set my threshold to 20, then my input, which is 10, is lower than my threshold.

So I'll send out a zero. So you see what's happening: the threshold really determines whether the input is enough for me to send some output. Okay, that's pretty simple. All right, but let's express the threshold slightly differently. We'll express it as what we call a bias, and a bias is a negative number that we will add to the sum of the input values.

So let's make my bias five. Well, okay, it's got to be a negative number, so let's make my bias minus 5. My input is 10, just like before. Now I add minus 5, which is like subtracting five, so my result is five, and it fires.

If it's less, though, say my bias is minus 20, then it will come out as zero. So far so good. Okay, so now we have the concepts of the input values, we have the bias, and then the idea of whether the neuron will output a one or a zero.
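
To make that concrete, here's a minimal sketch of this step perceptron in Python (the function name and test values are mine, taken from the example above):

    def step_perceptron(inputs, bias):
        """Sum the inputs, add the (negative) bias, and fire if the result is >= 0."""
        total = sum(inputs) + bias
        return 1 if total >= 0 else 0

    print(step_perceptron([2, 3, 5], -5))   # 10 - 5 = 5, so it fires: 1
    print(step_perceptron([2, 3, 5], -20))  # 10 - 20 = -10, so it doesn't: 0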

So we call that number, the one or the zero produced by the neuron, the activation value. Just like you have an input value, now you have your output value, or your activation value; they're the same thing in this terminology. We can also define an activation function in the neuron, which is the algorithm, or the instructions, that the neuron uses to determine what the activation value is.

So here, back here, this is an activation function, all right? If the sum is greater or less than zero, or greater or less than whatever our bias is, then it produces a one or a zero. That's the activation function. Okay, so now we have the ideas of input values, the neuron, the output value, and the activation function that determines the output value.

That's all we've got so far. Now, what can happen is that each of the individual inputs can be weighted differently. We might think of that as a way of saying how much influence each input value has on the neuron: how important it is, how significant it is, how salient it is. There are different ways of expressing what we mean by a weighting, but basically the idea here is you take your input value, you multiply it by your weight, and you get your overall input to the neuron. So the output value is going to be proportional to the sum of all of the weight-times-input values from each of the connected neurons.

We multiply these together, and typically the way this works is that the input value will be something between 0 and 1, and the weight will be a number between 0 and 1, so that when you multiply them it'll always be something between 0 and 1. It keeps everything nice and consistent. That way, too, we're not dealing with absolute values of things, and we're not going to have a situation where you have a value of a million or something like that and you just don't know how to compare it with a value of two.

So keeping everything between zero and one keeps everything in this nice neat range; what we're really measuring are proportions and influences and things like that. So that's it. Let's see how this works. Imagine the question is: should I go outside today? There are three input values: whether the weather looks nice, whether the forecast says it will be nice all day, and whether I have a jacket. Now, we weight these differently, because each of these matters to me differently. It doesn't really matter to me whether I wear a jacket or not, because I'm just not a jacket person, but it does matter to me a lot if the weather looks nice, and it kind of matters if it'll be nice all day. So we assign our weights accordingly, and that's how we're going to calculate our output. We're also going to set a bias value for this example, in this case minus 2.5.

So now we apply our calculations. In fact, it looks nice out today, so that gets a 1. The forecast does not say it's going to be nice all day, so that gets a 0. And in fact I have a jacket, so that gets a 1. So now we're going to do our calculations: here are our weights, as I talked about before, and here's the bias, which I talked about before.

So we're going to sum all of these products: two times one, which is two, plus one point five times zero, which is zero, plus one times one, which is one. That adds up to three. Minus two point five gives me a final value of zero point five, which is actually exactly 50/50. Should I go out today or not?
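
To check the arithmetic, here's that worked example in a few lines of Python (a minimal sketch; the names are mine):

    # Should I go outside today?
    inputs  = [1, 0, 1]        # looks nice: yes; nice all day: no; have a jacket: yes
    weights = [2.0, 1.5, 1.0]  # how much each input matters to me
    bias    = -2.5

    # Weighted sum plus bias: (2 * 1) + (1.5 * 0) + (1 * 1) - 2.5
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    print(z)  # 0.5, right on the fence between going out and staying in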

Not really a very good example from that perspective, but you get the idea, right? Now, the way this works is that a single perceptron doesn't do very much, but if you take perceptrons and put them into networks, then you can do some really neat things. Typically, the perceptrons are organized into what we call layers.

So this is kind of an odd example, because there's only one input on the input layer; usually you've got many more than just one. Then you'll have one or more hidden layers, finally resulting in an output layer, and this output layer may have one value, or it may have two, three, four, five, or whatever.

And unless I'm wrong, it has some number k of possible output values. I could be wrong about that, but that's how I think of it in my own mind, anyway. Okay, so here's an example. Here's an input layer, and this is a little bit bigger than one: an input layer consisting of a bunch of pixels on a screen, a 28 by 28 matrix of pixels, which adds up to 784 individual pixels. Each pixel can have a value between 0 and 1. If it's 0, the pixel is dark, black; if it's 1, the pixel is light, white; and values in between are various shades of gray.

And that's from the video, this video here, which is a really good video. So that's our input layer. Now, what we want our AI to do is recognize what number this is, and this is the neat thing, right? You and I look at that number and we go 'nine'. It's an easy thing for us.

It's a hard thing to get a computer to do, but it can be done, and the science of this is pretty spot on now. I can't believe I just said 'pretty spot on'. Never mind. So here is a network of neurons. Here's our input layer; I'm indicating it with the dots here, 784 individual neurons.

We've abbreviated that a bit, and then we have two layers of 16 neurons, and then an output layer of 10 neurons corresponding to the digits that we're looking for: 0, 1, 2, 3, 4, 5, 6, etc. Can't believe I went all the way up to 6 with that. So what we do is we feed in this matrix as input, and what we want to get out is the identification of that particular number as the output.

Now, a couple of things here. First of all, this is all pretty arbitrary. I've defined an input matrix of 784, which is 28 by 28. Why that? Well, that's just the fidelity of the data that I have; that's how good it is.

You see, it's not very good, right? It's pretty pixelated. But, you know, if I took off my glasses, I wouldn't really notice that. Now, I have two layers of 16 neurons each; that's purely arbitrary. I could pick any number of individual neurons for each layer. 16 and 16? Sure, whatever.

Now, I have 10 possible outcomes, and the argument here is that it's because I'm looking for one of 10 digits. Fair enough. But there's nothing about any one of these outputs that makes it this number, or this number, or this number, and so on. These digits here are what we would call labels for these output neurons.

And these labels are something that we are bringing in to the picture, right? They're not part of the neural network itself; they're an interpretation we're applying to it. And that's a most important concept, I think, because, you know, what is the digit one? It really is nothing more than an interpretation that we have of a visual perception.

If you put something in front of us and ask us what it is, we will say 'the number one'. But there's nothing inherent in the thing that you've put in front of us that makes it the number one. This is a recognition task that we've done, which results in associating whatever we've seen with the words 'the number one', or the formalized character here, which is the numeral one.

Okay. So what's happening here is we feed the data through the input layer, and that results in the activation of some, though not all, of the neurons on the input layer. These, again, may have values between 0 and 1: the ones that are white have a value of 1, the ones that are black have a value of 0, and the ones that are gray have something in between. Then the next layer of neurons is activated. Now, none of them are going to be purely white; none of them will have a value of one, and some of them may have a value of zero. Then, finally, we're going to feed into the output layer, and hopefully we'll identify one number with a good shade of gray while the rest stay black. So one output will be a one and the rest will be zeros, and that'll tell us that this figure we fed into the system is, in fact, the number nine.
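
In code, that final read-out step might look something like this (a sketch; the output values are invented, and the digit labels, as discussed above, are our interpretation rather than part of the network):

    outputs = [0.02, 0.01, 0.05, 0.01, 0.03, 0.01, 0.02, 0.10, 0.04, 0.95]
    labels = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']

    # The network's 'answer' is just the output neuron with the highest activation;
    # calling it "nine" is a label we apply, not something inherent in the network.
    best = max(range(len(outputs)), key=lambda i: outputs[i])
    print(labels[best])  # 9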

Now, that's how it works. But there are a couple of problems. It's a pretty crude model that I've described so far, and for a lot of these tasks it's not actually going to work out well. There are a couple of things. First of all, I've been saying that all my neurons have to have a value of zero or one, which is a really big jump; I've even alluded already to the fact that some neurons can have an intermediate value, 0.5 say, but the activation function that I described earlier allows us only to have 0 or 1. So we need to fix that. And the second thing is, we've had to calculate all these weights manually. How many was that? We have 784 neurons in our input layer and then 16 neurons in the first hidden layer.

So that's 784 times 16 weights we have to calculate here, and then 16 squared more here, and then 160 more weights there. That's crazy. We're not going to be able to do that manually. We could, if we had time, but it doesn't make sense to do that.
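
Just to put numbers on that, here's the arithmetic for the layer sizes in this example:

    # Weights between layers: 784 -> 16, 16 -> 16, 16 -> 10
    n_weights = 784 * 16 + 16 * 16 + 16 * 10   # 12544 + 256 + 160
    # Plus one bias per neuron outside the input layer
    n_biases = 16 + 16 + 10
    print(n_weights, n_biases)  # 12960 42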

So let's address both of these problems, first of all with the activation function. With this one, as I said, you're either at zero or you're at one. This is an activation function; here's our bias, right? If we're less than the bias by any amount, it's going to be zero; if we're greater than the bias by any amount, it'll be one. This is called a binary step activation function, for obvious reasons. What is commonly done is to use a function that smooths this out somewhat. So this is a popular activation function, used in a lot of examples: it's called a sigmoid activation function, and instead of a perceptron, now we're talking about a sigmoid neuron. We calculate the output in the same way.

So remember, we had the sum of the weight times the input, plus the bias. We'll call that z, because, why z? I don't know; we just call it z. Then we use that z value that we've calculated as the input value for a sigmoid function. That's a function that uses an exponential, and one of the virtues of this is that it'll keep the output of the function somewhere between 0 and 1, which is nice, because we're still dealing with proportions, but it will smooth it, right?

We still have the bias involved here, but it'll smooth things out, so that depending on the input we might get a value of 0.2, we might get a value of 0.7, or anything up to 1, depending. So that's just one type of activation function. There are many activation functions; they're used for different purposes, and they produce different kinds of calculations.
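
Here's the sigmoid neuron as a short Python sketch (the naming is mine; it assumes the standard sigmoid, 1 / (1 + e^-z)):

    import math

    def sigmoid_neuron(inputs, weights, bias):
        """z is the weighted sum plus the bias; the sigmoid squashes z into (0, 1)."""
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 / (1 + math.exp(-z))

    # The 'go outside' example again, now with a smooth output instead of a 0 or 1
    print(sigmoid_neuron([1, 0, 1], [2.0, 1.5, 1.0], -2.5))  # ~0.62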

One of the decisions that needs to be taken is with respect to the sort of activation function that you are going to use. Networks with different activation functions are going to have different properties; they'll behave differently. And so you, as a designer of neural network algorithms, are going to need to look at the sort of task that you're trying to do. Is it a regression task? Is it a prediction task? Maybe you're writing generative software, whatever. And then think about what kind of function is going to produce the neural network behavior that you're looking for. Or it might even be a mixture of functions: you might have some neurons that use one kind of activation function and other neurons that use a different activation function. There are all kinds of ways of mixing and matching activation functions here. These are decisions that are not normally what we would associate with bias; they're not bias in the data set or anything like that. But nonetheless, they are decisions that are going to impact how our AI performs and what kind of characteristics it will have.

You might even think of it as setting the personality of the AI. Okay, that's probably a bad example, but you get the idea. The next thing we need to do is adjust the weights so that we don't have to do it manually, because nobody has enough lifetime to do that.

So the way this works is through, well, one way this works, there are other ways to do this, but a popular way is back propagation, first invented back in, I think, the 1980s, I forget exactly when it came out, but I'm pretty sure it was the 80s, by Rumelhart and McClelland in Parallel Distributed Processing; that's where I encountered it first. What it does is adjust the weights of a neural network through a process of feedback. This is a very simple representation of it, but basically, you put input into the network, that produces output, and then you correct that output based on feedback.

So, using our example, we put in our image, the 28 by 28 grid of dots, and it would produce an output saying, say, it's the number six. We'd say, no, it's not the number six, go back and do it again. And that feedback would be used to adjust all of the weights.

The way this is done is through the creation of a cost function. What the cost function is, basically, is a way of working with the difference between what you wanted and what you got, where what you wanted is represented with this y, and what you got is represented with this y with a hat.

So this sums up all of the values that you got, as compared to the values that you wanted, and multiplies that by one over two times n, and that is going to give us our cost. Then what we're going to do is try to find the gradient of the cost function for each layer, and you can sort of see it here in an intuitive way on this graph.
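
Spelled out, that's the quadratic cost C = (1/(2n)) Σ (y - ŷ)², and as a minimal Python sketch (the example values are made up):

    def cost(wanted, got):
        """Half the mean squared difference between what you wanted and what you got."""
        n = len(wanted)
        return sum((y - y_hat) ** 2 for y, y_hat in zip(wanted, got)) / (2 * n)

    # Wanted a '9' (a one at the last output), got a fuzzy guess spread over the digits
    wanted = [0] * 9 + [1]
    got = [0.1] * 9 + [0.6]
    print(cost(wanted, got))  # 0.0125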

Each of these L's here are the individual layers of our network, and the W's are our individual weights. What we're doing is propagating the error back through the different layers. But it's not even, right? It doesn't go evenly. We look at the overall function and say, well, okay, given the overall function that describes this cost, what is the gradient of the function between each of the layers? And then that tells us how much of a correction to make in the weights, layer by layer by layer, so that each layer basically assumes an appropriate amount of responsibility for the error that was produced as the output.

Now, you want to be kind of careful about this. So we produce the error, and here it is, right: what I got versus what I wanted. So we've identified the error, and now what we're going to do is calculate it for each individual layer as we go back up through the neural network.

Now, we don't want to try to correct all of the error at once, because what that would do is create perhaps wild oscillations. So what we try to do is move iteratively: we correct the error a little bit, then try again, correct it a little bit more, and then try again.

And that way, we move step by step toward the optimal value, the one that most reliably predicts the output. Now, there are things that can go wrong, and ways that we can tweak this, and I'm not going to talk about all of that. But the learning rate here is an important calculation to make. Define it too narrowly, make it too small, and your system's going to take forever to run this calculation. Make it too broad, and it's going to continually overshoot and won't converge at all. So again, with these things you sort of have to try different values to see how close you can get to the optimal weights in a reasonable amount of time.
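
Here's a toy sketch of that update rule in Python, with a one-dimensional cost I've made up just to show the role of the learning rate:

    def gradient_descent(grad, w=0.0, learning_rate=0.1, steps=100):
        """Repeatedly nudge the weight a little bit against the gradient."""
        for _ in range(steps):
            w = w - learning_rate * grad(w)
        return w

    # Toy cost C(w) = (w - 3)^2, so its gradient is 2 * (w - 3) and the optimum is w = 3
    print(gradient_descent(lambda w: 2 * (w - 3), learning_rate=0.1))  # converges to ~3.0
    print(gradient_descent(lambda w: 2 * (w - 3), learning_rate=1.1))  # overshoots and diverges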

We can also adjust biases through back propagation, and I'm not going to get into the mathematics of that, because that would have taken me more time to figure all out. But the idea here is that when you adjust the biases, you're basically adjusting the sensitivity of each neuron: how likely it is to have a higher activation value, based on what it received.

The practical effect of that is that you can tune your neural net to be more or less sensitive to things. Here's an example on the screen of how this can be used in clustering. The input phenomena that we have, the input values for the different neurons, look like they sort of break into two clusters. With a zero bias, though, we can see, sort of intuitively, that it's not going to be very good clustering, because it's going to put some of these triangles in the same cluster as the circles, when really, you can see just by looking at it, they belong with the other triangles. So we adjust the bias, and now we're going to fire only when we see the circles; it's going to take more to get us to detect a circle. And that adjustment can be used to tweak the categories.
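
Here's a toy illustration of that effect (the points and bias values are invented, just to show the mechanism):

    def fires(x, bias):
        """A one-input step neuron: the bias sets how much input it takes to fire."""
        return 1 if x + bias >= 0 else 0

    points = [0.2, 0.4, 0.6, 0.8]  # say, triangles on the left, circles on the right
    print([fires(x, bias=0.0) for x in points])   # [1, 1, 1, 1]: everything fires together
    print([fires(x, bias=-0.5) for x in points])  # [0, 0, 1, 1]: only the circles fire now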

Now, I just have a simple up-and-down line there. But if we mess around with the mathematics, we can have more kinds of clusters that we want to create. For example, instead of just two clusters here, we might want four clusters or whatever, so we need diagonal lines and such.

So there are other ways to tune our neural network in order to get results that look right to us. And notice how I'm saying that, right? There's always a comparison between the person running the neural network, tweaking all the parameters, and the result that the neural network is producing. Here, I'm tweaking the bias through the back propagation process in order to produce good clustering, clustering that I like. And that's similar to what happens elsewhere, in the other decisions that we make about how the neural network operates. It's not exactly like, you know, the system does whatever the algorithm says it does, because the system isn't that hard and set in stone, following rules.

It really is a matter of tweaking this, fiddling with that, adjusting a bunch of variables, hoping you get the right result. But an awful lot depends on what you think the right result is. And there are some instances of neural networks where you don't have training sets, and it's all done by tweaking the variables, and you get what you get.

That's a bad approximation of it, but nonetheless, all of the decision-making is in the tweaking and the twiddling. Tweaking and twiddling, by the way, are not technical terms, but again, I'm trying to give you an intuitive feel for what's going on here. Speaking of intuitive feels, you might have asked: what are the layers doing? Well, here is a way of thinking of it, and I want to stress, this is only a way of thinking of it; it might not be what's actually happening in any given neural net. Now, you could, if you were managing all the weights manually, set up your neural net to do this. But if you're training through back propagation, it might not actually be doing this.

And so this is just a way of thinking about it. Let's think about the output that we have: these 10 digits. Three of them here could be characterized as a combination of features. For example, a nine is sort of like a circle near the top and a long stem. An eight is a circle near the top and a circle near the bottom.

A four is a line, and another line, and, I've clipped it, but a horizontal line. So these individual neurons in the third layer might be thought of as corresponding to features that were found in the input data. Now, again, I stress: this is only an interpretation; it might or might not be happening. But there's an argument for it in human visual processing, for example: there is an argument to be made that we are doing something like this when we recognize objects out there in the world. And to get the features, another layer might be doing something like edge detection. So, remember the little circle at the top, right? Well, really, that circle is made up of four, maybe five, edges that all combine to create the circle.

Similarly, that stroke from the top to the bottom really consists of three, again, sort-of edges. And so the first layer of our network might be doing edge detection, and then we're moving from the detection of edges to the detection of features. So we go from raw input, to edge detection, to feature detection, to identification of the number.

That's a way of interpreting what's happening. Now, you know and I know that's not what's actually happening; what's actually happening is the mathematics I just described. But when we think about why the numbers are the way the numbers are in a successfully functioning neural network, it can be thought of as functioning this way.

So the interpretation is something that we put onto the network. Is it part of the actual network itself? It's not inherent to the network; interpretations, obviously, are something that we bring to the table. And if we think one interpretation matters more than another, then we may want to tweak our network to work toward that sort of interpretation.

For example, I might work with just the first two layers until I've got them tweaked so that they really do recognize edges, and so on. So I can get my hands quite far into this neural network. Now, how do the calculations get made? Because, as I said, it's nothing more than a big mathematical algorithm.

Well, here are some examples. So here are all of our input values, and these correspond to what is called a vector. A vector is just simply a row of values. So if we have here, say, eight input values, we have a vector eight values long. Then we take the matrix of the weights of all of the connections between each layer.

So here's our input layer, and here is our first layer. Our first layer is connected to each of these inputs with a weight; that's this set of weights here. And then for the next neuron we have another set of weights. So we have these sort of horizontal vectors, and if you take them all together, that gives you a matrix.

Now, what can be done, instead of calculating all of these things individually, is a bit of matrix multiplication. And when we do that, we get our output values, which are expressed here as the output of that function.

The reason why this is important? Well, here's another example of the matrix calculation happening. We have the matrix product here, plus we've done some matrix addition to include the effect of the bias, and then all of that is put into the sigmoid function.

So all of that is going to result in a value that's between 0 and 1. The reason we think of it this way is that a lot of software libraries are really optimized for matrix multiplication. I don't know what language this is; it's not one that I use. Could be Java, could be Python, I don't know; it doesn't matter. The main point here is that you can do matrix multiplication in just a couple of lines of code, and that's really handy, and a lot easier than writing out all of these calculations for yourself in software. Just one more thing before we finish, because this calculation of all of the different weights isn't done all at once, and it isn't done only once; remember, we're iterating back through it.
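
For instance, in Python with NumPy, one whole layer of the 784-to-16 network described above can be computed in a couple of lines (random weights standing in for trained ones):

    import numpy as np

    rng = np.random.default_rng(0)
    a0 = rng.random(784)        # input activations: the 28 x 28 image, flattened
    W = rng.random((16, 784))   # one row of weights per neuron in the next layer
    b = rng.random(16)          # one bias per neuron

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    a1 = sigmoid(W @ a0 + b)    # the whole layer in one line of matrix arithmetic
    print(a1.shape)             # (16,)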

So each time we adjust all of the weights in the entire network, we call that an epoch. But those are usually too big to do all at once, so what we'll do is break our calculations down into batches. We'll run the network with just some of the inputs and then do our corrections; that's a batch. And it'll take a certain number of batches to complete the equivalent of adjusting all the weights.
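
As a rough sketch of that vocabulary in code (stand-in data, and a stub where the forward pass and back propagation described above would go):

    training_data = list(range(600))  # pretend: 600 training examples
    batch_size = 100

    def train_on_batch(batch):
        pass  # stand-in for: forward pass, cost, back propagation, weight updates

    iterations = 0
    for epoch in range(3):  # each epoch works through all of the data
        for i in range(0, len(training_data), batch_size):
            train_on_batch(training_data[i:i + batch_size])
            iterations += 1  # one weight update per batch

    print(iterations)  # 18: 3 epochs x 6 batches per epoch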

And that number is the number of iterations. All right, that's the last slide. There's so much more I could talk about on this subject; there are entire courses devoted to it. I've looked at a bunch of those courses, and one of the things I really don't like about them is that they dive right into the deep end right away, assuming that you have a background in all of the mathematics, or that you're really comfortable with vector algebra and stuff.

And who is, really? Especially when I'm talking to an audience that consists mostly of educators or ethicists, or some combination, perhaps some managers, teachers, etc. I wanted to be clear about this part of the course, and again, not to go into a ridiculous amount of depth, but to draw out some of the complexities involved in neural network design, and also to show that an awful lot of what happens in the design of analytics and AI has nothing to do with the usual villains of biased AI.

You know, the tendency is always: well, we'll blame it on the data, right? Well, maybe it's the data. Maybe it's your interpretation of the data. Maybe it's the selection of entities for the input layer. Maybe it's the model that we're using, or the algorithms that we're using. Maybe it's the way we tweaked our bias. There's a whole bunch of stuff that can happen here that can impact how your AI or analytics system performs. So, when we talk about the decisions that are made, these are some of the important decisions that are made.

And the overall framework for how AI works also informs us as to what the decisions around the periphery of it will be, and what kind of impact they'll have on the overall ethics of the application of AI. So I hope I didn't make it more confusing for you. It can be a really complex subject, but once again, we're really working with simple concepts, right?

Input values, activation functions, biases, back propagation, iterations, how big the correction is, stuff like that. Really, they're just like knobs and dials on a dashboard that we can tweak to make our AI system do this rather than that. That's it for this video.

In the next video, I'm going to look at the large, large subject of data. So, thanks for now. See you again. I'm Stephen Downes.
