
00:03
Adam Stofsky
Hey Shannon, how are you?
00:04
Shannon Yavorsky
Hey Adam.
00:04
Adam Stofsky
Good.
00:05
Shannon Yavorsky
How's it going?
00:06
Adam Stofsky
Good, good. So I remember we started doing this a few years ago. We did a video on basic privacy law principles, and I thought it was really interesting. So now we're fast-forwarding a few years, and I want to ask you the same question about AI. Is there a set of principles around burgeoning AI law, or AI governance, like there is with privacy law? Can you map out what some of those principles might be?
00:36
Shannon Yavorsky
Yeah, so in the same way we had the OECD principles for privacy, there are OECD principles for AI.
00:47
Adam Stofsky
So the OECD is an international organization, but it's not a lawmaking body. So what are these? How important are these principles?
00:58
Shannon Yavorsky
So they're really key and foundational for companies around the world developing policies and programs, and for countries developing legislation. They often look to the OECD principles as a sort of international guiding light. So they tend to be very influential even if they don't have the force of law.
01:23
Adam Stofsky
So it's just like privacy. Right. I'm kind of setting this up as a softball, but if you want to understand basic AI law and what you're going to have to worry about, these provide a good framework, basically.
01:38
Shannon Yavorsky
Absolutely. Yeah. That is a good way of thinking about it.
01:41
Adam Stofsky
Okay, so what are they?
01:44
Shannon Yavorsky
So what are they? The first one I talk about is fairness. So AI...
01:50
Adam Stofsky
Hang on, can you list them first, before we start, so we can get organized? Is it possible to do that?
01:57
Shannon Yavorsky
So there are five core OECD AI principles that guide the responsible development and deployment of AI. They are: inclusive growth, sustainable development, and well-being, that's one; human-centered values and fairness; transparency and explainability; robustness, security, and safety; and accountability.
02:21
Adam Stofsky
Okay, so those five. Why don't we just do quick overviews in order? Can you explain what these are? So let's start with inclusive growth, sustainable development, and well-being. Those sound like a lot of different things to me. How does that all wrap together?
02:35
Shannon Yavorsky
So that first principle emphasizes that AI should benefit people and the planet. AI should promote inclusive growth and social progress, not just technological advancement or economic gain. This is really central to the discussion around AI regulation as well. And this principle underscores that AI should help solve real problems, like improving healthcare outcomes, advancing climate goals, and expanding access to education.
03:09
Adam Stofsky
For example, is this related to the concept of AI alignment that I've heard about?
03:16
Shannon Yavorsky
I think so, right. It's looking at AI and the success of AI not by what it can do, but by how it contributes to human and environmental well-being.
03:29
Adam Stofsky
Okay, so next is human-centered values and fairness.
03:33
Shannon Yavorsky
This is a really important principle about putting people at the center of AI: essentially, that AI systems respect human rights, the rule of law, and democratic values, and that they should be designed to avoid bias, prevent discrimination, and uphold fairness for all users. So AI should support human decision-making, not replace it, which is another feature that's very central to the emerging AI legislation, this concept of supporting human decision-making rather than entirely replacing it. So ensuring that humans remain in control, especially in high-impact areas. Think about hiring, credit, healthcare, access to justice, for example.
04:31
Adam Stofsky
So the first one then, inclusive growth and well-being, is about the idea that AI should ultimately be in the service of improving human flourishing, whereas the human-centered values principle is more that AI shouldn't replace human decision-making. So they're related but distinct.
04:50
Shannon Yavorsky
Yeah, that's right. The second one is more like what we think of as human-in-the-loop.
04:55
Adam Stofsky
Right, right. Okay, so I have so many questions as I go through this. I know we're doing an overview, but this opens up not just legal questions, but philosophical and political ones that are really interesting. But let's just move on to transparency and explainability. What does this mean in the context of AI governance?
05:16
Shannon Yavorsky
So AI systems should be transparent enough that people can understand how and why they produce certain results. That can in part mean defining what data was used to train a model, how decisions are made or influenced, and how users can question or appeal an AI outcome. Transparency is another through line in all of the emerging legislation, making it clear on two sides: one, that people are interacting with AI, if it's an artificial voice, for example, and on the other side, making it clear when someone is interacting with AI-generated content. That kind of transparency really builds trust, when people understand where content comes from or that something is AI and not human.
06:19
Adam Stofsky
So I noticed the language says that transparency means people should understand both when and how AI is being used. The when, like you said: you've got to know if you're talking to an AI or a human, and that should be disclosed to whoever is interacting with the AI. But also the how: the models and the technology need to be explained and explainable. Is that right?
06:41
Shannon Yavorsky
Yeah, that's right. That's right. It's a little bit of a catchphrase, but we say it's about turning it from a black box into a glass box.
06:49
Adam Stofsky
Okay. I'm sure this interacts with trade secrets, your secret sauce, your intellectual property. But how does that work? We're not saying that people have to disclose their code, right?
07:03
Shannon Yavorsky
Well, in some cases, you're going to have to disclose what training data you've used to train your model. And there are lots of questions there, because companies in some cases think of that as their proprietary information, their secret sauce about how they trained a particular model. So I feel like that's a question where practices are going to shake out over time: exactly how much people disclose about how they've built a model, for example.
07:35
Adam Stofsky
Right, very interesting. Okay, let's go to the fourth: robustness, security, and safety. This one seems a little more obvious to me, but explain. What does it mean?
07:46
Shannon Yavorsky
So AI systems have to work as intended. They have to be resilient to errors or attacks and not pose unreasonable risk to people or society. So that means testing and validating systems before deployment, monitoring them continuously, and building in fail-safes to prevent unintended harm. AI requires rigorous quality assurance. And again, some of the emerging laws are really focused on this idea of safety and ensuring that AI systems remain safe for humans.
08:24
Adam Stofsky
Okay, so now the fifth principle, which I'm assuming is kind of the one that ties it all together: accountability. So tell us a bit about what that means.
08:33
Shannon Yavorsky
So accountability is a really important principle. Those who design, develop, and deploy AI have to remain responsible for its outcomes. And that means that in organizations there need to be clear lines of responsibility, governance frameworks for AI risk, and mechanisms for audit and redress when things go wrong. So accountability ensures that AI systems are never ownerless. Right? There's always a person, a team, or an institution answerable for their behavior and impact. It's the principle that builds real-world oversight into a program, that idea of accountability.
09:23
Adam Stofsky
Okay, so I'm going to now quickly recap these as best I can. I'm not doing this from memory. I try to do these from memory sometimes; this time I'm actually looking at them. But the five OECD AI principles, which provide a kind of framework for how we think about AI law and regulation, are: inclusive growth, sustainable development, and well-being, which is the principle that AI should exist for human benefit. Right? Tell me if I'm getting any of these summaries wrong.
09:51
Shannon Yavorsky
That's right, you got it.
09:53
Adam Stofsky
Human-centered values and fairness is that the systems essentially have to, I guess, put humans first, and respect human rights and existing laws that protect humans. Transparency is that we should know how AI systems work, how they're being used, and if indeed you're interacting with an AI system at all. Next is robustness, security, and safety, which is that AI systems should be... now I'm tripping up on my summary here. Hang on, recap this one for me: robustness, security, and safety.
10:40
Shannon Yavorsky
Robustness, security and safety. Essentially, that AI systems should be robust, secure and safe throughout their whole life cycle so that, you know, they function appropriately and don't pose unreasonable risks to safety or security.
10:58
Adam Stofsky
That's what it was: that AI systems do not pose unreasonable risks to people. And finally, accountability, which is what it sounds like, right? That the humans and organizations that own AI systems should be accountable for their proper functioning and ultimately what they do. Okay, I'm just going to ask you one last question to wrap this up, because we're going to go into much more detail down the road on all these different areas. But it sounds to me like every one of these five principles just opens the door for a million other legal questions that need to be answered. That's why I've been tripping up as I go through this video: there's so much here. So this isn't the law, right?
11:39
Adam Stofsky
This is essentially a set of guidelines that pave the way for what are probably going to be many hundreds of laws, right?
11:46
Shannon Yavorsky
That's exactly right. And we're already starting to see that in the US: this year alone, there have been over a thousand laws proposed at the state level on AI. And a lot of those laws are directly responsive to these core principles: ensuring that AI systems are safe, ensuring that they're fair, ensuring that they're secure. So while the OECD principles don't have the force of law, they certainly inform a lot of the developing legal landscape.
12:20
Adam Stofsky
Great. Well, this is incredibly interesting. Thank you so much, Shannon.
12:23
Shannon Yavorsky
Thanks for having me.
