
00:00
Adam Stofsky
Hi, this is Adam Stofsky from Briefly, and I'm here with Shannon Yavorsky from Orrick. Hey Shannon, how are you today?
00:10
Shannon Yavorsky
Hey Adam, Good, how's it going?
00:12
Adam Stofsky
Yeah, good. Today we're going to talk about AI law and the laws that companies need to follow with respect to their use of AI tools. So Shannon, is AI law even a thing yet? When we say "AI law," what do we mean?
00:30
Shannon Yavorsky
Yeah, when people hear "AI law" they often think of some single global rulebook on AI. But the reality is more complex. AI law is a collection of existing legal doctrines, like privacy, consumer protection, intellectual property and discrimination, combined with an emerging class of state AI laws, global legislation on AI like the EU AI Act, and a lot of guidance that's coming out of different places, like governmental agencies, about the use of AI. Alongside all of that, there are these emerging self-regulatory frameworks: the OECD has some AI principles, ISO has the 42001 standard, and there's the NIST AI Risk Management Framework. So when we're talking about AI law, we're not talking about a single set of AI laws.
01:41
Shannon Yavorsky
It's actually existing legal doctrines, emerging AI laws, and then this set of guidelines and guidance that's emerging. Oh, and in addition, case law. So I guess there's a fourth bucket.
01:56
Adam Stofsky
Right, so just the first one, just for an example, very quickly: this might be something like, I use some kind of HR support tool, a talent acquisition tool, to screen candidates I might interview for a job. And that tool screens out all the people of a certain religion or race or gender. I might be on the hook for violations not of some AI law, but of traditional employment discrimination law, like Title VII in the US. That's a clear example of that sort of thing, right?
02:29
Shannon Yavorsky
Exactly, exactly. And the EEOC has issued guidelines and said pretty clearly, we're going to regulate AI as it falls within our jurisdictional authority. So just because there's no AI-specific law in a particular area doesn't mean existing laws aren't going to apply.
02:49
Adam Stofsky
Okay, so for this video let's focus more on the actual emerging AI laws, so that folks know what they are and what we're dealing with. Why don't you lay out the major laws that are either in force or coming into force?
03:03
Shannon Yavorsky
So the first one that people think about is the EU AI Act. It's a regulation in Europe, and a regulation means that it has direct effect, so it's the same law across all of the European member states. It classifies AI systems by risk level: certain things are prohibited, like social scoring systems; then there's a bucket of high-risk AI; and then there's general-purpose AI. Some tools being used within companies fall within that high-risk bucket. Hiring algorithms, like your earlier example, are one of those; medical devices are another that could fall within the remit of high-risk AI. So you have the EU AI Act, and then in the US there's no single federal AI law yet.
03:55
Shannon Yavorsky
I mean, I get asked the question all the time: when are we going to have a federal AI law? I don't know. It's a bipartisan issue that AI needs to be regulated, but I think we haven't aligned around the right way to do that. And so there are just a lot of emerging state AI laws. In this year alone, a thousand laws have been proposed across the US states, D.C., Puerto Rico and Guam. That's a lot of laws, and there's just a ton of noise around these different emerging state statutes.
04:35
Shannon Yavorsky
Some of them are similar to the EU AI Act, but others are very specific, around things like deepfakes and political ads and the use of AI in those contexts. So the US has this fragmented landscape, and then there are laws emerging around the world. China and other countries have introduced legislation on algorithmic transparency and, similarly, deepfake labeling. And the patchwork of laws means that companies have to track multiple, and sometimes conflicting, requirements around the globe, because a lot of these laws have extraterritorial effect.
05:18
Adam Stofsky
So yeah, I did want to ask you about that. The EU AI Act: why does a US company need to think about that? Or do they?
05:27
Shannon Yavorsky
Yeah, so kind of like the GDPR, which applies to companies even outside of Europe, the EU AI Act will apply to organizations that deploy tools into Europe. So international companies based outside of Europe need to be aware of the EU AI Act, as well as those within the EU.
05:51
Adam Stofsky
Okay, great. I have one more question about these laws in general. I know we could probably talk about this for a long time, but just to get a very broad picture. I know it's a big patchwork, and there have been a thousand proposals, so the answer might not be definitive. But are most of these laws focused on the makers of AI tools? Like, here are the limits on what you're allowed to do when you make an AI tool, you're not allowed to make Skynet. Or is it about the use of AI, like the hiring example, where you're not allowed to use AI to do certain things as a user?
06:31
Adam Stofsky
Or is it a mix of both: limitations on what creators of AI tools can do versus what users of AI tools can do?
06:38
Shannon Yavorsky
Yeah, it's such a good question. Despite different approaches to legislation around the world, AI laws in Europe, the US and China really share common goals, and the themes that emerge across all of this legislation are pretty similar. They fall into a couple of buckets. Transparency and explainability: ensuring people understand how AI makes decisions, ensuring that people know when they're interacting with AI and not a human, and ensuring that people know when content is AI-generated, not made by a human. Accountability: making clear who's responsible when AI causes harm, and accountability in the sense of keeping track of how decisions were made. Fairness and discrimination: a lot of these laws are looking at preventing biased outcomes in high-risk areas like hiring and lending; policing is another example.
07:45
Shannon Yavorsky
And then there's an emerging class of laws around safety and security, requiring testing, documentation and safeguards to prevent accidents or misuse. That really goes to some of the paranoid fears of AI taking over: okay, we need some safety laws, and what should they tackle? So the principles are generally designed to protect individuals and prevent harm.
08:16
Adam Stofsky
Okay, great, very interesting. I'm going to ask you just one more question now. Can you mention the voluntary frameworks and standards, since folks may come across these in their everyday work? Can you talk about what these are and how they differ from a law?
08:33
Shannon Yavorsky
So they don't have the force of law. Let's take, for example, the NIST AI Risk Management Framework. It's really a framework designed to help companies build a compliance program, and it talks about govern, map, measure and manage: mapping what AI tools are in use, measuring the risks, and then managing those risks. So it's really a framework to help companies think about these risks.
09:09
Adam Stofsky
What are some of the other standards you see?
09:11
Shannon Yavorsky
There's ISO 42001, which is an AI governance standard that a lot of companies are looking to as well; it's more international. And then there are the OECD principles, which, from my perspective, are pretty broad, overarching principles that organizations should really take into consideration in building an AI compliance framework.
09:38
Adam Stofsky
Right. Well, Shannon, thank you for this very concise overview of what's going on with AI law. Really appreciate it.
09:45
Shannon Yavorsky
Thanks, Adam.
<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1154468495?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Shannon Yavorsky - AI Laws Overview"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>
