
00:00
Adam Stofsky
Hi Shannon. How are you today?
00:04
Shannon Yavorsky
Hey, I'm good. How's it going?
00:07
Adam Stofsky
Yeah, good. Today I want to talk to you about a topic called high risk AI. So this is like an actual legal term, right? High risk AI. Can you explain, first of all, what it is and where it comes from?
00:20
Shannon Yavorsky
Yeah, it's a really good question. So under the EU AI Act, which is sort of comprehensive AI legislation in Europe, not all AI is treated the same. The law uses a risk-based approach, and the category of high risk is where most of the obligations sit. So high risk isn't really about whether AI is good or bad, it's whether it's used in a context that can impact people's rights, safety or opportunities. So think about systems where errors could have real-world consequences, like hiring decisions, education, healthcare or access to essential services. So common high risk use cases that companies deal with today include AI systems used in hiring, resume screening and employee evaluation, and we're seeing a lot of that; education tools that grade exams; healthcare, so diagnostic AI systems that support medical decisions; and critical infrastructure is another big one.
01:30
Shannon Yavorsky
So AI that manages energy, water or transportation. So when you hear the term high risk AI, think less about whether the AI is inherently dangerous and more about the context in which it's used. So if AI impacts someone's ability to get a job, access education, obtain credit or receive medical care, chances are you're in high risk AI territory.
01:57
Adam Stofsky
So we're not talking about, I think people think about AI risk as, you know, the risk of AI becoming Skynet and, you know, taking over. And we're not talking about that here; we're talking about sort of smaller risks, but risks that can be meaningful and impact people's lives.
02:14
Shannon Yavorsky
Yeah, that's right. Anything that can sort of impact, like, a consumer or someone's ability to, you know, get education or get credit is something that's likely going to fall in that bucket of high risk AI as defined under the EU AI Act and similar laws that take that approach to AI based on sort of prohibited AI, high risk AI and so on.
02:39
Adam Stofsky
And just for context, what are the other categories aside from high risk AI? What are the other AI categories under that law?
02:46
Shannon Yavorsky
So there's prohibited AI, and in that category there are things like a social scoring system like they have in China; that's prohibited under the EU AI Act. And then there are things like minimal risk AI, which is like recommender systems or spam filters that just don't present a lot of risk to individuals.
03:11
Adam Stofsky
Okay. So then, okay, that's interesting. So with the high risk AI category, it sounds like a lot of really important and useful tools sit in this category. You just listed some interesting contexts: hiring, you know, grading exams, things like that. Is there an actual comprehensive list in the law, or does it more name a set of criteria that many things could fall under? In other words, do we know exactly what is high risk, or do we have to kind of make those decisions as we go?
03:42
Shannon Yavorsky
So that's a great question. Some of the laws include examples, but really it's more about whether the AI potentially impacts an individual. So it means you need to assess whether the tool that you're using, even if it's not listed in a schedule somewhere to the legislation, would fall within that bucket of high risk AI.
04:06
Adam Stofsky
So what are the consequences? Like, why does it matter that something is designated as high risk AI? What are the potential consequences for a company or an employee of a company?
04:17
Shannon Yavorsky
So for companies it means if you're developing or deploying high risk AI, you'll need to meet a higher bar for compliance than sort of minimal risk AI. That includes requirements like risk assessments, documentation and record keeping, transparency measures, human oversight, and maybe post-market monitoring. So really important to understand what bucket your AI falls in, because all of your obligations stem from that designation.
04:51
Adam Stofsky
And what about for, like, an individual? Well, actually, no, let me ask the question a different way. So are you talking about the people who actually make the AI tools? Let's say I have a really cool new piece of software that grades exams; do I need to worry about that as a tool maker? Or is it all the universities or high schools that are going to be using the tool that have to worry about high risk AI, or is it both?
05:18
Shannon Yavorsky
It's really important to understand whether you're a developer or a deployer. It's an incredibly consequential designation. If you're a developer of AI, meaning you've developed the system, then you have a set of obligations that arise under the EU AI Act and similar legislation. And if you're a deployer, meaning you're a company that's using one of these service providers, you have a different set of obligations. So it's really important to understand what designation you have, so whether you're a developer or deployer, and then whether it's prohibited AI, high risk AI, minimal risk AI. So doing that analysis is really critical for understanding your compliance obligations.
06:05
Adam Stofsky
What do employees of companies need to know about this? Let's say you're not the general counsel or the CFO or the head of compliance. Why do just workers at companies need to know about this?
06:20
Shannon Yavorsky
Yeah, I think that employees at a company need to be aware of any rules around the tools that they're using. So, for example, someone who is in HR who's using an AI system for hiring or resume screening needs to understand what the potential risks are associated with using that tool and what the company's rules are around its use, and then understand when they might need to report something to security, for example, or to the compliance team. And, you know, what else they need to do, like compliance steps they might need to take in terms of reviewing the outcome, being the human in the loop, as another example.
07:08
Adam Stofsky
And how do companies generally handle this? I mean, do they have, like, a list of tools they're allowed to use and rules around those tools, generally? It feels a bit like the Wild West now. Like, I'm not an HR professional, except to the extent that I run a company, but I can imagine there are tons of interesting tools coming out. How do companies generally handle this issue of high risk AI?
07:34
Shannon Yavorsky
So in that example, you know, oftentimes companies will have a generative AI use policy that speaks broadly to the use of a variety of tools. Sometimes the company will include in a schedule a list of approved tools, which may include the HR tool. And then in other scenarios, if there's a particular tool that has lots of different rules associated with it, they'll have a schedule to the policy or a wholly separate policy that speaks to the use of that particular tool, which may, you know, outline record keeping obligations, and it may outline any monitoring or audits that need to happen in relation to that particular tool.
08:17
Adam Stofsky
Okay, right. So here's what I think, and you can tell me if I'm articulating this well. I think in this world we're entering into, where there's just so much interesting technology coming out, if you are that, like, HR professional or that, you know, teaching assistant or whoever is going to be using these tools, I think to understand just a little bit how to identify when what you're using might be considered high risk under the law, and to be able to kind of weigh some of the pros and cons and articulate that to your managers or bosses, I think that's really valuable and makes for stronger companies, if many people at all levels can do that. Does that, like, scare you a bit as a lawyer, or do you think that's the right way of thinking about it?
09:06
Shannon Yavorsky
I think that's right. I think everybody at an organization needs to be armed with some essential knowledge around the risk associated with the use of the tools, so that people can issue spot and say, you know, we've started using this tool, but actually maybe there are biased outcomes here, or I'm starting to see, you know, interesting hallucinations in the output that need to be reported back. But you have to be equipped with that, you know, level of education and knowledge of the vocabulary in order to report it back to, you know, an AI committee, or have reporting lines so that the issue is managed and risk is managed within the organization.
09:49
Adam Stofsky
It could be the opposite too, right? If you find a great tool and, you know, people don't want to use it because they're worried about the risk, but you think, actually, this could be really useful and maybe the risk isn't that great, that message is valuable as well.
10:02
Shannon Yavorsky
Yeah, absolutely. And the way I see that coming up is with the AI note-taking tools, which are really prevalent. They summarize meetings, they include action items coming out of a meeting, they include minutes in some cases. Super useful for companies, and they can introduce tremendous efficiencies. At the same time, lots of potential risks. If there's a lawyer on the call, there could be a waiver of legal privilege, which is problematic if the recording is stored somewhere where many different people can access it who weren't meant to be privy to the meeting or the notes. And, you know, it's AI, so there could be, you know, inaccuracies, and then that's part of the permanent record. And then, as you know, to the extent you have a record of something, it's presumptively discoverable in the context of litigation. So lots of efficiencies.
10:57
Shannon Yavorsky
It's sort of lots of risks and lots of opportunities, and you have to find the balance that's right for your company.
11:05
Adam Stofsky
Great. Well, Shannon, thanks so much for the summary of high risk AI.
11:08
Shannon Yavorsky
Thanks for having me.
