
00:00
Adam Stofsky
Hello. Today I'm here with Julia Apostle, a partner at Orrick and an expert in European tech regulation. Hey Julia, how are you this morning?
00:18
Julia Apostle
Hey Adam, thanks for having me. Good to be here.
00:20
Adam Stofsky
I guess it's afternoon for you, right? Because you're over there in Paris.
00:24
Julia Apostle
I am, yes. It's a nice rainy afternoon in Paris.
00:27
Adam Stofsky
Right. Well, let's talk today about a pretty critical, basic question: what is the definition of AI? What is an AI system under European law, under the EU AI Act? Can you just take a shot at starting to answer that question?
00:42
Julia Apostle
Yeah, absolutely. And it's a really good place to start, and a lot of people don't start with this. But the purpose of the AI Act is actually not to regulate every single AI system that has ever been developed or placed on the market in Europe. There is actually a threshold: an AI system has to meet the criteria set in the definition in the law. And of course this is a legal definition, not a technical definition, even if it does borrow technical concepts and so forth. So I'm not going to be explaining what an AI system is from an engineering perspective. The first thing that's important is that the definition in the EU law draws heavily from the definition of AI system adopted by the OECD.
01:29
Julia Apostle
That's important from an international perspective, because a number of countries have signed up to the OECD principles, have endorsed this definition of AI system, and will be looking to use this definition in their own laws. So hopefully we will have a consistent global approach within AI legislation to what an AI system is, legally. The definition has a number of elements. Do you want me to read the full definition, Adam, before I go into them? Because, you know, it's a mouthful, but I'm happy to do it.
02:02
Adam Stofsky
Yeah, let's do it. Let's just, let's just start that, let's start with the source.
02:05
Julia Apostle
Okay. So the law says that an AI system means a machine-based system that is designed to operate with varying levels of autonomy, and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. That's it. And so the elements, I was thinking.
02:40
Adam Stofsky
That's a lot, right?
02:41
Julia Apostle
Yeah, that's a lot. You're still digesting it. It's a very run-on sentence. It's probably one of the.
02:46
Adam Stofsky
Can you, yeah, can you break it down for us? That would be helpful.
02:49
Julia Apostle
So the first element, and this is the easiest one to satisfy, is that it has to be a machine-based system. It's a piece of software, it runs on a machine, and that's a bit of a no-brainer, so most people will check the box on that. The next is that it has to have some degree of autonomy. What the definition says is varying levels of autonomy, which could in theory mean no autonomy, right? Because it's a varying level, and no autonomy is a level of autonomy. But in any event it's variable, so it's hard to know exactly what that means: some degree of autonomy from human involvement. Next, it has to. No, sorry, I need to correct myself, because this is very important: it may exhibit adaptiveness after deployment. So that's key.
03:41
Julia Apostle
So whether or not a system can learn and change is optional. And a lot of the new AI systems are all about learning, machine learning, but that is not a prerequisite to being covered. Next, the AI system has to be designed to operate according to specified objectives. Those are goals or objectives that are set during the development process. According to guidance published by the European Commission a few months ago, which is really integral to understanding and applying this definition in practice, the objectives have to be internal to the system, and they need to be distinguished from the intended purpose of the system.
04:35
Julia Apostle
The intended purpose is what the company that developed the system puts in its marketing materials, what the system is supposed to do. That's not what the objective is. So the purpose is externally oriented and the objective is internally oriented vis-à-vis the system. Okay, next, and most important: the AI system must be capable of inferring outputs. This is one where there are no varying degrees, it's not optional, it's definitely mandatory. And what it means is not just the ability to generate outputs the way a generative AI system does, inferring an output from the input you put into an API; it actually also refers to using inference at the development stage of the AI system.
05:34
Julia Apostle
And the way the Commission phrases this in the guidance is that there are specific AI system development techniques that rely on inference. One of the key ones is machine learning. The machine learning techniques cited in the guidance include supervised learning, unsupervised learning, self-supervised learning, and reinforcement learning. Those are all typical techniques, from their perspective, that enable a system to infer how to generate outputs during the building phase. Okay. Another category is logic- and knowledge-based approaches. Those also enable inference at the build stage.
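As a very rough, hypothetical illustration of what "inferring how to generate outputs" at the build stage means (this is my own stdlib-only Python sketch, not from the discussion or the Commission's guidance): a system that fits a rule to training examples is inferring that rule from data rather than having it hand-coded by a programmer.

```python
# Illustrative sketch: a system "infers" a rule from example data at build
# time, then uses that inferred rule to generate outputs for new inputs.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, learned from example data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Build stage": the rule (a, b) is inferred from training examples,
# not written down by a developer.
a, b = fit_line([1, 2, 3, 4], [2, 4, 6, 8])

def predict(x):
    # "Deployment stage": outputs are generated from the inferred rule.
    return a * x + b

print(predict(5))  # the learned rule generalizes to an unseen input: 10.0
```

A fixed lookup table or hand-written `if/else` rule, by contrast, involves no inference at the build stage, which is one intuition behind why simple conventional software falls outside the definition.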
06:23
Adam Stofsky
So, sorry, just to clarify there on this inference element.
06:29
Julia Apostle
Yeah.
06:30
Adam Stofsky
The major categories are like machine learning and kind of logic-based systems.
06:34
Julia Apostle
That's exactly correct.
06:36
Adam Stofsky
Not to pretend to know exactly what those mean, but I think I have a layman's understanding.
06:40
Julia Apostle
Yeah. And from a very practical perspective, the way that I like to work, not just in relation to the AI Act but with other very technical pieces of legislation, is hand in hand with the engineers. I'm not an engineer, and therefore I will want to work closely with an engineer and say: explain how this was built. Get into the nitty-gritty, and don't just rely on "well, that sounds like an expert-based system to me" or "that sounds like self-supervised learning at scale." So, you know.
07:16
Julia Apostle
Yes, you're right to flag the heavily technical aspect of this. From a compliance perspective, I would recommend that if you're tasked with bucketing these systems, you work with a specialist, just like you wouldn't ask an engineer to provide legal guidance on how the Act should apply.
07:35
Adam Stofsky
Right, right. Okay, so let's move on. Just to recap where we are: we've gone through machine-based, autonomy, adaptiveness, objectives, and then it has to have the capacity to infer.
07:53
Julia Apostle
Yeah.
07:53
Adam Stofsky
What's the sixth?
07:56
Julia Apostle
The sixth is that the outputs, whatever is inferred, have to be capable of influencing physical or virtual environments. And that seems pretty broad to me, and even the guidance doesn't necessarily make me think it's any less broad. What the Commission specifies is that AI systems can generate more nuanced outputs than other systems, and those can have an impact on digital environments. That's like a gaming example, right? You can have a system in a game that influences how the game runs. Or you can have a system that, once it's operating, influences physical environments, literally like a robot that goes and does something in the environment. And it seems like most AI systems are developed to do something like that.
09:04
Adam Stofsky
Well, otherwise it would just sort of be virtual navel-gazing, essentially.
09:10
Julia Apostle
Yeah. Or a very simple software system, right? Which gets into the discussion, and we can get there after the seventh criterion, of what is not in scope.
09:19
Adam Stofsky
Okay, so what is that seventh? I wrote it down here, but I can't read my handwriting because I was writing so fast. What's the seventh?
09:27
Julia Apostle
Well, to be honest, I kind of already condensed it: interacting with the physical environment is the seventh. So it's influencing and interacting with the environment. In some ways the guidance breaks it down into two, but maybe you could treat it as one.
09:45
Adam Stofsky
Oh, I see. So the sixth is it has to have an output, and the seventh is that output needs to kind of interact with or influence the virtual or physical environment. That's interesting.
09:56
Julia Apostle
Yeah.
09:57
Adam Stofsky
Okay, let's leave this here. This is a lot to grapple with, but I think the message behind this video is that this is a broad definition of what an AI system is, and companies are going to be grappling with it for a while. So let's just go. Oh, yeah, go on, Julia.
10:15
Julia Apostle
Well, what I was going to say is, yeah, you'll be grappling with it for a while. Although then we get to the next part: okay, you think you have an AI system, but is it regulated under a risk category? Because of course the AI Act is a risk-based law, and that's another way to fall out of the law. So you could say, I'm not actually sure on all of these different elements. But even if, for argument's sake, it is an AI system, because, you know, I can't come to terms with this influencing-the-virtual-environment thing, it may still fall out.
10:52
Adam Stofsky
Right. For other reasons.
10:53
Julia Apostle
For other reasons.
10:54
Adam Stofsky
Right, right. But in terms of the basic definition, let me see if I can get this. I'm cheating, because usually I like to try to remember these things, but now I'm looking at my scribbled notes. Okay: it's got to be machine-based, that's pretty obvious. It has to operate with some degree of autonomy. It may exhibit adaptiveness, but doesn't have to. It has to have some objective. It needs to infer, or create inferences, and then have an output that influences the virtual or physical environment. Did I get it?
11:35
Julia Apostle
Yeah. And interact with the environment.
11:39
Adam Stofsky
And interact with the environment, yeah.
11:42
Julia Apostle
So they're active, not passive, as you yourself said.
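The elements the speakers just recapped could be sketched as a simple checklist. This is a hypothetical illustration of the conversation only (the field names are invented, the sixth and seventh elements are merged, and of course this is not legal advice or the Commission's methodology):

```python
# Hypothetical checklist mirroring the definition as discussed above.
# Field names are invented for illustration.

REQUIRED = {
    "machine_based",           # runs on a machine / is software
    "some_autonomy",           # varying levels of autonomy from human involvement
    "has_objectives",          # explicit or implicit objectives, set during development
    "infers_outputs",          # capable of inferring how to generate outputs
    "influences_environment",  # outputs influence/interact with physical or virtual environments
}

OPTIONAL = {"adaptiveness_after_deployment"}  # "may exhibit" -- not a prerequisite

def meets_definition(properties):
    """True if every required element is present; optional ones don't matter."""
    return REQUIRED <= set(properties)

chatbot = {"machine_based", "some_autonomy", "has_objectives",
           "infers_outputs", "influences_environment"}
spreadsheet_macro = {"machine_based", "has_objectives"}

print(meets_definition(chatbot))            # True
print(meets_definition(spreadsheet_macro))  # False
```

As Julia notes, even a system that ticks every box here may still fall outside the Act's obligations for other reasons, such as not landing in a regulated risk category.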
11:45
Adam Stofsky
Right, okay. Let's leave it at that. Julia, super interesting, really helpful. Thank you so much.
11:51
Julia Apostle
My pleasure. Thank you.

