
00:00
Adam Stofsky
Hi, this is Adam Stofsky from Briefly and I'm here with Shannon Yavorsky from Orrick. Hey Shannon, how are you today?
00:09
Shannon Yavorsky
Hey Adam, thanks for having me.
00:12
Adam Stofsky
Yeah, so today we're going to talk about a very interesting topic, which is AI agents and risks and laws around AI agents. Let me start with what might be an obvious question, but what is the definition of an AI agent? Is there an actual legal definition of what makes an AI tool agentic or not?
00:32
Shannon Yavorsky
Yeah, great question. So these are systems that don't just recommend, or transform text into images, or generate new material in the way that, you know, we started using ChatGPT to make sonnets. I used it to make sonnets about data processing agreements. AI agents, they can negotiate, they can transact, and sometimes they can make decisions with little or no human oversight. So it's that ability to be autonomous that distinguishes them from regular AI or LLMs.
01:10
Adam Stofsky
So it really goes back to that fundamental legal concept of an agent. Right, so in law, an agent is just someone who legally acts on behalf of someone else. Right. And the law professors are probably turning over as I say this, but, like, that's basically what it is.
01:29
Shannon Yavorsky
Yeah, that's right. Like, what does the agent have power to do? What is the agent empowered to do? That's exactly right. And when it's an autonomous agent, that raises profound legal questions. You know, we are going to look at accountability, contracts, torts, privacy, and intellectual property. So lots of new and emerging issues in relation to this technology.
01:56
Adam Stofsky
Okay, so we're going to get into all those separate issues, or separate kind of categories, in a minute. But let me just start with a general question: in general, who is liable when an AI agent acts autonomously? Is it the actual user, the person who set the agent in motion? Is it the tool creator who developed the code that created the agent? The underlying creator of the model? Is it someone else? Like, is there any general way of thinking about this yet?
02:32
Shannon Yavorsky
Yeah, the allocation of responsibility is really an emerging legal question. AI agents don't have legal personhood, so they can't be fined, or get penalties, or be sued. And when harm occurs, liability is going to fall on a human actor: the developer, the deployer, the user. But identifying which party is at fault isn't always straightforward. It's similar to asking who is at fault in a breach of contract scenario, but especially hard when machine learning systems evolve in ways that their creators didn't explicitly program. So traditional legal frameworks, like negligence and product liability, are being stretched in ways that existing legal structures didn't necessarily contemplate. And AI agents are acting independently: signing contracts, placing trades, or interacting online without a human pressing a button each time.
03:37
Shannon Yavorsky
If an agent exceeds its instructions, can it legally bind the person or the company behind it? This is an active area of legal debate.
03:47
Adam Stofsky
Okay, so for this video, I'd really like to turn viewers into issue spotters, to make you aware of where this could become a concern, because we don't have answers yet for exactly how this is going to play out in all these contexts; it's all emerging. So let's start with contracts. This is the one that kind of freaks me out the most, as a business owner who signs lots of contracts. Can AI agents enter into binding agreements on behalf of, you know, an individual person or a company? Is that a legal possibility?
04:21
Shannon Yavorsky
So this also goes back to law school. To create a binding agreement, you need offer, acceptance, and a meeting of the minds between legal persons. And an AI agent isn't a legal person, so the courts don't treat it as an independent contracting party. Instead, they see the AI as a tool of the human or the company that set it into motion. So when an AI agent clicks "buy," sends an email acceptance, or books travel, the question is not whether the AI intended a contract; it's whether the human behind it authorized that action. And that has real consequences for businesses. If you deploy an agentic AI system to negotiate prices or execute trades, then the contracts can be binding as long as the AI was acting within its authorization.
05:14
Adam Stofsky
So I suppose the answer, the sort of scary answer, is yes, these agents can enter into contracts. But then the limitation is that there needs to be some human motive force and intent behind that binding.
05:26
Shannon Yavorsky
Yeah, that's right. When properly authorized, they can create binding agreements. And the law already has a place for that. It treats the systems as a tool that's deployed by a person or a company.
05:42
Adam Stofsky
Okay, let's move on and talk about tort liability, injury to someone else. I can imagine this playing out in all kinds of ways. I don't know, an AI agent publishes a bunch of lies about someone, so there's a defamation claim. Maybe an agent interferes with someone's code and messes up their product; that's a products liability issue. We've talked about autonomous vehicles, right? So the autonomous vehicle makes a mistake and crashes into someone. You can imagine a million different scenarios where AI can create damage. How does tort liability play out as things stand now?
06:23
Shannon Yavorsky
So we're going back to law school again. Liability usually turns on duty, breach, causation, and harm. Right. And AI agents aren't legal persons, so you can't just sue the agent and say it was the agent's fault. Instead, the focus shifts to the companies and humans that are designing, deploying, or operating the systems. And the challenge is figuring out which actor bears responsibility when the AI behaves unpredictably or learns new patterns that cause harm. If a company, for example, fails to train or update an AI system, or to monitor it, in a way that causes harm, it can be liable for negligence. And the tricky part is defining what reasonable care looks like when the technology is constantly evolving.
07:12
Shannon Yavorsky
So the courts are still working through some of these issues: what happens with adaptive algorithms, and what happens when an AI agent makes a decision that causes harm. But the bottom line is that AI agents don't escape tort law. They may be autonomous, but humans remain accountable, whether as designers, deployers, or operators.
07:43
Adam Stofsky
I'm assuming this will all be subject to the same kinds of tort defenses that we use in regular products liability law, things like assumption of the risk or contributory or comparative negligence. I'm just thinking about all kinds of possibilities. I don't know, you take your Waymo. I've never been in a Waymo, so I don't really know how this works. But, you know, if your Waymo is driving down, say, Market Street in San Francisco and it crashes, I'm assuming the people who coded these machines badly might be on the hook. But if you decide to, like, drive it into the ocean, or on a mountain bike trail, then you might have assumed the risk. I'm assuming these tort doctrines will layer over this potentially pretty cleanly. What do you think, Shannon?
08:29
Shannon Yavorsky
The legal challenge here, I think, is whether these doctrines remain flexible enough to handle technology that can learn, adapt, and sometimes even, you know, be surprising. But I think that the existing legislative landscape here is going to apply.
08:49
Adam Stofsky
Right, interesting. And the common law landscape.
08:54
Shannon Yavorsky
That's right.
08:55
Adam Stofsky
All right, so let's move on then to data privacy. So talk about the interaction between data privacy law and agents. Can an agent violate the GDPR, essentially, on your behalf? Or some other privacy law?
09:11
Shannon Yavorsky
There's so much to unpack in relation to privacy. I think, as everybody knows, in order to operate, an AI agent really thrives on data. The more data it can collect, the better the AI agent is going to perform, because it develops the way it acts around context. And context comes from, for example, your calendar and all of your emails, and beyond that, it may be using your credit card and all of your passport data in order to, let's say, book travel for you. And so the more data it consumes, the more useful it becomes, but also the more privacy risk it carries in those circumstances. And there's lots of issues that can arise from that kind of scope creep.
10:06
Shannon Yavorsky
Maybe the agent collected data for one task and then decides it can be repurposed for another task. Or secondary use, where developers or third parties are mining that data for advertising or analytics. And unintended exposure, if the agent connects multiple data sources and infers sensitive personal information, like health status or political beliefs, even if you never provided those details. So there's lots that's really emerging in relation to privacy laws. They weren't drafted with autonomous AI in mind, but they still apply.
10:45
Adam Stofsky
Interesting. All right, finally, let's talk about IP, intellectual property, particularly copyright. How do we analyze the ownership of works or, you know, inventions created by AI agents, by kind of autonomous systems, as opposed to just, I put some words into an LLM and it creates an image for me?
11:09
Shannon Yavorsky
Yeah, let's start with the question of who owns the output. Copyright law in the US, and in most jurisdictions, requires human authorship. And that means if an AI agent autonomously creates a piece of music, a novel, a design, without meaningful human input, it's unlikely to be eligible for registered copyright protection. And for businesses, that creates a risk that the output isn't protected and competitors can go out and copy it. So some companies try to address this by ensuring that there's meaningful and substantial human contribution, so that parts of the process are protected by copyright. So there's, I think, a lot of issues emerging around copyright law: how output is going to be treated and the circumstances in which it can be protected.
12:17
Adam Stofsky
It has me a bit tangled, thinking about the difference between a work created by an agent versus a work just created by a model under the direction of a human. Is the idea that I might tell my agent to, I don't know, figure out something from our work? We make a lot of icons at Briefly, right? So I might say, hey, agent, analyze this video, tell me what icons we need, and then generate them for me in this style, as opposed to, like, create five icons based on these five ideas. I don't know. Is that kind of the distinction between a direct use of a large language model and an agent creating IP?
13:03
Shannon Yavorsky
Yeah, that's a good distinction. I think the issues could be a little bit different. If I think about the AI agent, you could give it rules to go out and use these sources to create the logo or the icon, and then deploy it in this context. You can see there's more risk in that kind of scenario, because you're not involved at every step, because of the level of autonomy at which the AI agent is acting. So I think there are big questions in both scenarios, but they're slightly different questions.
13:43
Adam Stofsky
Great. Well, Shannon, this has been great. This is a very interesting area. Thank you so much for talking about it.
13:51
Shannon Yavorsky
Interesting chat. Thanks, Adam.
13:53
Adam Stofsky
Take care.
<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1154468218?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Shannon Yavorsky - AI Agents Overview"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>


