
00:00
Adam Stofsky
Hello. Today, I have with me David Tollen of the Tech Contracts Academy. Hey, David, how are you?
00:16
David Tollen
Good, Adam, how are you?
00:17
Adam Stofsky
Good. Today we're going to start talking about AI and contracts. I know people might not understand how those things connect, but let's just start with a general question. AI tools, as we all know, can do really powerful stuff, but they also create risks through things like intellectual property, data privacy, or security, all kinds of stuff. The question is, putting aside contracts for a minute: who bears the risk? Who takes the blame when something goes wrong with an AI tool, when it harms someone in some way?
00:54
David Tollen
Yeah. So you mean the AI tool produces an output that doesn't do what it's supposed to, and as a result, someone gets injured, like the output is software that works terribly? Something along those lines, yeah.
01:07
Adam Stofsky
That's a good example.
01:08
David Tollen
It's a bit of the Wild West. We have decades, arguably centuries, of case law and statutes and the Uniform Commercial Code governing who's responsible, who's on the hook, when normal products malfunction and somebody's injured. But it's much less clear when the "product," quote unquote, is the output from AI. So with this malfunctioning software, in my example, the customer's gonna say, well, that was the AI vendor's fault. And the AI vendor is gonna say, hey, we're not a software company, we're not in the business of writing software for you. We produce generative AI. This is on you. I suspect, even though it's not very well defined and we've probably got a decade's worth of case law ahead of us.
02:01
David Tollen
I suspect the courts are going to be pretty sympathetic to the AI company saying, we aren't promising you the software will work. We aren't promising that the outputs, you know, the images we generate, won't have six fingers. I just noticed that one of the slides in our trainings has a guy with six fingers, because I generated it using AI.
02:22
David Tollen
We are not promising that. And I think the courts are going to be sympathetic to that, but it's going to be pretty rare that there's a case that isn't governed by a contract.
02:32
Adam Stofsky
Okay. I still want to put aside the contracts for a minute, because through contracts we can kind of decide who bears the risk for various potential harms.
02:39
David Tollen
Mostly.
02:40
Adam Stofsky
Yeah, mostly. So let me just make an analogy. I don't know if it's a good analogy, but let's take photographs, right? So Photoshop, or let's take it even farther back, to a camera manufacturer. Let's say you take some photos and you publish them and they cause some damage. Maybe they're photos that use someone's likeness without permission, or they violate someone's copyright or trademark, something like that. I don't think anyone thinks the victim of that damage can go sue Nikon or Canon, right? So is this sort of the position the AI tool makers are taking: hey, we're just a large language model?
03:24
David Tollen
I think that's a good analogy. And that's an argument the AI tool makers might trot out. We're like the camera, it's not on us.
03:34
Adam Stofsky
Or they're like Adobe Photoshop, right? So that's like the next step closer. You're not going to sue Adobe if someone violates copyright, but it's a little bit…
03:42
David Tollen
That's right. Or you might sue, but you're probably not going to win.
03:46
Adam Stofsky
Right?
03:46
David Tollen
So, yeah, I think that's a good analogy, though it's much more complicated. I mean, the extent to which the tool is creating the output is vastly greater than with Adobe, where truly it's like I've sold you a pen, and obviously the pen manufacturer is not going to be responsible for me copying your illustration and infringing your copyright. With AI, who created the output? There's an argument that it was the AI. But I still think the AI vendors are gonna be in pretty good shape to say, it's on you, customer. It's just much less clear in this context than with Photoshop or a system like that.
04:33
Adam Stofsky
Right. Okay, so this is academically really interesting, philosophically interesting; we could probably debate it. But let's get into some of the nuts and bolts of buying a new tool, looking at a tool, or even selling a tool. What are you seeing out there in the world now? How are risks being allocated? Is there a default where things are sort of landing, or is there a range of ways these things are being negotiated?
05:00
David Tollen
Yes, there's a default, and largely it's because it's the AI companies that are writing the contracts, although there's a good argument that it couldn't be any other way. The default is that they're offering disclaimers. They're saying, we're not responsible for the content of your output; you need to be on top of that. Now, here I'm really talking about the functionality of the output, like, does the software that the large language model generates work well? On the intellectual property, there's a mixed response from the AI companies. They are saying, we're not going to give you the normal IP warranty, or at least an IP warranty that covers our outputs. We can't swear to you that they won't infringe. In fact, if they're smart, they're saying, we're warning you that they may infringe.
05:57
David Tollen
But most of the big ones, at least for the foundation models, the big models, are saying, we will indemnify you. So they're agreeing that if you get sued as the customer and you innocently created an output that turns out to infringe, you had no way to know, you didn't feed in infringing inputs, it's truly on the model, they'll defend that case. And I think the big AI companies, after initially saying, no, we're not going to do this, and I wrote some of those contracts that said absolutely no indemnity, no liability whatsoever. After doing that for a little while, I think they realized that this IP issue is a burden on their business and they can't push it off on the customer. The unknowns about IP infringement are really their problem.
06:51
David Tollen
And that's the way it's gone with at least most of the big vendors. By the way, don't forget, as everybody does, there are also small gen AI tools, you know, small language models, things like that. There you may have a more flexible set of options about liability, because that vendor may know everything that's in the training data. They may know whether or not everything that went into training the AI is licensed, and so they may have better information about what the outputs could be. But with the big guys, it really is the Wild West.
07:30
Adam Stofsky
So is there a principle here that may hold across risks? We've talked about the risk of bad software being made by an LLM coding tool; we've talked about IP; there are other risks here that can be evaluated this way as well. But is the principle: hey, if it's our fault, the AI provider says, we're going to take the hit for it, we're going to take the risk, we're going to indemnify you? In other words, if there's copyright-infringing stuff in our model that goes into your image, it's our fault, we're going to take the heat. But if it's your fault, if you stuck a bunch of infringing stuff in and created an image based on that stuff, it's your fault.
08:10
Adam Stofsky
Or if we really just screwed up your software because our model was mistuned in some way, it's our fault. I bet this is harder than copyright, and I don't know how this sort of thing works, but whereas if you just prompted badly and, you know, shoved out your software without a human in the loop, it's your fault. Is that kind of the general principle? Are we going with fault here?
08:34
David Tollen
That's roughly it. But you've got to get the concept of fault out of there. And this is something that people often misunderstand about indemnities, not just in AI but across the board: indemnities are not, at least at heart, about fault.
08:48
Adam Stofsky
Let me reframe the question, then. I don't want to go down a rabbit hole on fault and indemnities.
08:54
David Tollen
But the way you framed it gives me an opening to talk about this fault issue, which is very helpful, very educational. That's good stuff, people.
09:02
Adam Stofsky
Okay, let's do it. So just pretend I didn't interrupt you and like start going with that again.
09:08
David Tollen
So what you said is right, except we've got to get that concept of fault out of there. And this is something that's often misunderstood about indemnities in particular: at its heart, an indemnity doesn't need to involve fault. With an indemnity against third-party claims, what you're at heart saying is, if such-and-such a claim is filed against you, we recognize that it's more about our business than yours, and so we'll defend that case. And this is where you get into the line, confusing for some people, where the AI companies will say, we're going to give you an IP indemnity but not an IP warranty. They're essentially saying, if there's infringement, if our output infringes somebody's intellectual property, we're not at fault; we warned you that might happen.
10:02
David Tollen
So we're not giving you a warranty, which, if it was breached, would put us in breach of contract. We're just taking responsibility. And that's a good way of looking at it. But putting aside that fault issue, you're exactly right. They're saying: customer, if you didn't try to infringe, if you didn't say to the AI, give me a cartoon character that looks a lot like Buzz Lightyear, and one of the image generators I worked with gave me that without me asking. If you didn't attempt to make it happen, and you weren't reckless, you didn't get Mickey Mouse produced by the AI when anybody should have known that infringes a copyright.
10:44
David Tollen
If none of those things happened and you had no way to know about the copyright infringement, or potential copyright infringement, by the AI, then we're going to take care of you. And what can easily happen, you know, the New York Times has demonstrated this in its lawsuit against OpenAI, what can easily happen is the AI produces something that is, quote unquote, substantially similar to something in its training data, something it had access to online. And if it's not something really well known, the customer has no way to know that's what happened. That's where the AI companies, at least the big ones, are stepping in and saying, we're going to take care of that for you.
11:27
Adam Stofsky
Okay, great. So let me take a step back and summarize, from the standpoint of what everyone operating in the world of AI at all should know. One is that these tools do create risks, a potential to create damage. There is a bit of a default right now where most of these large language model companies in particular have contracts that in many cases put a lot of this risk on the user of the model. The users take responsibility for what they've made with the model. But in some cases the AI companies are actually taking a bit of responsibility for things that are truly not their fault, things that are maybe more under their control, or should be under their control.
12:11
Adam Stofsky
And it's just important to know what these concepts mean and to be aware of them as you use AI tools in your work and your life. Is that a good summary, David?
12:21
David Tollen
Yeah, I think that's a really good summary. The way to look at it is that they have taken, I think you could say, responsibility without liability related to IP issues, copyright issues really, since they're not giving indemnities in general for all IP, just copyright issues. For functionality issues, for issues about whether the AI produces an output that injures someone, they're not taking responsibility; they are disclaiming responsibility. And within both areas, it's not even clear where the law would go absent these disclaimers. The question of whether the user or the AI company or anybody is liable for IP infringement by a gen AI system is being litigated; we don't really know the answer. And you know what's a good analogy that I like? We're in the early stages of an industry that creates new issues and new liability.
13:26
David Tollen
If you look back a century and a bit more, we had the same thing with the rise of automobiles and a lot of other heavy industry in the Industrial Revolution. There were a lot of injuries that didn't involve anybody's negligence, and the underlying law said, if the manufacturer of the car that exploded, or whatever, wasn't negligent, then it's not liable. Gradually, the courts realized that in this new environment that doesn't work, and they came up with the concept of strict liability, which basically says: if you're benefiting from the activity, you have to pay its costs, including the cost of accidents where you weren't negligent. That's an example of the law bending and molding around a new industry and creating new doctrines, in that case with the courts really doing it. We may need that for AI outputs.
14:20
David Tollen
We are in the dark about who should be responsible, or to what extent. And I doubt that existing underlying law is going to continue to govern this. I think the law is going to evolve.
14:32
Adam Stofsky
Wow, very interesting. I'm hesitating to finish because there are so many more questions to ask, but we need to stop to keep this video short. Thanks so much, Dave, really appreciate it.
14:45
David Tollen
My pleasure. Thank you.
