
00:03
Adam Stofsky
Hi, this is Adam Stofsky from Briefly, and I'm here with Shannon Yavorsky from Orrick. Hey, Shannon, how are you?
00:08
Shannon Yavorsky
Hey, Adam, thanks for having me.
00:11
Adam Stofsky
So we are here to talk about AI, which obviously is a big deal right now and will be for a long time. And I wanted to ask you, Shannon, to help our viewers understand: why is AI a risk for companies? What I mean is, when people who work at a company use various AI tools, large language models or any AI tool, what risks do companies face from their employees using AI?
00:39
Shannon Yavorsky
Yeah, AI is increasingly embedded in everyday tools that companies use: chatbots, recommendation engines, autonomous systems, for example. And as it becomes more powerful and autonomous, the legal risks are multiplying, and businesses, developers, and even users of AI need to understand where the law is catching up and where it's not. So the first big risk that I think about is liability. When an AI system makes a harmful decision, for example, a self-driving car causes an accident or an algorithm unfairly denies a loan, who's responsible? Courts typically look to existing legal theories like negligence, product liability, or breach of contract. But when AI is trained on unpredictable data and makes decisions that no one explicitly programmed, assigning fault can be really tricky. So I think liability is one of them. The next one I think about is privacy.
01:47
Shannon Yavorsky
So AI thrives on data, and that creates privacy risks. Training models often involves massive data sets, some containing personal or sensitive information. And laws like the GDPR, the European General Data Protection Regulation, and a whole range of US state laws set strict requirements around how data is collected, used, and stored. So developers and users of AI need to be aware of the potential privacy risks associated with AI. The next one I think about is IP, intellectual property. AI systems can generate text, music, code, and images. We see a lot of companies using text-to-code tools; that was really one of the first categories that companies adopted. But who owns the output? Is it the person? Is it the company? Copyright law typically only protects works that are created by humans, not machines.
02:53
Shannon Yavorsky
So if a model is using copyrighted material for training, then there are lots of issues around whether any of the output materials can be protected by copyright. And some of those issues are still unresolved. Another one that I think about is bias and discrimination. AI can unintentionally discriminate. If training data reflects societal biases, the AI can replicate or even amplify those biases, and that can lead to unlawful discrimination in employment, housing, lending, or even policing, for example. So anti-discrimination statutes and the Equality Act can apply even if no human intended the bias. So lots of potential emerging issues around bias, discrimination, and fairness. And then the last thing that I'll say, in terms of big buckets of risk, is the emerging legislative landscape. There are just so many new laws emerging worldwide.
04:02
Shannon Yavorsky
The EU AI Act, which imposes strict obligations on high-risk AI systems. And then in the US there are tons of statutes emerging at the state level on AI. So lots of different laws, and then the existing legislative regimes being applied to AI. So it's a hugely fragmented landscape there as well. Lots of different risks.
04:28
Adam Stofsky
So I'd like to make this a little more concrete for folks. I'm going to dream up some examples, and I'm dreaming these up on the spot here, just based on what you said. Let's say I'm a, I don't know, marketing professional at a company, and I want to get some idea of who to prioritize targeting emails to. So I grab a huge list of people and different customers I've got somewhere, and I throw it into an LLM and say, hey, can you match these people to my customer avatars that I've laid out for you and prioritize who I should reach out to? Right? That's a cool use case for an LLM.
05:05
Shannon Yavorsky
Interesting.
05:06
Adam Stofsky
How could something like that create an issue? Like, let's say I had those people's email addresses in that information, or their company names, or, I don't know. Is that a good example? I guess that's my question.
05:20
Shannon Yavorsky
That's a really good one. That's like a law school hypo that has lots of different angles. The first one that comes to my mind as a privacy lawyer is: have you told these people what data is being collected about them and the way in which it's going to be used, that you could market to them in this way? Have you been transparent? Transparency is this core concept in AI: being clear to people about what you're doing, what data is being collected, and how it's being used and shared. So that's the first one that comes up. But certainly in that scenario there are also potential issues in relation to bias and discrimination if you reach out to certain people and not others based on some of that information.
06:07
Shannon Yavorsky
So lots of different issues that could potentially arise in that scenario. And that's something that companies have to think through from soup to nuts: what are the things that could potentially go wrong here, or that we need to comply with on the front end?
06:24
Adam Stofsky
Another one I thought of, and this is a bit different from what you talked about: I'm a marketing professional again, and I come up with some cool marketing campaign concepts using a large language model, and it puts a bunch of other people's copyrighted works in there. Maybe it refers to songs, or it copies something that someone else owns. Is this a potential issue?
06:46
Shannon Yavorsky
Yeah, there are really interesting issues about whether that output can be protected by copyright. As we were talking about, only humans can really get copyright protection, or at least that's what copyright offices around the world have said. So lots of emerging copyright issues to think through, especially in relation to marketing.
07:12
Adam Stofsky
So an example there could be, let's say I work for, I don't know, a video game company. They generate a lot of art, right? So I'm an artist, and I'm using AI to assist me in developing some character designs for that game. It's not impossible that when that game comes out, the company doesn't really own those character designs, and other people can potentially use them, if I just used AI to make them. Is that right?
07:34
Shannon Yavorsky
That's exactly right. And the other thing that companies trip over in this space a lot is text-to-code. If you're using text-to-code tools, it may be that the code is not subject to copyright protection. And if that code is going into the company's crown jewel software, that could potentially be an issue, especially in the context of any corporate transaction where lawyers like me on the other side are asking, okay, who made this code? Who owns the copyright? And peeling back the layers. If it's all generated in certain circumstances using text-to-code tools, it may not be subject to copyright protection, which is potentially a deal issue at that point.
08:19
Adam Stofsky
Right. That can be pretty catastrophic for a software company.
08:22
Shannon Yavorsky
Yeah, it's kind of one of those bet the company issues.
08:26
Adam Stofsky
Right. So I've been kind of teeing up my last question for this video; maybe it's now a bit of a softball. If you're just, you know, an employee at a company: you don't own it, you're not on the board, you're not the CEO or the CFO or the CTO, you work in marketing or HR or product or sales. Why should you care about this? Why do you have to know about AI law?
08:52
Shannon Yavorsky
Yeah. So just regular people at companies who are using these tools need to know where the boundaries are and what they can and can't do with certain tools. One thing that comes up a lot with our clients is companies getting their arms around exactly what tools people are using. Because it's so easy to click through and say, oh, look, there's this awesome tool that's available, and all I have to do is one click to accept the terms of use, and then I have this amazing thing that's going to make my work super efficient. And in those scenarios, you're exposing the company to risk, because a lot of times those click-through terms include provisions that say the provider will be able to use any of that data to train their model.
09:39
Shannon Yavorsky
So you could be exposing company confidential information to that service provider unwittingly. There's a lot of potential risk there, and employees don't want to fall foul of that.
09:53
Adam Stofsky
So the way I like to think about this is like, you don't want to be that guy.
09:58
Shannon Yavorsky
Yeah, totally. Like, you don't want to be that guy.
10:00
Adam Stofsky
Like, hey, why is our company's code being published online? Or, to go back to the video game example, why are we sending a cease and desist letter because someone's using our characters to do crazy things that put off our customers, and we can't stop them, because we don't actually own the works of art those artists made. You don't want to be that guy.
10:28
Shannon Yavorsky
Yeah.
10:28
Adam Stofsky
So in the world we live in, I think just knowing the basics of how to think about using AI is pretty critical. Would you agree?
10:37
Shannon Yavorsky
I agree. For me, the potential is enormous, but so are the legal risks. And staying ahead means thinking about these issues; any employee of a company needs to be aware of where the pitfalls are.
10:57
Adam Stofsky
Great. All right, Shannon, thanks so much.
10:58
Shannon Yavorsky
Thanks for having me.


