
00:00
Adam Stofsky
Hi Shannon. How are you today?
00:07
Shannon Yavorsky
Hey, Adam.
00:07
Adam Stofsky
Good.
00:07
Shannon Yavorsky
How's it going?
00:09
Adam Stofsky
Yeah, good. I'd like to continue our conversation about high-risk AI, and today talk about how to weigh the costs and risks of using cool new AI tools, because a lot of them are really cool and a lot of them carry real risks. So I want to talk about how to weigh that with the law in mind. To kick it off, why don't you share your thoughts on the upside here? What excites you most about what AI tools can do these days?
00:38
Shannon Yavorsky
Yeah, so AI tools can introduce incredible efficiencies into a variety of processes across a company: text-to-code tools that can be leveraged by folks on the engineering team, marketing teams developing materials more quickly, down to legal teams that can use lots of different tools to help identify case law and draft documents. But alongside that come a lot of potential risks that companies have to think through.
01:19
Adam Stofsky
So is there, I don't know, a formula you can use, where you could even plug in numbers and try to figure out the risk versus reward? When you're advising your clients on this sort of thing, is there any kind of framework for thinking about it?
01:34
Shannon Yavorsky
So I think it's pretty case-specific. If you're just using an AI-powered spam filter, I care a little bit less; I don't get high blood pressure over it. But if we're talking about AI in hiring, a tool that's going to help identify candidates and maybe give them a score, or a tool that's used for performance reviews, these are things that impact employees and impact consumers. A bigger risk decision has to be made there. And that's the point at which you might want to think about doing an AI risk assessment or something similar, to really intentionally weigh up the risks and rewards associated with a particular tool.
02:19
Adam Stofsky
So is it really down to, you know, I'm a business owner, so I think about this sort of thing. I don't have any engineers on my team, but let's say I had a bunch of engineers and I'm looking at one of these generative AI coding apps. I could literally look at how many engineering hours this will save me, or how much more productive my engineers can be, versus the potential risks to me owning my intellectual property, because that's a big risk with using AI tools to create code, right? You just weigh those against each other: if there's a 1% risk, or a 0.1% risk, that this will jeopardize my IP, maybe it's worth it against a huge amount of time savings.
03:01
Adam Stofsky
Do you get into a quantitative analysis like that?
03:04
Shannon Yavorsky
I think it's really important to look at the particular use case. If you're using a text-to-code tool for your crown jewel software, the risk analysis is a little bit different than if you're using it to address tech debt or for ancillary code that's a little bit less important. So you have to look at a snapshot of what you're trying to accomplish, and at what the risk looks like in that particular scenario.
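To make Adam's back-of-the-envelope framing concrete, here is a minimal sketch of the expected-value comparison he describes, with entirely hypothetical numbers. As Shannon notes, the analysis is case-specific, and a single number like this hides tail risk and context, so treat it as a starting point rather than a real assessment:

```python
# Hypothetical back-of-the-envelope comparison of an AI coding tool's
# expected benefit against its expected IP risk. Every number here is
# invented for illustration.

engineer_hours_saved_per_year = 2000   # assumed productivity gain
hourly_cost = 150                      # assumed fully loaded rate, USD
expected_benefit = engineer_hours_saved_per_year * hourly_cost

ip_incident_probability = 0.001        # assumed 0.1% chance per year
ip_incident_cost = 5_000_000           # assumed loss if IP is jeopardized
expected_risk = ip_incident_probability * ip_incident_cost

print(f"Expected annual benefit: ${expected_benefit:,.0f}")
print(f"Expected annual risk:    ${expected_risk:,.0f}")
print("Worth it on paper" if expected_benefit > expected_risk
      else "Not worth it on paper")
```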
03:40
Adam Stofsky
Okay, so let me walk through a few examples and what the analysis might look like. Let's look deeper into the text-to-code tools. Aside from whether it's your crown jewel piece of software, what are some other considerations you might look at?
04:01
Shannon Yavorsky
So another one that's come up is that with text-to-code tools, there can be bugs in the generated software. So think through: is this introducing some security risk? Can there be inaccuracies? Can the code be buggier than code written by hand? Potentially. You're gaining efficiencies on one side but maybe losing on another, so you have to think about that.
04:26
Adam Stofsky
Yeah, that's interesting, right? Because it's not just risk versus benefit. It's the benefits, and then maybe those benefits aren't so great because you've got to go back and fix things later. So that's yet another kind of consideration.
04:41
Shannon Yavorsky
I think that's exactly right. If you have to add a human in the loop and they're spending as much time as they would have on the initial coding, then you're like, okay, I'm not really going to take this risk. It's like when I think about giving legal work to a first-year associate: is it better if I just do this myself, because it's going to take me so long to review whatever's been done? It's a similar analysis for an AI.
05:06
Adam Stofsky
That's really interesting. I think a lot of us have actually experienced that using large language models for various tasks: is this actually going to save me time? We do this with writing scripts. Sometimes we write first-draft scripts, and it's not always the case that the LLM saves us time. It just depends on the situation.
05:24
Shannon Yavorsky
You have to go back and edit it and contour it, and maybe it didn't get it exactly right. So I think that's right. It's a whole calculus: the risk and the time and the resources. You have to look across all of those things to make a decision.
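The first-year-associate analogy boils down to simple arithmetic: the tool only helps if the time spent using it and reviewing its output is less than the time to do the work yourself. A quick sketch, with invented durations:

```python
# Hypothetical net-savings check for the human-in-the-loop point above:
# if reviewing the AI's output takes as long as doing the work yourself,
# the tool saves nothing. All durations are invented for illustration.

def net_hours_saved(hours_diy: float, hours_with_tool: float,
                    hours_reviewing: float) -> float:
    """Hours saved by the tool after accounting for human review time."""
    return hours_diy - (hours_with_tool + hours_reviewing)

# A first draft that needs heavy editing can be a net loss:
print(net_hours_saved(hours_diy=8, hours_with_tool=1, hours_reviewing=9))  # -2
# A draft that needs only light review is a clear win:
print(net_hours_saved(hours_diy=8, hours_with_tool=1, hours_reviewing=2))  # 5
```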
05:43
Adam Stofsky
But just to be precise here, and not to get nerdy and legal, I'm holding my pencil, which is how I know I'm getting serious, because I'm taking notes. So with the human-in-the-loop factor, there are actually two things going on here. Tell me if I'm right about this. You're weighing the benefits of using this tool against a kind of operational risk of, oh, God, I've got to go back and edit this thing, the work wasn't that good, versus the legal risk of, I need a human in the loop to make sure that I actually own this piece of material under the intellectual property laws. Those are different things, right?
06:26
Adam Stofsky
One is an operational requirement, a quality-control requirement. The other is actually legal. So those are really different things, right?
06:35
Shannon Yavorsky
I think that's right. It's not just a technology or operational decision; it's legal and compliance. And the risks can span intellectual property, privacy, accuracy, bias, discrimination. So understanding those different categories of risk will help organizations make informed, defensible choices when adopting different AI systems and figuring out what the risk score is for a particular tool.
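As a rough illustration of turning those categories into a score, here is a hypothetical sketch. The categories come straight from the conversation; the 0-to-5 ratings, the scoring scheme, and the threshold are invented, and any real methodology would be far more nuanced:

```python
# A deliberately simple, hypothetical risk-score sketch across the risk
# categories Shannon lists. Ratings, scale, and threshold are invented
# for illustration, not a real assessment methodology.

RISK_CATEGORIES = ("intellectual_property", "privacy", "accuracy",
                   "bias_discrimination")

def risk_score(ratings: dict) -> int:
    """Sum 0-5 ratings across the categories; higher means riskier."""
    return sum(ratings.get(cat, 0) for cat in RISK_CATEGORIES)

# The spam filter vs. hiring tool contrast from earlier in the conversation:
spam_filter = {"privacy": 1, "accuracy": 1}
hiring_screener = {"privacy": 3, "accuracy": 4, "bias_discrimination": 5}

for name, ratings in (("spam filter", spam_filter),
                      ("hiring screener", hiring_screener)):
    score = risk_score(ratings)
    action = "full AI risk assessment" if score >= 8 else "lightweight review"
    print(f"{name}: score {score} -> {action}")
```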
07:08
Adam Stofsky
All right, let's analyze a couple more. Let's talk about the note taker apps that everyone uses. I've had several calls recently where literally four of those note taker apps pop up on the screen, because everyone's using them. And they're incredible, right? I don't have to take notes anymore. Can you give us a bit of a benefit-risk analysis for note taker apps?
07:37
Shannon Yavorsky
Yeah, so that's a really great example. There are so many efficiencies to be gained by having a note taker app. People aren't distracted, because they know there's going to be a record; it will summarize key points and might even come up with action items coming out of the meeting. It's potentially tremendously useful. But there are lots of different risks to think about. From a legal perspective, the first one we think about is that the moment there's a record of something, it's presumptively discoverable in litigation. So you have a record, and it might have to be disclosed in litigation at some point. The second is that if there's a lawyer on that call, you could be waiving legal privilege, because the notetaker app is a third party. Another is data retention, storage, and data breach risk.
08:30
Shannon Yavorsky
The more data you have, the larger the bucket of data that could be compromised in a data security incident. And then there's access: maybe that was a confidential conversation with HR and a few other people, but then it's stored in a place where people who weren't intended to hear it are able to access it. So there are lots of different considerations with those note taker apps, setting aside the fact that there could be hallucinations and inaccuracies in the actual record, which is, you know, terrifying.
09:07
Adam Stofsky
And there are also reputational issues, like if people are gossiping or saying bad things about someone. There's all kinds of crazy stuff that could happen.
09:15
Shannon Yavorsky
And I think people react differently when they know something's being recorded. So there's that culture point: people would say different things knowing that they're being recorded. So if you want to have a candid conversation, I think having those tools off is a good idea, because it does weigh on people when they know that whatever they're saying is going to form part of a record.
09:45
Adam Stofsky
So this is really interesting, because you obviously have all these great benefits, and you weigh them against legal risks, but then also against these other, more cultural risks. That's really kind of fascinating. So I have to say, there's going to be no formula for this one. If you run a business that involves very sensitive data, say conversations about patients in a health clinic, you probably don't want to record and take notes on all of that unless it's really protected, right?
10:13
Shannon Yavorsky
Or there's a human in the loop, right? There are a lot of compelling arguments for having a doctor's note taking app enabled, but that's one where, if you get it wrong, there really can be life-threatening consequences. So there needs to be someone reviewing it, ensuring that it's an accurate record. So I think it's context-specific, and highly regulated industries, healthcare and financial services especially, need to be more attuned to the higher level of risk associated with that use.
10:49
Adam Stofsky
Right. Whereas maybe if you run a media company and you record lots of creative brainstorm sessions, sure, you don't want that stuff getting out there, but the real legal risk is not as high. So maybe you can weigh the risk a little bit lower than you might otherwise.
11:05
Shannon Yavorsky
Yeah, I think that's exactly right.
11:07
Adam Stofsky
Okay, let's talk about one more example in a totally different context. I was actually an employment lawyer for a couple of years, a hot minute about 15 years ago, so this is close to something I used to work on. How about AI around employment? There are a lot of different tools doing a lot of different things: helping you make decisions, generating rejection letters, or producing other kinds of hiring and HR material. How would you analyze these tools in terms of risk versus reward?
11:40
Shannon Yavorsky
Yeah, that's a really good example, because a lot of companies are onboarding these vendor tools that are now available to parse through the thousands of resumes that people receive for particular roles, really everywhere along the HR lifecycle: AI tools used in interviews, and then AI tools for performance reviews, as another example. And the outcomes of some of these things can impact individuals, so there's just a higher risk associated with using these tools. For example, if you're using a tool to parse resumes and it somehow has biased data in it and is excluding a certain population from the candidate panel, there can be real legal risk associated with that, because there's real bias risk.
12:36
Shannon Yavorsky
So you really need to proceed with caution in onboarding these tools: make sure you're doing an AI risk assessment or analysis, and ensure that your employment folks are involved in that onboarding, so you're not falling foul of any employment legislation.
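One concrete, widely used first screen for the bias risk Shannon describes is the EEOC's four-fifths rule, which compares selection rates across groups. Here is a minimal sketch with hypothetical applicant counts; a real analysis would be far more involved and should include employment counsel:

```python
# A minimal sketch of the EEOC "four-fifths rule," a common first screen
# for adverse impact in hiring tools. The applicant counts are invented
# for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

group_a_rate = selection_rate(selected=50, applicants=100)  # 0.50
group_b_rate = selection_rate(selected=30, applicants=100)  # 0.30

# A ratio below 0.8 (four-fifths) is commonly treated as evidence of
# adverse impact against the lower-selected group.
impact_ratio = group_b_rate / group_a_rate                  # 0.60
flag = "flag for bias review" if impact_ratio < 0.8 else "passes the screen"
print(f"Adverse impact ratio: {impact_ratio:.2f} -> {flag}")
```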
12:54
Adam Stofsky
Yeah, this is an interesting example, because this is not a new kind of risk. It's actually a very old-fashioned kind of risk, right? These are lawsuits under Title VII or state employment discrimination laws, and they're real and frequent, and they cause companies both financial and reputational losses all the time.
13:15
Shannon Yavorsky
Yeah, that's exactly right. And the EEOC and other governmental bodies in the employment space have made very clear that they will apply the existing body of legislation to AI use cases. So there's no doubt that those laws will apply, and there's lots of guidance in that space as well.
13:34
Adam Stofsky
Great. All right, Shannon, this has been really interesting. Just to conclude, I think it's important to say, as we've said in other videos in this library, that this is a decision not just for CEOs and CFOs and CTOs and heads of compliance. Really, everyone should have some sense of this, because there are going to be AI tools everywhere in the coming years. Would you agree, Shannon?
14:01
Shannon Yavorsky
Yeah, I think that's exactly right. The legal risks of AI are really multifaceted, and the prudent approach is to build contractual safeguards, do analyses of each individual tool, ensure that there's good human oversight, good humans in the loop, and align the use cases with the relevant risk and the regulatory landscape. So AI adoption is accelerating, but there needs to be careful legal diligence and a thoughtful balancing of the risks versus the rewards.
14:38
Adam Stofsky
Great. All right, thanks so much.
14:39
Shannon Yavorsky
Thanks, Adam.


