
00:00
Adam Stofsky
So today we're going to talk about LLMs, or large language models, and the risk versus reward of using LLMs at work. So let me kick it off with a basic question, Julia. What are we talking about? What is an LLM, and what does it mean for an LLM to potentially create risk in a company?
00:32
Julia Apostle
So an LLM is typically understood to be a large foundation or frontier model, or, depending on how it's been developed under the EU AI Act, a general-purpose AI model. It's been trained on a lot of data, text data, and has general capabilities to generate textual responses to written prompts. So if you put in a large piece of text, like a very long law, you can ask the LLM to generate a shorter version, a condensed version, a summarized version of that text.
01:15
Julia Apostle
That would be an example of using an LLM to generate a shorter textual version of a larger document. You can also use it to scan a bunch of different documents looking for specific information, or to answer questions based on information it was trained on, information it pulls from the Internet, or documents that you have uploaded into the system.
01:43
Adam Stofsky
So how are people using these? What are the upsides of using these at work? You just mentioned one, summarizing large amounts of information, but both of you can jump in here. What are you seeing out there in late 2025 and early 2026? What are the upsides of using this kind of tool at work?
02:03
Shannon Yavorsky
I definitely see people across organizations using LLMs to write emails, to draft documents, in hiring and the HR cycle, and in marketing as well, to generate marketing copy. Another really big use case that we see all the time is meeting summaries: there are a bunch of tools out there for recording meetings and summarizing them, and the LLM will generate a list of next steps. These tools are awesome. They're super helpful, and they introduce efficiencies into the process. They make my writing better. When I've drafted an email at 11 o'clock at night and it's maybe not so sharp, I'll plug it into our tool and it'll clean up the grammar.
02:59
Shannon Yavorsky
And so it's really been helpful, I think, for introducing efficiency and for helping people work more quickly through tasks. Another example that I give is that some of the work I did as a first-year lawyer, parsing through hundreds and hundreds of documents looking for certain provisions, is now done by large language models in ten seconds.
03:35
Adam Stofsky
Right. So when you sit down to use ChatGPT or Claude for something, you don't think of yourself as engaging in risky behavior.
03:42
Julia Apostle
Right.
03:42
Adam Stofsky
You're just sort of chatting with a chatbot. So what does it mean for an LLM to create risk for a company?
03:52
Julia Apostle
I think there are a couple of different ways that can happen. There can be risk in just using the tool, because you're not supposed to be putting confidential information or personal data into it. That has nothing to do with the characteristics of the tool itself; it's about using AI generally. Is this actually a permitted purpose within your organization, within the contractual frameworks that you deal with? And then there are risks that are inherent to the tool itself, and those have been well ventilated and documented, depending on the use case. So if you're asking it to answer questions, for example, or even to summarize legislation (we're lawyers, so these issues and use cases are close to our heart and home), then accuracy is paramount.
04:48
Julia Apostle
And accuracy in relation to some of these models is not perfect. Shannon, I'm sure you've seen that; I have certainly seen that, where a summary of a law comes back wrong. So that's a number one risk. And I think the key is that the nature of the risk depends on the use case, so people need to understand: what do I want to use this for, and what are the risks inherent to that use case? That's a sensitization and education issue, and it's why AI literacy is so important within organizations.
05:28
Adam Stofsky
So what are some of the other kinds of risks, or risky behaviors, that people can engage in using large language models?
05:36
Shannon Yavorsky
So another risk is bias. AI tools, or large language models, reflect patterns in the data that they were trained on, and that means they can unintentionally reinforce biases or stereotypes. That really matters when AI tools are used for things like performance feedback, hiring-related materials, customer communications, or anything that could affect people directly. So if people are using AI to draft or review that kind of content, it's especially important to have what we call a human in the loop: someone reviewing the output to make sure that it's not biased.
06:21
Julia Apostle
I was just going to flag the whole ownership issue. If you're using it to create content on behalf of someone else, there's the question of who owns that content. And if you are being commissioned to create work and you're not comfortable assigning ownership because you're not sure whether the AI owns it or you own it, then that's potentially a risk as well, because it remains an area that's still unsettled in a number of jurisdictions.
06:55
Adam Stofsky
All right, so we've got accuracy, we've got bias, we've got IP. The risks, I guess, are piling up. You two are privacy lawyers, so how about personal information? This is what kind of freaks me out. I'm thinking about people putting in health data or lists of customers and things like that. Does this potentially create risks?
07:15
Shannon Yavorsky
I think absolutely. And it depends on the nature of the data, so it's riskier if we're talking about health-related data or more sensitive personal information like financial data. This is the reason that companies, when they're negotiating agreements with large language model providers, insist on what's called a data processing agreement: they don't want the provider to train its model on the company's documents, which can include lots of different personal information, and potentially expose it on the other side. It's also potentially a sale of personal information under the state privacy laws, which creates hurdles, not insurmountable ones, but additional complexity and additional risk.
08:13
Adam Stofsky
So I guess the bottom line there is: always be skeptical when putting a bunch of personal information into a large language model. But also, if you work at a company, don't use your own free account. Make sure you're using a company account, and try to understand what the rules are around that account. Is that a good rule of thumb?
08:32
Julia Apostle
Yeah, I think so, yeah.
08:35
Adam Stofsky
Okay, so accuracy, bias, intellectual property ownership, data privacy, confidential information. Any other major risk categories we're missing here?
08:46
Shannon Yavorsky
The other one that I would flag is reputational risk. If people are misusing AI, there can be an embarrassing story, as has happened to a number of companies. There was a Canadian airline that used an LLM chatbot on its website, and it gave someone the wrong price, and the whole incident ended up on the front page of the papers. The last thing companies want is to show off a poor implementation of an AI tool. So there can be a loss of customer confidence and media attention that the company doesn't want to draw. Those kinds of risks need to be considered as well, which goes back to Julia's point:
09:44
Shannon Yavorsky
AI literacy is really important. Helping people across the company understand why it matters, what they can and can't do, and where the guardrails are is really critical.
09:57
Adam Stofsky
Right. Okay, great. Well, that's a great overview and summary of some of the risks and rewards of using large language models at work. We talked about accuracy, bias, data privacy, confidential information, intellectual property ownership, reputational risk, and thinking about outputs generally: the quality of what comes out and what you're allowed to do with that output. All right, we're going to leave it at that for now. Thank you both so much.
10:23
Shannon Yavorsky
Thanks so much, Adam. Thanks for having us.


