
00:00
Adam Stofsky
Hi, this is Adam Stofsky from Briefly, and I'm here with Matthew Coleman from Orrick. Hey Matthew, how's it going?
00:16
Matthew Coleman
Doing great, Adam. Thanks for having me.
00:18
Adam Stofsky
So today we're going to talk about the law of AI and sales teams. This might seem like a bit of a strange topic, but I wanted to ask you, Matthew, why do sales teams need to know about AI law to understand how AI is regulated?
00:37
Matthew Coleman
It's a really good question. I think most buyers are going to be pretty educated on these issues, and only increasingly so in coming years. So it's important for sales teams to at least have a credible vocabulary and be able to speak competently about some of the risks and issues that might arise in buying AI technologies. If the product involves AI in any capacity, whether it's a true generative AI product or some other type of technology product that just includes an AI component, these risks are really important to understand and to have some level of competency or fluency with.
01:16
Matthew Coleman
And really, all of it comes down to the use of that tool: what is the intended purpose for which a buyer might want to deploy an AI tool? All of the risks stem from that use. That could be the risks around automated employment decisions. If the AI tool is intended to make decisions about whether to employ someone, to promote them, or to let them go, or it's another automated decision-making tool with similar effects, buyers might be concerned about those risks.
01:49
Matthew Coleman
What kind of controls are in place, how you're providing notice to those end users, any available recourse in case someone doesn't like the decision: those are the types of controls your teams might have built into the product, and understanding how they operate helps companies and buyers address those potential risks from a legal compliance perspective. The other thing that comes up is that there's a lot of attention being paid these days to companies that are overhyping their AI capabilities; they call it AI washing. So to the degree that you're speaking about AI capabilities, it helps to understand the difference between a deterministic model based on heuristics or decision trees versus a true generative model that's actually creating content, and there's a whole spectrum in between as well.
02:37
Matthew Coleman
So being able to speak competently about what the tool actually does can help keep your company out of risk, so you're not overhyping what the tool can actually do.
02:49
Adam Stofsky
There's a lot there that's really interesting to unpack. So the first category is that sales teams need fluency in the various risks around AI, because that's what their customers or potential customers are going to want to understand to feel comfortable buying the product for their companies. Right? So that's not so much a legal risk as a sales risk. You have to know what you're talking about and sound fluent in these areas.
03:18
Matthew Coleman
It's important to know because a lot of laws that are popping up these days regulate very particular risks, and very particular harms, associated with the use of AI technologies. So for example, on automated employment decisions, there's a series of laws at the state and local level that regulate using AI tools to make employment decisions. There are AI-related laws about making disclaimers available that you're interacting with an AI chatbot, or that generated content carries labels showing it was created using AI. So there is a legal compliance element to it, should the tool you're selling have any of those kinds of features or capabilities. Understanding that, and understanding what the risks are: those are questions that might come up over the course of the sales cycle.
04:03
Adam Stofsky
Got it. So there are several issues here. One is the general risks around things like AI washing and exaggerated claims about AI. That's something common to all AI tools and products: understand what your product's capabilities are and don't exaggerate them.
04:22
Matthew Coleman
That's right. Say what it does, and don't say it does anything more than it does.
04:27
Adam Stofsky
Right. And then, depending on what the product is that you're selling, there could be a whole variety of potential regulatory or legal risks involved that you need to be able to talk about intelligently with your potential customer. Right?
04:42
Matthew Coleman
That's right. How do I frame this? Yes, there are legal risks associated with the use cases of your tool, and most businesses that are buying those products are going to care about the legal compliance risks and about what an AI provider is doing to help them comply. But buyers are also going to be concerned with other kinds of risks: reputational risks, operational risks, how they're going to manage review of what a generative AI tool creates. If an AI tool were ever to hallucinate, what kind of controls are in place to help them monitor that, and can they rely on the provider to be doing some of that monitoring itself?
05:28
Matthew Coleman
And so if you're aware that your company has done some kind of red-teaming testing to prove that the tool is fit for purpose and can't easily be misused, that's something buyers are going to care about. If it has built-in controls that let you keep a human in the loop overseeing the decisions that are made, that's functionality your buyers are going to care about. So it's a whole host of different risks, stemming from legal compliance risks but also spanning business risks as well.
05:57
Adam Stofsky
Okay, so let's see if we can organize things a little bit. I know different kinds of products are going to have a wide variety of different risks. I'm thinking about everything from health tech and medical devices to defense tech to autonomous vehicles; I can imagine lots of individualized risks. But are there some general categories we can talk about that are going to be of concern, for example data privacy? Can you go through the broad categories that every salesperson might need to understand?
06:33
Matthew Coleman
Yeah, sure. So one of the main issues that comes up all the time in sales cycles is data rights: what an AI provider is doing with the data it's collecting from its customers, and whether or not it's retaining any rights to train its models on that data. And again, it's important for a salesperson to understand what the company line is, because some companies do use their clients' data to train their models and some companies don't. Some companies are able to deploy an on-premise version, sandboxed in a separate environment, so that all the client data lives in that environment and never gets called home to the company's servers, and so there's no training risk whatsoever.
07:10
Matthew Coleman
But again, from an IP perspective, from a confidentiality perspective, from a privacy perspective, companies are going to care about that kind of question: what are you doing with our data, and are you using it for your own commercial purposes? So that is something that may come up, again depending on the use case. If we're talking about tools that make significant decisions, then buyers want to know a little about what controls are in place to appeal those decisions and to inject a human into the loop to oversee them, so it's not just the tool running amok. And also, what are those decisions based on? Have they been assessed for any kind of bias or ethical impact in how the tool is intended to operate? So those are a few.
07:54
Adam Stofsky
Okay, that's really interesting. Another question just occurred to me. Not to open this up into too much complexity, but since we're talking about risk here, do you see certain contract terms getting negotiated pretty heavily? I'm thinking about things like third-party indemnities, for example. Do buyers try to essentially push more risk onto the providers of AI tools? Have you seen this happening?
08:26
Matthew Coleman
It is all across the board at the moment, because this is such a nascent industry and there are so many players of all shapes and sizes. The risk-shifting provisions in a contract, your limitations of liability, your indemnities, your disclaimers, even your reps and warranties about what the tool can and can't do, are all across the board in terms of what companies are willing to give up in order to make the sale, and also what companies are essentially being forced to give up in order to make the sale. And that largely depends on the bargaining power between the parties.
09:00
Matthew Coleman
But yes, I'm seeing particularly smaller companies making concessions in those areas, like giving up indemnities, not only the standard IP-infringement-type indemnities but also for unexpected misuses of the tool, in order to make a sale and try to build the business, and then figuring out how the risk profile shifts as the company grows through the seed rounds and the different series of fundraising. As it gets bigger, it's likely less willing to absorb that kind of risk. So right now there's no firm state of the art in that field, but I imagine there will be.
09:43
Matthew Coleman
The other thing we're starting to see, which is fairly nascent at the moment, is companies that are willing to take on that kind of risk if they have an insurance backstop. With insurance policies for AI harms or AI incidents, we're just in the early days of seeing how those policies are drafted and what the premiums associated with them look like. Right now they're pretty expensive, but they are available on the market. So companies are exploring that as an option, as a way of saying, okay, we'll accept more risk in our contracts in order to be able to make more sales.
10:15
Adam Stofsky
And insurance might be a question you get from a customer as well.
10:18
Matthew Coleman
Right, exactly. Particularly if you're talking about higher-risk tools, where the risk of harm from using the tool is higher.
10:26
Adam Stofsky
Right. So this might be something like an autonomous vehicle that could crash, or something that makes employment decisions, which has a potential risk of violating discrimination law.
10:34
Matthew Coleman
Genetics, biometrics, anything that processes highly sensitive data.
10:40
Adam Stofsky
Okay, so we could go down a lot of rabbit holes here. But to conclude and keep it simple, I'm going to try to summarize back to you the major categories of things that salespeople should have a grip on if they're dealing with AI at all, which is going to be many salespeople in the coming years. One, understand some of the general risks around selling anything that involves AI: things like exaggerating AI capabilities, AI washing, and so on. Two, understand the specific risks potentially created by the tool you're selling, and really anticipate what those customers' questions are going to be. Three, understand when to escalate a contract question to a lawyer; things like indemnities, third-party indemnities, and liability caps can be pretty complicated.
11:29
Adam Stofsky
So have a sense of when you should escalate and when you should negotiate yourself. And then fourth, or maybe 3.5, is to understand whether you're insured and what the insurance profile is of the product you're selling. Did I summarize that well?
11:44
Matthew Coleman
I think you did. The only other thing I would add to that list is that it's important for salespeople to talk to the product people about what kind of AI-related safety features they've built in. It goes to the business risk question, but it's also useful sales material for building customer trust: showing that you have eyes on target, that you've built the tool in a way that recognizes risk and has also tried to address it.
12:09
Matthew Coleman
And so if companies have done monitoring or red teaming, if they've done bias assessments, if they've had third parties come in to kick the tires on the tool and try to make it do the wrong thing, and you've gotten a clean bill of health, that's useful to know and to be able to share with your customers, because it does build trust.
12:27
Adam Stofsky
Right. So good communication between sales, legal, and product. The companies that have that great line of communication are going to sell the most and sell the best.
12:37
Matthew Coleman
That's right. Yeah. They're going to be in a great spot.
12:39
Adam Stofsky
Right. All right, Matthew, this is really interesting. Thanks so much.
12:42
Matthew Coleman
My pleasure. Thanks for having me.
<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1159873919?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Matthew Coleman - What Sales Teams Need to Know About AI Law"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>


