
00:00
Adam Stofsky
Hey Laura, how are you?
00:12
Laura Belmont
Hi Adam. I'm great.
00:14
Adam Stofsky
So welcome back to our conversation on AI and contracting. Today I want to ask you about buying AI. We've talked a lot about selling AI; now I want to talk about buying AI and AI tools. What should procurement teams or legal teams, or in smaller companies even CEOs and CFOs, be thinking about and asking when they buy AI tools? Just broadly.
00:40
Laura Belmont
So I'll start by saying organizations should really think about what their existing process is and how to build it out for AI. As we've talked about, a lot of the same concerns apply to AI as to traditional SaaS products; the concerns just seem amplified in this space. So I'd start by saying you likely have a procurement process. If you don't, you should implement one. But think about how you can add AI into that process and what the specific questions are. The big ones, if I'm buying, start with data. How is this vendor going to be using my data in this tool? Are they going to be using it for training their model?
01:25
Laura Belmont
If they are incorporating a third party model like OpenAI's ChatGPT or Anthropic's Claude, then I'm also going to want to know how that vendor is using my data.
01:38
Adam Stofsky
So the thing I don't always understand is why everyone talks about data. Why exactly do I care about the model being trained on my data? Why do I care?
01:48
Laura Belmont
It's a great question, and one that everybody should be thinking about even before the training question, so I'm glad you brought it up: what's my use case here? When we're talking about AI, the use case, how we're using the tool, is in so many ways almost more important than the tool itself. Even the way the regulations approach AI, they're not regulating the tool, they're regulating how the tool is used. Is the tool being used in a healthcare space? Is the tool being used to make a really important determination about somebody's credit history or their health? So how we're using the tool is incredibly important. And it's good to think about so that we don't get hung up on questions that maybe don't matter.
02:40
Laura Belmont
If I want to use an AI tool to create some cute little pictures to go in my marketing, right, or if I'm using it to make a video, that's very different than using an AI tool to analyze my sensitive information, personal information, or confidential information. So we should be asking about that use case, because maybe the tool is okay, and the bigger issue is what my policies and my internal governance look like for how we at an organization can use the tool. Is it okay for this use case but not for that use case?
03:18
Adam Stofsky
Can I ask you a follow-up? I want to make it more concrete. So I'm trying to understand the risks involved with using an AI product not based on what the product itself is or does, but on how you're going to use it. Here's an example from our own work, because that's on my mind. We do a lot of animation, we make a lot of videos. So we use some text-to-voice tools, and whenever there's a new text-to-voice tool, you want to try it, because it's interesting and can make our work easier. So one use case is: I have a script for a video and I want to generate the voice completely from the model, right?
04:01
Adam Stofsky
And it creates this kind of voice track. Okay, that's one use case; that's pretty cool. Or I have professional voice actors I work with, that I have contracts with, and we want to dub their voices into different languages. Those are meaningfully different use cases, because one involves someone else's intellectual property. In fact, the entire source of their livelihood is their voice. Is that a good example of how subtly different use cases might really change the risk profile of using a tool?
04:33
Laura Belmont
Sure, I think that's a good one. So going with that example, the question we should ask, and this is where somebody's concern lies, is: would it be a problem if a third party had access to this information? Because when we're asking whether data is being used to train a model, I think we're less concerned about the third party having access to that data; we're worried that the information we give that third party could somehow make it into an output that gets spit out to other people. That's why the generative aspect of AI seems to be more concerning. It's not necessarily what you, the vendor, are doing with this data; it's could somebody else get access to it?
05:17
Laura Belmont
So in that scenario, you know, if you're using a consultant, you likely have terms in your agreement that talk about what is kept confidential and what is considered IP or their proprietary information. So even putting aside AI, ask the question: would this be a problem if I gave it to a third party? If somebody unrelated to this transaction had access to it? And this is another example that we've talked about.
05:47
Adam Stofsky
In that case, it would be a huge problem.
05:52
Laura Belmont
Exactly. So we should set the AI tool aside, right, for a minute and say: what is the concern here? Would it be a problem if somebody else had access to this data? Either because my contract with that person says they can't have it, or because it's confidential and I've been entrusted with this information. And this is something we talk about with a lot of organizations. We know there are these consumer versions of AI tools, and those free versions typically allow model training where an enterprise version does not. So some organizations say, okay, there's no way we're going to get everybody in our organization an enterprise seat. So yes, you can use your personal seat, which could train on our data, for these use cases, right? It's not confidential; you're just using it to iterate, to help problem-solve.
06:42
Laura Belmont
You're trying to get a research project put together. Coming up with those use cases matters, because it's true that we're not going to be worried about every use case.
06:53
Adam Stofsky
So, and I'm just completely making up examples as I go here, let's say I run a doctor's office, a doctor's group with 10 doctors, right? And I like to create materials for my patients to read about certain conditions they might have, and how to treat them. So I can use AI to generate those kind of generic materials, like, hey, you have the flu, here's how to treat yourself. I'm completely making this up on the spot, but just as an example, I can use Perplexity or ChatGPT or one of these tools to do that.
07:21
Laura Belmont
Right, that's, you know, publicly available information, and I want to put it together and style it in a way that will be beneficial for my clients. There's nothing particularly confidential about that. Now say you take that and you're saying, okay, this is my client's diagnosis, and I want to take that information and give them something tailored. That's where I'd say I don't feel comfortable putting that client's information into a tool. But you could even back it up and say, well, what if I just put these factors in without the person's name, and made sure there was no personal health information? So there are ways to get around these concerns, but I will caution that there's always scope creep.
08:07
Laura Belmont
So you might say, yes, you can use this tool for this purpose. But once somebody has that great tool in their hands, they might want to keep using it, which is why it's really important that for all of these tools, we're looking through the terms and seeing what they're contractually providing.
08:25
Adam Stofsky
Right. Okay. This is really interesting. So we've talked about data usage, which is really critical, like inquiring how data is being used, but also really understanding the use case. Any other key categories or key things that buyers of AI should be thinking about?
08:44
Laura Belmont
Yeah, so security is always going to be on the list, and again, because it's so important for data. So we're thinking about how the model is being deployed, and I'll give an example. For some of these chatbots we're using, even if they're saying no data training, we still could be concerned that the third-party model has access to our data. Because what if there were a security incident at that organization, right? So we want to think about those questions, and the way that a model is deployed changes that. If you're just using an API, or you're using the cloud, they likely have access. Then you want to know, well, how long are you keeping that data? There are other deployment models too. Say you're using AI infrastructure like Amazon Bedrock, or maybe Azure.
09:32
Laura Belmont
There are ways that, when models are deployed in those systems, the data is not even getting sent to the model provider; it's living within that environment. So it's not adding a new risk if you already have the environment. Or is the model locally deployed on your server, so it's entirely within your control? We really want to know how the model is being deployed, because that tells us who else could have access if there were to be a breach. Am I introducing new risks to our organization? It doesn't matter whether they're training on it; just by the nature of them holding that data, the risk is there.
10:09
Adam Stofsky
Right, very interesting. Anything else? Anything else on our list of key things to think about or ask when buying an AI tool?
10:17
Laura Belmont
There are so many, and it really depends how far down that chain you're going to go.
10:22
Adam Stofsky
Give me a lightning round, give me like a rapid fire, a few others.
10:25
Laura Belmont
Okay, so lightning round. If you're using a third-party model, which model are you using? Can the customer switch between models? Do I have the option to choose one model or another, because I like Llama more than I like, you know, ChatGPT? I don't actually; I'm just saying that as an example. Do you provide notice if you change an underlying model, because then it's different terms and conditions? Can I opt out of any sort of model training? Can I disconnect certain features or connectors that you might have to other systems? How is the data being processed and stored? Are you sending the data anywhere? Are there any cross-border data concerns that I'm worried about? And always check the contract. I don't think of this as a questionnaire-type question, but who owns the data, right?
11:21
Laura Belmont
That should be in the contract. A huge thing is making sure I still own my data inputs. Do I own the outputs? And if I own the outputs, did you carve out a license for yourself so that you can use them? So what does the ownership look like? Then there's a ton more: what's your AI incident response policy? Do you have a human review process? Is all of this automated? Again, that goes back to your use case, where some regulations are saying there has to be human oversight involved. What does your documentation look like? I mean, we could probably go a hundred questions.
11:56
Adam Stofsky
Yeah, that's a pretty good list. Can I ask one more? We've got to wrap up because we're getting to our time here, but how about pricing? This again is something that's just come to me, like pay-as-you-go pricing. I'm thinking about AI agents and a few tools I've used, and it's sort of a black box: are you charging by tokens, or by time, or by something else?
12:18
Laura Belmont
Adam, it's a great point, because I've talked to people who are like, oh great, I'm paying $40 a seat per month. And then they're like, but wait, I just got timed out of the system, or I already put in too many tokens. So that's a fabulous question: really understand what the usage policies look like and how the pricing works with them. Typically you're going to have your price per seat, but depending on what you're doing and how intensively you're using a tool, what sort of data you're putting into it, what the analytics are, you can rack up prices incredibly quickly. So I think transparency into that is so important. But just for people to know, that's an issue. It's not just going to be your flat fee per month and then, great, you can run with it.
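The pay-as-you-go dynamic Laura describes is easy to see with back-of-the-envelope arithmetic. The sketch below is illustrative only: the seat price and per-million-token rates are made-up assumptions, not any vendor's actual terms.

```python
# Illustrative sketch of usage-based AI pricing.
# All rates and usage numbers are hypothetical assumptions,
# not any vendor's actual pricing.

def monthly_cost(seats, seat_price, input_tokens, output_tokens,
                 input_rate_per_m, output_rate_per_m):
    """Flat per-seat fee plus metered token charges (rates per million tokens)."""
    token_cost = (input_tokens / 1_000_000) * input_rate_per_m \
               + (output_tokens / 1_000_000) * output_rate_per_m
    return seats * seat_price + token_cost

# A "$40 per seat" plan for 10 seats looks like a flat $400/month...
base_only = monthly_cost(10, 40.0, 0, 0, 3.0, 15.0)
print(base_only)   # 400.0

# ...but heavy metered usage (50M input tokens, 10M output tokens at
# hypothetical $3/$15 per million) nearly doubles the bill.
with_usage = monthly_cost(10, 40.0, 50_000_000, 10_000_000, 3.0, 15.0)
print(with_usage)  # 700.0
```

Even at modest per-token rates, metered usage can dwarf the flat seat fee, which is why the usage policies and pricing mechanics belong on the contract-review checklist.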
13:03
Adam Stofsky
I like how these interviews are just me asking you about things that have come up for me in the last week, you know?
13:09
Laura Belmont
How about, "let me tell you about this very specific issue, what would you do here?" I am not your lawyer. But that was not a lawyer question, so I didn't have to worry about answering that one. And it's a great point.
13:21
Adam Stofsky
No, really, Laura, this is great. I think we should wrap here, because there's just so much. But this was a great intro, a deep dive on the most important questions around data usage and use case, and then some other key questions around buying AI and AI tools. Really, thanks so much.
13:38
Laura Belmont
And, Adam, I'll just say you framed it really well. I said there's probably a list of 100 questions, but again, it's always good to ask: why do I care? Based on my use case, is this a question that matters for how I'm using the tool?
13:53
Adam Stofsky
All right, thanks so much.
13:54
Laura Belmont
Thanks, Adam.
<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1159873250?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Laura Belmont - Buying AI and AI Tools"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>


