
00:00
Adam Stofsky
So today we're going to talk about AI disclosure requirements. That's a fancy term for what companies need to tell the public or regulators about their use of AI. So why don't you give us the broad strokes. What does this mean? What are AI disclosure requirements?
00:32
Shannon Yavorsky
Yeah, it's such a good question. There are really three buckets of things to think about here. The first is making disclosures to consumers around when they're interacting with AI or with AI-generated content. So think about the chatbot disclosure laws that are emerging, which require companies to indicate when a chatbot is AI-powered rather than a human, so you can't deceive people, right? Which is why you're starting to see these chatbots very clearly labeled: this is our AI-powered legal bot, or whatever they want to call it, an AI-powered customer service helper, because that is now a requirement of the law. Separately, there are requirements to label certain content as having been generated by AI.
01:29
Shannon Yavorsky
So, for example, where an image has been generated by AI, there are certain circumstances in which you have to indicate that it's an AI-generated image. And then there's this third bucket, which I put under disclosure obligations, but it's more about disclosing how a certain AI system was trained. There are disclosure obligations in relation to the data sets that were used to develop the AI. So there are those three different things: when you're interacting with AI, like in the chatbot world; when you're interacting with AI-generated content, like an image generated by AI; and then the third bucket, which is how the AI was developed, what went in to build it. And there are disclosure obligations for each.
02:26
Adam Stofsky
I have a lot of questions about all of these categories, but let me start with a broader question. What kinds of companies need to worry about each of these? Is it only companies that are actually building foundational AI models that need to worry about that third category, training data? Who needs to worry about all of these things?
02:47
Shannon Yavorsky
It's a really interesting question around who has to make these disclosures. There are lots of companies that are thinking through whether they're a developer or a deployer, right, under applicable law. And really, the data set transparency obligations, the question of whether companies have to disclose what data was used to train the AI models, that's really for the developer: they have to maintain detailed documentation describing the training and testing data, quality controls, and, for example, the steps they took to prevent bias, and they have to make that documentation available to regulators and sometimes in summarized form for users. So the data set question is really on the developer side of the house. On the deployer side of the house, if you're a regular company that is deploying third-party chatbot software, then oftentimes the software will allow you to configure it so that it makes those disclosures.
03:57
Shannon Yavorsky
But it's your primary legal obligation to present that notice to the consumer because you're the one interacting with the consumer.
04:04
Adam Stofsky
Let's say I develop a chatbot tool that's trained on all kinds of data. I'm a developer of this chatbot, so I have to disclose to all of my customers, and definitely to regulators, what data it was trained on. But what if I give my customers the ability to also train their version of that chatbot with, let's say, their own records of the last, I don't know, two years of customer interactions, so the chatbot is learning what kinds of challenges their customers have? Does that kind of user of the AI software become a developer because they're training a tool on their own data?
04:42
Shannon Yavorsky
Yeah, I think that's exactly right. There's a line, right, and you have to go back to the legislation and figure out whether you've crossed it, from being just a deployer to being a developer as well, if you are, for example, fine-tuning a model. And I think this is something that, you know, we talk about as a really core requirement for building an AI compliance program. Frankly, whether you're a developer or a deployer, transparency in this way, like having model cards and data sheets, is becoming a baseline expectation. So this concept of transparency is a through line in all of the legislation, from the chatbot laws to the transparency requirements around data sets. It's this idea of being clear with people, with consumers, that they're interacting with AI and about how the models were built.
05:47
Adam Stofsky
So let's get into the consumer side, the warning labels, as it were, both about whether a user is interacting with AI and whether they're seeing or interacting with AI-generated content. Just say a little more about this. Where are the lines drawn about what is AI and what's not? Let's start with a hypo, because we're lawyers, right? We'll do a hypo. So let's say I go on, I don't know, Stable Diffusion, or go on to OpenAI, and I say, generate an image of X, Y and Z, and it just generates an image. I'm assuming that if I then publish that image, under some of these laws I'll have to say, hey, this image was created by AI, right? Because it was 100% created by AI. No human in the loop, just a fully AI-created image.
06:34
Adam Stofsky
What if, I don't know, I have one of those AI headshots, where I take just a photo of myself hanging out in my backyard and have it create a background and transform it into a headshot? They call it an AI headshot, something like that, where AI is kind of involved. Do these laws still require me to disclose that it's AI-generated?
06:57
Shannon Yavorsky
So I would go to the core law in this space right now, which is the FTC's Section 5 prohibition on unfair or deceptive acts and practices. And if the AI-generated or manipulated image could mislead consumers, so, you know, a fake endorsement or a fake event or altered evidence, or...
07:28
Adam Stofsky
A product image that looks way nicer than the actual thing does.
07:32
Shannon Yavorsky
Yeah, totally. That's exactly right. Then that could constitute a deceptive or unfair practice under Section 5. And then, you know, it's a good teachable moment for sales teams, to help them understand that's an obligation, that they can't deceive consumers and show something that could be used to mislead folks. And I would say the EU AI Act also includes very specific provisions in relation to synthetic content, so AI-generated or manipulated content, and the requirement is to clearly label or disclose that the content was artificially generated, unless it's obvious from the context, such as in artistic or fictional settings. And then there are much stricter obligations when the content depicts real people or events that could mislead the public.
08:41
Shannon Yavorsky
So really, if you keep in mind that the core principle there is not misleading people, I think companies will do the right thing if they understand that's the principle they're sort of aiming for.
08:55
Adam Stofsky
Okay, great. Well, just to summarize: AI disclosure laws, which govern when companies need to disclose that they are using AI or making AI, come in several different flavors. One flavor is for the actual developers of AI tools. They need to disclose the data the tool was trained on, and there are maybe some blurry lines over who counts as an actual developer of AI, but that's what that category needs to do. And then for companies that use AI, you need to disclose if a customer is interacting with some kind of AI chatbot, for example, or, I'm assuming, an agent as things continue to develop, and if content was created using AI. I think that summarizes it.
09:41
Shannon Yavorsky
Right.
09:42
Adam Stofsky
Right. Shannon, great summary.
09:43
Shannon Yavorsky
Yeah. Great summary.
09:44
Adam Stofsky
So I'm sure this is going to open up tons of interesting questions, like how does this play out with music, or in video games, or in other kinds of media? I mean, there's just so much here to develop as this law matures, right?
10:01
Shannon Yavorsky
Yeah, I think that's exactly right. And just keeping in mind that AI transparency is really shifting from a best practice to a legal expectation is a good true north here.
10:13
Adam Stofsky
Great. All right, Shannon, thanks so much.
10:16
Shannon Yavorsky
Thanks, Adam.
<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1175498782?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Shannon Yavorsky - AI Disclosures Overview_2"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>
