Generative AI and Hallucinations
Legal Disclaimer 
The information provided in this video does not, and is not intended to, constitute legal advice; all information, content, and materials available on this site are for general informational purposes only. The law changes quickly, so information in the video may not reflect the most up-to-date legal or other information. 
Transcript

00:06
Adam Stofsky
We've been talking about the risks of an AI tool essentially aiding and abetting the infringement of various intellectual property. But what if it's just wrong, and you use that information? You hear about this all the time: AI is lying, it's incorrect. What are the risks of that? Can you comment a bit on that? 


00:27
Annette Hurst
So that is a really significant risk, and people need to understand this. These generative AI tools will deliver information to you ten out of ten times with absolute confidence, right? Certainty in the accuracy and reliability of that information. But some number of those times, it's going to be completely wrong. It's going to be false, untrue. People have started calling this hallucinations, right? It just made something up.

And people might have seen a story in the New York Times, you know, about a lawyer who wrote a brief using one of these tools, and it just made up case law citations. The lawyer didn't understand that capability was there, should have understood it, and submitted a brief to the court that was completely made up. So everybody needs to understand this risk of the AI delivering information to you that is completely made up, because that can result in defamation, libel, or slander if you say something false and slanderous about a person. It can result in false advertising if you make a factual assertion about a product or service that's false. If you have a public company and your investor relations people put something false out into the marketplace, you could find yourself on the wrong end of a securities fraud lawsuit.

There are so many ways in which an AI making up something wrong can go badly. That's really why it's important to have humans in the loop before anything created by an AI goes out into the world. 


02:05
Adam Stofsky
But doesn't this radically diminish the efficacy of these tools? I mean, if you've got to sit there and have a human fact-check them, doesn't it kind of defeat the purpose? 


02:17
Annette Hurst
I mean, I would say no. I would say that it can still really enhance your efficiency and your productivity, even though it's also super important to do the right thing and make sure that everything you're sending out into the world is correct. 


02:33
Adam Stofsky
So, bottom line: fact-check your AI outputs for hallucinations. 


02:39
Annette Hurst
Absolutely necessary. 

© 2024 Briefly