Generative AI and Bias
4:28
Legal Disclaimer 
The information provided in this video does not, and is not intended to, constitute legal advice; instead, all information, content, and materials available on this site are for general informational purposes only. The law changes fast, so information in the video may not reflect the most up-to-date legal or other information.
Transcript

00:07
Adam Stofsky
Annette, how about this issue of discrimination and bias? I know we hear about it a lot; there are a lot of fears that getting AI involved in things like interviews, hiring, or prospecting for hiring creates some real potential problems around discrimination. But that's not really generative AI, right? That's more predictive AI. Are there risks created by generative AI around bias and discrimination?


00:35

Annette Hurst
I think you're right that the risk is more obvious in terms of how harm can occur with predictive AI: bias in lending, in resume screening, in these kinds of activities where algorithms are making recommendations that control or allocate a resource to a person. That can be very serious if there's bias in the loop. We've heard about the problems with facial recognition algorithms and racial bias; there are some serious issues there. There are also possibilities for generative AI to have meaningful biases that can harm and that are not obvious on the surface. Some of them are maybe just a frustration, a quality-of-service harm. Take a chatbot for customer service, or something like that, where it's a voice interface.


01:33

Annette Hurst
It may not be trained as well to recognize women's voices as men's voices, and so women experience a worse quality of service as a result. But there are also lurking potentials for more significant risks. Let me give you an example. Let's say you were going to train a generative AI on a whole body of medical research literature, and you were going to use it to help you generate ideas for a new treatment of some kind. Well, there are existing biases in the medical research literature; a lot of the test subjects over the years have been white men, for example. So if that set of associations creeps into the literature that is used to train the model, it can also be reflected in the output.


02:27

Annette Hurst
So depending on the use cases for generative AI, if the use cases are very serious ones involving important life events, financial events, and other things that can cause human beings harm if something goes wrong, then we need to be more sensitive to the possibility of bias, even in the case of generative AI.


02:53

Adam Stofsky
So how does an executive or entrepreneur, or really anyone working at a company, approach this? Is it a matter of really understanding the tools and their limitations before using them to make very meaningful decisions? I mean, can you insure for this sort of thing yet?


03:16

Annette Hurst
Probably not yet. Yeah. 


03:18

Adam Stofsky
So what do you do? What's an entrepreneur to do?


03:21

Annette Hurst
So it's a great question, because everybody wants, you know, just tell me what to do. I want an acceptable generative AI use policy. Can you just give me that off the shelf, please, Annette? But the reality is the only thing off the shelf is "don't use it," and that's not practical or realistic, because people are going to use it anyway.


03:41
Annette Hurst
So the best thing to do is to make the kind of risk assessment that people and businesses make all the time: analyze the area of operations where you might use a generative AI tool, then weigh the costs and the benefits. The benefits being increased productivity and the other things we've discussed; the costs being some set of risks. Then decide: is this an area where I'm going to allow the use of such a tool? Also, as we've discussed throughout, you can mitigate risk by having commercial terms with a tool provider or through tool selection. So if it's an area where you want to use a tool but minimize risk, you may also do that through tool selection and through commercial terms of service.
