
00:03
Adam Stofsky
Hey, I'm here with Matthew Coleman, who is a partner at Orrick, specializing in technology and privacy and AI. Hey, Matthew, how are you today?
00:10
Matthew Coleman
Doing great, Adam.
00:11
Adam Stofsky
Good. Today we're going to talk about AI governance. Let's just start with a basic question. What is AI governance? What are we, what are we talking about here?
00:20
Matthew Coleman
When we're talking about AI governance, we're talking about controlling the use of AI tools and the data that flows through those tools. So when you're thinking about governance, you're thinking about how you oversee the usage of a system so that you're controlling for the risks you've identified in the use of that system. Generally, in any kind of data governance, whether it's AI or privacy or cybersecurity, you're asking: do you have the right people in place, do you have the right policies and processes in place to assess, identify, and control for risks, and do you have the right technical controls in place to manage those risks in an automated way? Those are long-standing principles.
01:04
Matthew Coleman
And if you can stand up a program, an AI risk management program or an AI governance program that addresses all three of those elements, then you're going to have pretty good eyes on target.
01:15
Adam Stofsky
So just to recap that, it's a set of systems, people, policies that help a company manage the risks that using AI tools potentially creates. Is it really that simple?
01:28
Matthew Coleman
Yeah, it really is. It's controlling for the unknowns, right? Having as much visibility into the AI systems within the organization as possible, and being able to exercise some semblance of control over the use of those systems.
01:43
Adam Stofsky
So what are some of those risks? I mean, I can dream up a bunch of ideas of why AI might be risky, but what do you think are the most common that you see today?
01:52
Matthew Coleman
It really depends on what the AI system is and what its intended use is. It all starts from the use case. Most of the AI-related controls we think of are about how you protect against certain harms. For a generative AI system like a large language model, one of the concerns we think about is ultimately how you're locking down the use of your data so that it doesn't get absorbed by the large language model, go back into training the provider's foundational model, and then get repurposed and used for other customers. Doing that comes with certain risks: losing confidentiality protections, and in the legal world, losing privilege protections.
02:37
Matthew Coleman
If it's your proprietary information and you're keeping that data as a trade secret, you could risk losing that type of protection. And so there are certain controls you can put in place, like contractual controls and technical controls, to restrict those foundational models from using that data for training. If you're thinking of a predictive model that makes decisions about people, whether they're entitled to certain benefits or rights, or, in the employment context, whether they get a job or a promotion, those are legal risks that have really significant effects on people. And there are a number of laws out there, particularly privacy laws, that focus on high-risk AI systems that have those kinds of automated decision-making tools embedded within them.
03:23
Matthew Coleman
And so, you know, if you're denying someone a mortgage or employment because an AI tool said this person isn't a good fit, without any kind of human oversight, that's a major risk to that individual and their rights. So there are certain controls you would need to put in place to notify them of those decisions, to give them the ability to appeal, that kind of thing.
03:42
Adam Stofsky
Wow, that's a lot of risks. There's one more potential risk I want to ask you about, one I think about all the time: intellectual property. Say I'm a software engineer, or maybe a marketing person at a software company, and I'm trying to come up with ideas for how to pitch my new product. So I take the plans for this cool new product and drop them into a large language model. Is there a risk that our secret sauce, our trade secret, kind of gets out there? Is that a risk that you think about?
04:15
Matthew Coleman
It's a risk if it ever gets challenged, in the sense that if you're trying to assert a trade secret in any kind of litigation and it comes to light that you divulged that trade secret to a large language model without adequate protections, then a court may find that you don't actually have trade secret protection. And so, yes, that is a potential risk.
04:35
Adam Stofsky
Okay, interesting. So what are the consequences? Obviously there are reputational consequences, all kinds of issues, but what are the legal consequences if a company just ignores this and ignores AI governance completely? In broad strokes.
04:53
Matthew Coleman
In broad strokes, in addition to some of the things we already talked about, like losing protections for some of the data that you're processing, there are contractual risks. If you are a B2B business and you have a bunch of contractual obligations to protect your customers' data, and then you turn around and feed all of that data into an AI system without the right kind of controls, you could be in breach of those contracts, whether that's the core confidentiality provisions or a data processing agreement that says you're only going to use people's personal data in the ways we tell you, which does not include sending it to any third party for that third party's independent use. So there are contractual risks.
05:35
Matthew Coleman
There's also the risk of things going wrong. There are some classic cases where a company uses a chatbot to interact with customers and the chatbot misquotes a policy, like the airline whose chatbot misquoted its bereavement policy. Not only are you out whatever money the chatbot has now said you owe the customer, it's also a reputational issue on top of that kind of legal and business issue.
06:01
Adam Stofsky
So Matthew, my next question is how can companies actually assess the risk? How can they look at AI tools and determine what the risks are and I guess how to weigh the benefits against those risks?
06:16
Matthew Coleman
There are long-standing elements within global enterprise risk management programs that look at third parties and the systems and IT services you're procuring and do some diligence before you onboard them. They look at those technologies, how they're developed, and their security programs to see how they're protecting your data. They look at the contractual protections those companies are affording you and whether they adequately address the risks associated with how the tool is intended to be used. So you leverage that existing process to do due diligence on AI vendors, or vendors that have some kind of AI component to their software, and look at how the AI was developed and how they're going to be using your data.
07:04
Matthew Coleman
Are they reserving the right to train their models on the underlying data, and does that create any risks? Is the intended use case of that model, or of that AI system, something that could be considered high risk that you then need to control for? So for organizations that have good governance, there's usually going to be some kind of upfront diligence and assessment process, and then you figure out what processes, technical controls, or contractual controls you need to put in place in order to mitigate risk.
07:36
Adam Stofsky
Okay, so a couple more questions just about the nuts and bolts of how this plays out day to day.
07:42
Matthew Coleman
Yeah.
07:43
Adam Stofsky
So how does AI governance all come together in a company? If I'm an employee at a company, how do I know what the rules are? Do they get told to me? Is it in my contract? Is there a policy? How does that tend to work?
07:55
Matthew Coleman
Right. Every company is a little bit different in terms of how they're thinking about these issues and where they are in their AI governance maturity, call it. For any average employee, to the extent that an organization has an AI governance program, it should have been made available or made known to employees.
08:12
Matthew Coleman
And there's usually some committee or responsible person who serves as a point of contact you can go to with any questions. If they have developed an AI acceptable use policy, that should have been made available within the organization, and it will usually set out the rules of the road: what you can and cannot use AI tools for, as well as which AI systems have been assessed by the AI governance committee, gone through the approval process, gotten the right contractual protections in place, and been approved for certain use cases.
08:45
Matthew Coleman
And generally, employees should stay within those bounds: using the vetted software, using the software that's within the company's IT environment, and not using publicly available large language models, because that could have legal implications for the organization. With those public systems, the protections are usually less concrete and they take a lot more liberties with the use of data. And if there's anything that's gray or not clearly answered, just ask the question and figure out whether there's a business justification for bringing on a new vendor for whatever particular use case the employee has in mind.
09:27
Adam Stofsky
Right. So my last question was, and I suppose still is, what employees can do to reduce the risk of AI use to their company and to themselves. You've kind of answered it already: follow the AI use policy if it exists. But can you summarize some basic ground rules for how average employees should be thinking about using AI?
09:48
Matthew Coleman
Yeah, totally. There's such a built-in incentive for companies and employees to use AI tools because it makes you more efficient. It's a way of expediting workflows, of ideating things you may not have thought of before, of just making you generally a better employee and better at your job. There are a lot of really good reasons for using AI tools. The important thing to consider is how you're using them, what the controls are to mitigate the risks that we've identified, and whether you're doing it in collaboration and communication with the people who are responsible for managing those risks. So again, you may not be aware of everything that the organization is assessing or has approved.
10:31
Matthew Coleman
And so it's always worth asking the question if you need a large language model, or if there's a particular workflow AI tool that you think would be awesome to use in your job. Ask the AI governance team or the IT team: hey, is there something that does this? Is there something you've looked at and approved that I can use for this particular purpose? And even if the answer is no, it starts the conversation, and they can get the gears turning to do that assessment, negotiate a potential contract, and potentially bring on that service if there's, again, a strong business case for it. What I would recommend steering away from is using the publicly available systems, even if they're not blacklisted.
11:13
Matthew Coleman
Some companies just outright blacklist on their IT systems any access to the publicly available instances of large language models, for example. And again, that's for very good reason. But even if they're not blocked, I would suggest steering away from them and seeing what your organization has actually approved for you.
11:33
Adam Stofsky
Great. Matthew, this is really interesting. Thanks so much.
11:36
Matthew Coleman
My pleasure. Thanks for having me.


