The biggest barrier to AI adoption is the law.
Sound surprising? It shouldn’t be, but it is. Talk to most C-suite executives, general counsels, and AI thought leaders, and you’ll hear about big challenges: AI competition with China, self-driving cars, AI agents talking to each other autonomously, AI’s impact on social media, artificial general intelligence, the environmental impacts of data processing, and more. You’ll hear nebulous, philosophical ideas, debates about ethics, references to Skynet, and a mix of fear and excitement all at once.
And, of course, there is nothing wrong with this. These are all important problems, but they are not the biggest challenges we face when it comes to AI.
Instead, as is often the case with new technologies, the biggest challenge is more mundane, less sexy, and, some might even say, boring: the teams and the tools companies use for AI development are bifurcated and misaligned. None of the other big, sexy, exciting problems can be solved without fixing this first.
On the one hand are tools designed to help data scientists optimize AI for performance: tools that help train models, measure their recall, monitor their accuracy, and more. On the other hand are tools designed to help lawyers (and privacy and risk personnel*) understand AI risks like bias, privacy, copyright infringement, and other liabilities. These tools help with impact assessments, documentation, workflow management, and more. Because these are different tools, they separate the technical experts from the risk experts, which ultimately means that AI risk management efforts are destined to fail.
Over the last decade of focusing on AI risk management, I have seen the same problem again and again and again. Say, for example, that a company wants to deploy an AI system—or in some cases, has already deployed it—but is worried about its risks. Here’s how the process for addressing the system’s risk usually goes.
Day 1 of the process
Data scientist: Is it ok to deploy this model?
Lawyer: It depends. Tell me about your model, preferably in writing so I can carefully review it.
Data scientist: Ok, let me get back to you. I just need to round up the right information for you.
Day 7 (one week later)
Data scientist: I just sent you an email with a bunch of information about the model. I had to get in contact with the business unit that is using the model and then with the other data scientists on my team who helped me develop it. That meant scraping information from GitHub, some Jupyter notebooks, JIRA, and a few other places. But now you should have all the information you need. Is it ok to deploy it?
Lawyer: Thanks, let me review this information. I’ll get back to you.
Day 14 (two weeks later)
Lawyer: This information was helpful, but we need more to understand the system’s risks. Who reviewed the data the model was trained on? Did you ever test it for bias?
Data scientist: Let me get back to you.
Day 21 (three weeks later)
Data scientist: Sorry this took me so long, but yes, we had someone from the office of the Chief Privacy Officer review the data. I can send you their approval via email. But we never tested the model for bias. We can run those tests, but the business unit needs this model deployed as soon as possible. What tests do we use?
Lawyer: Let me get back to you on the right tests.
In this scenario, the lawyer would then ask other lawyers or outside counsel for the right tests to use and pass them along to the data scientist, who would work with their team to find the right testing code (sometimes open source, sometimes from vendors) or build it themselves, run the tests, present the results to the lawyer, and then have another long back-and-forth about what to actually do with the bias they are likely to find.
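For readers curious what such a test even looks like, here is a minimal sketch of one common check, a disparate impact (demographic parity) ratio, written in plain Python. The predictions, group labels, and the four-fifths threshold below are illustrative assumptions only, not a recommendation; choosing the right metric, groups, and threshold for a given use case is exactly the judgment call the lawyer and the data scientist need to make together.

```python
# A minimal, illustrative bias check: compare positive-prediction ("selection")
# rates across groups and compute their ratio. All data here is hypothetical.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate for each protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical model outputs (1 = approved) and group labels for ten applicants.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio, rates = disparate_impact_ratio(preds, groups)
print(f"Selection rates by group: {rates}")
print(f"Disparate impact ratio: {ratio:.2f} (many teams flag values below 0.80)")
```

The point is not this particular metric. The point is that even a ten-line check only becomes meaningful once legal and data science agree on which metric, which groups, and which threshold actually matter, and today that agreement happens over weeks of email.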
This process usually takes months (typically in the 3-6 month range, although I’ve seen it take longer in some cases), is ad hoc and, most importantly, is not scalable. The central problem is that the data scientists and the lawyers are working in different worlds, using separate tools, and therefore managing risks from entirely different perspectives.
Here is why that lack of scalability is such a problem: the process is too manual, too time-consuming, and too ad hoc for companies to use AI in practice. That leaves companies with a choice to either

a) play fast and loose with the approval process for AI risks, sometimes ignoring that process altogether and deploying risky AI systems without real oversight,

or

b) allow that process to slow things down so much that data scientists and business units can’t actually deploy AI at the speed they need to.
As long as legal review of AI systems is impractical, it simply won’t take place the way it needs to, leading to legal issues for some companies and widespread noncompliance for others. Businesses will lose time and value. Customers will interact with biased, privacy-violating, copyright-violating, or otherwise harmful AI.
Nobody wins.
Which brings us to our mission at Luminos. Our goal is to help automate the approval process for AI risks, allowing companies to move fast when adopting AI and enabling lawyers, privacy teams, and other risk personnel to keep up. Our goal is to align all the different tools and processes involved in AI governance so that they can actually be implemented in practice. We want data scientists and lawyers to work together to minimize the harms AI systems can generate. We want AI systems in the world that actually reflect our values and uphold the law. And we’re only just getting started.
To learn more about what we’re building at Luminos and how it can help you, reach out to contact@luminos.ai.
Otherwise, follow us here or on LinkedIn and stay tuned for more!
* Note: I use “lawyers” in a broader sense throughout this article to stand for the people with the “risk hat” who review and approve AI models, knowing full well that it’s not only lawyers in this position. Sometimes it’s privacy officers, compliance personnel, responsible AI teams, and so on; who sits in this exact role can vary across companies.