AI Incident Database: Incident #1324
A Texas-based healthcare AI startup claimed their technology could achieve a 1 in 100,000 hallucination rate. The state attorney general disagreed. What happened next should matter to every company making claims about AI performance.
This is the kind of incident that flies under the radar. Most people in AI governance have never heard of it. But it demonstrates something critical: the legal environment AI operates in isn't new, and state-level consumer protection laws apply just as much as any federal AI regulation.
Pieces Technologies, a healthcare startup, marketed a clinical AI system with what the Texas attorney general called misleading performance claims. The company stated their technology was capable of a 1 in 100,000 hallucination rate.
In healthcare, that number matters. A hallucination about what disease someone has can lead to serious harm or death. The stakes are high, which is exactly why generative AI deployments in medicine get scrutiny.
The problem: achieving that performance level would have been extraordinary. As Sean McGregor, founder of the AI Incident Database, put it on Inside the Incident: "They wouldn't be a startup that raised a hundred million dollars. They'd be a startup that raised $10 billion because that just was not actually in the cards for most companies at that point with generative AI."
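To see why a claim like that is so hard to substantiate, consider how much error-free evaluation evidence it would take just to bound a hallucination rate at 1 in 100,000. The sketch below is a rough back-of-the-envelope check, assuming independent, expert-reviewed samples and the standard zero-failure confidence bound; it is illustrative only and says nothing about how Pieces actually evaluated its system.

```python
import math

def samples_needed(target_rate: float, confidence: float = 0.95) -> int:
    """Minimum number of consecutive error-free evaluations needed to claim,
    at the given confidence level, that the true error rate is below target_rate.
    Solves (1 - target_rate) ** n <= 1 - confidence for n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - target_rate))

def zero_failure_upper_bound(n_clean: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the error rate after n error-free evaluations
    (the 'rule of three' gives roughly 3 / n at 95% confidence)."""
    return 1 - (1 - confidence) ** (1 / n_clean)

target = 1 / 100_000  # the claimed hallucination rate
n = samples_needed(target)
print(f"Error-free, expert-reviewed outputs needed: {n:,}")   # roughly 300,000
print(f"Upper bound after that many clean samples: {zero_failure_upper_bound(n):.1e}")
```

In other words, even under generous assumptions, supporting the claim would require on the order of 300,000 expert-reviewed outputs with zero hallucinations, before accounting for differences across clinical settings and note types.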
The Texas attorney general filed a complaint. The case settled quickly, just a few months later. The settlement imposed two requirements on Pieces Technologies:
Marketing and advertising disclosures: Every claim about the product must be backed by substantial evidence and held to clear standards. In practice, that means third-party or independent verification for public statements about AI performance. For a startup trying to grow, that's a significant operational burden.
Consumer harm disclosures: The company must comprehensively disclose all potential harms to users of the product. Not just standard disclaimers, but detailed, thorough documentation about what could go wrong.
What Texas essentially told Pieces Technologies: if you're going to operate at this level, you need to be 100x more thorough about disclosing risks and backing up claims than a typical company would be.
If you're building AI technology outside of Texas and shipping it to customers in other states, does this case apply to you? Yes.
Just because AI is new doesn't mean the legal environment is new. Consumer protection laws, state attorney general enforcement powers, and advertising standards all apply to AI systems. This case demonstrates that you need to pay attention to more than just federal AI regulations or state AI-specific laws.
State consumer protection agencies and attorneys general have broad authority. If your AI system crosses state lines or serves customers in multiple jurisdictions, you're operating under dozens of different enforcement regimes.
Here's what typically happens when a company faces legal action: one set of lawyers talks to another set of lawyers and asks, "What did you do to prevent this?"
The first defense is demonstrating good faith. That means showing all the testing you conducted, the data you collected, and most critically, clear documentation that explains why every claim you made was reasonable.
The speed of the Pieces settlement suggests there may not have been much to fall back on. When you're deploying AI and making bold performance claims, you need more than just good technology. You need documentation that shows what you tested, what your data actually supports, and why each claim was reasonable at the time you made it.
Because probabilistic systems will fail. There's a 100% probability they will be wrong sometimes. The question is what you can demonstrate you did to reduce that harm.
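What that documentation looks like will vary by company, but even a lightweight, structured record tying each public claim to its evidence helps answer the "what did you do to prevent this?" question. The sketch below is a hypothetical example; the schema, field names, and sample values are invented for illustration and are not drawn from the Pieces case or any particular standard.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ClaimEvidenceRecord:
    """Hypothetical record linking one public performance claim to its supporting evidence."""
    claim: str                 # exact wording used in marketing or sales material
    metric_definition: str     # how the metric is measured, including what counts as a failure
    evaluation_data: str       # what test data was used and how it was sampled
    observed_result: str       # the measured result, ideally with confidence intervals
    independent_review: str    # who verified the result outside the team that built the system
    known_limitations: list[str] = field(default_factory=list)
    approved_by: str = ""      # governance or legal sign-off before the claim went public
    approved_on: Optional[date] = None

# Illustrative entry (placeholder values, not real marketing copy or results):
record = ClaimEvidenceRecord(
    claim="Hallucination rate below X per generated summary",
    metric_definition="Clinically significant fabrications per summary, judged by two physicians",
    evaluation_data="Stratified sample of notes across departments and note types",
    observed_result="Measured rate with a 95% confidence interval, reported per department",
    independent_review="Third-party audit of the evaluation protocol and a sample of judgments",
    known_limitations=["Rare specialties underrepresented in the test set"],
    approved_by="Governance review board",
    approved_on=date(2025, 1, 1),
)
```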
When AI works well, everyone takes credit. IT, data science, business units, and legal teams are all involved in successful deployments. That means when something goes wrong, everyone is involved in the failure too.
The technical team could have been more conservative about performance claims. The legal and compliance teams could have demanded better documentation before those claims went public. Ideally, both sides work together to ensure what gets communicated to customers reflects reality and is well-documented.
The main problem in cases like this is misalignment. Technical teams and governance teams need to be connected, not operating in separate silos.
If you're selling AI, deploying AI, or making claims about AI performance:
Get the science right and get the technology right. But also recognize that you need documentation showing what you did to validate those claims. Third-party testing, comprehensive performance data across different use cases, and clear disclosure of limitations all matter when someone questions your claims.
Don't assume "medicine is a big space" excuses vague performance numbers. An average hallucination rate across all medical use cases can be misleading when some scenarios are far riskier than others; the short sketch after these points shows how.
Build alignment between your technical teams and your governance teams before you go to market, not after a legal complaint arrives.
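On the point about averaged performance numbers: a quick toy calculation shows how an aggregate rate can hide much worse behavior exactly where errors matter most. The numbers below are made up for illustration and have nothing to do with the Pieces system.

```python
# Hypothetical hallucination counts per use case (illustrative numbers only).
use_cases = {
    "routine discharge summaries": (2, 80_000),   # (hallucinations, total outputs)
    "medication reconciliation":   (15, 15_000),
    "ICU handoff notes":           (12, 5_000),
}

total_errors = sum(errors for errors, _ in use_cases.values())
total_outputs = sum(outputs for _, outputs in use_cases.values())

print(f"Aggregate rate: {total_errors / total_outputs:.5f}")   # 0.00029 -- looks reassuring
for name, (errors, outputs) in use_cases.items():
    print(f"{name:>28}: {errors / outputs:.5f}")                # ICU notes: 0.00240, roughly 8x worse
```

The aggregate figure looks reassuring, yet the highest-risk scenario fails nearly an order of magnitude more often, which is exactly the gap a regulator or plaintiff will ask about.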
The goal isn't to point fingers when something goes wrong. The goal is to build a culture where technical and governance teams share concerns openly, test claims thoroughly, and document everything that matters before making public statements about performance.
Old laws apply to new technology. State attorneys general have authority. And bold claims about AI performance need evidence that can stand up to scrutiny.
See how LuminosAI helps you document AI decisions before they become legal issues. Our platform automates the review process so your team has the evidence and documentation ready when it matters. Book a demo to see how it works.
Want to learn from more AI incidents? Visit the AI Incident Database to explore documented cases and understand what's happening in the world of AI deployments. If you're aware of incidents that haven't been reported, you can submit them to help build a more complete picture of AI risks across industries.
Inside the Incident is a video series where Luminos CEO Andrew Burt and AI Incident Database founder Sean McGregor analyze real AI failures to understand what went wrong and how to prevent similar incidents. Watch Episode 3 to hear the full discussion about the Pieces Technologies case.
Legal Disclaimer: This content is for educational purposes only and does not constitute legal advice.