Closed AI ≠ Confidential

The Truth About ‘Closed AI’

Lately, I’ve noticed more and more vendors in the legal tech space pitching something they call “closed AI.” The way they talk about it, you’d think it’s the ultimate security blanket — a fortress around your client data. And at first, I’ll admit, it sounds comforting. “Closed” feels safer than “open,” right?

But after digging deeper, I realized the marketing doesn’t always match reality. In fact, relying on “closed” as shorthand for security may actually put law firms at greater risk.

Let me explain.

Sales ≠ Security

Not long ago, I asked a vendor about their data practices. Simple, reasonable questions:

✅Where is the data stored?

✅How is it protected?

✅Is anything used for model training?

Their answer? “Don’t worry. It’s closed. Nothing can happen.”

🚩 That was my red flag moment. Because if a vendor can’t explain the how, then “closed” doesn’t mean secure. It means you’re in the dark.

Closed ≠ Safe

One of the biggest misconceptions is that “closed” automatically equals “locked down.” In reality, a closed system often means all of your firm’s data is centralized on a single vendor’s servers. That creates a single point of failure.

Think about it:

▪️If that vendor suffers a breach, everything goes with it.

▪️If their security practices are sloppy, you’ll never know until it’s too late.

▪️If their employees have broad access, your client confidentiality is only as strong as their weakest staff password.

Closed doesn’t equal safe — it equals concentrated risk.

Opaque ≠ Ethical

Another issue: opacity. Many of these vendors don’t provide transparency reports, third-party audits, or even straightforward documentation about how data flows through their systems.

When you ask where prompts are stored or whether data is cached, you’ll often get vague answers like:

▪️“We can’t disclose that.”

▪️“It’s proprietary.”

▪️Or my personal favorite: “Trust us.”

But in legal practice, trust without verification is not an option. Attorneys have ethical obligations to know how technology handles client information. If a vendor can’t explain it in plain language, that’s a problem — not a selling point.

Buzzwords ≠ Compliance

Here’s the kicker: no matter how slick the marketing, buzzwords don’t equal compliance.

The ABA Model Rules, state bar opinions, and confidentiality rules don’t care if your AI system is “closed,” “proprietary,” or “air-gapped.” What matters is whether it actually protects client data in accordance with your ethical duties.

If your bar were to investigate a breach, “but the vendor told me it was closed” is not a defense.

Comfort ≠ Confidentiality

This is the real danger: “closed AI” makes lawyers feel like they don’t have to ask questions. It creates a comfort blanket effect. Attorneys hear “closed” and think: Perfect, someone else has locked this down for me.

But that mindset is exactly what can get firms in trouble. Because the more you assume you’re protected, the less likely you are to dig into the details. And in law, it’s always the details that matter.

Closed ≠ Controlled

To break it down, here are some practical risks I’ve seen in the “closed” model:

▪️Single-server dependency – all your data lives in one environment; one breach compromises everything.

▪️No independent audits – without a SOC 2 report, ISO 27001 certification, or similar third-party validation, you’re taking the vendor’s word on security.

▪️Data residency issues – your client data may be stored overseas without your knowledge.

▪️Retention black holes – if they don’t define how long data is cached, you can’t guarantee it’s deleted.

▪️Lack of control – you may have no say in whether your data is retained, archived, or used to improve their product.

Does that sound like “confidentiality”? Or does it sound like risk disguised by marketing?

Marketing ≠ Due Diligence

The good news is you’re not powerless. Before handing over sensitive case data, ask vendors for clear, written answers to these questions:

Where is the data stored? (Specify location and jurisdiction.)

Who has access to it? (And how is access logged?)

How often are audits performed? (Ask for SOC 2 / ISO 27001 reports.)

What’s retained and for how long? (Will they delete it on request?)

How is it encrypted in transit and at rest? (The transit half you can spot-check yourself; see the sketch after this list.)

What happens if there’s a breach? (Notification timelines, liability, indemnity.)
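The encryption question is one of the few you can partially verify yourself. Below is a minimal sketch, using nothing but Python’s standard library, that connects to a vendor’s endpoint and reports the TLS version and certificate it actually serves. The hostname is a placeholder, not a real vendor, and this checks transport encryption only; it tells you nothing about encryption at rest or what happens to your data once it arrives.

```python
import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    """Report the TLS version and certificate a vendor's endpoint actually serves."""
    context = ssl.create_default_context()  # verifies the cert against system trust roots
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(f"Negotiated protocol: {tls.version()}")  # e.g. 'TLSv1.3'
            cert = tls.getpeercert()
            subject = dict(pair for rdn in cert["subject"] for pair in rdn)
            print(f"Certificate issued to: {subject.get('commonName', 'unknown')}")
            print(f"Valid until: {cert['notAfter']}")

# Placeholder hostname; substitute the vendor's actual login or API endpoint.
check_tls("app.example-vendor.com")
```

If the handshake fails or the certificate is expired, you’ve just learned something the sales deck wouldn’t tell you.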

If a vendor dodges or refuses, that’s your signal to walk away. Because in 2025, transparency is the new confidentiality.
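And if it helps to keep score, here’s a small, hypothetical tracker (the vendor name and answers are invented for illustration) that treats those six questions as a checklist and flags anything a vendor hasn’t answered in writing:

```python
# Hypothetical due-diligence tracker; questions paraphrased from the list above.
QUESTIONS = [
    "Where is the data stored (location and jurisdiction)?",
    "Who has access, and how is access logged?",
    "How often are audits performed (SOC 2 / ISO 27001 reports)?",
    "What is retained, for how long, and will they delete on request?",
    "How is data encrypted in transit and at rest?",
    "What happens after a breach (notification, liability, indemnity)?",
]

def review(vendor: str, written_answers: dict) -> None:
    """Flag every question the vendor has not answered in writing."""
    print(f"Due-diligence review: {vendor}")
    for question in QUESTIONS:
        answer = written_answers.get(question, "").strip()
        status = "answered" if answer else "MISSING: follow up or walk away"
        print(f"  [{status}] {question}")

# Example: a vendor that has answered only two of six questions in writing.
review("ExampleLegalAI", {
    QUESTIONS[0]: "US data centers, US jurisdiction",
    QUESTIONS[4]: "TLS 1.3 in transit, AES-256 at rest",
})
```

Four blanks out of six isn’t an oversight. It’s an answer.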

Open ≠ Unsafe

Ironically, open systems often have more robust safeguards. Why? Because they know they’re under scrutiny. Their security practices are documented, audited, and tested. You can see exactly how they handle data instead of being told to “trust us.”

Confidentiality in AI isn’t about whether a system is open or closed. It’s about whether it’s:

☑️Transparent

☑️Audited

☑️Compliant

☑️Aligned with your ethical obligations

That’s what protects clients.

Not buzzwords.

Convenience ≠ Compliance

AI has incredible potential in law. It can streamline discovery, accelerate drafting, and enhance research. But none of that matters if the technology undermines client trust.

Here’s the truth:

Closed ≠ Confidential

Opaque ≠ Ethical

Buzzwords ≠ Compliance

Comfort ≠ Confidentiality

Do your due diligence. Ask the hard questions.

And remember: in the legal world, protecting client confidentiality isn’t optional — it’s the foundation of our work.

✔️ Mandie Lake — The Paralegal Who Works Smarter, Not Harder