Private AI · Law firms

Use AI on matter files without sending privileged work into the public cloud.

A managed, on-prem AI system for solo and small law firms. Matter summarization, clause extraction, deposition condensation, and first-draft internal memos — all on hardware your firm controls. Designed around ABA Formal Opinion 512 and your existing duty of confidentiality.

For 5–50 attorney firms. Apple-silicon hardware. Managed by Wilcoe.

The questions you can't answer with a public chatbot.

Law firms aren't avoiding AI. You're avoiding the AI that can't survive a confidentiality conversation. These three frictions are what our law-firm clients walk in with.

"Where does that data go?"

Public-tier chatbots train on inputs, route through subprocessors, and store data in regions you didn't pick. That's a thin answer when a partner asks whether you should be entering matter content there at all.


"Did the client consent to that?"

ABA Formal Opinion 512 specifically warns about self-learning tools and informed consent. The cleanest answer is to keep the data inside the firm in the first place — which removes the consent question from most workflows.


"Who reviewed that draft?"

Supervision is a duty. Audit logs, prompt-template management, and mandatory human-review gates are the difference between "we used AI" and "we can show how we used AI" — to a client, an auditor, or a court.

What we tend to start with.

Narrow, high-ROI workflows. Each one runs on the appliance, retrieves only over your matter files, and lands in front of an attorney for review before it leaves the firm.

01

Matter summarization.

Multi-document summarization across the matter file with citations back to source documents. Structured outputs in your house style. Useful for partner briefings, transition memos, and pre-meeting prep. Mandatory attorney review before output leaves the firm.

02

Clause extraction and contract review.

Extract specific clauses, flag deviations from your firm's standard language, build a clause library over time. Useful for due diligence, M&A, and lease/license review. The model never leaves the appliance and never trains on your client work.

03

Deposition and call-note condensation.

Long transcripts condensed into structured summaries with topic markers, timestamps, and quoted excerpts. Useful for litigation prep, witness comparison, and chronology building.

04

First-draft internal memos with mandatory review.

Memos drafted from your firm's existing precedent and approved language. Review gates baked in: no memo leaves the firm without an attorney signoff captured in the audit log.
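The review gate described above can be pictured as a small piece of logic: output is blocked until an attorney sign-off exists, and the sign-off is written to the audit log. This is a minimal illustrative sketch, not Wilcoe's actual implementation; every name here is hypothetical.

```python
# Hypothetical sketch of a mandatory review gate: nothing leaves the
# firm without an attorney sign-off recorded in the audit log.
import datetime
from typing import Optional

AUDIT_LOG = []  # in practice: an append-only, matter-scoped log store

def release(memo_id: str, signed_off_by: Optional[str]) -> bool:
    """Return True only if an attorney has signed off; record the review."""
    if not signed_off_by:
        return False  # blocked: no attorney sign-off yet
    AUDIT_LOG.append({
        "memo": memo_id,
        "reviewer": signed_off_by,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return True
```

The point of the sketch: the gate is structural, not procedural. A memo without a captured sign-off cannot be released, so "we can show how we used AI" falls out of the architecture rather than depending on habit.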

Built around ABA Formal Opinion 512.

The architecture maps cleanly to the duties you already operate under: competence, confidentiality, communication, supervision, candor.

Confidentiality stays with the firm.

Local inference and local retrieval mean client information isn't disclosed to a third-party vendor. That removes the "informed consent for vendor disclosure" question for the workflows that run on the appliance.

Supervision becomes documentable.

Matter-level access logs, mandatory human-review gates, and prompt-template management create a record that supervisors and auditors can review.

Competence is supported.

A managed local stack with allowlisted models and tuned retrieval is easier to learn and easier to demonstrate competence with than a wild-west cloud surface.

Candor is preserved.

Output review steps and audit logs let attorneys verify what the model produced before it leaves the firm — and document that verification for client or court scrutiny.

Wilcoe Private AI is designed around your obligations. Final compliance signoff is firm-specific and remains with your managing partner or general counsel. Read the full ABA Op 512 explainer →

Deployment shape.

A representative starting point. Right-sized in the Readiness Sprint and quoted firm-specifically.

Solo & small law firm (5–15 attorneys)

Hardware: 1× Mac mini M4 Pro, 48–64GB RAM. Locked office or closet, UPS, segmented network. For larger firms: 1× Mac Studio M4 Max with encrypted storage.
Models: On-device Apple Foundation Model + a local open model for longer-context work. Cloud off by default.
Knowledge layer: Local vector DB. Matter-level partitioning. Role-based retrieval (associate vs. partner vs. paralegal). Retention by matter close-out date.
Controls: RBAC, audit logs scoped to matter and attorney, encrypted backup, prompt-template management, mandatory review gates on all outbound output.
Cloud fallback: Off by default for matter-touching work. Allowed via written policy for non-privileged drafting (firm marketing, public-facing posts) where your engagement letter permits.
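The "cloud off by default" rule above is simple enough to express in a few lines. The sketch below is purely illustrative, assuming a hypothetical config shape; none of the keys or values reflect Wilcoe's actual schema.

```python
# Hypothetical appliance profile mirroring the deployment shape above.
# All keys and values are illustrative, not a real product schema.
APPLIANCE_PROFILE = {
    "hardware": {"model": "Mac mini M4 Pro", "ram_gb": 64, "ups": True},
    "models": {
        "on_device": "apple-foundation",
        "long_context": "local-open-model",
        "cloud_enabled": False,  # cloud off by default
    },
    "knowledge": {"partition_by": "matter", "retention": "matter_close_out"},
    "controls": ["rbac", "audit_log", "encrypted_backup", "review_gates"],
}

def cloud_allowed(touches_matter: bool, written_policy_ok: bool) -> bool:
    """Cloud fallback: never for matter-touching work; otherwise only
    where a written policy permits non-privileged drafting."""
    if touches_matter:
        return False
    return APPLIANCE_PROFILE["models"]["cloud_enabled"] or written_policy_ok
```

Under this shape, a firm-marketing draft can route to the cloud only when policy allows it, and matter work never can, regardless of policy flags.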

90 days from sprint to live.

One workflow live in a single office, with attorney-side review and a written policy your counsel can sign off on.

Days 1–14

Risk + workflow + ethics review.

Inventory matter systems. Map the first workflow. Coordinate with managing partner or ethics counsel.

Days 15–30

Hardware + policy pack.

Right-sized appliance. Written policies on retention, access, and review steps.

Days 31–50

Install + identity + logs.

Network segmentation, MFA, role-based access, encrypted backup, audit logging by matter.

Days 51–70

Connectors + first workflow.

Matter-file indexing. The first vertical copilot, with attorney-review gates.

Days 71–90

Training + go-live.

Attorney + paralegal training. Audit log review. Decide what to add next.

Common questions from law firm partners.

Is this compliant with ABA Op 512?

It's designed around it. The architecture, policies, and audit trail map to the duties named in Op 512 — confidentiality, supervision, candor. Final ethics signoff stays with your managing partner or general counsel; we provide the documentation they need to make that call.

Does this require informed client consent?

For workflows that run entirely on the appliance — no third-party vendor in the data path — most firms determine that informed consent is not required because client information isn't disclosed externally. Engagement letters and matter-specific policies decide the edge cases. Op 512 is explicit about self-learning tools and external disclosure; private inference removes both factors.

Can we still use ChatGPT for non-sensitive work?

Yes — under a written policy. Many firms keep public-cloud AI available for firm-marketing drafts, public-facing posts, and continuing education research, while sensitive matter work runs on the appliance.

What about conflicts and matter walls?

Matter-level partitioning is built in. Access is role-based and matter-scoped. The retrieval index respects the same conflict and ethical-wall boundaries your matter management system already enforces — not a parallel system.
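Matter-scoped, role-based retrieval boils down to a filter applied before any document chunk reaches the model. The sketch below is a hypothetical illustration of that idea; the class names, fields, and role levels are invented for this example, not taken from Wilcoe's system.

```python
# Hypothetical sketch of matter-walled, role-based retrieval filtering.
from dataclasses import dataclass
from typing import Iterable, List, Set

@dataclass(frozen=True)
class Chunk:
    matter_id: str
    min_role: int  # illustrative levels: 1=paralegal, 2=associate, 3=partner
    text: str

def visible_chunks(chunks: Iterable[Chunk],
                   user_matters: Set[str],
                   user_role: int) -> List[Chunk]:
    """Return only chunks inside the user's matter wall and role level.
    Anything outside the wall is never retrieved, so it can never
    appear in a prompt or an answer."""
    return [c for c in chunks
            if c.matter_id in user_matters and user_role >= c.min_role]
```

Because the filter runs at retrieval time, a conflict wall enforced in the matter management system carries straight through: documents behind the wall are simply invisible to the index query.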

How fast can we start?

The Readiness Sprint scopes the pilot in two weeks. Most firms go live within 90 days of sprint kickoff.

What hardware will sit in our office?

For solo and small firms, typically one Mac mini M4 Pro with 48–64GB RAM, UPS, encrypted local storage, in a locked closet or server room. We handle the procurement, install, and ongoing management.

Do partners have to use it?

No. Partners review output, sign off when needed, and don't have to operate the system. Associates and paralegals do most of the day-to-day; the system reduces low-leverage work without changing how the firm makes decisions.

What does it cost?

Priced in the Readiness Sprint. Pilot costs vary several-fold with firm size and scope. How we think about cost →

Use AI on matter files. Without giving them away.

Book a 30-minute Readiness Call. We'll walk through your highest-leverage workflow, the ethics frame for your firm, and what a 90-day pilot would look like.

Book a Readiness Call →

or

Take the readiness check →