Private AI & the rules you already work under.
Plain-English explainers for the four regulatory frameworks that drive most SMB private-AI conversations. Each section: what the rule requires, where private AI helps, where it doesn't, and what your firm still has to sign off on.
This is not legal advice. Final compliance signoff is firm-specific and remains with your counsel or compliance officer. Wilcoe builds the architecture; your firm interprets it.
HIPAA & Business Associate Agreements (medical, therapy, dental, allied health).
What it requires.
HIPAA covers Protected Health Information. The Privacy Rule limits how PHI can be used and disclosed. The Security Rule sets administrative, physical, and technical safeguards for ePHI. When a vendor creates, receives, maintains, or transmits ePHI on your behalf, that vendor becomes a Business Associate, and you need a signed Business Associate Agreement (BAA) defining their obligations under HIPAA.
If you put a patient note into a public AI chatbot without a BAA, you've disclosed ePHI to a vendor with no agreement in place. That's the first compliance question.
Where private AI helps.
- No third-party vendor in the data path. Local inference on your hardware means there's no Business Associate to negotiate with on routine workflows. The data doesn't leave your environment.
- Audit-friendly access. Role-based retrieval, matter/patient-level access logs, and approval gates document who used what when — what the Security Rule expects.
- Explicit cloud-off defaults. Sensitive notes can't accidentally end up in a public model when the policy says cloud is off for that workflow class (a minimal routing sketch follows this list).
- BAA on the optional cloud path. If your policy allows cloud fallback for specific tasks, we structure that with a vendor that signs a BAA — and we keep that path narrow.
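To make "cloud-off by default" concrete, here is a minimal sketch of a deny-by-default routing policy, assuming a hypothetical gateway that sits between staff tools and the model backends. The names (Backend, RoutePolicy, route_request) are illustrative, not a real Wilcoe API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Backend(Enum):
    LOCAL = auto()       # on-prem inference; no Business Associate in the path
    CLOUD_BAA = auto()   # narrow fallback, only via a vendor with a signed BAA


@dataclass(frozen=True)
class RoutePolicy:
    workflow_class: str
    allow_cloud: bool = False  # cloud is off unless the policy says otherwise


# Hypothetical policy table: only explicitly approved workflow classes
# may ever reach the BAA-covered cloud path.
POLICIES = {
    "patient_notes": RoutePolicy("patient_notes", allow_cloud=False),
    "scheduling_email": RoutePolicy("scheduling_email", allow_cloud=True),
}


def route_request(workflow_class: str) -> Backend:
    """Deny by default: unknown workflow classes never reach the cloud."""
    policy = POLICIES.get(workflow_class)
    if policy is None or not policy.allow_cloud:
        return Backend.LOCAL
    return Backend.CLOUD_BAA


assert route_request("patient_notes") is Backend.LOCAL
assert route_request("unknown_class") is Backend.LOCAL
```

The point of the deny-by-default shape is that a misconfigured or brand-new workflow class fails safe: it stays on local inference until someone deliberately widens the policy.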
Where it doesn't.
- Private AI doesn't replace your Notice of Privacy Practices, your authorization process, or your breach-notification plan.
- It doesn't relieve you of risk-analysis or workforce-training obligations.
- It doesn't substitute for clinician judgment on diagnosis or treatment. We don't sell decision-support; we sell documentation, summarization, and admin acceleration.
ABA Formal Opinion 512 (lawyers).
What it requires.
The American Bar Association's Formal Opinion 512 says lawyers using generative AI must consider their existing duties: competence, confidentiality, communication, supervision, and candor. It explicitly warns that self-learning tools can require informed client consent if client information is entered into them, since the firm may have effectively turned that information over to a vendor for purposes beyond the immediate task.
The framing matters: lawyers don't just decide whether AI is useful. They decide whether using it for a specific matter would breach existing professional duties.
Where private AI helps.
- Confidentiality stays with the firm. Local inference and local retrieval mean client information isn't disclosed to a third party in the first place — which removes the "informed consent for vendor disclosure" question for that workflow.
- Supervision becomes documentable. Matter-level access logs, mandatory human-review gates, and prompt-template management create a record that supervisors can review (a review-gate sketch follows this list).
- Competence is improved by predictability. A managed local stack with allowlisted models and tuned retrieval is easier to learn — and easier to demonstrate competence with — than a wild-west cloud surface.
- Candor is preserved. Output review steps and audit logs let attorneys verify what the model produced before it leaves the firm.
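For illustration, here is a sketch of the human-review gate described above: model output for a matter is held until a supervising attorney signs off, and each decision is appended to a matter-level log. The field names and the JSON-lines log format are assumptions for this sketch, not a prescribed schema.

```python
import json
from datetime import datetime, timezone


def log_review(matter_id: str, reviewer: str, approved: bool,
               log_path: str = "matter_audit.jsonl") -> None:
    """Append one review decision to the matter-level audit log."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,
        "reviewer": reviewer,
        "approved": approved,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")


def release_output(draft: str, matter_id: str, reviewer: str,
                   approved: bool) -> str | None:
    """Nothing leaves the firm without an attorney's recorded approval."""
    log_review(matter_id, reviewer, approved)
    return draft if approved else None
```

A gate like this is what turns "we review AI output" from a policy statement into a record a supervising partner can actually audit.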
Where it doesn't.
- Private AI doesn't make you competent to use it. Training is a separate workstream, and the duty stays with the lawyer.
- It doesn't eliminate the need for matter-specific judgment about whether a tool is appropriate. Some matters call for no AI at all.
- It doesn't replace human review of substantive output. Output gates exist; lawyers still sign.
IRS Publication 4557 + FTC Safeguards Rule (tax, accounting, financial-adjacent).
What they require.
IRS Publication 4557 directs tax professionals to build a written information security plan (a "WISP"), comply with the FTC Safeguards Rule, use multi-factor authentication, log access to client information, and oversee third-party service providers. The FTC Safeguards Rule applies to non-bank financial institutions and requires designated security coordinators, risk assessments, written policies, encryption, monitoring, and vendor oversight.
Translated: tax and financial-adjacent firms need a written security architecture, not a vibe.
Where private AI helps.
- Vendor oversight gets simpler. Fewer cloud vendors in the data path means less to oversee. The local appliance is documented once, configured per the WISP, and managed under a single retainer.
- MFA, logs, and access reviews are first-class. Identity, audit logging, and role-based access aren't bolt-ons; they're how a managed appliance is built (see the log-entry sketch after this list).
- Encryption at rest and in transit. Local storage is encrypted; network traffic is segmented and encrypted in transit. WISP language maps cleanly to what the appliance actually does.
- Predictable cost. "Operating expenses for the safeguards" is a real WISP question; a fixed retainer answers it.
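As a sketch, this is the kind of structured access-log entry a WISP can point to, assuming the appliance records one line per retrieval of client information. The exact fields are illustrative; Publication 4557 asks that access be logged and reviewable, not that it use this schema.

```python
import json
from datetime import datetime, timezone


def access_log_entry(user: str, client_file: str, mfa_verified: bool,
                     role: str) -> str:
    """One structured log line per access to client information."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,          # role-based access: who is allowed what
        "mfa": mfa_verified,   # Pub 4557 expects multi-factor authentication
        "client_file": client_file,
    })


print(access_log_entry("jsmith", "clients/acme/2024-return.pdf",
                       mfa_verified=True, role="preparer"))
```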
Where it doesn't.
- Private AI doesn't write your WISP. We can produce a private-AI architecture pack that maps to it, but the WISP itself stays your firm's document.
- It doesn't designate a security coordinator. The FTC Safeguards Rule wants a named human inside the firm who owns the program. That's still you.
- It doesn't substitute for incident-response planning. We provide the playbook; your firm runs the drills.
The pattern across all four.
Each framework expects three things that map cleanly to a managed private-AI architecture: controlled custody of the sensitive data, documented controls over who sees what when, and human accountability for the output.
That's why "private" can be a faster path to "compliant" than negotiating BAA language with five cloud vendors and writing exception policies. The trade-off is that you have to build the architecture and run it. Wilcoe Private AI is the work of building and running it.
What we hand off to your counsel.
- An architecture diagram showing data flows, custody points, and cloud-fallback boundaries.
- The policy pack: prompt-template policy, access policy, retention policy, review-and-signoff policy, breach-response playbook.
- Operational logs and audit-log samples your counsel can review.
- A vendor-oversight document for any cloud fallback that remains in scope.
Want a Wilcoe-shaped architecture aligned with your obligations?
Book a Readiness Call. We'll walk through the framework that applies to your firm and map it to a private-AI architecture you can hand to counsel.
Take the readiness check
Book a Readiness Call →