Most HealthTech founders think they’re ready for a hospital security review.
They’ve got a privacy policy. They’ve read about HIPAA. They’ve checked a few boxes on a compliance questionnaire.
Then the hospital sends over their security review packet and everything falls apart.
I’ve been on both sides of this. I’ve helped founders prepare for enterprise reviews, and I’ve sat on calls where a hospital’s security team asked questions that made the room go quiet. Not because the technology was bad. Because nobody had thought through the architecture from a patient data perspective.
Here’s what hospitals are actually looking for, and why most AI vendors aren’t ready for it.
It Starts Before You Get in the Room
A hospital’s IT security team isn’t doing you a favor by reviewing you. They’re protecting themselves.
When a hospital partners with an AI vendor, they become responsible for how that vendor handles patient data. A breach on your end can trigger their reporting obligations. It can end careers. It can generate HIPAA fines that hit the hospital, not just you.
So when they send you a security questionnaire, they’re not being bureaucratic. They’re stress-testing your architecture before they expose their patients to it.
The problem is most founders treat it like a checkbox exercise. Fill out the form, get the deal, figure out the hard stuff later. That’s the wrong frame entirely.
The First Question That Kills Deals
In almost every hospital security review I’ve been involved with, the first real question comes down to this:
“Where does patient data live, and who can see it?”
That sounds simple. It’s not.
Your answer needs to trace exactly how PHI moves through your system. From the moment it enters your platform to where it gets stored, processed, and eventually deleted. Every hop. Every integration. Every log.
If your LLM is calling an external API, they want to know if PHI is in that payload. If you’re using a third-party vector database for RAG, they want to know if embeddings derived from patient records are in there. If you have an analytics dashboard, they want to know if de-identification is real or just cosmetic.
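Mapping those flows doesn't have to be a wall of prose. A minimal sketch, assuming nothing about your stack: keep a machine-readable inventory of every hop PHI takes, and make "is any PHI hop unencrypted or undocumented" a question you can answer in one line. The field names and flows below are hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch: a machine-readable inventory of every hop PHI takes.
# The names and fields are hypothetical, not a standard.

@dataclass
class DataHop:
    source: str          # where the data comes from
    destination: str     # where it goes next
    contains_phi: bool   # does this payload carry PHI?
    encrypted: bool      # TLS / at-rest encryption on this hop
    retention_days: int  # how long the destination keeps it

FLOWS = [
    DataHop("ehr_integration", "app_db", contains_phi=True, encrypted=True, retention_days=2555),
    DataHop("app_db", "llm_api", contains_phi=True, encrypted=True, retention_days=0),
    DataHop("app_db", "analytics", contains_phi=False, encrypted=True, retention_days=365),
]

def unencrypted_phi_hops(flows):
    """Flag any hop where PHI moves without encryption -- one of the
    first things a hospital reviewer will ask about."""
    return [h for h in flows if h.contains_phi and not h.encrypted]

print(unencrypted_phi_hops(FLOWS))  # ideally an empty list
```

The point isn't the code, it's that the inventory exists at all: when the reviewer asks "where does PHI live," you read the answer off, you don't reconstruct it.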
Most AI vendors can’t answer these questions on the spot. They’ve built fast. They haven’t mapped their own data flows.
That’s the first place deals die.
A BAA Is Not Enough
Yes, you need a Business Associate Agreement (BAA). That’s table stakes.
But hospitals know that a BAA just means you’ve agreed to be legally responsible. It doesn’t tell them anything about your actual controls.
What they’re looking for underneath the BAA:
Your encryption posture. Is PHI encrypted at rest and in transit? What key management approach are you using? Can you show them?
Your access controls. Who on your team can access production data? Do you have role-based access? Do you log it? How do you handle offboarding?
Your audit trail. If something goes wrong and they need to trace it back, can you produce a complete record of what happened to a specific patient record? When it was accessed, modified, sent, or deleted?
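That last one, the audit trail, is worth sketching. A minimal illustration, not a production design: an append-only log where each entry records who did what to which record, and entries are hash-chained so tampering is detectable. All names here are made up for the example.

```python
import hashlib
import json
import time

# Illustrative sketch of an append-only, hash-chained audit trail for PHI
# access. Each entry commits to the previous one, so deleting or editing a
# past entry breaks the chain.

AUDIT_LOG = []

def record_access(actor, action, record_id):
    prev_hash = AUDIT_LOG[-1]["hash"] if AUDIT_LOG else "genesis"
    entry = {
        "ts": time.time(),
        "actor": actor,        # who touched the record
        "action": action,      # accessed / modified / sent / deleted
        "record_id": record_id,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        (prev_hash + json.dumps(entry, sort_keys=True)).encode()
    ).hexdigest()
    AUDIT_LOG.append(entry)

def history(record_id):
    """Everything that ever happened to one patient record --
    the exact question a hospital asks after an incident."""
    return [e for e in AUDIT_LOG if e["record_id"] == record_id]

record_access("dr_smith", "accessed", "patient-123")
record_access("etl_job", "sent", "patient-123")
print(len(history("patient-123")))  # 2
```

The design choice that matters: the log is written as a side effect of the access path itself, not reconstructed later from application logs. If you can't produce `history()` for an arbitrary record, you don't have an audit trail, you have log files.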
These aren’t theoretical questions. They come from real incidents hospitals have dealt with. A vendor employee who accessed patient data they shouldn’t have. An API misconfiguration that exposed records. A logging gap that made it impossible to prove nothing was compromised.
They’ve seen it. They’re not going to let it happen again on their watch.
Where AI Specifically Creates New Risk
Traditional software security reviews are hard enough. AI adds a layer that most hospital security teams are still figuring out, which means the questions are getting sharper, not easier.
The issues that come up most often:
Prompt logs. If your system logs LLM inputs and outputs for debugging or quality purposes, and those inputs contain PHI, you now have a new class of sensitive data sitting in your logs. Is it protected? Is it retained longer than necessary? Can it be queried?
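One concrete control here: scrub obvious PHI patterns before a prompt ever reaches your debug logs. A hedged sketch, real de-identification needs far more than regexes (free-text names, dates, addresses), but it shows the shape of the control. The patterns and the `log_prompt` helper are illustrative.

```python
import re

# Illustrative sketch: redact obvious PHI patterns from LLM prompts before
# logging. Regexes alone are NOT real de-identification; this only shows
# where the control sits in the pipeline.

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),         # US SSN
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\bMRN[:# ]?\d+\b", re.I), "[MRN]"),        # medical record number
]

def redact(text: str) -> str:
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

def log_prompt(prompt: str):
    # Only the redacted form ever reaches the log sink.
    print(redact(prompt))

log_prompt("Summarize labs for MRN 884212, contact jane@example.com")
```

The structural point: redaction happens at the logging boundary, so a developer can't accidentally log raw prompts by forgetting to call it somewhere downstream.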
Training data contamination. Have you fine-tuned your model on any patient data? If so, there are serious questions about whether that data can be extracted, and whether you had the right permissions to use it in the first place.
Retrieval outputs. RAG-based systems pull from a knowledge base to generate answers. If that knowledge base contains clinical content tied to real patients, the retrieval layer itself becomes a potential PHI exposure point. Vector databases don’t have row-level security by default. That’s a problem.
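Since the database won't enforce it, the retrieval layer has to. A minimal in-memory sketch, assuming nothing about your vector store: every embedded chunk carries the patient it derives from, and the authorization filter runs before similarity ranking, so unauthorized records never enter the candidate set. The store, vectors, and IDs are all hypothetical.

```python
import math

# Illustrative sketch of application-side row-level filtering for a RAG
# store. Every embedded chunk carries the patient it derives from.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

STORE = [
    {"vector": [0.9, 0.1], "text": "note A", "patient_id": "p1"},
    {"vector": [0.8, 0.2], "text": "note B", "patient_id": "p2"},
    {"vector": [0.1, 0.9], "text": "note C", "patient_id": "p1"},
]

def retrieve(query_vec, authorized_patients, k=2):
    """Filter BEFORE ranking: records the caller isn't cleared for
    never enter the candidate set, so they can't leak via scores
    or generated answers."""
    candidates = [c for c in STORE if c["patient_id"] in authorized_patients]
    candidates.sort(key=lambda c: cosine(query_vec, c["vector"]), reverse=True)
    return [c["text"] for c in candidates[:k]]

print(retrieve([1.0, 0.0], {"p1"}))  # only p1's notes are even considered
```

Most managed vector databases expose metadata filters that can express the same check; the non-negotiable part is filtering pre-ranking, not post-hoc.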
Model hallucination in clinical context. This is less of a security question and more of a patient safety question, but hospitals treat them the same way. If your AI can generate plausible-sounding clinical information that’s factually wrong, that’s a liability. They want to know how you’re constraining outputs and what guardrails are in place.
The hospitals that are sophisticated about AI procurement are now asking about all of this. The ones that aren’t yet will be soon.
The Vendor Security Questionnaire Is a Sales Document
Here’s something most founders miss.
The security review isn’t just about passing a compliance check. It’s a buying signal. A hospital that sends you a 40-page security questionnaire is a hospital that’s serious about buying from you. They wouldn’t waste the time otherwise.
How you respond tells them a lot about who you are as a vendor.
If you respond slowly, with vague answers, they assume your security is vague.
If you respond fast, with specific answers tied to actual architecture decisions, they start to trust you.
One of our clients came to us after failing a hospital security review. Not because their security was terrible. Because they couldn’t explain their own system in the terms the hospital needed. We did an architecture audit on their stack, mapped all their PHI flows, documented their controls, and rebuilt their response. They went back to the hospital two weeks later and closed the deal.
The technology hadn’t changed. The story around it had.
What a Prepared Vendor Looks Like
When a hospital security team sees a vendor who has done the work, they notice. This is what that looks like:
- A data flow diagram that shows exactly how PHI moves through the system, including every third-party service that touches it.
- A clear statement of where PHI is stored, with encryption standards specified.
- Documentation of access controls and audit logging, with evidence that logs are actually retained and queryable.
- A BAA that’s ready to sign, not something that requires three rounds of legal back and forth.
- Answers to the standard HIPAA Security Rule questions that are specific to your architecture, not boilerplate copied from a template.
- And increasingly, answers to AI-specific questions about your LLM infrastructure, prompt handling, and output constraints.
When you walk in with that package, the conversation shifts. You’re not defending yourself. You’re a peer who understands what they’re protecting.
The Bigger Picture
Hospitals are not trying to make your life difficult. They’re trying to protect patients.
The founders who understand that tend to do well in enterprise healthcare. They build security into their architecture from day one instead of trying to bolt it on when a deal requires it.
That means thinking about PHI containment before you choose your infrastructure. It means building audit logging into your system design, not adding it as an afterthought. It means knowing exactly what your LLM does with patient data before a hospital security team asks you.
If you get that right, security reviews stop being scary. They become a competitive advantage.
The vendors who can answer every question fast and specifically are the ones hospitals trust. And in healthcare, trust is what closes deals.
If you’re heading into procurement soon and want a second set of eyes on your security posture before the questionnaire lands, reach out. Happy to take a look and tell you where the gaps are before someone else does.