What Actually Happens in an Architecture Review

Most people selling consulting services keep the deliverable vague. I think that's backwards. Here's exactly what we do for two weeks, what you get at the end, and how to know if you need one.


I get the same question on a lot of intro calls.

“OK Sam, the architecture review sounds useful. But what does it actually look like? What do you do for two weeks, what do I get at the end, and how is it different from just hiring a consultant to look at my system?”

Fair questions. Let me walk you through it.

I am going to be specific on purpose. Most people selling consulting services keep the deliverable vague because vague is easier to sell. I think that’s backwards. The clearer I am about what you get, the easier it is for you to decide whether you actually need it.

Why this exists in the first place

Most HealthTech teams I talk to are in one of three situations.

The first situation is: we shipped something, it works, but we’re not sure it would survive a real hospital security review. We don’t know what we don’t know.

The second situation is: we are about to go into procurement with a major health system. We have heard horror stories about how hard it is to pass. We want a second set of eyes before we get there.

The third situation is: something is breaking. PHI is leaking. The model is hallucinating in production. A partner has flagged an issue. We need a clear picture of the gap between where we are and where we need to be.

The architecture review is for all three. It is a structured way of mapping what you actually have, identifying what’s missing, and giving you a clear plan to fix it before someone else finds the problem for you.

Two weeks. Fixed scope. Fixed price. Same deliverable every time.

Week 1: Mapping what you actually have

The first week is about discovery. Not “tell me about your product” discovery. Real discovery.

We start with a working session, usually 90 minutes, where you and your technical lead walk us through the system. Not the pitch deck version. The real version. Where does data come in? Where does it go? What stores it? What touches it? What integrates with what?

I am not looking for the diagram you drew for your last investor meeting. I am looking for the diagram nobody has drawn yet.

After that session, we go heads down. We are reading your code. We are looking at your infrastructure. We are checking your data flows. We are mapping every place where patient data is created, stored, transformed, retrieved, or surfaced.

This part is uncomfortable, by the way. Most teams discover during week 1 that the system in their head is not the system that actually exists. Architecture drifted. Engineers shipped things without updating docs. The intern from last summer set up a microservice whose purpose nobody remembers anymore.

That is normal. It is also exactly why this exercise is valuable. You cannot fix what you have not mapped.

By the end of week 1, we have a complete architecture map. Every component. Every integration. Every data flow. Every place where PHI lives, moves, or could be exposed.

Week 2: Finding the gaps and writing the plan

Week 2 is where we compare your actual architecture against what a clinical-grade system needs to look like.

We are checking specifics. Not in a generic compliance-checklist way. In a “where does your specific system break under specific scenarios” way.

For each layer of your system, we are asking concrete questions. Where does PHI enter? Is it encrypted in transit and at rest? Who has access? Are those access decisions logged in a way you could defend in an audit? Does your AI layer have audit trails? Do your prompt logs contain PHI? Is your vector database protecting per-patient data? Can you trace any AI output back to its source?
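To make one of those questions concrete: the prompt-log question usually comes down to whether anything scrubs patient identifiers before a prompt is persisted. Here is a minimal sketch of that idea. The patterns and function names are illustrative assumptions, not a complete PHI filter; a real one needs far broader coverage and should be validated against your compliance requirements.

```python
import re

# Illustrative patterns only -- real PHI detection must cover far more
# (names, addresses, dates of birth, device IDs, free-text identifiers).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace recognizable identifiers before the prompt is logged."""
    for label, pattern in PHI_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

def log_prompt(prompt: str) -> str:
    # The logging layer should only ever see the scrubbed text,
    # never the raw prompt containing patient data.
    return scrub_prompt(prompt)
```

The point of the sketch is the architectural position of the scrub step, not the regexes: if redaction lives inside the logging boundary, no code path can accidentally persist a raw prompt.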

If you’ve been reading my articles, you’ve seen most of these questions before. The review is where they get applied to your specific system, with concrete answers about your specific gaps.

We organize the findings by severity. Some things are five-alarm fires. Some things are slow burns. Some things are technically fine but will create friction in procurement. You need to know the difference.

Then we write the plan. Not “here are some recommendations.” A specific, ordered list of what to fix, in what order, with rough effort estimates and a recommended sequence. We tell you what your team can handle internally. We tell you what we would recommend bringing in outside help for. We tell you what to do first if you only have time and budget to fix one thing.

That’s the deliverable. A full system map, a prioritized gap analysis, and a fix plan you can act on the day after we deliver it.

What you get at the end

Three artifacts, all in writing.

One. The architecture map. A complete diagram and accompanying documentation of your system as it actually exists today. Most teams keep using this internally for years. It becomes the source of truth their team didn’t have.

Two. The gap analysis. Layer by layer, what’s there, what’s missing, and what’s risky. Categorized by severity, prioritized by impact on procurement and patient safety.

Three. The remediation plan. The specific work to do, in the specific order to do it, with realistic effort estimates. Sized for your team’s actual capacity, not a hypothetical one.

Plus a 60-minute walkthrough with you and your technical lead. We go through the findings together. You ask questions. We answer them. You leave knowing exactly what you have, what’s missing, and what to do next.

What this is not

This is not a HIPAA compliance audit. We are not signing anything that says you are compliant. That is what your compliance officer or auditor does, and they should. Our work informs theirs. It does not replace it.

This is not a code review. We will look at code where it matters for the architecture. We are not reviewing every function. We are not commenting on your test coverage.

This is not consulting in the traditional sense. We are not embedding with your team for months. We are not running standups. We are not project managing your remediation. We come in, map, find the gaps, write the plan, and hand it back to you. What you do with it is up to you.

This is also not a sales pitch for our build work. About a third of clients hire us to help them implement parts of the plan. Two-thirds take the plan and execute it themselves or with someone else. Both are fine. The review pays for itself either way.

Who this is for

I want to be honest about who this works for and who it doesn’t.

It works for HealthTech teams that have a working system in production or close to it. If you’re pre-MVP and you’re trying to design an architecture from scratch, the review is the wrong tool. What you need is design work. We do that too, but it’s a different engagement.

It works for teams that are about to enter a hospital procurement cycle in the next 90 days. The two weeks of review pays for itself many times over in deal velocity. We have had clients come out of procurement with the architecture map literally pasted into their security questionnaire response. That kind of preparation changes how the conversation goes. We wrote about what hospitals are actually evaluating and why most vendors aren’t ready for it.

It works for teams that have already lost a deal at security review and want to know why. There is usually a specific structural reason. Finding it is the first step to not losing the next one. We have seen this firsthand with a client who lost four months after a partner test surfaced edge cases their internal QA had missed.

It does not work for teams that just want validation. If you want me to tell you everything is fine, this is the wrong service. I am going to tell you what I see. That includes the parts you’d rather not hear.

How it actually starts

The first step is a 30-minute call. You tell me where you are, what you’ve built, and what’s on the horizon. I tell you whether the review is the right fit, or whether something else would help you more.

If it’s a fit, we scope and book the two weeks. If it’s not, I tell you that too and point you in a better direction. That’s a real outcome. Some teams need a build engagement, some need a compliance auditor, some need a fractional CTO for a few months. The architecture review is one of those tools, not all of them.

If you’re sitting on a system right now and you’re not 100% sure how it would hold up under a real hospital review, the call is the easy first step. Whether you book the review after or not, you’ll leave the call with a clearer picture of where you stand.

That’s worth 30 minutes of anyone’s time. Especially yours.

Architecture Review

Is your AI system ready for patient data?

Book an architecture review — we'll map your system end-to-end, identify every PHI exposure point, and give you a prioritized plan to fix, build, or scale with confidence.