Context / problem
Teams usually ask for a technical audit when something feels off, but nobody agrees on why.
The symptom is often phrased in simple terms: “We move too slow.” Delivery feels heavier than it should. Things that looked straightforward take longer than expected. Product is frustrated because work was designed months ago. Engineering is frustrated because priorities keep shifting or arrive without enough shared context to make good local decisions.
Leadership is usually not disengaged in these situations. In my experience, they are working hard. The problem is more often that leadership, product, and engineering are operating from different mental models of what is happening inside the teams.
That is why I do not treat a technical audit as only a code or architecture review. Most of the value comes from making the full delivery system visible: how decisions are made, how priorities are translated into work, where context gets lost, and where teams are blocked waiting for alignment.
What a technical audit is (and is not)
- A technical audit is: an independent assessment of delivery risk, architecture constraints, product-engineering alignment, team/process friction, and practical next steps.
- A technical audit is not: a code-style nitpick exercise, an architecture-only opinion piece, or a disguised performance review.
What usually goes wrong
A lot of technical audits fail because they focus on visible technical artifacts and ignore how the organization actually makes decisions.
The result is often a polished report that sounds serious, but does not help the team move faster or make better decisions.
Common audit anti-patterns
- Architecture-only reviews with no delivery or team context
- Accepting leadership assumptions without testing them against team reality
- Missing the product-engineering gap in problem framing, tradeoffs, and success criteria
- Long reports with no prioritization or sequencing
- Findings that are disconnected from business risk
- Recommendations that assume more capacity than the team actually has
- No follow-through plan after the report
The most expensive mistake is usually not “missing bad code.” It is failing to make the alignment problems visible.
In one recurring pattern, leadership sees a capacity problem and concludes that the team needs more people. Product sees a planning problem and says, “We designed this months ago.” Engineering sees a clarity problem and asks, “What is actually the top priority?”
If an audit does not make that mismatch explicit, it is easy to produce the wrong solution. The organization adds process, adds people, or starts rewrite discussions, while the real bottleneck is still weak shared context, unclear ownership, and assumptions that teams can read each other’s minds.
A useful audit has to test assumptions, not just document symptoms.
How I approach it
1. Start with the decision the client needs to make
I start by asking which decision is actually blocked.
Not “what is wrong with the codebase?” but “what decision are you trying to make that you do not have enough confidence to make yet?”
That might be a rewrite vs migration decision. It might be whether to hire. It might be whether the team structure needs to change. It might be whether the delivery problem is technical, organizational, or both.
This matters because many organizations jump straight from symptom to solution. “We move too slow” quickly becomes “we need more people.” In my experience, that is usually a symptom-level conclusion. The recurring bottleneck is clarity: priorities, ownership, tradeoffs, and feedback loops.
2. Assess across five layers (not just code)
I assess the delivery system across multiple layers, because the bottleneck is rarely isolated to one layer.
- Code quality and maintainability: How hard is it to change the system safely?
- Architecture and system boundaries: Where are the coupling and scaling constraints?
- Delivery process and release risk: Where does work slow down, pile up, or fail late?
- Team friction, ownership, and decision flow: Where does execution depend on escalation or hidden knowledge?
- Product-engineering alignment: Are teams aligned on the problem, tradeoffs, and what “done” or “success” actually means?
This is also where leadership fit becomes visible. Sometimes leadership needs to step in and align teams that are optimizing locally instead of for company priorities. Sometimes leadership is too deep in the execution loop and slows decisions that should happen inside the team. Both create drag, but the fix is different.
3. Collect evidence from multiple sources
I do not rely on a single source of truth, because there usually is not one.
I look at the codebase and architecture, but I also look at how work moves through the organization and where context gets lost. Depending on the engagement, that can include:
- Repo and codebase review
- Architecture docs, ADRs, and system diagrams
- CI/CD setup and release process
- Incident history, recurring bugs, and operational pain
- Interviews with leadership, product, and engineering
- How priorities are set, updated, and translated into technical work
- Metrics, telemetry, or analytics quality when decision-making depends on them
The goal is not to collect everything. The goal is to test assumptions from multiple angles.
- If leadership says the issue is capacity, I want to see whether work is actually blocked by lack of hands or by unclear decisions.
- If product says the team has had enough time, I want to see what changed since the original plan and how those changes were communicated.
- If engineering says priorities are unclear, I want to see how often teams are waiting for clarification and who has the authority to resolve it.
4. Prioritize findings by risk and leverage
One of the easiest ways to waste an audit is to turn it into a long list of observations.
I prioritize findings based on risk and leverage, not just technical correctness:
- Impact: What is the cost of this problem to delivery, reliability, or decision quality?
- Frequency: Is this occasional friction or a recurring bottleneck?
- Effort: Is this a quick fix, a structural change, or a longer migration?
- Dependencies: What has to be true before this recommendation can work?
This is where “interesting” gets separated from “important.”
A technically correct improvement can still be low priority if it does not change outcomes. Meanwhile, a process or alignment issue can be high priority because it affects every team every week.
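The risk-and-leverage criteria above can be sketched as a simple scoring pass. This is a hypothetical illustration, not a fixed formula: the weights, the example findings, and the `leverage` heuristic are all assumptions made for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    name: str
    impact: int      # 1-5: cost to delivery, reliability, or decision quality
    frequency: int   # 1-5: occasional friction vs recurring bottleneck
    effort: int      # 1-5: quick fix vs structural change or migration
    blocked_by: list = field(default_factory=list)  # unresolved dependencies

def leverage(f: Finding) -> float:
    # Hypothetical heuristic: favor high-impact, recurring problems that
    # are cheap to fix; open dependencies push a finding down the list.
    score = (f.impact * f.frequency) / f.effort
    return score / 2 if f.blocked_by else score

findings = [
    Finding("Flaky release pipeline", impact=4, frequency=5, effort=2),
    Finding("Microservice split for scaling", impact=3, frequency=2, effort=5),
    Finding("Unclear priority ownership", impact=5, frequency=5, effort=2),
]

for f in sorted(findings, key=leverage, reverse=True):
    print(f"{leverage(f):5.1f}  {f.name}")
```

The point of the sketch is the sorting, not the numbers: a "boring" alignment issue that hits every team every week outranks a technically interesting migration.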
5. Deliver outputs the team can act on immediately
I want the output to be useful on Monday, not just impressive in a meeting deck.
That usually means a short set of prioritized findings, explicit tradeoffs, and practical next steps. Depending on the situation, I may also include:
- Quick wins vs structural changes
- A 30/60/90-day sequence
- Risks of doing nothing
- Clarification of which decisions belong to leadership vs product-engineering teams
The point is not to produce a verdict. The point is to help the organization make better decisions with a more accurate picture of reality.
Tradeoffs / limits
A technical audit can reduce uncertainty quickly, but it cannot replace the work of improving how a team actually operates.
One reason this matters is that organizations often jump too fast to the wrong conclusion, especially around headcount. In my experience, headcount is seldom the real bottleneck. After 25+ years, I still mostly find clarity problems: priorities, ownership, tradeoffs, and feedback loops.
Adding more people into a system with weak clarity often increases coordination overhead before it improves delivery. That is why I treat “we need more people” as a hypothesis to test, not an automatic recommendation. The point is not to avoid hiring. The point is to understand whether the organization has a staffing problem or a clarity problem first.
This is also one reason I dislike talking about people as “resources.” The language makes it too easy to treat a system problem as a staffing spreadsheet problem.
Fred Brooks made the same point from another angle decades ago in The Mythical Man-Month: adding people to a late project can make it later. I explore the math behind that in The Communication Math. I do not use that as a slogan against hiring, but as a reminder to check coordination and clarity before assuming headcount is the fix.
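Brooks's observation can be made concrete with the pairwise-channel count: a team of n people has n(n-1)/2 potential communication paths, so channels grow quadratically while hands grow linearly. A minimal sketch of that arithmetic:

```python
def channels(n: int) -> int:
    # Potential pairwise communication paths in a team of n people
    return n * (n - 1) // 2

for n in [3, 5, 8, 12]:
    print(f"{n:2d} people -> {channels(n):2d} channels")
# Growing a team from 5 to 8 adds 3 pairs of hands,
# but 18 new communication paths (28 - 10).
```

This is why adding people to a system with weak shared context can slow it down before it speeds it up: every new person multiplies the paths along which context has to flow.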
This is also where leadership’s role needs to be defined carefully.
Leadership is necessary when teams are working as silos and optimizing for local goals that do not match company priorities. In that situation, leadership has to create alignment and make tradeoffs visible across teams.
But outside of that, leadership usually should not stay in the day-to-day execution loop. If too many decisions require escalation, teams lose ownership, feedback loops get longer, and delivery slows down. Teams need context and trust, not constant intervention.
Another limit is social, not technical: the word “audit” can send the wrong message.
At C-level, it sounds structured and responsible. Inside the teams, it can sound like a hidden performance review: “Why are you bad?” That framing creates defensiveness and makes it harder to learn what is actually happening.
So part of running a useful audit is how it is communicated. I frame it as a way to reduce friction, improve feedback loops, increase transparency, and make day-to-day work easier and more effective for the people doing it. The goal is not to find someone to blame. The goal is to improve how the system works.
An audit can identify likely bottlenecks, risks, and high-leverage next steps. It cannot guarantee outcomes without follow-through. The value comes from what the organization changes after the audit, not from the report itself.
What good looks like
What good looks like is not just “faster delivery.” It is a team and organization that are aligned on what matters, who owns which decisions, and how to respond when reality changes.
At the organizational level, leadership has a clearer decision path and stops carrying day-to-day execution ownership that belongs in the teams. Product and engineering work more closely together, with shorter feedback loops and more shared context. Teams spend less time guessing what is actually prioritized and more time moving the right things forward.
At the team level, people feel ownership and clarity. They know what problem they are solving, what tradeoffs are acceptable, and when they can decide locally without escalation. That usually improves delivery speed, but more importantly it improves decision quality.
And the cultural signal matters just as much as the process changes: people feel that they are learning, they want to get into work on Monday, and they feel secure. When something breaks in production, it is not time to point fingers. It is time to learn something and improve the system.
That is where the feedback loops become real. If teams get fast signals and have the trust and ownership to act on them, they can fix things as fast as they break them, and the organization gets stronger instead of more defensive.
Who this is useful for
- Founders with a growing product team
- CTOs inheriting a legacy platform
- Teams with slipping predictability or rising incidents
- Organizations planning a migration/replatforming effort
Short CTA
If your team is in this situation, I do short technical audits focused on practical priorities, clearer ownership, and next steps. See /work-with-me or my background on /cv.