The AI Readiness Audit.
8 dimensions, scored 1–5. The instrument that produces the readiness scorecard.
The same framework powers the self-assessment calculator and the formal Sweep deliverable. It surfaces deal-stoppers before an engagement starts.
Each dimension is scored 1–5 during the Sweep based on interviews, document review, and operational observation. The aggregate score predicts engagement fit; the per-dimension scores predict which specific risks the Install plan needs to design around. A high aggregate hiding low scores in two dimensions tells you far more than a flat mid-score average ever will.
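The gate described above can be sketched in a few lines. Everything here is illustrative: the thresholds, the function name, and the dimension keys (including two unnamed placeholders to reach 8) are assumptions, not the actual rubric.

```python
# Sketch of the aggregate-vs-per-dimension readiness gate.
# Thresholds and dimension names are assumptions for illustration only.

FIT_AVERAGE = 3.0   # assumed aggregate threshold for "probable fit"
DIM_FLOOR = 3       # assumed per-dimension floor; anything below is a deal-stopper

def read_scorecard(scores: dict[str, int]) -> dict:
    """Aggregate predicts fit; per-dimension flags the risks to design around."""
    avg = sum(scores.values()) / len(scores)
    deal_stoppers = [dim for dim, s in scores.items() if s < DIM_FLOOR]
    return {
        "average": round(avg, 1),
        # Fit requires both a passing aggregate AND no failing dimension.
        "probable_fit": avg >= FIT_AVERAGE and not deal_stoppers,
        "deal_stoppers": deal_stoppers,
    }

# A hypothetical scorecard: strong aggregate, two failing dimensions.
card = read_scorecard({
    "business_stage": 5, "ai_spend": 4, "operational_pain": 5,
    "investment_appetite": 4, "instrumentation": 1, "executive_sponsor": 2,
    "dimension_7": 3, "dimension_8": 3,  # placeholders; not named in the text
})
```

The design choice the gate encodes: averaging alone would pass this scorecard, but the per-dimension floor turns the same numbers into a decline-with-conditions.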
The Sweep produces a one-page scorecard per dimension with the score, the specific failure mode driving it, and the recommended Install design adjustment. The same framework, in compressed form, powers the public self-assessment calculator. The dimensions are deliberately mechanical — they surface the deal-stoppers (no exec sponsor, no instrumentation, no investment appetite) that kill engagements that otherwise look good on paper.
Reading a 3.4-average scorecard.
A consulting firm we audited averaged 3.4 across the 8 dimensions. On the surface that's a probable fit. But the per-dimension breakdown told a different story: 4–5 on business stage, AI spend, operational pain, and investment appetite, against a 1 on instrumentation (they could not measure the workflows we'd target) and a 2 on executive sponsor (delegated to IT, not the managing partner).
The diagnosis: averaging produces a probable-fit signal that the per-dimension breakdown contradicts. We declined the engagement until the managing partner sponsored it directly and the firm could produce baseline KPIs on the workflows in scope. Three months later they came back with both. We started the Sweep then.
MORAL: AGGREGATE SCORES LIE. PER-DIMENSION TELLS THE TRUTH.
Book an Ops Call.
30 minutes. Operator-to-operator. No deck. No follow-up nurture sequence designed to wear you down.
Book an Ops Call →