Bot first.
Human last.
Same craft, less time.
A public note on how Data Disruption actually delivers engagements. Useful for buyers comparing vendors, useful for us as a commitment device.
Most consultancies have added AI to their existing delivery model. Data Disruption rebuilt delivery around it. Every engagement runs with Claude Code and the Microsoft Fabric MCP in the loop from hour one. Humans still sign everything. But the economics have changed — and we've changed with them.
What this means for a client: faster delivery, visible evaluation, clearer ownership of business logic, and a named human reviewer before anything important is treated as final.
The work didn't change.
The time it takes did.
TMDL is text. DAX is text. Measure documentation is text. Copilot grounding prompts are text. Claude Code — with the Microsoft Fabric MCP server — can read, modify, test, and iterate on all of it in a single session, under human review.
The parts of an engagement that used to take days now take hours. The parts that require judgement still require judgement. We built our delivery model around that distinction, not around pretending AI makes everything faster.
Every engagement,
same six steps.
Regardless of which offer you buy, every Data Disruption engagement runs through the same six-stage loop. Claude Code does the first pass. A Decision Engineer reviews, challenges, and signs. Nothing ships without a human signature.
Decision brief
We don't start with data. We start with a written decision brief: the single decision the client wants to make differently, how often it recurs, and the cost of getting it wrong. This is the only stage a bot cannot do. It is run by a Decision Engineer, facing the client, with a whiteboard.
Inventory & map
Claude Code connects to the Fabric MCP, inventories every workspace, semantic model, report, dataflow and pipeline. Produces a map of what exists, what it depends on, and where the data actually lives. A task that used to take a senior consultant three days finishes in a couple of hours. A human reviews the map, flags surprises.
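The output of this stage is essentially a dependency map. A minimal sketch of its shape, with inline sample data standing in for what the Fabric MCP inventory would return (all artefact names here are illustrative, not real Fabric items):

```python
from collections import defaultdict

# Illustrative inventory edges: (artifact, depends_on). In a real
# engagement these come from the Fabric MCP inventory pass.
inventory = [
    ("Sales.pbix", "SalesModel"),
    ("SalesModel", "SalesLakehouse"),
    ("FinanceReport", "SalesModel"),
    ("SalesLakehouse", "IngestPipeline"),
]

def build_map(edges):
    """Map each artifact to its direct dependants, so orphaned or
    surprisingly load-bearing items are visible at a glance."""
    dependants = defaultdict(list)
    for artifact, dep in edges:
        dependants[dep].append(artifact)
    return dict(dependants)

dep_map = build_map(inventory)
# Everything directly downstream of the semantic model:
print(dep_map["SalesModel"])  # ['Sales.pbix', 'FinanceReport']
```

The reviewer reads this map, not the raw tenant: the surprises are in the edges, not the item list.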
Shape the model
Decision Engineer drafts the target semantic model in Tabular Editor / TMDL. Claude Code scaffolds measures, relationships, calculation groups, roles. Every change committed to git. Every measure paired with a generated unit test the lead approves or rewrites. This is the stage where the time savings are largest.
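The real measures live in TMDL/DAX; the pairing rule is language-agnostic. A Python stand-in for what "every measure ships with a unit test the lead approves or rewrites" looks like (measure logic and fixture values are illustrative):

```python
# Stand-in for a DAX measure: gross margin % over a fact table.
def gross_margin_pct(rows):
    revenue = sum(r["revenue"] for r in rows)
    cost = sum(r["cost"] for r in rows)
    return round((revenue - cost) / revenue * 100, 1) if revenue else 0.0

# Generated unit test: a tiny fixture with a hand-checked expectation.
# This is the artefact the lead approves or rewrites before it merges.
fixture = [
    {"revenue": 100.0, "cost": 60.0},
    {"revenue": 50.0, "cost": 30.0},
]
assert gross_margin_pct(fixture) == 40.0
assert gross_margin_pct([]) == 0.0  # guard against divide-by-zero
```

The bot generates the fixture and the expectation; the human either signs it or replaces the expectation with one they trust.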
Ground the agent
For Ask Your Data engagements, Claude Code writes the grounding prompts, authors the business glossary, generates a 200-question evaluation set, runs it, surfaces the failures. We iterate until the accuracy rate meets the threshold we agreed with the client in writing. This used to be a "trust us" phase. It's now a number on a report.
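The "number on a report" is an accuracy rate against a contractual threshold. A minimal sketch of the harness, with a stubbed agent and a two-question set standing in for the 200 (the threshold value and question text are illustrative):

```python
THRESHOLD = 0.95  # illustrative; the real figure is agreed in writing

eval_set = [
    {"q": "Total revenue FY24?", "expected": "4.2m"},
    {"q": "Top region by margin?", "expected": "North"},
    # ...in practice, 200 questions
]

def run_eval(ask, questions, threshold=THRESHOLD):
    """Run every question, collect failures, report accuracy vs threshold."""
    failures = [c for c in questions if ask(c["q"]) != c["expected"]]
    accuracy = 1 - len(failures) / len(questions)
    return accuracy, failures, accuracy >= threshold

# Stubbed agent that gets one answer wrong:
stub = {"Total revenue FY24?": "4.2m", "Top region by margin?": "South"}.get
accuracy, failures, ship = run_eval(stub, eval_set)
print(f"accuracy={accuracy:.0%}, ship={ship}")  # accuracy=50%, ship=False
```

The failures list is what drives the next grounding iteration; the ship flag is what goes on the report.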
Pressure test
A second Decision Engineer — not the build lead — runs Claude Code against the model as an adversary. Tries to break the measures, break the agent, find the blind spots. Every failure becomes a regression test that ships with the engagement. Clients get the test suite with the code.
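"Every failure becomes a regression test" can be sketched as a small, permanent suite the adversary appends to and the build lead must keep green (case names and the sample answer are illustrative):

```python
regression_suite = []

def record_failure(question, wrong_answer, correct_answer):
    """Every break found in pressure testing becomes a permanent case."""
    regression_suite.append({
        "q": question,
        "must_not_be": wrong_answer,
        "expected": correct_answer,
    })

def run_regressions(ask):
    """Return the cases the agent still gets wrong; empty means green."""
    return [c for c in regression_suite if ask(c["q"]) != c["expected"]]

# The adversary broke a blank-filter edge case; it is now locked in:
record_failure("Margin with no region filter?", "error", "38.5%")

fixed_agent = {"Margin with no region filter?": "38.5%"}.get
assert run_regressions(fixed_agent) == []  # fixed agent passes the suite
```

This suite is part of the handover package, so the client can re-run it after any future change.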
Handover
Client receives the semantic model, the agent config, the grounding prompts, the 200-question eval set, the regression tests, the auto-generated documentation, and a recorded walkthrough. No lock-in. No ongoing mystery. A 60-minute live handover session closes the sprint.
Six things
we won't bend on.
The method only works because of what we don't let the bot do. These aren't soft preferences — they are contractual commitments in every engagement.
No code ships without a Decision Engineer signature.
Every commit, every measure, every agent prompt is reviewed and git-signed by a named human. If the human is overloaded, the sprint slows. The bot never self-merges.
No agent ships without a 200-question eval set.
We don't deploy any "Ask Your Data" agent without a written evaluation harness agreed with the client. If the harness keeps failing, the agent doesn't ship.
No claim goes to a human without a source row.
Every answer from every agent we build cites the semantic-model measure and the underlying rows. Clients can always drill through to proof.
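The source-row commitment implies a concrete shape for every answer the agent returns: the claim, the measure that produced it, and the rows behind it travel together. A minimal sketch (field names and values are illustrative, not a real agent schema):

```python
from dataclasses import dataclass, field

@dataclass
class CitedAnswer:
    """An agent answer that cannot exist without its provenance."""
    text: str
    measure: str                      # semantic-model measure cited
    source_rows: list = field(default_factory=list)  # drill-through rows

answer = CitedAnswer(
    text="FY24 revenue was 4.2m.",
    measure="[Total Revenue]",
    source_rows=[
        {"order_id": 1017, "revenue": 2.1e6},
        {"order_id": 1022, "revenue": 2.1e6},
    ],
)
# No claim without a source row:
assert answer.measure and answer.source_rows
```

A renderer can then always offer "show me the rows" next to every figure, which is what makes the drill-through promise enforceable rather than aspirational.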
No prompts leave the client's tenant.
For regulated industries we use Azure OpenAI inside the client's own tenant. No data egresses to public APIs without written authorisation. Claude Code runs locally against a private MCP. Logs stay with the client.
No engagement is priced by the hour.
Because we use AI to compress delivery, hourly billing would punish our own efficiency. Every offer is fixed-scope, fixed-price.
No undocumented IP stays with us.
Every artefact — code, measures, prompts, eval sets, tests, docs — transfers to the client on handover under an unencumbered licence. No black boxes. You own it.
Show us the decision
that still takes too long.
A free 45-minute call. Bring a workflow, a reporting pain, or a trust issue — we'll tell you quickly whether this is a real fit.
See how this applies to your team