Most developer tools fail adoption not because they're poorly built, but because they're designed from the supply side — what the builder can create — rather than the demand side — what the user actually needs to accomplish. Jobs-to-be-Done (JTBD) is the framework that explains this gap. Applied to engineering tools and AI adoption, it reveals why $180K Copilot investments show no return, why SonarQube dashboards go unopened, and what it actually takes for a tool to get "hired."
Your engineering team has six tools running in the background right now. Three of them nobody opens.
It's not a training problem. It's not a buy-in problem. It's a design problem — specifically, a demand-side design problem. Most developer tools are built by asking "what can we build?" The question that actually determines whether a tool gets used is different: "What job is someone trying to get done, and what stops them from doing it?"
That second question is the foundation of Jobs-to-be-Done (JTBD), a framework originally developed by Clayton Christensen and refined into a practical switching model by Bob Moesta. It's been applied extensively to consumer products and SaaS. It's rarely applied to developer tools — and that's exactly why most developer tools end up in the graveyard.
The Tool Graveyard
Every engineering organisation has one. SonarQube installed, never actioned. LinearB dashboards nobody opens. Copilot licences purchased, renewed out of inertia, rarely justified against any metric.
This isn't vendor failure. Most of these tools are genuinely well-built. The failure is adoption — and adoption failure is almost always a demand-side problem.
Gartner found that enterprises use on average fewer than 40% of the features in the software they license. But the deeper problem isn't feature usage — it's job mismatch. The tool was purchased to do one job. The actual job being done in the organisation is different. Nobody mapped them.
When we talk to CTOs about a failed AI tool investment, the pattern is consistent. They bought the tool to "move faster." But the job their board hired them to do was "show measurable AI ROI." Those are not the same job. A tool that makes individual developers type less code fails the board job entirely.
Supply-Side Thinking: How It Shapes What Gets Built
Supply-side thinking starts with what you can build and works outward. It produces features. It produces integration lists. It produces comparison matrices. What it doesn't produce is a clear answer to: "What specific situation does someone need to escape before they'll switch to this?"
Clayton Christensen's foundational JTBD insight is that people don't buy products — they hire them. The job has a specific context. A specific frustration driving it. A specific outcome that would make it successful. And critically: a set of competing options, including "do nothing."
Bob Moesta developed this into a switching model with four forces that determine whether a hire happens at all:
- Push — the frustrations with the current situation that are building pressure to change
- Pull — the attractions of the new solution that make switching feel worth it
- Anxiety — the specific fears that hold people back even when push and pull are present
- Habit — the inertia of existing patterns and tools that reduces urgency to switch
Adoption happens when push + pull outweigh anxiety + habit. Not before.
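That inequality can be made concrete as a back-of-the-envelope diagnostic. The sketch below is illustrative, not a validated model: the 0-10 scoring scale, the `Forces` class, and the example values are all invented for this article.

```python
# A back-of-the-envelope diagnostic for Moesta's four forces. The 0-10
# scoring scale and the example values below are illustrative, not empirical.
from dataclasses import dataclass

@dataclass
class Forces:
    push: int     # frustration with the current situation (0-10)
    pull: int     # attraction of the new solution (0-10)
    anxiety: int  # fears holding the switch back (0-10)
    habit: int    # inertia of existing tools and patterns (0-10)

    def will_switch(self) -> bool:
        # Adoption happens when push + pull outweigh anxiety + habit.
        return self.push + self.pull > self.anxiety + self.habit

# A typical "great demo, unused in production" profile: strong pull,
# weak push, unaddressed anxiety, entrenched habit.
demo_darling = Forces(push=3, pull=8, anxiety=6, habit=7)
print(demo_darling.will_switch())  # False
```

Scoring a tool this way forces the questions vendors skip: how strong is the push really, and what anxiety is still unaddressed?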
Most tool vendors optimise entirely for pull — features, demos, case studies. They underinvest in understanding push (what's actually broken in the current situation), and they almost never address anxiety. That's why tools that look great in demos sit unused in production.
The Real Jobs CTOs, Tech Leads, and Developers Are Hiring For
Here's where it gets practical. The stated reason a team adopts a tool is almost never the real job. The real job is revealed in the switching moment — the specific event that pushed someone from "this is annoying" to "I need to find a solution."
For CTOs and VPs of Engineering, the switching moment is usually one of three events:
- "We spent $180K on Copilot and our DORA metrics got worse."
- "The board asked about AI ROI and I had nothing to show them."
- "We just raised and hired 12 engineers — and delivery is slower than with 6."
The job isn't "use AI tools." The job is: give me a board-ready narrative with numbers that shows engineering is a strategic asset, not a cost centre. Any tool that can't connect to that job will be deprioritised when the next quarter starts.
For Tech Leads and Staff Engineers, the switching moment is usually:
- "I just spent three hours reviewing a PR that any reasonable quality gate would have caught."
- "We wrote a 20-page architecture doc and nobody follows it."
- "Every time we touch this module, something breaks in production."
The job isn't "better code review tooling." The job is: let me stop being the single point of failure for code quality without lowering the bar. A tool that gives them a dashboard to check doesn't do this job. A tool that enforces their standards structurally — at commit time, without them being in the loop — does.
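"Enforces their standards structurally, at commit time" can be sketched concretely. Below is a minimal, hypothetical pre-commit check: the layer rule (`app.web` must not import `app.db`) and the module names are invented for illustration, not taken from any real codebase or tool.

```python
# Minimal sketch of a structural quality gate: an architecture rule checked
# on every commit instead of documented in a 20-page doc. The layer names
# ("app.web" must not import "app.db") are hypothetical.
import ast
import pathlib

FORBIDDEN = {"app.web": "app.db"}  # source layer -> layer it may not import

def violations(root: str) -> list[str]:
    """Scan .py files under `root` for imports that cross a forbidden boundary."""
    found = []
    for path in pathlib.Path(root).rglob("*.py"):
        module = ".".join(path.with_suffix("").parts)
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.Import):
                imported = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                imported = [node.module]
            else:
                continue
            for layer, banned in FORBIDDEN.items():
                if module.startswith(layer) and any(i.startswith(banned) for i in imported):
                    found.append(f"{path}: {module} imports {banned}")
    return found

# Wired into a pre-commit hook or CI step, a non-empty result fails the
# commit, so boundary violations never reach review at all.
```

The point is not this particular rule but the shape: the standard lives in an executable check, not in a document or in the tech lead's head.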
For developers, the switching moment is less dramatic but just as specific:
- "My Copilot-generated code keeps getting rejected in review."
- "I've been on this team six months and I still don't know the architecture conventions."
- "I'm afraid to touch this file because last time it broke three things I didn't expect."
The job is: let me ship working code without waiting for a senior engineer to validate me. This is a confidence job as much as a capability job.
Notice what none of these jobs mention: lines of code generated per hour, test coverage percentage, or the number of integrations a tool supports. Supply-side metrics. Demand-side jobs are expressed in terms of situations, anxieties, and outcomes — not features.
Why AI Tools Consistently Fail the Demand Test
AI coding tools arrived with enormous supply-side pull: autocomplete, generation, explanation, refactoring at scale. The promise was compelling. The demand-side analysis reveals why the ROI often doesn't materialise.
The job a CTO is hiring for — "ship working software without increasing risk as we scale" — has several components. Speed is one of them. But speed without correctness creates rework. Speed without architecture consistency creates debt. Speed without test quality creates fragility.
When you install Copilot on a codebase that already has 88% decorative tests, slow pipelines, and unclear architecture boundaries, you've addressed the speed component of the job while ignoring the correctness, consistency, and fragility components. The tool accelerated the existing system. If the existing system was generating defects at a certain rate, it now generates them faster.
This is what Bob Moesta calls hiring a product for the wrong job. The tool is doing what it was designed to do. The job you needed done is different.
The community knowledge base at beyond.minimumcd.org frames this precisely: "AI does not create new problems. It reveals existing ones faster. Teams that try to accelerate with AI before fixing their delivery process get the same result as putting a bigger engine in a car with no brakes."
The Adoption Sequence as Demand-Side Design
What does demand-side AI adoption actually look like? It follows a specific sequence — and the sequence matters because each step addresses a different force in the four forces model.
1. Quality Tools — Choose AI tooling that minimises rework, not just maximises output. A tool with a 20% error rate adds a hidden rework tax to every generated line. This step addresses the push force: removing the frustration of AI that generates plausible but broken code.
2. Clarify Work — Use AI to improve requirements before writing code, not to write code from vague requirements. Ambiguous specs are the single largest source of defects. This step addresses a deep anxiety force: "Will AI-generated code reflect what we actually wanted?"
3. Harden Guardrails — Strengthen the safety net before expanding AI use. Automated tests, architecture enforcement, security scanning — all running on every commit. This step addresses the habit force: the existing process "works," even if slowly. Guardrails make the new system trustworthy enough to replace the habit.
4. Reduce Delivery Friction — Remove manual gates, fragile environments, and long branch lifetimes. This step addresses the remaining push force: if AI accelerates code generation but the pipeline is still slow, the net gain disappears.
5. Accelerate with AI — Now expand code generation, refactoring, and autonomous contributions. The guardrails are in place. The pipeline is fast. The job — "ship working software without increasing risk" — is now supported end-to-end.
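The sequence can also be expressed as a readiness gate: each stage defines a bar that must be met before the next stage is unlocked. The metric names and thresholds below are hypothetical placeholders; substitute whatever your organisation actually measures.

```python
# Hypothetical readiness gate for the five-stage adoption sequence.
# Metric names and thresholds are illustrative placeholders.
STAGES = [
    ("quality_tools",     lambda m: m["ai_error_rate"] < 0.05),      # low rework tax
    ("clarify_work",      lambda m: m["specs_reviewed_pct"] >= 0.9),  # requirements clarified first
    ("harden_guardrails", lambda m: m["commits_gated_pct"] >= 1.0),   # checks on every commit
    ("reduce_friction",   lambda m: m["lead_time_days"] <= 2),        # fast pipeline
    ("accelerate",        lambda m: True),                            # expand AI use
]

def current_stage(metrics: dict) -> str:
    """Return the first stage whose bar is not yet met."""
    for name, ready in STAGES:
        if not ready(metrics):
            return name
    return "accelerate"

team = {"ai_error_rate": 0.20, "specs_reviewed_pct": 0.40,
        "commits_gated_pct": 0.10, "lead_time_days": 9}
print(current_stage(team))  # quality_tools
```

A team that jumps straight to stage 5 still shows up here as stuck at stage 1: the gate makes the sequencing failure visible.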
Teams that skip steps 1-3 are thinking supply-side. They see pull (AI can write code faster!) and jump to step 5. The tool gets blamed when it fails. The real failure was sequencing — deploying the solution before the system was ready for the job to be done.
Reading the Signals in Your Own Organisation
The demand-side lens gives you a diagnostic framework for any tool your team has adopted — or failed to adopt.
If a tool is unused, ask:
- What push force was it supposed to address? Is that force still present?
- What pull force did it offer? Does the team actually experience that pull?
- What anxiety did it not address? Is that anxiety still blocking adoption?
- What habit is it competing with? What would make that habit easier to break?
If an AI tool investment is showing no return, ask:
- What job was the CTO actually hiring for when they bought it?
- What components of that job did the tool address? Which did it miss?
- Was the underlying system ready for the tool's output? (If not, acceleration made things worse.)
If your team resists a tool, ask:
- What's the anxiety? "My team will resist another tool" points at a job of its own — someone needs to show the team that this one is different.
- What's the habit being disrupted? The more embedded the habit, the stronger the pull needs to be.
Three questions for Monday morning:
1. List the engineering tools your team has adopted in the last two years. For each one, write one sentence describing the job it was hired to do. If you can't write that sentence, that tool is probably underperforming.
2. For your last AI tool investment, identify which of the five adoption stages you were at when you deployed it. If you went straight to stage 5, that's your ROI explanation.
3. For the tool you most want your team to adopt next: write out the four forces explicitly — push, pull, anxiety, habit. If you can't articulate all four, you're designing from the supply side.
The Bottom Line
JTBD doesn't change what tools do. It changes how you evaluate whether a tool is worth adopting — and how you sequence adoption so that the job actually gets done.
Most engineering tools are built by people who deeply understand the supply side: what's technically possible, what integrates with what, what features peers are shipping. The demand side — the specific situation someone needs to escape, the anxiety that holds them back, the habit that generates inertia — is almost never the starting point.
The teams seeing measurable returns from AI investment aren't the teams with the most AI tools. They're the teams who understood the job before they bought the tool, sequenced the adoption to address all four forces, and measured success against the actual outcome — not the feature list.
The question isn't "are we using AI?" It's "is the AI doing the job we hired it for?"