Part 2: Strategy & Opportunity Discovery

Chapter 7: Problem Framing & JTBD


Overview

Convert vague ideas into solvable problems with clear users, jobs, contexts, and success criteria. Proper problem framing is the foundation of successful AI initiatives—it aligns teams on what success looks like and prevents the costly mistake of building solutions in search of a problem.

This chapter introduces the Jobs-to-be-Done (JTBD) framework and systematic techniques for defining AI opportunities that deliver measurable value.

The Problem Framing Challenge

Common Anti-Patterns

Organizations often fall into these traps when defining AI initiatives:

```mermaid
graph TB
    A[Poor Problem Framing] --> B[Solution-First Thinking]
    A --> C[Vague Value Proposition]
    A --> D[Technology Fascination]
    A --> E[Missing User Focus]
    A --> F[Unconstrained Scope]
    B --> G[ChatGPT for everything<br/>without clear problem]
    C --> H[Improve efficiency<br/>no metrics defined]
    D --> I[Blockchain/AI hype<br/>no business case]
    E --> J[Generic tools<br/>nobody adopts]
    F --> K[AI does everything<br/>never completes]
    G --> L[Failed Projects]
    H --> L
    I --> L
    J --> L
    K --> L
    style L fill:#f8d7da
    style A fill:#fff3cd
```

| Anti-Pattern | Description | Consequence | Cost Impact | Example |
| --- | --- | --- | --- | --- |
| Solution-First | Starting with "let's use ChatGPT for..." | Solution constrains problem definition | 6-month delay | "Build a chatbot" instead of "reduce support costs by 30%" |
| Vague Value | "Improve efficiency" or "better experience" | Can't measure success or failure | 40% wasted effort | No clear acceptance criteria or ROI |
| Technology Fascination | Pursuing cool tech regardless of business need | Impressive demos, no production value | $500K+ sunk costs | Blockchain for supply chain without pain point |
| Missing User | No clear primary user or stakeholder | Built for everyone, valuable to no one | <20% adoption | Generic tools nobody champions |
| Scope Creep | Trying to solve everything at once | Project never completes | 2-3x budget overrun | "AI that does all of customer service" |

The Cost of Poor Framing

Quantified Impact:

| Metric | Impact | Source |
| --- | --- | --- |
| Project Failure Rate | 63% cite "poorly defined problem" as root cause | Gartner 2023 AI Survey |
| Timeline Delays | Average 6-month delay when reframing mid-project | McKinsey AI Research |
| Cost Overruns | 2-3x budget overrun when scope isn't well-bounded | Industry benchmarks |
| Wasted Effort | 40% of engineering time on unwanted features | Standish Group CHAOS Report |

```mermaid
graph LR
    A[Poor Framing] --> B[Unclear Requirements]
    B --> C[Wrong Solution Built]
    C --> D[Late Discovery of Mismatch]
    D --> E[Costly Rework or Cancellation]
    A --> F[6-month delay]
    A --> G[2-3x cost overrun]
    A --> H[63% failure rate]
    style A fill:#f8d7da
    style E fill:#f8d7da
```

Why Teams Skip Framing:

  • Pressure to show progress quickly ("just start building")
  • Assumption that problem is obvious
  • Lack of structured frameworks
  • Fear of appearing to slow down innovation
  • Executive impatience for AI "quick wins"

Jobs-to-be-Done Framework

The JTBD framework shifts focus from features and demographics to the underlying job a user is trying to accomplish.

Core JTBD Principle

"People don't want a quarter-inch drill. They want a quarter-inch hole." — Theodore Levitt

Traditional Thinking:

  • Who is the user? (demographics)
  • What features do they want?
  • How do we build it?

JTBD Thinking:

  • What job are they trying to get done?
  • What's the context and constraints?
  • What makes them "hire" or "fire" a solution?

JTBD Canvas

A structured template to capture the essential elements:

```mermaid
graph TB
    A[Job Executor] --> B[Functional Job]
    A --> C[Emotional Job]
    A --> D[Social Job]
    B --> E[Current Approach]
    C --> E
    D --> E
    E --> F[Struggles & Friction]
    F --> G[Opportunity Space]
    H[Context & Constraints] --> G
    I[Success Criteria] --> G
    style A fill:#e1f5ff
    style G fill:#d4edda
    style F fill:#fff3cd
```

Canvas Template:

| Element | Description | Questions to Ask |
| --- | --- | --- |
| Job Executor | Who is trying to get the job done? | Who experiences the pain? Who makes decisions? Who uses the output? |
| Functional Job | The practical task to accomplish | What are they trying to achieve? What's the desired outcome? |
| Emotional Job | How they want to feel | Do they want to feel confident, efficient, in control? Reduce anxiety? |
| Social Job | How they want to be perceived | Do they want to appear expert, responsive, data-driven? |
| Current Approach | How they do it today | What tools/processes exist? What workarounds? |
| Struggles | Pain points and friction | What's slow, error-prone, frustrating? What fails? |
| Context | Situation and constraints | When/where does this happen? What's fixed vs. flexible? |
| Success Criteria | Evidence the job is done well | What metrics matter? What's "good enough"? |

JTBD Example: Customer Support Agent

Complete JTBD Canvas:

## Job Executor
Sarah, Tier 1 Support Agent, 2 years experience
- Handles 40-50 contacts/day via chat and email
- Measured on handle time, CSAT, first-contact resolution
- Works in open office with background noise

## Functional Job
When a customer reaches out with a question or issue,
Sarah needs to quickly understand the problem, find the right solution,
and communicate it clearly to resolve the issue in one interaction.

## Emotional Job
- Feel confident she's giving accurate information
- Avoid the stress of customers getting frustrated with wait time
- Experience sense of accomplishment from helping people

## Social Job
- Be perceived as knowledgeable and professional
- Maintain team reputation for quality support
- Demonstrate value to earn advancement opportunities

## Current Approach
1. Read customer message and check account details
2. Search knowledge base (often returns too many irrelevant results)
3. Check recent similar tickets (manual search, hit-or-miss)
4. If stuck, ping teammates in Slack or escalate to Tier 2
5. Compose response, check with supervisor if unsure
6. Send response and update ticket status

## Struggles & Friction
- Knowledge search returns 50+ articles, takes 3-5 min to find right one
- Can't remember which articles are outdated
- Each escalation costs 10+ min of wait time
- Composing responses from scratch is slow and inconsistent
- Afraid of giving wrong info, so double-checks everything
- Context switching between 5+ tools

## Context & Constraints
- Works in high-volume contact center (300+ agents)
- Peak hours are unpredictable
- Can't make customers wait >2 minutes
- Must maintain 85% CSAT score
- Company policy requires certain language/disclaimers
- Limited to approved knowledge base (can't improvise)

## Success Criteria
- Find accurate answer in <30 seconds
- First-contact resolution >80%
- Handle time <6 minutes
- CSAT >85%
- <5% escalation rate

Three Types of Jobs

Understanding the layers helps design more complete solutions:

```mermaid
graph LR
    A[Functional Job] --> D[Complete Solution]
    B[Emotional Job] --> D
    C[Social Job] --> D
    A1[What to accomplish] --> A
    A2[Practical outcome] --> A
    B1[How to feel] --> B
    B2[Reduce anxiety] --> B
    C1[How to be seen] --> C
    C2[Reputation/status] --> C
    style D fill:#d4edda
```

Examples by Job Type:

| Job Type | Support Agent Example | Data Analyst Example | Sales Rep Example |
| --- | --- | --- | --- |
| Functional | Resolve customer issues quickly | Generate accurate insights from data | Close deals and hit quota |
| Emotional | Feel confident in answers; reduce stress | Feel smart and competent; avoid embarrassment | Feel in control of pipeline; reduce rejection anxiety |
| Social | Be seen as helpful expert | Be perceived as data-driven and rigorous | Be recognized as top performer |

Design Implications:

| Job Type | AI Opportunity | Implementation |
| --- | --- | --- |
| Functional | Automate/augment the core task | Knowledge search, answer suggestion, auto-response |
| Emotional | Reduce uncertainty and anxiety | Confidence scores, verification, fallback options |
| Social | Enable professional growth | Quality metrics, skill development, recognition |

Problem Statement Template

A good problem statement is specific, measurable, and actionable.

Template Structure

In [context],
[user/stakeholder] needs to [job to be done]
because [underlying reason/pain],
but currently [struggle/obstacle],
which causes [quantified impact].

Success means [measurable outcome]
within [timeframe]
under [constraints].

Problem Statement Examples

Example 1: Customer Support

In our customer support center,
Tier 1 agents need to quickly find accurate answers to customer questions
because response time and accuracy directly impact CSAT and operational costs,
but currently they spend 3-5 minutes searching through 500+ knowledge articles
with a 40% escalation rate for information they can't find,
which causes $2.4M annual cost from extended handle times and lost productivity.

Success means agents find the right answer in <30 seconds
with <5% escalation rate and >85% CSAT
within 6 months
under constraints of existing knowledge base and 85% agent adoption.

Example 2: Sales Forecasting

In our quarterly business review process,
the sales VP needs accurate pipeline forecasts to guide resource allocation
because incorrect forecasts lead to missed targets or wasted hiring,
but currently forecasts are based on rep intuition with 35% error rate
and require 40 hours of manual data compilation per quarter,
which causes poor resource decisions and $5M+ in missed opportunities.

Success means forecast error <15% and compilation time <4 hours
while maintaining deal-level transparency for coaching
within 3 months
under constraints of existing CRM data and sales process.

Example 3: Claims Processing

In our insurance claims intake process,
claims processors need to extract data from submitted documents
because manual entry is slow, error-prone, and costly,
but currently processors manually type information from PDFs, taking 12 min per claim
with 8% error rate requiring rework,
which causes 20-hour processing time and $3M annual rework costs.

Success means <3 min per claim, <2% error rate, and 4-hour processing time
maintaining compliance and audit trail requirements
within 9 months
under constraints of existing document formats and systems integration.

Problem Statement Quality Checklist

| Criteria | Good | Bad |
| --- | --- | --- |
| Specific User | "Tier 1 support agents" | "Users" or "customers" |
| Clear Job | "Find accurate answers quickly" | "Improve efficiency" |
| Quantified Pain | "3-5 min search, 40% escalation, $2.4M cost" | "Search is slow" |
| Measurable Success | "<30 sec search, <5% escalation" | "Better experience" |
| Time-Bound | "Within 6 months" | "Soon" or no timeline |
| Constraints | "Existing KB, 85% adoption" | No constraints mentioned |

Research Techniques

Effective problem framing requires deep understanding of user needs and context.

1. User Interviews

Interview Structure (60-90 minutes):

```mermaid
graph LR
    A[Introduction<br/>5-10 min] --> B[Context & Background<br/>10-15 min]
    B --> C[Current Process<br/>20-30 min]
    C --> D[Pain Points<br/>15-20 min]
    D --> E[Ideal Future<br/>10-15 min]
    E --> F[Wrap-up<br/>5-10 min]
```

Key Questions:

| Phase | Questions | What to Listen For |
| --- | --- | --- |
| Context | Walk me through your typical day/week | Frequency, patterns, triggers |
| | What are you measured on? | Incentives, constraints |
| Current Process | Show me how you do [job] | Actual vs. stated process |
| | What tools do you use? | Integration points |
| Pain Points | What's most frustrating? | Emotional response |
| | How much time does [X] take? | Quantified impact |
| | What workarounds have you developed? | Real needs vs. stated |
| Ideal Future | If you had a magic wand... | Unconstrained thinking |
| | What would make you look like a hero? | True success criteria |

Interview Best Practices:

  • Ask "show me" not "tell me": Watch them work, don't just listen to descriptions
  • Dig into specifics: "Last time this happened, what exactly did you do?"
  • Follow the energy: When they get animated, dig deeper
  • Quantify everything: "How often? How long? How many?"
  • Look for workarounds: Signals of unmet needs

2. Shadowing & Observation

Observation Protocol:

| What to Observe | What It Reveals | Documentation |
| --- | --- | --- |
| Tools & Systems | Integration complexity, switching costs | Screenshot workflow, list all tools |
| Workarounds | Unmet needs, pain points | Document specific examples |
| Wait Times | Process bottlenecks | Time each step with stopwatch |
| Errors & Rework | Quality issues, risk areas | Count frequency, classify types |
| Communication | Collaboration patterns, escalations | Note who talks to whom, about what |
| Emotional Signals | Frustration, confusion, confidence | Record specific moments |

Example Observation Log:

## Observation: Support Agent - Sarah, 2 hours

### Timeline
9:00 - 9:12: Customer chat - password reset
  - 2 min: Read request, verify identity
  - 5 min: Search KB for password policy (opened 8 articles before finding right one)
  - 3 min: Guide customer through reset
  - 2 min: Update ticket, add notes
  - **Observation**: Significant time lost in KB search; showed frustration

9:12 - 9:28: Customer email - billing question
  - 3 min: Read email, check account
  - 7 min: Couldn't find answer, pinged Slack (waited for response)
  - 4 min: Escalated to Tier 2 after no Slack response
  - 2 min: Write holding message to customer
  - **Observation**: Escalation due to missing info; visible stress during wait

### Patterns Observed
- KB search: 5-7 min average, high frustration
- Escalations: 30% of cases (way higher than 10% policy)
- Tool switching: 6 different tools in 2 hours
- Waiting time: 15 min total (on escalations and approvals)

3. Five Whys Analysis

Drill down to root causes, not symptoms:

Example: "We need AI to automate customer support"

```mermaid
graph TD
    A[Why do we need to automate support?] --> B[Because handle time is too high]
    B --> C[Why is handle time too high?]
    C --> D[Because agents spend 5 min searching for answers]
    D --> E[Why does search take so long?]
    E --> F[Because KB has 500+ articles with poor relevance]
    F --> G[Why is relevance poor?]
    G --> H[Because articles aren't tagged and search is keyword-only]
    H --> I[Why aren't articles tagged?]
    I --> J[Because no process for maintenance and quality]
    J --> K[Real Problem: Knowledge management process,<br/>not just search technology]
    style J fill:#d4edda
    style K fill:#fff3cd
```

Five Whys Template:

| Why # | Question | Answer | Insight |
| --- | --- | --- | --- |
| 1 | Why [initial problem]? | [First-level answer] | Surface symptom |
| 2 | Why [answer 1]? | [Second-level answer] | Proximate cause |
| 3 | Why [answer 2]? | [Third-level answer] | Contributing factor |
| 4 | Why [answer 3]? | [Fourth-level answer] | Systemic issue |
| 5 | Why [answer 4]? | [Fifth-level answer] | Root cause |

4. Journey Mapping

Visualize the end-to-end user experience:

```mermaid
journey
    title Customer Support Agent - Handling Customer Inquiry
    section Receive Request
        Read message: 3: Agent
        Check account: 4: Agent
        Understand context: 3: Agent
    section Find Solution
        Search knowledge base: 2: Agent
        Review articles: 2: Agent
        Ask colleague: 3: Agent
        Escalate if needed: 1: Agent
    section Respond
        Draft response: 3: Agent
        Verify accuracy: 3: Agent
        Send to customer: 5: Agent
    section Follow-up
        Update ticket: 4: Agent
        Add notes: 3: Agent
        Close case: 5: Agent
```

Journey Map Components:

| Element | Purpose | Questions |
| --- | --- | --- |
| Stages | Key phases of the journey | What are the major steps? |
| Actions | What user does at each stage | What are they doing? Using? |
| Thoughts | What they're thinking | What are they trying to figure out? |
| Emotions | How they feel (1-5 scale) | Where's the frustration? Joy? |
| Pain Points | Specific struggles | Where do they get stuck? Where are errors? |
| Opportunities | Where AI could help | What could be automated? Augmented? |

5. Quantifying the Pain

Make the business case tangible:

Time & Cost Analysis:

| Pain Point | Frequency | Time Cost | People Impacted | Annual Cost |
| --- | --- | --- | --- | --- |
| Knowledge search | 40 searches/day/agent | 5 min avg | 300 agents | $2.4M (5 min × 40 × 300 × 250 days × $40/hr) |
| Escalations | 15/day/agent | 10 min wait | 300 agents | $1.5M |
| Response drafting | 40/day/agent | 3 min | 300 agents | $1.2M |
| Rework (errors) | 5/day/agent | 8 min | 300 agents | $1.0M |
| **Total** | | | | **$6.1M** |
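
The annual-cost figures above come from a simple multiplication: events per day × minutes per event × headcount × working days × a loaded hourly rate. A minimal sketch of that arithmetic, where the 250 working days, $40/hr loaded rate, and the example inputs are illustrative assumptions to replace with your organization's actual numbers:

```python
def annual_cost(events_per_day: float, minutes_per_event: float, people: int,
                working_days: int = 250, loaded_hourly_rate: float = 40.0) -> float:
    """Annual cost (in dollars) of a recurring pain point.

    working_days and loaded_hourly_rate are illustrative assumptions;
    substitute your organization's real figures.
    """
    hours_per_year = events_per_day * minutes_per_event * people * working_days / 60
    return hours_per_year * loaded_hourly_rate

# Hypothetical example: 10 escalations/day/agent, 8 min each, 50 agents.
print(f"${annual_cost(10, 8, 50):,.0f}/year")  # -> $666,667/year
```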

Quality Impact Analysis:

| Issue | Current Rate | Target Rate | Revenue Impact |
| --- | --- | --- | --- |
| CSAT below 85% | 35% of contacts | 10% of contacts | $3M (customer churn) |
| First-contact resolution | 60% | 85% | $2M (reduced repeat contacts) |
| Average handle time | 9 minutes | 6 minutes | $2.4M (capacity gain) |

Techniques for Better Framing

Outcome vs. Output Thinking

Output Thinking (Bad):

  • Build a chatbot
  • Deploy a recommendation engine
  • Create a dashboard

Outcome Thinking (Good):

  • Reduce support costs by 30%
  • Increase conversion rate by 15%
  • Decrease forecast error by 20%

```mermaid
graph LR
    A[Output:<br/>What we build] --> B[Output:<br/>Chatbot deployed]
    C[Outcome:<br/>What changes] --> D[Outcome:<br/>30% cost reduction,<br/>+10 NPS points]
    style B fill:#f8d7da
    style D fill:#d4edda
```

Reframing Table:

| Output Statement | Outcome Statement |
| --- | --- |
| "Build an AI assistant for support agents" | "Reduce agent handle time by 25% while maintaining >85% CSAT" |
| "Implement ML-based forecasting" | "Improve forecast accuracy from 65% to 90% to optimize inventory" |
| "Create a document processing pipeline" | "Reduce claims processing time from 20 hours to 4 hours with <2% error rate" |
| "Deploy a recommendation system" | "Increase add-on sales by $10M annually through personalized suggestions" |

Hypothesis-Driven Framing

Make your assumptions explicit and testable:

Hypothesis Template:

We believe that [intervention]
for [user]
will result in [measurable outcome]
because [theory/assumption].

We'll know we're right when we see [leading indicator]
within [timeframe].

Examples:

Hypothesis 1: AI-Powered Knowledge Search

We believe that providing semantic search with answer extraction
for Tier 1 support agents
will reduce average handle time by 25%
because 40% of handle time is spent searching for information.

We'll know we're right when we see:
- Search time drops from 5min to <30sec (leading)
- Handle time drops from 9min to 6.5min (lagging)
- Agent adoption >80% (adoption)
within 3 months of deployment.

Hypothesis 2: Intelligent Routing

We believe that ML-based routing to specialized agents
for incoming support requests
will improve first-contact resolution from 60% to 80%
because 40% of transfers are due to skills mismatch.

We'll know we're right when we see:
- Transfer rate drops from 40% to <20% (leading)
- FCR improves from 60% to 80% (lagging)
- CSAT improves by 10+ points (outcome)
within 4 months of deployment.

Assumption Logging

Track and test critical assumptions:

| Assumption | Risk | How to Test | Owner | Status |
| --- | --- | --- | --- | --- |
| Agents will adopt AI assistant | High | Pilot with 20 agents, measure usage | Product | In progress |
| KB quality is sufficient for AI | Medium | Analyze 100 articles, test retrieval | ML Eng | Complete ✓ |
| Answer extraction accuracy >90% | High | Benchmark on 500 Q&A pairs | ML Eng | Not started |
| Integration won't break workflows | Medium | Technical spike with IT | Eng Lead | In progress |
| Customers accept AI-drafted responses | High | A/B test in controlled pilot | Product | Planned |
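
If the log grows beyond a handful of rows, a lightweight structured record keeps it sortable by risk and easy to review in standups. One possible sketch in Python; the field names and Risk levels are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import IntEnum

class Risk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Assumption:
    statement: str   # the assumption in plain language
    risk: Risk       # impact if the assumption turns out to be wrong
    test: str        # how it will be validated
    owner: str       # who runs the test
    status: str = "Not started"

log = [
    Assumption("Agents will adopt AI assistant", Risk.HIGH,
               "Pilot with 20 agents, measure usage", "Product", "In progress"),
    Assumption("KB quality is sufficient for AI", Risk.MEDIUM,
               "Analyze 100 articles, test retrieval", "ML Eng", "Complete"),
]

# Review open assumptions, highest risk first.
for a in sorted((a for a in log if a.status != "Complete"),
                key=lambda a: a.risk, reverse=True):
    print(f"[{a.risk.name}] {a.statement} -> {a.test} ({a.owner}, {a.status})")
```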

Test Backlog Structure:

```mermaid
graph TD
    A[Critical Assumptions] --> B[Design Tests]
    B --> C{Quick to Test?}
    C -->|Yes| D[Spike/Prototype<br/>1-2 weeks]
    C -->|No| E[Pilot/Experiment<br/>4-8 weeks]
    D --> F{Validated?}
    E --> F
    F -->|Yes| G[Proceed to Build]
    F -->|No| H[Pivot or Kill]
    style G fill:#d4edda
    style H fill:#f8d7da
```

Deliverables

1. JTBD Canvas

One canvas per primary user or use case. See template in earlier section.

2. Problem Statements

Crisp, 1-paragraph statements for each opportunity. Include:

  • Context and user
  • Job to be done
  • Current struggle and quantified impact
  • Success criteria with metrics and timeline
  • Key constraints

3. Success Criteria & Evaluation Plan

Success Criteria Template:

| Metric Type | Metric | Baseline | Target | Measurement Method |
| --- | --- | --- | --- | --- |
| Leading (Technical) | Answer accuracy | N/A | >92% | Eval set of 500 Q&A pairs |
| Leading (Usage) | Agent adoption | 0% | >80% | Daily active users |
| Lagging (Output) | Avg handle time | 9 min | <6.5 min | Contact center system |
| Lagging (Outcome) | CSAT score | 78 | >85 | Post-contact survey |
| Lagging (Financial) | Annual savings | $0 | >$2M | Finance model |

Evaluation Plan Components:

```mermaid
graph TB
    A[Evaluation Plan] --> B[Metrics Definition]
    A --> C[Data Sources]
    A --> D[Measurement Cadence]
    A --> E[Acceptance Thresholds]
    A --> F[Decision Criteria]
    B --> B1[Leading indicators<br/>Lagging indicators]
    C --> C1[Where data comes from<br/>How to access]
    D --> D1[Daily/Weekly/Monthly<br/>reporting]
    E --> E1[Go/no-go thresholds<br/>for each metric]
    F --> F1[What triggers scale,<br/>pivot, or kill]
    style A fill:#e1f5ff
    style F fill:#d4edda
```

4. Assumption Log & Test Backlog

Track all critical assumptions and how you'll validate them:

Assumption Categories:

| Category | Example Assumptions |
| --- | --- |
| User/Market | Users will adopt the solution; problem is widespread |
| Technical | Data quality sufficient; accuracy achievable; latency acceptable |
| Business | ROI assumptions; cost projections; timeline estimates |
| Operational | Team capacity; integration feasibility; change management |

Test Backlog Template:

## High-Risk Assumptions

### Assumption 1: Agent Adoption
**Risk Level**: High
**Assumption**: Agents will actively use AI assistant for >80% of contacts
**Why Critical**: Low adoption = no ROI

**Test Plan**:
- Method: 4-week pilot with 20 volunteer agents
- Success: >70% daily usage, >4.0/5 satisfaction
- Timeline: Weeks 1-4
- Owner: Product Manager
- Budget: $15K

**Results**: [To be completed]
**Decision**: [Go/Pivot/No-go]

### Assumption 2: Answer Accuracy
**Risk Level**: High
**Assumption**: AI can achieve >90% accuracy on top 30 intents
**Why Critical**: Low accuracy = agent distrust, poor CX

**Test Plan**:
- Method: Offline eval with 500 historical Q&A pairs
- Success: >90% accuracy, >85% coverage
- Timeline: Week 2
- Owner: ML Engineer
- Budget: $5K

**Results**: [To be completed]
**Decision**: [Go/Pivot/No-go]

Why It Matters

The ROI of Good Framing

Impact on Success Rates:

  • Projects with clear problem statements: 72% success rate
  • Projects without: 28% success rate
  • 2.5x difference in outcomes

Time & Cost Savings:

  • 40% less rework when problem is well-defined
  • 30% faster delivery with clear success criteria
  • 50% reduction in scope creep

Alignment Benefits

Team Alignment:

  • Engineers know what "good enough" means
  • Product knows what to prioritize
  • Business knows what to expect
  • Everyone speaks the same language

Stakeholder Confidence:

  • Clear value proposition
  • Measurable success criteria
  • Honest about constraints and risks
  • Evidence-based decision making

Case Study: Insurance Claims Processing

Initial Request (Poor Framing)

"We want to use AI to automate our claims intake process. Build us a bot that can read documents and enter data into our system."

Problems with this framing:

  • Solution-first (bot)
  • No user identified
  • No success criteria
  • No understanding of context

Discovery Process

Step 1: User Interviews (5 claims processors, 2 managers)

Key findings:

  • Processors handle 30-40 claims/day
  • 12 min average per claim for data entry
  • 8% error rate requires rework (15 min each)
  • Highly variable document formats
  • Measured on throughput and accuracy

Step 2: Process Observation (2 days shadowing)

Detailed findings:

  • Manual data entry: 8 min
  • Cross-referencing fields: 3 min
  • Quality checks: 1 min
  • Most errors: date formats, policy numbers, unclear handwriting
  • Workaround: processors create custom cheat sheets

Step 3: Five Whys

Why automate? → Reduce costs
Why reduce costs? → Processing takes too long
Why too long? → Manual data entry is slow
Why is entry slow? → Typing from PDFs and images
Why from PDFs? → That's how customers submit claims

Root insight: The problem isn't typing speed—it's extracting structured data from unstructured documents.

Reframed Problem Statement

In our claims intake process,
claims processors need to extract customer and policy data from submitted documents
because manual entry is slow, error-prone, and prevents us from meeting our 4-hour SLA,
but currently processors spend 12 minutes per claim manually typing information,
with 8% requiring rework due to entry errors,
which causes $3M in annual labor costs and customer dissatisfaction from 20-hour average processing time.

Success means:
- Data extraction in <3 minutes per claim (75% reduction)
- Error rate <2% (75% reduction)
- Processing time <4 hours (80% reduction)
- $2M+ annual savings

Within 9 months, under constraints of:
- Existing document formats (PDFs, images, faxes)
- Integration with legacy claims system
- Maintaining compliance and audit trail
- 90% processor adoption

JTBD Canvas

Job Executor: Claims Processor, processes 30-40 claims daily

Functional Job: Extract accurate data from claims documents and enter into system

Emotional Job: Feel confident data is correct; avoid stress of rework

Social Job: Be seen as efficient and accurate; hit performance targets

Current Approach:

  1. Open submitted document (PDF/image)
  2. Manually type each field into claims system
  3. Cross-reference policy number and customer ID
  4. Double-check dates and amounts
  5. Submit claim for review

Struggles:

  • Inconsistent document formats
  • Poor image quality, unclear handwriting
  • Repetitive typing causes fatigue and errors
  • Context switching between documents and system
  • No way to validate data during entry

Success Criteria:

  • <3 min per claim
  • <2% error rate
  • No rework
  • Meet 4-hour SLA

Solution Design (Outcome of Good Framing)

Based on reframed problem, the team designed:

Not: a fully automated bot.
Instead: a human-in-the-loop system with:

  1. AI document extraction (90% accuracy)
  2. Confidence scoring for each field
  3. Smart UI highlighting low-confidence fields for review
  4. One-click corrections with learning
  5. Automated quality checks

Result:

  • 70% of claims: AI extracts all fields with high confidence, processor reviews in 2 min
  • 25% of claims: AI extracts most fields, processor corrects 2-3 fields in 3-4 min
  • 5% of claims: Poor quality, processor does manual entry in 8 min
  • Overall: 3 min average, 1.8% error rate, $2.1M annual savings
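
The blended figure follows from weighting the three buckets; a quick back-of-the-envelope check (taking 3.5 min as an assumed midpoint of the 3-4 minute range):

```python
# Weighted-average handle time across the three claim buckets described above.
buckets = [
    (0.70, 2.0),  # high-confidence extraction, quick review
    (0.25, 3.5),  # partial extraction, a few corrections (assumed midpoint of 3-4 min)
    (0.05, 8.0),  # poor quality, full manual entry
]
avg_minutes = sum(share * minutes for share, minutes in buckets)
print(f"{avg_minutes:.1f} min average per claim")  # ≈ 2.7 min, consistent with the ~3 min reported
```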

Key Success Factors

  1. Interviewed actual users to understand real workflow
  2. Observed the work rather than relying on descriptions
  3. Quantified the pain with specific time and cost data
  4. Reframed from solution to outcome (not "build a bot" but "reduce processing time and errors")
  5. Set clear success criteria that business stakeholders understood
  6. Scoped realistically with constraints and human-in-the-loop approach

Implementation Checklist

Discovery Phase

  • Identify 5-8 primary users to interview
  • Prepare interview guide with JTBD questions
  • Conduct interviews (60-90 min each)
  • Shadow users for 2-4 hours observing actual work
  • Document current process with times and tools
  • Quantify pain points (time, cost, frequency)

Analysis Phase

  • Complete JTBD canvas for each primary user
  • Run Five Whys to find root causes
  • Create journey map with pain points marked
  • Calculate annual cost of current approach
  • Identify constraints (technical, regulatory, cultural)

Framing Phase

  • Write problem statement using template
  • Define measurable success criteria (leading & lagging)
  • Articulate key hypotheses
  • Create assumption log with risk ratings
  • Design test plan for highest-risk assumptions

Validation Phase

  • Review JTBD canvas with actual users for accuracy
  • Validate problem statement with business sponsors
  • Confirm success criteria with finance/operations
  • Get technical feasibility review from engineering
  • Secure agreement on evaluation plan and thresholds

Documentation Phase

  • Finalize all problem statements
  • Complete evaluation plan with data sources
  • Publish assumption log and test backlog
  • Create executive summary (1-page)
  • Share with all stakeholders for alignment

Templates and Tools

1. Interview Script Template

## Introduction (5 min)
- Introduce yourself and purpose
- Explain how input will be used
- Confirm confidentiality
- Ask permission to record/take notes

## Background (10 min)
- Tell me about your role
- What does a typical day/week look like?
- What are you measured on?
- What tools do you use?

## Current Process (30 min)
- Walk me through how you do [specific job]
  - [Show me on your screen / in your workspace]
- How often do you do this?
- What triggers this work?
- What happens before? After?
- Who else is involved?

## Pain Points (15 min)
- What's most frustrating about this process?
- Where do you get stuck?
- What takes longer than it should?
- What causes rework or errors?
- What workarounds have you developed?
- If you could wave a magic wand, what would you change?

## Quantification (10 min)
- How long does [X] typically take?
- How often does [problem] occur?
- What's the impact when [error] happens?
- How much time do you spend on [pain point]?

## Wrap-up (10 min)
- What haven't I asked that I should know?
- Who else should I talk to?
- Can I follow up if I have questions?
- Would you be interested in testing solutions?

## Thank you and next steps

2. Problem Statement Generator

Answer these questions to generate a problem statement:

| Question | Your Answer |
| --- | --- |
| Who is the user/stakeholder? | |
| What job are they trying to do? | |
| What's the context/situation? | |
| What's the current approach? | |
| What's the specific struggle? | |
| How much does it cost/impact? (quantified) | |
| What does success look like? (metrics) | |
| When do we need this by? (timeline) | |
| What are the constraints? | |

3. Success Criteria Worksheet

| Metric Category | Specific Metric | Current (Baseline) | Target | How Measured | Who Owns |
| --- | --- | --- | --- | --- | --- |
| Leading - Technical | | | | | |
| Leading - Usage | | | | | |
| Lagging - Output | | | | | |
| Lagging - Outcome | | | | | |
| Financial | | | | | |

Common Pitfalls and How to Avoid Them

| Pitfall | Symptom | Prevention | Recovery |
| --- | --- | --- | --- |
| Proxy Users | Talking to managers instead of actual users | Interview people who do the work daily | Redo interviews with actual users |
| Confirmation Bias | Only hearing what confirms your hypothesis | Ask disconfirming questions; seek outliers | Devil's advocate review session |
| Solution Anchoring | Users request specific features | Ask "what problem does that solve?" | Reframe to jobs and outcomes |
| Vague Metrics | "Improve efficiency" or "better UX" | Require specific, measurable targets | Stakeholder workshop on metrics |
| Ignoring Constraints | "We'll figure that out later" | Document constraints upfront | Risk assessment with mitigation plans |
| Scope Creep | Problem keeps expanding | Time-box discovery; define clear boundaries | Prioritize ruthlessly; phase the scope |

Key Takeaways

  1. Start with users, not technology. Understand the job to be done before discussing AI solutions.

  2. Quantify the pain. Vague problems get vague solutions. Measure time, cost, frequency, and impact.

  3. Outcomes over outputs. Focus on what changes, not what you build.

  4. Make hypotheses explicit. If you can't test it, you can't validate it.

  5. Document constraints early. They'll shape your solution whether you acknowledge them or not.

  6. Success criteria must be measurable. If you can't measure it, you can't manage it.

  7. Bad framing compounds. Every downstream decision is affected by how you frame the problem.

  8. Reframing is normal. Expect to refine your problem statement as you learn—that's progress, not failure.

Further Reading

  • "Jobs to be Done" by Anthony Ulwick
  • "The Mom Test" by Rob Fitzpatrick (how to interview users)
  • "Sprint" by Jake Knapp (problem framing in design sprints)
  • "Lean Customer Development" by Cindy Alvarez
  • "Escaping the Build Trap" by Melissa Perri (outcome-driven product)