Chapter 10 — Business Case & Sponsorship
Overview
Quantify value, costs, and risks to secure executive sponsorship and governance. The business case is the foundation for AI investment decisions—it translates technical capabilities into business outcomes and provides sponsors with the clarity they need to commit resources.
A strong business case enables fast decisions, establishes accountability, and sets the stage for successful AI delivery. This chapter shows how to build compelling, evidence-based business cases that secure buy-in and minimize surprises.
The Three-Part Business Case Model
Every AI business case must address value, cost, and risk with equal rigor:
```mermaid
graph TB
    A[Business Case] --> B[Value Model]
    A --> C[Cost Model]
    A --> D[Risk Model]
    B --> B1[Revenue Impact]
    B --> B2[Cost Savings]
    B --> B3[Risk Reduction]
    C --> C1[Build Costs]
    C --> C2[Run Costs]
    C --> C3[Change Costs]
    D --> D1[Delivery Risk]
    D --> D2[Model Risk]
    D --> D3[Compliance Risk]
    B1 --> E[NPV Calculation]
    B2 --> E
    B3 --> E
    C1 --> E
    C2 --> E
    C3 --> E
    E --> F[ROI Metrics]
    D1 --> F
    D2 --> F
    D3 --> F
    F --> G[Go/No-Go Decision]
    style B fill:#d4edda
    style C fill:#fff3cd
    style D fill:#ffe1e1
```
Component 1: Value Model
Value Streams
Revenue Generation:
| Value Driver | Measurement | Example Calculation | Confidence Level |
|---|---|---|---|
| Conversion Lift | Increased purchase rate | Baseline 2.5% → AI 3.0% (+0.5pp) × 100K visitors × $150 AOV = $75K/month | Medium (A/B test: +0.3-0.7pp) |
| Upsell/Cross-sell | Additional items per transaction | +0.3 items/order × 10K orders × $50/item = $150K/month | Medium (Similar features: +0.2-0.4) |
| Customer Acquisition | New customers from better targeting | 20% improvement in ad targeting × acquired-customer LTV = $60K/month | Low (12-month LTV assumption) |
| Retention | Reduced churn | 2% churn reduction × 50K customers × $360 annual value = $360K/year | High (Pilot: 1.8-2.2% improvement) |
| Price Optimization | Margin improvement | 1% better pricing on 30% of SKUs = $50K/month | Medium (Dynamic pricing benchmarks) |
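Driver calculations like these are easier to audit in code than buried in spreadsheet cells. A minimal Python sketch, using the illustrative figures from the table above (the function names are ours, not from any library):

```python
# Hypothetical sketch: value drivers as explicit, auditable calculations.
# Figures mirror the revenue table above.

def monthly_conversion_lift(visitors, lift_pp, aov):
    """Extra monthly revenue from a conversion-rate lift in percentage points."""
    return visitors * (lift_pp / 100) * aov

def annual_retention_value(customers, churn_reduction_pp, annual_value):
    """Annual revenue retained by reducing churn."""
    return customers * (churn_reduction_pp / 100) * annual_value

# Conversion lift: 2.5% -> 3.0% on 100K visitors at $150 AOV
print(monthly_conversion_lift(100_000, 0.5, 150))   # 75000.0  -> $75K/month
# Retention: 2pp churn reduction across 50K customers at $360/year each
print(annual_retention_value(50_000, 2, 360))       # 360000.0 -> $360K/year
```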
Cost Savings:
| Value Driver | Measurement | Example Calculation | Confidence Level |
|---|---|---|---|
| Labor Automation | FTE hours saved | 5 FTEs × 50% time saved × $75K loaded cost = $187.5K/year | High (Time studies: 45-55% savings) |
| Process Efficiency | Cycle time reduction | 50% faster × 10K cases × $25/case = $125K/month | Medium (Pilot: 40-60% improvement) |
| Error Reduction | Fewer costly mistakes | 80% reduction × 100 errors × $500/error = $40K/month | Medium (Historical + model accuracy) |
| Resource Optimization | Better utilization | 15% improvement × $2M resource base = $300K/year | Low (Optimization model theoretical) |
Risk Reduction:
| Value Driver | Measurement | Example Calculation | Confidence Level |
|---|---|---|---|
| Fraud Prevention | Reduced fraud losses | 30% reduction × $1M annual fraud losses = $300K/year | Medium (ML fraud benchmarks) |
| Compliance Violations | Fewer penalties | 50% reduction × 20 violations × $10K each = $100K/year | Low (Historical pattern assumption) |
| Safety Incidents | Prevented accidents | 5 fewer incidents × $200K/incident = $1M/year | High (Similar systems: 4-6 prevention) |
Building the Value Model
Step 1: Establish Baselines
```markdown
## Baseline Data Collection

### Metric: Conversion Rate
- **Source**: Google Analytics, 12-month history
- **Current Performance**: 2.5% (range: 2.3-2.7% seasonal)
- **Data Quality**: High (100K+ samples/month)
- **Stability**: Stable, no major trends

### Metric: Customer Service Volume
- **Source**: Zendesk, 24-month history
- **Current Performance**: 2,500 tickets/month, 45-min avg handle time
- **Data Quality**: High (complete records)
- **Stability**: Growing 5%/year, seasonal Q4 peaks
```
Step 2: Estimate Impact with Ranges
Use three scenarios: Conservative, Base, Optimistic
| Scenario | Assumption | Calculation | Annual Value |
|---|---|---|---|
| Conservative | 30th percentile benchmark, 60% adoption | 0.3pp lift × 0.6 × volume × price | $450K |
| Base | 50th percentile benchmark, 75% adoption | 0.5pp lift × 0.75 × volume × price | $675K |
| Optimistic | 70th percentile benchmark, 90% adoption | 0.7pp lift × 0.9 × volume × price | $945K |
Probability Weighting: (0.25 × $450K) + (0.50 × $675K) + (0.25 × $945K) = **$686K expected value**
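The weighting is a one-line expected-value calculation; a minimal sketch using the scenario figures above:

```python
# Probability-weighted expected value across the three scenarios.
# The 0.25 / 0.50 / 0.25 weights are the chapter's illustrative choice.
scenarios = {
    "conservative": (0.25, 450_000),
    "base":         (0.50, 675_000),
    "optimistic":   (0.25, 945_000),
}
expected = sum(p * value for p, value in scenarios.values())
print(f"Expected annual value: ${expected:,.0f}")  # $686,250
```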
Step 3: Model Adoption Curve
```mermaid
graph LR
    A[Month 0-3<br/>Alpha: 5% users<br/>$5K/month value] --> B[Month 4-6<br/>Beta: 25% users<br/>$25K/month value]
    B --> C[Month 7-9<br/>Rollout: 60% users<br/>$60K/month value]
    C --> D[Month 10-12<br/>Steady: 85% users<br/>$85K/month value]
    D --> E[Year 2+<br/>Optimize: 90% users<br/>$90K/month value]
    style E fill:#d4edda
```
Cumulative Value: Months 1-3: $15K + Months 4-6: $75K + Months 7-9: $180K + Months 10-12: $255K = **$525K Year 1** (vs. $1.02M if instant 85% adoption)
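A short sketch of the same ramp arithmetic, assuming the per-phase monthly values from the diagram:

```python
# Year 1 value under the phased adoption curve, vs. the ceiling
# of instant 85% adoption.
phases = [  # (months in phase, value per month)
    (3,  5_000),   # alpha
    (3, 25_000),   # beta
    (3, 60_000),   # rollout
    (3, 85_000),   # steady state
]
realized = sum(months * per_month for months, per_month in phases)
ceiling = 12 * 85_000
print(f"Year 1 realized: ${realized:,}")              # $525,000
print(f"Instant-adoption ceiling: ${ceiling:,}")      # $1,020,000
```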
Step 4: Sensitivity Analysis
| Assumption | Base Value | NPV Impact (Low) | NPV Impact (High) | Sensitivity |
|---|---|---|---|---|
| Adoption rate | 75% | -$850K (-42%) | +$680K (+34%) | Very High |
| Conversion lift | 0.5pp | -$520K (-26%) | +$520K (+26%) | High |
| Customer volume | 100K/mo | -$340K (-17%) | +$340K (+17%) | Medium |
| AOV | $150 | -$200K (-10%) | +$200K (+10%) | Medium |
| Run costs | $20K/mo | +$150K (+7%) | -$150K (-7%) | Low |
Insight: Focus de-risking on adoption (training, change management) and validating conversion lift (A/B tests).
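The sensitivity ordering can be generated rather than hand-sorted. A sketch that renders the table above as a text tornado chart (the bar scale and variable names are illustrative):

```python
# Text "tornado chart": order assumptions by the size of their NPV swing.
# Low/high NPV impacts come from the sensitivity table above.
swings = {
    "Adoption rate":   (-850_000, 680_000),
    "Conversion lift": (-520_000, 520_000),
    "Customer volume": (-340_000, 340_000),
    "AOV":             (-200_000, 200_000),
    "Run costs":       ( 150_000, -150_000),
}
for name, (low, high) in sorted(
        swings.items(), key=lambda kv: -max(abs(v) for v in kv[1])):
    width = max(abs(low), abs(high))
    print(f"{name:16s} {'#' * (width // 50_000)}  ±${width / 1000:.0f}K")
```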
Component 2: Cost Model
Total Cost of Ownership (TCO)
Build Costs:
| Category | One-Time Cost | Details | Assumptions |
|---|---|---|---|
| Data Preparation | $80K | Cleaning, labeling, pipeline build | 2 data engineers × 2 months @ $20K/month loaded |
| Model Development | $120K | Research, training, tuning, validation | 1.5 data scientists × 4 months @ $20K/month loaded |
| Platform Setup | $50K | MLOps deployment, configuration | SageMaker setup, CI/CD, monitoring |
| Integration | $100K | API development, system connections | 2 engineers × 2.5 months @ $20K/month loaded |
| Testing & Validation | $40K | QA, UAT, performance testing | 1 QA engineer × 2 months + tools |
| Change Management | $60K | Training, documentation, comms | Materials, sessions, support |
| Contingency (20%) | $90K | Buffer for unknowns | Standard risk buffer |
| **Total Build** | **$540K** | | |
Run Costs (Annual):
| Category | Annual Cost | Monthly | Details |
|---|---|---|---|
| Infrastructure | |||
| - Compute (GPU/CPU) | $60K | $5K | SageMaker ml.p3.2xlarge × 2 + autoscaling |
| - Storage | $12K | $1K | 50TB data lake @ $0.023/GB/month |
| - Network | $6K | $0.5K | Data transfer, CDN |
| Licenses & Services | |||
| - LLM API costs | $180K | $15K | 5M API calls/month @ $0.003/call |
| - MLOps platform | $24K | $2K | Enterprise tier subscription |
| - Monitoring tools | $12K | $1K | Datadog, PagerDuty, etc. |
| Team | |||
| - ML Engineer (0.5 FTE) | $60K | $5K | Ongoing maintenance, retraining |
| - Data Scientist (0.25 FTE) | $30K | $2.5K | Model improvements, experiments |
| - Support (0.25 FTE) | $20K | $1.7K | User support, incident response |
| Contingency (15%) | $60K | $5K | Operational buffer |
| Total Run (Year 1) | $464K | $38.7K |
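A quick roll-up sketch of the three-year TCO. The per-year run figures below are the ones used later in the Deliverables cost model, which taper slightly as the system stabilizes:

```python
# 3-year TCO: one-time build plus tapering annual run costs.
build = 540_000
run_by_year = [464_000, 431_000, 422_000]
tco = build + sum(run_by_year)
print(f"3-year TCO: ${tco:,}")  # $1,857,000 (~$1.86M)
```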
Build vs. Buy vs. Partner Analysis
```mermaid
graph TD
    A[Solution Decision] --> B{Core Competency?}
    B -->|Yes| C{Have Capability?}
    B -->|No| D[Buy or Partner]
    C -->|Yes| E[Build]
    C -->|No| F{Time to Acquire?}
    F -->|<6 months| G[Build + Hire]
    F -->|>6 months| D
    D --> H{Differentiation?}
    H -->|High| I[Partner - maintain control]
    H -->|Low| J[Buy - standardize]
    E --> K[Pros: Control, IP, customization<br/>Cons: Time, cost, risk]
    G --> L[Pros: Capability building<br/>Cons: Hiring risk, time]
    I --> M[Pros: Speed, expertise, shared risk<br/>Cons: Dependency, cost]
    J --> N[Pros: Fast, proven, support<br/>Cons: Less differentiation, ongoing cost]
```
Decision Matrix:
| Criterion | Build | Buy (SaaS) | Partner (Co-develop) | Weight | Build Score | Buy Score | Partner Score |
|---|---|---|---|---|---|---|---|
| Time to market | 12 months | 2 months | 6 months | 25% | 2/10 | 9/10 | 6/10 |
| Total cost (3yr) | $1.8M | $2.4M | $2.0M | 20% | 8/10 | 5/10 | 7/10 |
| Customization | High | Low | Medium | 15% | 9/10 | 3/10 | 7/10 |
| IP ownership | Full | None | Shared | 10% | 10/10 | 0/10 | 5/10 |
| Expertise required | High | Low | Medium | 15% | 3/10 | 9/10 | 6/10 |
| Ongoing control | Full | Limited | Negotiated | 10% | 10/10 | 4/10 | 6/10 |
| Risk | High | Low | Medium | 5% | 3/10 | 8/10 | 6/10 |
| **Weighted Score** | | | | 100% | **6.05** | **5.85** | **6.25** |
Recommendation: The partner approach scores highest (6.25 vs. 6.05 for build and 5.85 for buy), offering the best balance of speed, cost, and control.
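The weighted score is just a dot product of weights and criterion scores; a sketch reproducing the matrix above:

```python
# Weighted-score computation for the build/buy/partner decision matrix.
weights = {"time": 0.25, "cost": 0.20, "custom": 0.15, "ip": 0.10,
           "expertise": 0.15, "control": 0.10, "risk": 0.05}
scores = {
    "build":   {"time": 2, "cost": 8, "custom": 9, "ip": 10,
                "expertise": 3, "control": 10, "risk": 3},
    "buy":     {"time": 9, "cost": 5, "custom": 3, "ip": 0,
                "expertise": 9, "control": 4, "risk": 8},
    "partner": {"time": 6, "cost": 7, "custom": 7, "ip": 5,
                "expertise": 6, "control": 6, "risk": 6},
}
for option, s in scores.items():
    total = sum(weights[k] * s[k] for k in weights)
    print(f"{option}: {total:.2f}")  # build 6.05, buy 5.85, partner 6.25
```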
Component 3: Risk Model
Risk Taxonomy
Delivery Risks:
| Risk | Probability | Impact | Severity | Mitigation | Owner | Cost |
|---|---|---|---|---|---|---|
| Data quality insufficient | 30% | High | High | Assessment + cleaning pipeline | Data Eng | $50K |
| Integration delays | 40% | Medium | Medium | Early POC, dedicated engineer | Eng Lead | $30K |
| Talent attrition | 20% | High | Medium | Knowledge sharing, retention bonuses | Eng Mgr | $20K |
| Vendor dependency | 25% | Medium | Medium | Multi-vendor strategy, abstraction | Architect | $15K |
| Timeline slippage | 50% | Medium | High | Agile sprints, weekly reviews | PM | $0 |
Model Risks:
| Risk | Probability | Impact | Severity | Mitigation | Owner | Contingency |
|---|---|---|---|---|---|---|
| Accuracy below target | 35% | High | High | Baseline testing, fallback rules, human-in-loop | DS Lead | $40K (1 month extension) |
| Model drift post-launch | 60% | Medium | High | Monitoring, auto-retraining, drift alerts | ML Eng | $15K/year monitoring |
| Bias/fairness issues | 25% | High | Medium | Bias testing, diverse data, audit process | DS + Legal | $10K bias tools |
| Performance degradation | 40% | Medium | Medium | Load testing, auto-scaling, caching | SRE | $20K/year infrastructure |
Compliance Risks:
| Risk | Probability | Impact | Severity | Mitigation | Owner | Cost |
|---|---|---|---|---|---|---|
| GDPR violation | 10% | Very High | Medium | Privacy impact assessment, legal review | Legal + PM | $60K |
| Data breach | 5% | Very High | Low | Encryption, access controls, security audit | Security | $150K |
| Regulatory objection | 15% | High | Low | Early regulator engagement, explainability | Compliance | $30K |
Risk-Adjusted Financial Model
Scenario Planning:
| Scenario | Probability | Value (3yr NPV) | Cost (TCO) | Net NPV | Risk-Adjusted NPV |
|---|---|---|---|---|---|
| Best Case | 15% | $4.5M | $1.6M | $2.9M | $435K (15% × $2.9M) |
| Base Case | 50% | $3.2M | $1.9M | $1.3M | $650K (50% × $1.3M) |
| Worst Case | 25% | $1.8M | $2.2M | -$400K | -$100K (25% × -$400K) |
| Failure | 10% | $0 | $0.8M | -$800K | -$80K (10% × -$800K) |
| **Expected NPV** | 100% | | | | **$905K** |
Interpretation: Even with 35% chance of underperformance or failure, expected NPV is positive ($905K). However, 10% chance of total loss means strong governance and kill criteria are essential.
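Expected NPV is the probability-weighted sum of scenario outcomes; a minimal sketch using the table above:

```python
# Expected NPV across the four scenarios (probability x net NPV).
scenarios = [  # (name, probability, net NPV)
    ("best",    0.15,  2_900_000),
    ("base",    0.50,  1_300_000),
    ("worst",   0.25,   -400_000),
    ("failure", 0.10,   -800_000),
]
expected_npv = sum(p * npv for _, p, npv in scenarios)
print(f"Expected NPV: ${expected_npv:,.0f}")  # $905,000
```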
Sponsorship & Governance
Sponsor Role Definition
Executive Sponsor Charter:
```markdown
## Executive Sponsor Charter

### Role Purpose
Champion the AI initiative, remove organizational blockers, ensure strategic alignment.

### Decision Rights (RACI: Accountable)
- Approve business case and budget allocation
- Resolve cross-functional conflicts and escalations
- Approve go-live and major scope changes
- Final say on kill/pivot decisions at phase gates

### Time Commitment
- 2-4 hours/month during discovery/build
- 30 minutes/week during critical phases
- Monthly steering committee (1 hour)
- Available for urgent escalations (<24hr response)

### Success Criteria
Project delivers [specific value targets] within [timeline] and [budget],
with [quality metrics] met.

### Sponsor: [Name, Title]
### Term: [Start Date] to [End Date or "until completion"]
```
Governance Structure
```mermaid
graph TB
    A[Steering Committee<br/>Monthly, 1hr<br/>Sponsor + C-suite] -->|Strategic guidance<br/>Budget approval| B[Core Team<br/>Weekly, 1hr<br/>PM + Tech + Product]
    B -->|Progress updates<br/>Escalations| A
    B -->|Coordination| C[Data Workstream<br/>2x/week, 30min]
    B -->|Coordination| D[Engineering Workstream<br/>Daily standup, 15min]
    B -->|Coordination| E[Change Mgmt Workstream<br/>Weekly, 30min]
    C -->|Data readiness<br/>Quality reports| B
    D -->|Dev progress<br/>Technical risks| B
    E -->|Adoption metrics<br/>User feedback| B
    F[RAID Log<br/>Tracked in Jira] -->|Risks| B
    F -->|Actions| C
    F -->|Actions| D
    F -->|Actions| E
    G[Phase Gates<br/>Quarterly] -->|Go/No-Go| A
    B -->|Gate materials| G
    style A fill:#e1f5ff
    style B fill:#fff3cd
    style G fill:#d4edda
```
Steering Committee Operating Model:
| Element | Details |
|---|---|
| Frequency | Monthly (more often during critical phases) |
| Duration | 60 minutes |
| Attendees | Executive Sponsor (chair), CFO delegate, CTO delegate, Product Owner, PM |
| Agenda | 1. Progress vs. plan (10min)<br/>2. Metrics dashboard (10min)<br/>3. RAID review - focus on Reds (20min)<br/>4. Key decisions (15min)<br/>5. AOB and actions (5min) |
| Pre-reads | Sent 48hrs advance: status report, metrics, RAID log, decision memos |
| Outputs | Decision log, action items with owners, updated RAID log |
Deliverables
1. Business Case Document
Executive Summary (1 page):
- Opportunity: [What problem are we solving?]
- Solution: [How will AI address it?]
- Value: [Expected benefits with ranges]
- Investment: [Total cost over 3 years]
- ROI: [Payback period, NPV, IRR]
- Risks: [Top 3 risks and mitigations]
- Recommendation: [Go/No-Go with rationale]
Value Model (2 pages):
| Benefit Category | Year 1 | Year 2 | Year 3 | Total | Confidence |
|---|---|---|---|---|---|
| Revenue growth | $200K | $500K | $700K | $1.4M | Medium |
| Cost savings | $300K | $400K | $450K | $1.15M | High |
| Risk reduction | $100K | $150K | $150K | $400K | Low |
| **Total** | **$600K** | **$1.05M** | **$1.3M** | **$2.95M** | |
Cost Model (2 pages):
| Cost Category | Year 0 (Build) | Year 1 | Year 2 | Year 3 | Total |
|---|---|---|---|---|---|
| Build costs | $540K | - | - | - | $540K |
| Infrastructure | - | $78K | $75K | $72K | $225K |
| Licenses | - | $216K | $216K | $220K | $652K |
| Team | - | $110K | $105K | $105K | $320K |
| Contingency | - | $60K | $35K | $25K | $120K |
| Total | $540K | $464K | $431K | $422K | $1.86M |
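A sketch of the base-case NPV from these two tables. Note this unweighted figure differs from the sponsor brief's $905K, which is the probability-weighted number from the Risk Model section:

```python
# Deterministic 3-year base-case NPV from the value and cost tables.
DISCOUNT = 0.12
value = [0, 600_000, 1_050_000, 1_300_000]   # years 0-3, from value model
cost  = [540_000, 464_000, 431_000, 422_000] # year 0 build, then run costs
net   = [v - c for v, c in zip(value, cost)]
npv = sum(cf / (1 + DISCOUNT) ** t for t, cf in enumerate(net))
print(f"Base-case NPV: ${npv:,.0f}")  # ~$700K before risk weighting
```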
Risk Model (2 pages):
- Top 10 risks with probability, impact, mitigation, owner
- Scenario analysis (Best/Base/Worst/Failure with probabilities)
- Contingency plan and kill criteria
2. Sponsor Brief (1-Pager)
```markdown
# Sponsor Decision Brief: [Initiative Name]

**Date**: [Date]
**Decision needed by**: [Date]

## The Ask
[Clear, specific request: e.g., "Approve $540K to build AI customer service assistant"]

## The Business Case
| Metric | Value |
|--------|-------|
| **Investment** | $540K build + $460K/year run = $1.86M over 3 years |
| **Return** | $2.95M value over 3 years |
| **NPV** | $905K (3-year, 12% discount rate) |
| **Payback** | 9 months after launch |
| **Confidence** | Medium (50% probability of base case) |

## Key Risks & Mitigations
1. **Model accuracy**: 35% probability → Baseline testing, fallback to rules, human-in-loop
2. **Adoption slower**: 40% probability → Change management, phased rollout, champions
3. **Integration challenges**: 40% probability → POC completed successfully, dedicated engineer

## Phase Gates (Kill/Pivot Criteria)
- **Gate 1 (Month 3)**: Data quality >85%, model accuracy >target
- **Gate 2 (Month 6)**: Integration complete, UAT passed
- **Gate 3 (Month 9)**: Pilot shows >60% adoption, positive NPS

## Recommendation
**GO** - Proceed with build phase
- Strong business case with positive NPV in risk-adjusted scenarios
- Key technical risks mitigated through POCs
- Competitive pressure makes this strategically important

**Sponsor Decision**: [ ] Approve  [ ] Modify: _______  [ ] Reject
**Signature**: _________________ **Date**: _______
```
Case Study: AI Customer Service Assistant
Context: Mid-sized insurance company wants AI assistant to handle routine inquiries, reducing call volume and improving response times.
Value Model
Baseline Data:
- Call volume: 50,000 calls/month
- Average handle time: 12 minutes
- Cost per call: $8 (agent loaded cost)
- Call center budget: $4.8M/year
- Current CSAT: 3.2/5
Value Calculation (Base Case):
| Value Driver | Assumption | Calculation | Annual Value |
|---|---|---|---|
| Call deflection | 30% of calls automated | 50K × 30% × 12 mo × $8 = | $1.44M |
| Agent productivity | 18% efficiency gain on remaining calls | 50K × 70% × 12 × $8 × 18% = | $605K |
| CSAT improvement | 0.3-point CSAT gain → 0.5pp churn reduction | 100K customers × 0.5pp × $400 LTV = | $200K |
| **Total Annual Value** | | | **$2.25M** |
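The same drivers in code, using the baseline figures above:

```python
# Case-study value drivers from the baseline data.
calls, cost_per_call, months = 50_000, 8, 12

deflection   = calls * 0.30 * months * cost_per_call         # $1.44M
productivity = calls * 0.70 * months * cost_per_call * 0.18  # ~$605K
retention    = 100_000 * 0.005 * 400   # 0.5pp churn x $400 LTV = $200K
total = deflection + productivity + retention
print(f"Total annual value: ${total:,.0f}")  # ~$2.25M
```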
Adoption Curve:
- Months 1-3: 10% adoption (beta) = $225K/year run rate
- Months 4-6: 40% (rollout) = $900K/year run rate
- Months 7-12: 70% (steady state) = $1.58M/year run rate
- Year 1 realized value: ~$1.07M (vs. $2.25M at full adoption)
Cost Model
- Build: ~$690K (includes data preparation $60K, integration $80K, model tuning $90K, contingency $90K)
- Annual Run: ~$240K (LLM API usage $109K, infrastructure $90K, monitoring $40K)
- 3-Year TCO: $1.41M (build + 3 years run)
Financial Summary
| Metric | Value |
|---|---|
| 3-Year NPV (12% discount) | $2.1M |
| IRR | 68% |
| Payback Period | 9 months after launch |
| ROI | 149% |
Scenario Analysis:
| Scenario | NPV | Probability |
|---|---|---|
| Best case (40% deflection) | $3.5M | 20% |
| Base case (30% deflection) | $2.1M | 50% |
| Worst case (20% deflection) | $800K | 25% |
| Failure (accuracy issues, cancelled) | -$600K | 5% |
| **Expected NPV** | **$1.92M** | 100% |
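The expected NPV is the same probability-weighted calculation used in the Risk Model section; a one-liner to verify:

```python
# Probability-weighted NPV across the case-study scenarios.
scenarios = [(0.20, 3_500_000), (0.50, 2_100_000),
             (0.25, 800_000), (0.05, -600_000)]
expected = sum(p * npv for p, npv in scenarios)
print(f"Expected NPV: ${expected:,.0f}")  # $1,920,000
```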
Outcome
Sponsor Decision: APPROVED with conditions
- Proceed to build phase
- Gate 1 criteria must be met before full budget commit
- Weekly status during critical months 4-9
- Hard stop if CSAT drops >0.2 points
Results (12 months post-launch):
- Actual call deflection: 28% (vs. 30% target) - close enough ✅
- CSAT improved 0.4 points (exceeded 0.3 target) ✅
- Agents were initially resistant but became advocates once the assistant eased their workload ✅
- Payback in 10 months (vs. 9-month projection) ✅
- Year 1 value: ~$2.1M annualized (vs. $2.25M projected) ✅
Why It Worked:
- Conservative assumptions: Didn't overpromise, built credibility
- Evidence-based: POC validated accuracy before full commit
- Risk-aware: Identified and mitigated key risks (agent resistance, accuracy)
- Strong sponsor: SVP provided air cover during resistance periods
- Clear governance: Monthly steering kept project on track, caught issues early
Implementation Checklist
Phase 1: Preparation (Weeks 1-2)
- Identify value drivers through stakeholder interviews
- Gather baseline data for all proposed metrics
- Research benchmarks and analogies for estimates
- Document current costs as comparison baseline
Phase 2: Value Model (Weeks 3-4)
- Build value model with conservative/base/optimistic scenarios
- Validate assumptions with domain experts
- Model adoption curve based on similar initiatives
- Run sensitivity analysis on key assumptions
- Determine if a pilot or POC is worthwhile (value-of-information analysis)
Phase 3: Cost Model (Weeks 4-5)
- Enumerate all cost categories (build, run, change)
- Get vendor quotes for platforms and services
- Estimate team needs and loaded costs
- Build vs. buy vs. partner analysis
- Add contingency buffers (15-20%)
Phase 4: Risk Model (Week 5)
- Identify delivery, model, and compliance risks
- Assess probability and impact for each risk
- Develop mitigation plans with owners and costs
- Create scenario analysis (best/base/worst/failure)
- Calculate risk-adjusted NPV
Phase 5: ROI Synthesis (Week 6)
- Calculate NPV, IRR, payback period
- Build decision tree or tornado chart
- Validate calculations with finance team
- Pressure-test assumptions with skeptics
Phase 6: Sponsorship (Week 7)
- Identify and recruit executive sponsor
- Draft sponsor charter with clear decision rights
- Secure sponsor commitment (time and authority)
- Identify product owner and other key roles
Phase 7: Governance Design (Week 7)
- Design governance structure (steering, core team)
- Define RACI for key decisions
- Establish meeting cadence and agendas
- Set up RAID tracking system
- Define phase gate criteria and kill triggers
Phase 8: Documentation (Week 8)
- Draft full business case document
- Create 1-page sponsor brief
- Write governance charter
- Prepare presentation deck
- Assemble appendices (detailed models, research)
Phase 9: Socialization (Weeks 8-9)
- Pre-brief sponsor 1:1 before formal review
- Incorporate sponsor feedback
- Present to steering committee
- Address questions and concerns
- Revise based on feedback
Phase 10: Decision & Launch (Week 10)
- Obtain formal approval and signatures
- Communicate decision to broader team
- Kick off governance (schedule first steering)
- Set up tracking and reporting cadence
- Begin execution with clear success criteria
Ongoing: Track & Update
- Update financial model quarterly as actuals come in
- Revise risk register as risks materialize
- Report value realization vs. plan monthly
- Adjust forecast based on learnings
- Conduct post-launch review to validate assumptions
Key Takeaway: A strong business case is evidence-based, risk-aware, and decision-ready. It quantifies value with ranges not false precision, acknowledges costs holistically including change management, and treats risks explicitly with mitigation plans. Combined with clear sponsorship and governance, it sets the foundation for AI initiatives to deliver on their promise.