Part 2: Strategy & Opportunity Discovery

Chapter 11: AI Roadmapping & Budget Planning


Overview

Create a multi-horizon roadmap that sequences initiatives, budgets resources, and honors constraints. A well-crafted roadmap makes tradeoffs explicit, aligns stakeholders on priorities, and ensures resources flow to the highest-value opportunities.

AI transformation isn't a single project—it's a portfolio of initiatives that must be carefully sequenced to balance quick wins with foundational capabilities. This chapter provides frameworks for building multi-horizon AI roadmaps and comprehensive budget models that keep initiatives funded and on track.

The Strategic Imperative

Why Roadmapping Matters:

  • 67% of AI initiatives fail to reach production due to poor planning (Gartner)
  • Organizations with structured roadmaps deliver 3.2x more value per dollar invested
  • Clear sequencing reduces time-to-value by 40% through parallelization
  • Explicit dependencies prevent 60% of common bottlenecks
  • Budget transparency reduces mid-flight funding crises by 75%

Consequences of Poor Planning:

  • Teams work on low-value projects while high-value ones wait
  • Platform dependencies block multiple initiatives simultaneously
  • Talent sits idle waiting for blockers to clear
  • Sponsors lose faith when promises slip repeatedly
  • Budget overruns force project cancellations mid-stream

Three-Horizon Framework

```mermaid
graph TB
    subgraph H1["Horizon 1: 0-6 Months<br/>Prove Value & Learn"]
        H1A[Quick Win 1<br/>AI Chatbot Pilot]
        H1B[Foundation POC<br/>Data Pipeline]
        H1C[Team Building<br/>Hire 3-5 people]
    end
    subgraph H2["Horizon 2: 6-18 Months<br/>Scale & Build Platforms"]
        H2A[Platform Deploy<br/>MLOps Infrastructure]
        H2B[Production Systems<br/>3-5 Use Cases]
        H2C[Capability Build<br/>Team 15-30 people]
    end
    subgraph H3["Horizon 3: 18+ Months<br/>Transform & Optimize"]
        H3A[Enterprise Platform<br/>Self-Service AI]
        H3B[Scaled Portfolio<br/>10+ Use Cases]
        H3C[AI Culture<br/>Centers of Excellence]
    end
    H1A --> H2B
    H1B --> H2A
    H2A --> H2B
    H2A --> H3A
    H2B --> H3B
    H2C --> H3C
    style H1A fill:#d4edda
    style H2A fill:#fff3cd
    style H3A fill:#e1f5ff
```

Horizon Characteristics

| Dimension | H1 (0-6 months) | H2 (6-18 months) | H3 (18+ months) |
|---|---|---|---|
| Focus | Prove value, learn, build credibility | Scale proven patterns, establish platforms | Transform operating model, industrialize AI |
| Initiatives | 1-3 pilots, narrow scope | 3-7 production systems | 10+ use cases, platform services |
| Risk tolerance | Medium-high (learning) | Medium (scaling) | Low (production-grade) |
| Team size | 5-10 people | 15-30 people | 30-50 people |
| Budget | $500K-$2M | $2M-$10M | $10M+ |
| Success metric | Validated use case, sponsor support | Production systems, measurable ROI | Business transformation, AI culture |
| Timeline to value | 3-6 months | 6-12 months | 12-24 months |

Initiative Prioritization Framework

```mermaid
graph TD
    A[Evaluate Initiative] --> B{Business Value >7/10?}
    B -->|No| C[Backlog<br/>Revisit Quarterly]
    B -->|Yes| D{Technical Feasibility >6/10?}
    D -->|No| E{Can Improve<br/>Feasibility?}
    E -->|Yes| F[H2/H3 After<br/>Dependency Resolution]
    E -->|No| C
    D -->|Yes| G{Dependencies<br/>Met?}
    G -->|Yes| H[H1 Quick Win<br/>Start Immediately]
    G -->|No| I{Dependency<br/>Timeline?}
    I -->|<6 months| J[H1 Sequence<br/>After Dependency]
    I -->|6-18 months| F
    I -->|>18 months| K[H3 Long-Term<br/>Strategic Option]
    style H fill:#d4edda
    style F fill:#fff3cd
    style K fill:#e1f5ff
```

Prioritization Matrix

| Initiative | Business Value (1-10) | Feasibility (1-10) | Strategic Fit (1-10) | Effort (1-10, inverted) | Priority Score | Horizon |
|---|---|---|---|---|---|---|
| AI Chatbot | 7 | 9 | 8 | 7 | 7.75 | H1 |
| Fraud Detection | 9 | 6 | 9 | 5 | 7.25 | H2 |
| Personalization | 8 | 7 | 7 | 6 | 7.0 | H2 |
| Document AI | 6 | 8 | 6 | 8 | 7.0 | H1 |
| Forecasting | 7 | 7 | 7 | 6 | 6.75 | H2 |
| Voice AI | 8 | 5 | 6 | 4 | 5.75 | H3 |

Priority Score = (Value + Feasibility + Strategic Fit + Effort) / 4
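
This scoring is easy to automate so it stays consistent as the initiative inventory grows. A minimal Python sketch restating the unweighted average above, populated with the matrix's rows (class and field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    value: int        # business value, 1-10
    feasibility: int  # technical feasibility, 1-10
    fit: int          # strategic fit, 1-10
    effort: int       # effort, 1-10 with 10 = least effort (inverted)

    @property
    def priority(self) -> float:
        # Priority Score = (Value + Feasibility + Strategic Fit + Effort) / 4
        return (self.value + self.feasibility + self.fit + self.effort) / 4

initiatives = [
    Initiative("AI Chatbot", 7, 9, 8, 7),
    Initiative("Fraud Detection", 9, 6, 9, 5),
    Initiative("Personalization", 8, 7, 7, 6),
    Initiative("Document AI", 6, 8, 6, 8),
    Initiative("Forecasting", 7, 7, 7, 6),
    Initiative("Voice AI", 8, 5, 6, 4),
]

# Rank highest priority first
for init in sorted(initiatives, key=lambda i: i.priority, reverse=True):
    print(f"{init.name:16s} {init.priority:.2f}")
```

Weighted variants (e.g., doubling business value) are a one-line change to the `priority` property, which makes it easy to test how sensitive the ranking is to the weighting scheme.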

Dependency Mapping

```mermaid
graph LR
    A[Data Platform] -->|Enables| B[Fraud Detection]
    A -->|Enables| C[Personalization]
    A -->|Enables| D[Forecasting]
    E[Customer Data Pipeline] -->|Required for| C
    E -->|Required for| F[Churn Prediction]
    G[AI Chatbot Pilot] -->|Learns from| H[Voice AI]
    G -->|Pattern reuse| I[Email Assistant]
    J[MLOps Platform] -->|Required for| B
    J -->|Required for| C
    J -->|Required for| D
    J -->|Required for| F
    K[Team Hiring] -->|Required for| A
    K -->|Required for| J
    style A fill:#f8d7da
    style J fill:#f8d7da
    style K fill:#f8d7da
```

Dependency Categories

| Type | Examples | Impact if Missing | Typical Timeline |
|---|---|---|---|
| Data Dependencies | Data pipeline, quality threshold, labeling | Project blocked or inaccurate | 2-6 months |
| Platform Dependencies | MLOps, model serving, monitoring | Cannot deploy to production | 3-9 months |
| Compliance Dependencies | Legal review, risk assessment, privacy controls | Regulatory risk, project halt | 1-6 months |
| Vendor Dependencies | Contracts, integrations, SLAs | Delays, capability gaps | 1-4 months |
| People Dependencies | Key hires, training, domain expertise | Execution bottleneck | 2-6 months |

Dependency Resolution Matrix

| Initiative | Data | Platform | Compliance | Vendor | People | Unblock Date | Risk |
|---|---|---|---|---|---|---|---|
| AI Chatbot | ✅ Ready | ⚠️ In Progress | ✅ Ready | ⚠️ Contract Pending | ✅ Ready | Apr 30 | Low |
| Fraud Detection | ❌ Blocked | ❌ Blocked | ✅ Ready | ✅ Ready | ⚠️ Hiring | Jul 31 | High |
| Personalization | ⚠️ In Progress | ❌ Blocked | ✅ Ready | ✅ Ready | ✅ Ready | Jun 30 | Medium |
| Document AI | ✅ Ready | ⚠️ In Progress | ✅ Ready | ✅ Ready | ✅ Ready | May 15 | Low |
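
A dependency map like this can be validated mechanically: a topological sort produces a start order that respects every prerequisite and raises an error if the map contains a circular dependency. A sketch using Python's standard library, with an edge list mirroring the diagram above (initiative names are illustrative):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each initiative maps to the set of things it depends on
dependencies = {
    "Fraud Detection":  {"Data Platform", "MLOps Platform"},
    "Personalization":  {"Data Platform", "Customer Data Pipeline", "MLOps Platform"},
    "Forecasting":      {"Data Platform", "MLOps Platform"},
    "Churn Prediction": {"Customer Data Pipeline", "MLOps Platform"},
    "Voice AI":         {"AI Chatbot Pilot"},
    "Data Platform":    {"Team Hiring"},
    "MLOps Platform":   {"Team Hiring"},
}

# static_order() yields prerequisites before the initiatives that need them,
# and raises graphlib.CycleError if the map is circular
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

Running this as part of roadmap maintenance catches two common failure modes early: an initiative scheduled before its prerequisite, and an accidental dependency cycle introduced during re-planning.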

Roadmap Sequencing

```mermaid
gantt
    title AI Initiative Roadmap (18 Months)
    dateFormat YYYY-MM-DD
    section H1 Quick Wins
    AI Chatbot Pilot          :active, h1a, 2024-01-01, 90d
    Document AI POC           :h1b, 2024-02-01, 60d
    Data Pipeline POC         :h1c, 2024-01-15, 60d
    Hire Wave 1 (3-5 people)  :h1d, 2024-01-01, 90d
    section Foundations
    MLOps Platform Deploy     :h2a, 2024-03-15, 120d
    Data Platform Build       :h2b, 2024-03-01, 150d
    section H2 Scale
    Fraud Detection           :h2c, 2024-07-15, 120d
    Personalization           :h2d, 2024-08-01, 90d
    Forecasting               :h2e, 2024-09-01, 90d
    Hire Wave 2 (8-12 people) :h2f, 2024-06-01, 120d
    section H3 Transform
    Voice AI                  :h3a, 2024-11-01, 120d
    Self-Service Platform     :h3b, 2025-01-01, 150d
    Center of Excellence      :h3c, 2025-02-01, 90d
```

Sequencing Principles

1. Front-Load Quick Wins: H1 should include at least one high-visibility success

  • Example: Chatbot pilot showing 30% deflection in 3 months builds credibility

2. Parallel Foundation + Value: Build platforms while delivering pilot value

  • Data platform builds while chatbot delivers wins
  • Avoids "no value until platform ready" trap

3. Respect Dependencies: Don't promise what can't be delivered

  • Fraud detection waits for data platform
  • Voice AI waits for chatbot learnings

4. Maintain Momentum: No gaps >2 months without visible progress

  • Continuous stream of demos and wins
  • Prevents sponsor disengagement

5. Balance Portfolio: Mix of revenue, cost savings, and risk reduction

  • Not all cost-cutting or all revenue-generating
  • Diversified value story

6. Capacity Reality: Don't oversubscribe teams; leave 20% buffer

  • Account for unplanned work, learning curves
  • Prevents burnout and quality issues

Phase Gates & Milestones

```mermaid
graph LR
    A[Ideation] -->|Gate 0<br/>Strategic Fit| B[Discovery]
    B -->|Gate 1<br/>Feasibility| C[Build]
    C -->|Gate 2<br/>Quality| D[Pilot]
    D -->|Gate 3<br/>Value Proven| E[Scale]
    E -->|Gate 4<br/>ROI Realized| F[Optimize]
    style A fill:#e1f5ff
    style B fill:#fff4e1
    style C fill:#ffe1e1
    style D fill:#e1ffe1
    style E fill:#f0e1ff
    style F fill:#d4edda
```

Gate Criteria

| Gate | Decision Point | Success Criteria | Go/No-Go Criteria | Typical Timeline |
|---|---|---|---|---|
| Gate 0: Ideation → Discovery | Is this worth exploring? | Business sponsor, strategic fit | NPV >$500K, sponsor committed | 2 weeks |
| Gate 1: Discovery → Build | Is this feasible? | Readiness assessment green/yellow, team committed | All dimensions >6/10, data accessible | 4-6 weeks |
| Gate 2: Build → Pilot | Is quality acceptable? | Model accuracy >target, integration tested | UAT passed, security approved | 8-12 weeks |
| Gate 3: Pilot → Scale | Did pilot prove value? | Metrics >target, user feedback positive | NPS >30, no critical bugs | 12-16 weeks |
| Gate 4: Scale → Optimize | Is ROI realized? | Adoption >75%, ROI achieved | Actuals vs. business case, stability | 6-12 months |

Capacity Planning

Team Ramp Model

| Role | H1 (0-6mo) | H2 (6-18mo) | H3 (18+mo) | Loaded Cost/Year | Total Cost (3yr) |
|---|---|---|---|---|---|
| Data Scientists | 2 FTE | 5 FTE | 8 FTE | $150K | $2.25M |
| ML Engineers | 1 FTE | 4 FTE | 6 FTE | $140K | $1.54M |
| Data Engineers | 2 FTE | 4 FTE | 6 FTE | $130K | $1.56M |
| Product Managers | 1 FTE | 2 FTE | 3 FTE | $160K | $0.96M |
| UX Designers | 0.5 FTE | 1 FTE | 2 FTE | $120K | $0.42M |
| DevOps/SRE | 1 FTE | 2 FTE | 3 FTE | $140K | $0.84M |
| Program Manager | 1 FTE | 1 FTE | 2 FTE | $150K | $0.60M |
| Domain Experts | 2 FTE | 3 FTE | 4 FTE | $100K | $0.90M |
| **Total Headcount** | **10.5** | **22** | **34** | | |
| **Annual Team Cost** | **$1.4M** | **$3.0M** | **$4.6M** | | **$9.0M** |
```mermaid
graph LR
    A[Month 0:<br/>3 Core Team] --> B[Month 3:<br/>7 People<br/>Wave 1 Complete]
    B --> C[Month 6:<br/>12 People<br/>+Contractors]
    C --> D[Month 9:<br/>18 People<br/>Wave 2 Hiring]
    D --> E[Month 12:<br/>22 People<br/>H2 Full Team]
    E --> F[Month 18:<br/>28 People<br/>Wave 3 Started]
    F --> G[Month 24:<br/>34 People<br/>H3 Full Team]
    style A fill:#f8d7da
    style C fill:#fff3cd
    style E fill:#e1f5ff
    style G fill:#d4edda
```

Capacity Utilization

| Initiative | DS | MLE | DE | Total (person-months) | Timeline | Utilization |
|---|---|---|---|---|---|---|
| AI Chatbot | 2×3mo | 1×3mo | 1×2mo | 11 | Q1 | 75% |
| Data Platform POC | 0 | 1×2mo | 2×3mo | 8 | Q1 | 80% |
| MLOps Platform | 1×4mo | 2×4mo | 1×4mo | 16 | Q2 | 85% |
| Fraud Detection | 3×4mo | 2×4mo | 1×2mo | 22 | Q3 | 90% |
| **Total Q1-Q3** | **18** | **19** | **15** | **52** | | **83%** |
| **Available Capacity** | **24** | **18** | **21** | **63** | | |
| **Buffer** | **6 (25%)** | **-1 (⚠️)** | **6 (29%)** | **11 (17%)** | | |

Analysis: ML Engineers are over-subscribed by one person-month. Mitigation: hire one additional MLE in Wave 1, or push the MLOps platform start by two weeks.
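
This over-subscription check is worth scripting so it reruns whenever estimates change. A minimal sketch using the Q1-Q3 totals from the table above; the 20% buffer threshold follows the sequencing principles earlier in the chapter:

```python
# Demanded person-months (Q1-Q3 totals row) vs. available capacity, per role
demand   = {"DS": 18, "MLE": 19, "DE": 15}
capacity = {"DS": 24, "MLE": 18, "DE": 21}

def check(role: str) -> tuple[int, str]:
    """Return (buffer in person-months, status); flag anything under a 20% buffer."""
    buffer = capacity[role] - demand[role]
    if buffer < 0:
        status = "over-subscribed"
    elif buffer / capacity[role] < 0.20:
        status = "thin buffer"
    else:
        status = "ok"
    return buffer, status

for role in demand:
    buffer, status = check(role)
    print(f"{role}: buffer {buffer:+d} person-months -> {status}")
```

With the table's numbers, DS and DE clear the 20% threshold while MLE comes back over-subscribed, matching the analysis above.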

Budget Model

Infrastructure Costs

| Component | Year 0 (Build) | Year 1 | Year 2 | Year 3 | 3-Year Total |
|---|---|---|---|---|---|
| **Compute** | | | | | |
| - Training (GPU) | $50K | $103K | $115K | $130K | $398K |
| - Inference (CPU) | $10K | $20K | $22K | $25K | $77K |
| **Storage** | | | | | |
| - Hot data (S3 Standard) | $15K | $40K | $45K | $50K | $150K |
| - Archive (Glacier) | $5K | $40K | $43K | $45K | $133K |
| **Network** | | | | | |
| - Data transfer | $10K | $60K | $65K | $70K | $205K |
| - CDN (CloudFront) | $5K | $15K | $17K | $20K | $57K |
| **Platform Services** | | | | | |
| - MLOps/monitoring | $15K | $45K | $50K | $55K | $165K |
| **Total Infrastructure** | **$110K** | **$323K** | **$357K** | **$395K** | **$1.19M** |

AI Services & Licenses

| Service | Usage Model | Year 0 | Year 1 | Year 2 | Year 3 | 3-Year Total |
|---|---|---|---|---|---|---|
| **LLM APIs** | | | | | | |
| - GPT-4 (complex) | Per 1K tokens | $10K | $35K | $40K | $45K | $130K |
| - GPT-3.5 (simple) | Per 1K tokens | $5K | $15K | $17K | $20K | $57K |
| - Claude (mid-tier) | Per 1K tokens | $8K | $25K | $28K | $30K | $91K |
| Vector DB | Indexes + queries | $10K | $40K | $45K | $50K | $145K |
| Speech/Vision | Per API call | $5K | $20K | $22K | $25K | $72K |
| **Platform Licenses** | | | | | | |
| - MLflow/Databricks | Per user/DBU | $15K | $50K | $60K | $70K | $195K |
| - Monitoring (Datadog) | Per host | $8K | $25K | $28K | $30K | $91K |
| - BI/Analytics | Per user | $5K | $20K | $22K | $25K | $72K |
| **Total AI Services** | | **$66K** | **$230K** | **$262K** | **$295K** | **$853K** |

Vendor & Consulting

| Service | Year 0 | Year 1 | Year 2 | Year 3 | 3-Year Total |
|---|---|---|---|---|---|
| AI strategy consulting (2 days/mo) | $60K | $144K | $100K | $50K | $354K |
| Data labeling service | $30K | $60K | $40K | $30K | $160K |
| Security audit (annual) | $10K | $10K | $10K | $10K | $40K |
| Legal/compliance | $20K | $24K | $15K | $10K | $69K |
| **Total Vendors** | **$120K** | **$238K** | **$165K** | **$100K** | **$623K** |

Build vs. Buy vs. Partner

```mermaid
graph TD
    A[Component Decision] --> B{Strategic<br/>Differentiation?}
    B -->|High| C{Internal<br/>Expertise?}
    B -->|Low| D[Buy SaaS<br/>$100K/yr]
    C -->|Yes| E[Build In-House<br/>$400K + $50K/yr]
    C -->|No| F{Time<br/>Constraint?}
    F -->|Urgent| G[Partner Co-Develop<br/>$250K + $30K/yr]
    F -->|Not Urgent| H{Can Hire<br/>Talent?}
    H -->|Yes| E
    H -->|No| G
    style D fill:#d4edda
    style E fill:#fff3cd
    style G fill:#e1f5ff
```

Build/Buy/Partner Analysis

| Component | Build Cost | Buy Cost | Partner Cost | Recommended | Rationale |
|---|---|---|---|---|---|
| MLOps Platform | $800K + $80K/yr | $100K/yr SaaS | $400K + $50K/yr | Buy | Non-differentiating, mature market |
| Custom NLP Models | $300K + $40K/yr | Limited options | $250K + $30K/yr | Partner | Need expertise + customization |
| Data Pipeline | $400K + $50K/yr | $60K/yr | $280K + $20K/yr | Build | Core competency, custom needs |
| Monitoring | $150K + $25K/yr | $10K/yr | N/A | Buy | Commodity, not strategic |
| Vector Database | $500K + $60K/yr | $16K/yr | N/A | Buy | Specialized, rapidly evolving |

3-Year TCO Comparison:

| Approach | Year 0 (Build) | Year 1 | Year 2 | Year 3 | Total 3-Year |
|---|---|---|---|---|---|
| All Build | $2.4M | $400K | $420K | $440K | $3.66M |
| All Buy | $0 | $800K | $850K | $900K | $2.55M |
| Hybrid (Recommended) | $930K | $550K | $570K | $590K | $2.64M |

Savings: The hybrid approach saves $1.02M versus all-build while retaining more control than all-buy.
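
The TCO comparison is simple to reproduce: one-time build cost plus the sum of recurring annual costs for each approach. A sketch using the table's figures, in $K:

```python
def tco(year0: float, annual: list[float]) -> float:
    """Total cost of ownership: one-time build cost plus recurring years ($K)."""
    return year0 + sum(annual)

approaches = {
    "All Build": tco(2400, [400, 420, 440]),
    "All Buy":   tco(0,    [800, 850, 900]),
    "Hybrid":    tco(930,  [550, 570, 590]),
}

savings_vs_build = approaches["All Build"] - approaches["Hybrid"]
print(approaches)
print(f"Hybrid saves ${savings_vs_build:.0f}K vs. all-build")
```

Keeping the comparison in code makes it trivial to rerun when a vendor quote changes or a new year of cost data arrives.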

Complete Budget Summary

Multi-Year Budget

| Category | Year 0 (Build) | Year 1 | Year 2 | Year 3 | 3-Year Total |
|---|---|---|---|---|---|
| **Team** | | | | | |
| - Salaries & benefits | $700K | $3.0M | $4.2M | $4.6M | $12.5M |
| - Recruiting | $100K | $150K | $100K | $50K | $400K |
| Infrastructure | $110K | $323K | $357K | $395K | $1.19M |
| AI Services & Licenses | $66K | $230K | $262K | $295K | $853K |
| Vendors & Consulting | $120K | $238K | $165K | $100K | $623K |
| Build Costs (one-time) | $400K | $200K | $100K | $50K | $750K |
| **Change Management** | | | | | |
| - Training & enablement | $80K | $120K | $80K | $60K | $340K |
| - Comms & engagement | $30K | $50K | $40K | $30K | $150K |
| Contingency (15%) | $250K | $620K | $750K | $790K | $2.41M |
| **TOTAL** | **$1.86M** | **$4.93M** | **$6.05M** | **$6.37M** | **$19.21M** |
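
The contingency line is a flat 15% applied to the subtotal of all other categories. A sketch of the Year 0 column, in $K; note the table rounds its contingency up to $250K, slightly above the exact 15%:

```python
# Year 0 budget lines from the table above ($K)
year0 = {
    "Salaries & benefits": 700, "Recruiting": 100, "Infrastructure": 110,
    "AI services & licenses": 66, "Vendors & consulting": 120,
    "Build costs": 400, "Training & enablement": 80, "Comms & engagement": 30,
}

subtotal = sum(year0.values())        # pre-contingency subtotal
contingency = round(0.15 * subtotal)  # flat 15% buffer
total = subtotal + contingency
print(f"Subtotal ${subtotal}K + contingency ${contingency}K = ${total}K")
```

Holding contingency as a computed line rather than a hand-entered number keeps the buffer honest as individual lines are revised.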

Budget Allocation by Horizon

| Horizon | Investment | Expected Value | NPV (12% discount) | ROI |
|---|---|---|---|---|
| H1 (0-6mo) | $1.86M | $3.5M | $1.2M | 64% |
| H2 (6-18mo) | $8.1M | $22M | $10.5M | 130% |
| H3 (18+mo) | $9.25M | $35M | $18.2M | 197% |
| **Total 3-Year** | **$19.21M** | **$60.5M** | **$29.9M** | **156%** |
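
The ROI column above is NPV divided by investment. A sketch that reproduces it, plus a generic discounted-cash-flow helper for cases where NPV must be derived from annual cash flows rather than read from a table (the 12% rate comes from the table; the annual cash-flow timing is an assumption):

```python
def npv(rate: float, cashflows: list[float]) -> float:
    """Discounted cash flow NPV; cashflows[0] is at time zero (undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def roi(npv_value: float, investment: float) -> float:
    """ROI as used in the table above: NPV relative to investment."""
    return npv_value / investment

# (NPV $M, investment $M) per horizon, from the table above
horizons = {
    "H1": (1.2, 1.86),
    "H2": (10.5, 8.1),
    "H3": (18.2, 9.25),
    "Total 3-Year": (29.9, 19.21),
}
for name, (n, inv) in horizons.items():
    print(f"{name}: ROI {roi(n, inv):.0%}")
```

The computed ratios match the table's figures to within rounding.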

Quarterly Review Process

```mermaid
graph LR
    A[Month 1-2:<br/>Execute Plan] --> B[Month 3:<br/>Quarterly Review]
    B --> C[Month 4-5:<br/>Execute Updates]
    C --> D[Month 6:<br/>Quarterly Review]
    D --> E[Month 7-8:<br/>Execute Updates]
    E --> F[Month 9:<br/>Quarterly Review]
    F --> G[Month 10-11:<br/>Execute Updates]
    G --> H[Month 12:<br/>Annual Review]
    H --> A
    B -->|Adjustments| A
    D -->|Adjustments| C
    F -->|Adjustments| E
    H -->|Major Refresh| A
    style B fill:#fff3cd
    style D fill:#fff3cd
    style F fill:#fff3cd
    style H fill:#f8d7da
```

Review Agenda

| Section | Duration | Focus | Output |
|---|---|---|---|
| Actuals vs. Plan | 30 min | Progress on milestones, budget vs. spend, team growth, value realized | Variance analysis |
| Learnings & Adjustments | 30 min | What worked, what didn't, new opportunities, changed constraints | Lessons learned |
| Next Quarter Preview | 20 min | Q+1 priorities, resource allocation, dependency status | Commitment |
| Horizon Refresh | 20 min | Update H2/H3 based on learnings, re-prioritization | Updated roadmap |
| Decisions & Actions | 20 min | Scope changes, budget reallocation, hiring, escalations | Action items |

Case Study: Financial Services AI Roadmap

Context

Mid-sized bank, $5B assets, 500 branches, starting from zero AI capability. Goal: Deploy AI across customer experience, fraud prevention, and operations.

Initial Inventory

25 potential initiatives identified across 4 categories:

  • Customer Experience: 8 ideas (chatbot, personalized offers, voice banking, sentiment analysis)
  • Fraud & Risk: 6 ideas (transaction fraud, KYC automation, credit risk scoring)
  • Operations: 7 ideas (document processing, call routing, forecasting, compliance)
  • Infrastructure: 4 ideas (data platform, MLOps, governance, self-service)

Prioritization Results

H1 Candidates (Top 5):

| Initiative | Value Score | Feasibility Score | Strategic Fit | Priority Score | Selected |
|---|---|---|---|---|---|
| Fraud Detection | 9 | 7 | 9 | 8.5 | ✅ H1 |
| Customer Chatbot | 7 | 9 | 8 | 8.0 | ✅ H1 |
| Document Processing | 8 | 8 | 7 | 7.8 | ✅ H1 |
| Data Platform POC | 7 | 8 | 8 | 7.5 | ✅ H1 |
| Credit Risk Model | 8 | 6 | 8 | 7.3 | ❌ H2 (regulatory) |

Selected H1 Portfolio

```mermaid
graph TD
    A[Month 1-2:<br/>Data Platform POC] --> B[Month 2-4:<br/>Chatbot Pilot]
    A --> C[Month 3-6:<br/>Fraud Detection Build]
    B --> D[Month 5-6:<br/>Chatbot Production]
    C --> E[Month 6-9:<br/>Fraud Production]
    F[Parallel:<br/>Hire 3 People<br/>Month 1-3]
    G[Parallel:<br/>MLOps Vendor Selection<br/>Month 2-4]
    style A fill:#f8d7da
    style B fill:#fff3cd
    style C fill:#e1f5ff
    style D fill:#d4edda
    style E fill:#d4edda
```

Budget Allocation

| Quarter | Initiative | Team | Infrastructure | Vendor | Total |
|---|---|---|---|---|---|
| Q1 | Data POC + Chatbot | $350K | $50K | $100K | $500K |
| Q2 | Chatbot prod + Fraud build | $450K | $75K | $150K | $675K |
| Q3 | Fraud prod + Platform | $600K | $100K | $200K | $900K |
| Q4 | Document AI + Credit risk | $750K | $125K | $150K | $1.025M |
| **Year 1 Total** | | **$2.15M** | **$350K** | **$600K** | **$3.1M** |

Results After Year 1

✅ Delivered:

  • Fraud detection live, catching **$2.4M fraud annually** (vs. $2M projected)
  • Chatbot handling 35% of tier-1 support calls ($850K annual savings)
  • Data platform POC validated, production build underway
  • Team grown to 18 people (vs. 19 planned, 1 open req)

⚠️ Challenges:

  • Document AI delayed 6 weeks due to integration complexity
  • Budget overrun of 8% due to higher LLM costs than estimated
  • Fraud detection took 10 weeks vs. 8 weeks planned (still acceptable)

🚫 Descoped:

  • Credit risk model pushed to Year 2 due to regulatory delays (expected)
  • Voice banking removed from roadmap due to low ROI re-assessment

Key Lessons

| Learning | Impact | Mitigation Applied |
|---|---|---|
| Foundation work paid off | Data platform POC enabled faster fraud/chatbot builds | Continue platform-first approach |
| Hiring took longer than planned | 3-week avg delay per role | Added 2-week buffer for future hiring waves |
| LLM costs higher than researched | 20% variance | Now use 20% contingency on usage-based costs |
| Parallel workstreams created synergies | Chatbot team shared learnings with fraud team | Encourage cross-team collaboration |
| Quarterly reviews caught issues early | Document AI delay identified in Month 7, scope adjusted | Continue rigorous review cadence |

Financial Results

| Metric | Year 1 Target | Year 1 Actual | Variance |
|---|---|---|---|
| Investment | $3.0M | $3.24M | +8% |
| Value Realized | $2.5M | $3.25M | +30% |
| Payback Period | 18 months | 14 months | -22% (better) |
| NPV (3-year) | $8.5M | $11.2M | +32% |
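
Each variance above is simply (actual - target) / target, so the column can be generated rather than maintained by hand. A one-line helper with the table's figures:

```python
def variance(actual: float, target: float) -> float:
    """Relative variance of actual vs. target (negative = under target)."""
    return (actual - target) / target

print(f"Investment:     {variance(3.24, 3.0):+.0%}")
print(f"Value realized: {variance(3.25, 2.5):+.0%}")
print(f"Payback period: {variance(14, 18):+.0%}")
print(f"NPV (3-year):   {variance(11.2, 8.5):+.0%}")
```

For the payback period a negative variance is favorable (faster payback), which is why the table flags -22% as "better".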

Implementation Checklist

Phase 1: Inventory & Prioritization (Weeks 1-3)

  • Conduct stakeholder workshops to identify all potential AI initiatives
  • Research industry benchmarks and use cases
  • Score each initiative on value (1-10), feasibility (1-10), strategic fit (1-10), effort (1-10 inverted)
  • Create long-list (20-30 ideas) and short-list (8-12 candidates)
  • Validate prioritization with executive team

Phase 2: Dependency Mapping (Weeks 2-4)

  • Identify data dependencies for each initiative
  • Map platform/infrastructure requirements
  • Document compliance and legal dependencies
  • Identify vendor/partner dependencies
  • Create dependency matrix showing blockers and unblock dates

Phase 3: Sequencing & Roadmap (Weeks 4-6)

  • Apply sequencing logic (quick wins + foundations in parallel)
  • Respect dependency constraints
  • Balance portfolio across value types (revenue, savings, risk)
  • Create wave plan with milestones
  • Define phase gates and criteria
  • Build visual roadmap (Gantt, swim lane, or horizon view)

Phase 4: Capacity Planning (Weeks 5-7)

  • Estimate effort for each initiative (person-months by role)
  • Create hiring plan with waves
  • Model capacity utilization by quarter
  • Identify over/under-subscription
  • Develop contractor/vendor strategy
  • Define onboarding and ramp assumptions

Phase 5: Budget Model (Weeks 6-8)

  • Estimate infrastructure costs (compute, storage, network)
  • Price out licenses and AI services
  • Calculate team costs (salaries, benefits, recruiting)
  • Include vendor/consulting costs
  • Add change management budget
  • Apply contingency buffers (15-20%)
  • Create multi-year budget summary

Phase 6: Build vs. Buy vs. Partner (Weeks 7-8)

  • List all major components/capabilities needed
  • Evaluate strategic differentiation for each
  • Assess internal capability and speed needs
  • Run 3-year TCO analysis for each option
  • Make build/buy/partner decisions
  • Document rationale

Phase 7: Socialization & Approval (Weeks 9-10)

  • Create executive roadmap presentation
  • Build detailed backup materials (dependency maps, budget workbook)
  • Pre-brief sponsor and key stakeholders
  • Present to steering committee or executive team
  • Incorporate feedback and revise
  • Obtain formal approval and budget commitment

Phase 8: Operationalize (Weeks 11-12)

  • Set up project tracking (Jira, Asana, etc.)
  • Create RAID log (Risks, Assumptions, Issues, Dependencies)
  • Schedule recurring roadmap reviews (quarterly)
  • Kick off first wave of hiring
  • Begin first H1 initiatives
  • Establish metrics and reporting dashboards

Ongoing: Review & Update

  • Quarterly roadmap review (actuals vs. plan, lessons, adjustments)
  • Monthly budget tracking (actuals vs. forecast)
  • Bi-weekly capacity utilization review
  • Trigger-based re-planning when major changes occur
  • Annual strategic refresh of H2 and H3

Key Takeaways

  1. Three-Horizon Framework balances quick wins (H1), foundational capabilities (H2), and transformation (H3). Don't skip to H3 without H1/H2 groundwork.

  2. Explicit Dependencies prevent bottlenecks. Map data, platform, compliance, vendor, and people dependencies upfront. Sequence initiatives to respect constraints.

  3. Capacity Planning prevents over-subscription. Model person-months by role, leave 20% buffer for unknowns. Hiring takes 2-3 months—plan accordingly.

  4. Build/Buy/Partner decisions drive 40-60% of budget. Buy commodities, build differentiators, partner for speed + expertise.

  5. Phase Gates enable course correction. Define clear go/no-go criteria at each milestone. Killing a project early is success, not failure.

  6. Quarterly Reviews keep roadmap aligned with reality. Update based on actual progress, learnings, and changing business priorities.

  7. Budget Transparency builds trust. Show all costs (team, infrastructure, vendors, change management) with contingency buffers. No surprises.

  8. Roadmaps Are Living Documents. Plan in detail for H1 (6 months), medium detail for H2 (18 months), light detail for H3 (24+ months). Refresh quarterly.

Further Reading