Part 2: Strategy & Opportunity Discovery

Chapter 9: Readiness & Feasibility Assessment


Overview

Assess data, technology, people/process, and regulatory readiness to reduce delivery risk and avoid costly dead-ends. This chapter provides frameworks for evaluating whether your organization is ready to execute on an AI opportunity.

Before committing significant resources to an AI initiative, organizations must honestly evaluate their readiness across multiple dimensions. A thorough feasibility assessment identifies gaps, constraints, and risks early—when they're cheapest to address.

The Four Dimensions of Readiness

A comprehensive readiness assessment examines four critical dimensions that must align for successful AI delivery:

```mermaid
graph TB
    A[Readiness Assessment] --> B[Data Readiness]
    A --> C[Technology Readiness]
    A --> D[People & Process]
    A --> E[Legal & Compliance]
    B --> B1[Quality: >80%]
    B --> B2[Volume: Sufficient]
    B --> B3[Access: Available]
    B --> B4[Privacy: Compliant]
    C --> C1[Infrastructure: Scalable]
    C --> C2[Platform: Deployed]
    C --> C3[Integration: Tested]
    C --> C4[Monitoring: Ready]
    D --> D1[Skills: Adequate]
    D --> D2[Process: Defined]
    D --> D3[Governance: Established]
    D --> D4[Change Mgmt: Planned]
    E --> E1[Data Rights: Secured]
    E --> E2[Regulations: Mapped]
    E --> E3[Audit Trail: Enabled]
    E --> E4[Risk: Assessed]
    style B fill:#e1f5ff
    style C fill:#fff3cd
    style D fill:#d4edda
    style E fill:#ffe1e1
```

1. Data Readiness

Assessment Framework:

| Aspect | Green (Ready) | Yellow (Needs Work) | Red (Blocker) |
|---|---|---|---|
| Availability | Data exists, accessible, documented | Scattered across systems | Missing or inaccessible |
| Quality | <5% errors, consistent schema | 5-20% issues, fixable | >20% errors, unreliable |
| Volume | 10K+ examples for supervised learning | 1K-10K, may need augmentation | <1K, requires synthetic data |
| Labeling | Pre-labeled or easy to label | Requires SME time | No labels, expensive |
| Privacy | Anonymized, consent obtained | Needs anonymization | PII exposure, legal blocks |
| Freshness | Real-time or near real-time | Batch updates, acceptable lag | Stale data, significant delays |
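
To place a source on the Quality row, a quick profile of a sample is usually enough. Below is a minimal sketch using pandas; the 5%/20% thresholds mirror the table, while the file name and required fields are illustrative assumptions.

```python
import pandas as pd

# Hypothetical sample export; swap in your own source and required fields.
REQUIRED_FIELDS = ["customer_id", "order_date", "product_id", "amount"]

def profile_sample(path: str) -> str:
    df = pd.read_csv(path)

    # Share of rows missing at least one required field.
    missing_rate = df[REQUIRED_FIELDS].isna().any(axis=1).mean()

    # Share of exact duplicate rows.
    duplicate_rate = df.duplicated().mean()

    # Treat the worse of the two as the rough error rate for a RAG rating.
    error_rate = max(missing_rate, duplicate_rate)
    if error_rate < 0.05:
        return f"Green: ~{error_rate:.1%} errors"
    if error_rate <= 0.20:
        return f"Yellow: ~{error_rate:.1%} errors, likely fixable"
    return f"Red: ~{error_rate:.1%} errors, unreliable"

print(profile_sample("orders_sample.csv"))
```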

Data Assessment Process:

```mermaid
flowchart TD
    A[Start Data Assessment] --> B[Catalog Data Sources]
    B --> C[Sample Data Quality]
    C --> D{Quality >80%?}
    D -->|Yes| E[Check Volume]
    D -->|No| F[Estimate Cleaning Effort]
    E --> G{Sufficient Volume?}
    G -->|Yes| H[Verify Access Rights]
    G -->|No| I[Explore Augmentation]
    F --> J[Build Quality Plan]
    H --> K{Privacy Compliant?}
    K -->|Yes| L[Data Ready - Green]
    K -->|No| M[Address Privacy Issues]
    I --> N[Data Volume - Yellow]
    J --> O[Data Quality - Yellow/Red]
    M --> P[Privacy - Red]
    style L fill:#d4edda
    style N fill:#fff3cd
    style O fill:#ffe1e1
    style P fill:#ffe1e1
```

Practical Steps:

  1. Data Inventory: List all potential sources (internal databases, external APIs, unstructured documents)
  2. Sample Analysis: Pull 1,000-10,000 records, check completeness, consistency, accuracy
  3. Labeling Test: Have SMEs independently label 100 examples and measure inter-rater agreement (Cohen's kappa >0.7); see the sketch after this list
  4. Privacy Audit: Map PII fields, verify consent, identify anonymization requirements
  5. Integration Test: Attempt sample joins, measure latency, test pipeline reliability
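
For the labeling test in step 3, inter-rater agreement can be computed with Cohen's kappa once two SMEs have labeled the same examples. A minimal sketch, assuming scikit-learn is available and using toy labels for illustration:

```python
from sklearn.metrics import cohen_kappa_score

# Labels from two SMEs on the same 100 examples (toy values for illustration).
rater_a = ["spam", "ham", "spam", "ham", "spam"] * 20
rater_b = ["spam", "ham", "spam", "spam", "spam"] * 20

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")

# The chapter's rule of thumb: proceed when kappa > 0.7;
# below that, tighten the labeling guidelines and re-test.
if kappa <= 0.7:
    print("Agreement too low - refine label definitions before scaling up.")
```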

2. Technology Readiness

Infrastructure Assessment:

| Component | Ready State | Gap Analysis | Action Required |
|---|---|---|---|
| Compute | GPU/TPU available, auto-scaling | On-demand only | Procure dedicated resources |
| Storage | Scalable data lake, <100ms queries | Siloed databases | Implement unified storage |
| ML Platform | MLOps deployed (MLflow, SageMaker) | Basic tools, manual workflows | Deploy ML platform |
| APIs | REST/GraphQL available, documented | Legacy systems | Build API layer |
| Monitoring | Observability stack running | Basic logging only | Deploy APM tools |
| Security | Zero-trust, secrets management | Perimeter security | Implement RBAC, vault |

Technology Spike Protocol:

```mermaid
sequenceDiagram
    participant PM as Product Manager
    participant Eng as Engineer
    participant Infra as Infrastructure
    participant Vendor as Vendor/Platform
    PM->>Eng: Define spike objectives
    Eng->>Eng: Build minimal POC
    Eng->>Infra: Request test environment
    Infra->>Eng: Provision resources
    Eng->>Vendor: Test API integration
    Vendor-->>Eng: Return performance data
    Eng->>Eng: Measure latency/cost
    Eng->>PM: Report findings + recommendation
    PM->>PM: Update feasibility assessment
```

Critical Tech Spikes:

| Spike Type | Objective | Success Criteria | Timeline | Go/No-Go Threshold |
|---|---|---|---|---|
| Latency Test | Verify response time meets UX requirements | p95 latency <500ms (user-facing) | 3 days | If >2s, redesign needed |
| Cost Probe | Validate unit economics at scale | Cost per transaction <target margin | 3 days | If >2x budget, rethink approach |
| Integration POC | Confirm systems can communicate | End-to-end <5s, 99.9% reliable | 2 weeks | If >10s latency, go async |
| Platform Limits | Identify technical constraints | Document actual vs. claimed capabilities | 1 week | If <50% claimed, change vendor |
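
A latency spike rarely needs more than a small harness that calls the candidate endpoint repeatedly and reports p95. The sketch below uses a placeholder endpoint and payload (both assumptions) plus the requests library, and checks against the 500ms user-facing threshold from the table:

```python
import time
import statistics
import requests

ENDPOINT = "https://api.example.com/v1/generate"  # placeholder URL
PAYLOAD = {"prompt": "Describe product 1234 for a budget-conscious shopper."}

def measure_p95_latency(n: int = 50) -> float:
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        requests.post(ENDPOINT, json=PAYLOAD, timeout=10)
        samples.append(time.perf_counter() - start)
    # quantiles(n=20) returns 19 cut points; index 18 is the 95th percentile.
    return statistics.quantiles(samples, n=20)[18]

p95 = measure_p95_latency()
print(f"p95 latency: {p95:.2f}s")
print("PASS" if p95 < 0.5 else "FAIL: exceeds 500ms user-facing target")
```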

3. People & Process Readiness

Operating Model Maturity:

| Capability | Level 1 (Ad Hoc) | Level 2 (Repeatable) | Level 3 (Defined) | Level 4 (Managed) | Level 5 (Optimizing) |
|---|---|---|---|---|---|
| Data Science | No DS team | 1-2 data scientists | DS team established | Cross-functional squads | Centers of excellence |
| ML Engineering | Notebooks only | Basic deployment scripts | CI/CD for models | Automated MLOps | Self-service platforms |
| Governance | No process | Project-specific | Standard review | Automated checks | Continuous monitoring |
| Change Mgmt | No training | Ad hoc sessions | Formal program | Embedded support | Community of practice |
| Incident Response | Firefighting | On-call rotation | Defined runbooks | Automated remediation | Predictive prevention |

Process Readiness Checklist:

  • Decision Rights: Clear RACI for model approval, data access, deployment
  • SLAs Defined: Response time, uptime, accuracy thresholds documented
  • Change Control: Process for model updates, rollback procedures in place
  • Vendor Management: Contract terms, SLAs, exit clauses reviewed
  • Knowledge Transfer: Documentation standards, onboarding process established
  • Stakeholder Engagement: Communication plan, feedback loops defined

4. Legal & Compliance Readiness

Regulatory Landscape:

| Industry | Key Regulations | AI-Specific Requirements | Readiness Actions |
|---|---|---|---|
| Financial Services | GDPR, SOX, FCRA | Model explainability, bias testing, audit trails | Implement model cards, bias dashboards |
| Healthcare | HIPAA, FDA | PHI protection, clinical validation, safety | De-identify data, pursue FDA approval |
| Retail/E-commerce | GDPR, CCPA, PCI-DSS | Consent management, right to deletion | Consent platform, deletion workflows |
| Public Sector | AIA (EU), OMB | High-risk requirements, transparency | Impact assessments, explainability tools |

Compliance Assessment Flow:

```mermaid
graph TD
    A[Identify Regulations] --> B[Map Data Flows]
    B --> C[Classify Risk Level]
    C --> D{High Risk?}
    D -->|Yes| E[Enhanced Requirements]
    D -->|No| F[Standard Requirements]
    E --> G[Impact Assessment]
    G --> H[External Audit]
    F --> I[Internal Review]
    H --> J[Implement Controls]
    I --> J
    J --> K[Documentation Package]
    K --> L[Legal Sign-off]
    L --> M{Approved?}
    M -->|Yes| N[Compliant - Green]
    M -->|No| O[Remediation - Red]
    style N fill:#d4edda
    style O fill:#ffe1e1
```

Critical Compliance Questions:

  1. Data Rights: Do we have legal basis to use this data for AI? (Consent, contract, legitimate interest)
  2. Model Governance: What level of explainability is required? Are there bias/fairness requirements?
  3. Auditability: Can we reconstruct model decisions for regulatory review? Retention requirements?
  4. Safety & Security: What are acceptable failure modes? How do we handle adversarial inputs?

Readiness Heat Map

Create a visual dashboard showing readiness across all dimensions:

| Dimension | Sub-Component | Status | Confidence | Owner | Mitigation Plan | Target Date |
|---|---|---|---|---|---|---|
| Data | Availability | 🟢 Green | High | Data Eng | N/A | - |
| Data | Quality | 🟡 Yellow | Medium | Data Eng | Cleaning pipeline Q2 | Jun 30 |
| Data | Volume | 🟢 Green | High | Data Eng | N/A | - |
| Data | Labeling | 🟡 Yellow | Medium | Product | SME labeling sprint | May 15 |
| Data | Privacy | 🟢 Green | High | Legal | N/A | - |
| Technology | Compute | 🟢 Green | High | Infra | N/A | - |
| Technology | ML Platform | 🟡 Yellow | Medium | ML Eng | Deploy MLflow | Apr 30 |
| Technology | Integration | 🔴 Red | Low | Eng Lead | API development | Jul 31 |
| Technology | Monitoring | 🟡 Yellow | Medium | SRE | Observability stack | May 31 |
| People | Skills | 🟡 Yellow | Medium | Eng Mgr | Hire 2 ML engineers | Jun 30 |
| People | Process | 🟡 Yellow | Medium | PM | Define governance | May 15 |
| People | Change Mgmt | 🔴 Red | Low | Org Dev | Training program | Aug 31 |
| Compliance | Data Rights | 🟢 Green | High | Legal | N/A | - |
| Compliance | Model Gov | 🟡 Yellow | Medium | Compliance | Bias testing protocol | Jun 15 |
| Compliance | Audit Trail | 🟡 Yellow | Medium | Security | Logging enhancement | May 31 |

Confidence Scoring:

  • High: Evidence-based assessment, verified through testing
  • Medium: Reasoned estimate, some validation performed
  • Low: Assumption-based, requires further investigation
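
If each heat-map row is kept as a structured record, the overall status per dimension falls out as the worst sub-component status. A minimal rollup sketch (field names follow the table; the sample rows are illustrative):

```python
from dataclasses import dataclass

RANK = {"Green": 0, "Yellow": 1, "Red": 2}

@dataclass
class ReadinessItem:
    dimension: str
    sub_component: str
    status: str        # "Green" | "Yellow" | "Red"
    confidence: str    # "High" | "Medium" | "Low"
    owner: str

def rollup(items: list[ReadinessItem]) -> dict[str, str]:
    """Overall status per dimension = worst sub-component status."""
    worst: dict[str, str] = {}
    for item in items:
        current = worst.get(item.dimension, "Green")
        worst[item.dimension] = max(current, item.status, key=RANK.get)
    return worst

items = [
    ReadinessItem("Technology", "Compute", "Green", "High", "Infra"),
    ReadinessItem("Technology", "Integration", "Red", "Low", "Eng Lead"),
    ReadinessItem("People", "Skills", "Yellow", "Medium", "Eng Mgr"),
]
print(rollup(items))  # {'Technology': 'Red', 'People': 'Yellow'}
```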

Mitigation Planning

For each identified gap, develop a specific mitigation plan:

```mermaid
flowchart LR
    A[Identify Gap] --> B{Severity?}
    B -->|Critical| C[Block Project]
    B -->|High| D[Mitigation Required]
    B -->|Medium| E[Monitor & Plan]
    B -->|Low| F[Accept Risk]
    C --> G[Escalate to Sponsors]
    D --> H[Develop Mitigation]
    H --> I{Can Close Gap?}
    I -->|Yes| J[Execute Plan]
    I -->|No| K[Adjust Scope]
    E --> L[Set Review Milestone]
    F --> M[Document in Risk Register]
    J --> N[Track Progress]
    K --> O[Redefine Success]
    L --> N
    M --> N
```
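
The triage branches in the flowchart are worth encoding once so every gap is handled the same way. A minimal sketch, with the severity levels and actions taken from the diagram and everything else illustrative:

```python
# Maps gap severity to the action from the mitigation flowchart.
TRIAGE = {
    "critical": "Block project and escalate to sponsors",
    "high": "Develop a mitigation plan; execute it or adjust scope",
    "medium": "Monitor, plan, and set a review milestone",
    "low": "Accept the risk and record it in the risk register",
}

def triage_gap(name: str, severity: str) -> str:
    action = TRIAGE.get(severity.lower())
    if action is None:
        raise ValueError(f"Unknown severity: {severity}")
    return f"{name}: {action}"

print(triage_gap("Real-time event stream lag", "high"))
```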

Mitigation Strategies by Gap Type:

| Gap Type | Mitigation Options | Example | Timeline | Cost |
|---|---|---|---|---|
| Data Quality | Cleaning pipeline, source fixes, augmentation | Poor addresses → validation service | 8 weeks | $80K |
| Data Volume | Transfer learning, augmentation, active learning | Only 500 images → pre-trained model | 6 weeks | $40K |
| Skills Gap | Hiring, training, consulting, vendor solutions | No LLM expertise → vendor partnership | 12 weeks | $150K |
| Tech Constraints | Platform upgrade, redesign, vendor alternatives | Latency too high → batch processing | 10 weeks | $100K |
| Compliance | Legal remediation, privacy tech, data minimization | GDPR consent → consent platform | 16 weeks | $120K |

Case Study: Real-Time Personalization Assessment

Context: E-commerce company wants real-time product recommendations using LLMs to personalize descriptions based on user behavior.

Initial Vision:

  • User browses → System generates personalized descriptions in real-time
  • Expected: <1s page load, cost <$0.05/page view
  • Scale: 100K daily users, 10 page views/session = 1M requests/day

Readiness Assessment:

Data Readiness: 🟡 Yellow

  • ✅ Availability: User behavior data in clickstream (Green)
  • ⚠️ Quality: Product catalog 15% missing attributes (Yellow)
  • ✅ Volume: 50K products, 1M users, sufficient (Green)
  • 🔴 Real-time: 2-second lag in event stream (Red for <1s target)

Technology Readiness: 🔴 Red

Latency Spike Results:

```
LLM inference (1K token context): 3.2s (p95)
Product catalog lookup: 0.15s (p95)
Total end-to-end: 3.5s vs. 1s target ❌
```

Cost Spike Results:

```
Average tokens: 1,200 input + 400 output
Cost: $0.002/1K input, $0.006/1K output = $0.0048/request
At 1M requests/day: $4,800/day = $144K/month ❌ (Budget: $15K/month)
```
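
The cost probe is plain arithmetic over token counts and per-token prices, and keeping it in a small helper makes it easy to re-run as prices or volumes change. This sketch reproduces the figures above using the prices and volumes stated in the spike:

```python
def cost_estimate(input_tokens: int, output_tokens: int,
                  price_in_per_1k: float, price_out_per_1k: float,
                  requests_per_day: int, days: int = 30) -> tuple[float, float]:
    """Return (cost per request, cost per month)."""
    per_request = (input_tokens / 1000) * price_in_per_1k \
                + (output_tokens / 1000) * price_out_per_1k
    return per_request, per_request * requests_per_day * days

per_req, per_month = cost_estimate(1200, 400, 0.002, 0.006, 1_000_000)
print(f"${per_req:.4f}/request")    # $0.0048/request
print(f"${per_month:,.0f}/month")   # $144,000/month vs. a $15K/month budget
```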

People Readiness: 🟡 Yellow

  • Team has 1 ML engineer, needs 2 more for real-time systems
  • No experience with real-time LLM serving
  • Product team trained on AI concepts (Green)

Compliance Readiness: 🟢 Green

  • Personalization consent already obtained
  • No new PII collected
  • Existing privacy controls sufficient

Mitigation Analysis:

```mermaid
graph TD
    A[Real-time Latency Issue] --> B{Solutions?}
    B --> C[Faster Model]
    B --> D[Edge Deployment]
    B --> E[Architecture Change]
    C --> F[Test GPT-3.5 vs GPT-4]
    F --> G[Still 2.1s - Not Enough]
    D --> H[Edge Inference POC]
    H --> I[Complexity Too High]
    E --> J[Shift to Batch Processing]
    J --> K{Acceptable?}
    K -->|Yes| L[Pre-generate for Top 1000]
    A2[Cost Issue] --> B2{Solutions?}
    B2 --> C2[Reduce Tokens]
    B2 --> D2[Cheaper Model]
    B2 --> E2[Batch Processing]
    C2 --> F2[Optimize Prompts]
    F2 --> G2[Still $90K/month]
    E2 --> J
    J --> M[Daily Batch: $3K/month ✅]
    style M fill:#d4edda
    style G fill:#ffe1e1
    style I fill:#ffe1e1
    style G2 fill:#fff3cd
```

Revised Approach (Feasible):

  1. Batch Recommendations: Pre-generate personalized descriptions daily for the top 1,000 products per user segment
    • Latency: Instantaneous (pre-computed)
    • Cost: ~$3K/month (48x cheaper; see the cost sketch after this list)
    • Coverage: 80% of traffic (top products)
  2. Phase 2 (Future): Add real-time generation for the remaining 20% once faster models are available
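
The batch figure can be sanity-checked with the same arithmetic: at the spike's $0.0048/request, 1,000 products across roughly 20 user segments generated once a day lands near $3K/month. The segment count is an assumption for illustration; the per-request cost comes from the cost spike above.

```python
PER_REQUEST = 0.0048        # per-request cost from the spike above
PRODUCTS_PER_SEGMENT = 1000
SEGMENTS = 20               # assumed segment count for illustration

daily_requests = PRODUCTS_PER_SEGMENT * SEGMENTS
monthly_cost = daily_requests * PER_REQUEST * 30
print(f"{daily_requests:,} generations/day -> ${monthly_cost:,.0f}/month")
# ~20,000/day -> ~$2,880/month, i.e. roughly the $3K/month cited above
```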

Key Decisions:

  • Scope Adjusted: Real-time → Batch processing
  • Timeline: Launch in 6 weeks vs. 12+ weeks for real-time
  • Risk Reduced: Avoid expensive failure, prove value first
  • Sponsor Approval: Gained with honest assessment

Outcome:

  • ✅ Batch system launched successfully
  • ✅ 18% increase in conversion for personalized products
  • ✅ Cost stayed within budget
  • ✅ Built credibility for Phase 2 expansion

Lessons Learned:

  1. Early spikes prevented mistakes: Latency/cost issues found in week 2, not month 6
  2. Scope flexibility is power: Adjusting ambition to capability delivered faster value
  3. Evidence builds trust: Showing spike data convinced sponsors
  4. Mitigation isn't failure: Finding a viable alternative is a success

Deliverables

1. Readiness Assessment Report

Structure:

```markdown
# AI Readiness Assessment: [Project Name]

## Executive Summary
- Overall readiness: [Red/Yellow/Green]
- Key blockers: [List critical issues]
- Recommended path: [Go/Modify Scope/No-Go]

## Assessment by Dimension

### Data Readiness: [Status]
[Findings, evidence, metrics]

### Technology Readiness: [Status]
[Findings, evidence, metrics]

### People & Process Readiness: [Status]
[Findings, evidence, metrics]

### Legal & Compliance Readiness: [Status]
[Findings, evidence, metrics]

## Mitigation Plans
- [Gap 1]: [Plan, owner, timeline]
- [Gap 2]: [Plan, owner, timeline]

## Gating Criteria
Proceed when:
- [ ] All red items mitigated to yellow/green
- [ ] Critical spikes completed successfully
- [ ] Sponsor approval obtained
```

2. Gating Criteria Document

Gate Template:

```markdown
## Phase Gate: Discovery → Build

### Must-Have (Red Gate)
- [ ] Data quality >85% in production sources
- [ ] Legal sign-off on data usage and privacy
- [ ] Key tech spike successful (latency/cost in budget)
- [ ] Core team hired (ML engineer, data scientist)

### Should-Have (Yellow Gate)
- [ ] MLOps platform deployed or approved
- [ ] Model governance process defined
- [ ] Integration architecture validated

### Decision
- **Green**: Proceed to build
- **Yellow**: Proceed with risk acknowledgment
- **Red**: Do not proceed until must-haves met

**Sponsor Sign-off**: _________________ Date: _______
```
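
The gate decision itself can be scripted so the Green/Yellow/Red call is mechanical rather than negotiated in the meeting. A minimal sketch, with the must-have and should-have items copied from the template and the pass/fail values purely illustrative:

```python
# Illustrative gate status; real gates would load these from the template.
must_have = {
    "Data quality >85% in production sources": True,
    "Legal sign-off on data usage and privacy": True,
    "Key tech spike successful (latency/cost in budget)": True,
    "Core team hired (ML engineer, data scientist)": False,
}
should_have = {
    "MLOps platform deployed or approved": True,
    "Model governance process defined": False,
    "Integration architecture validated": True,
}

def gate_decision(must: dict[str, bool], should: dict[str, bool]) -> str:
    if not all(must.values()):
        return "Red: do not proceed until must-haves are met"
    if all(should.values()):
        return "Green: proceed to build"
    return "Yellow: proceed with risk acknowledgment"

print(gate_decision(must_have, should_have))  # Red in this illustrative case
```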

Implementation Checklist

Phase 1: Planning (Week 1)

  • Identify assessment team (data, tech, process, legal leads)
  • Define scope: Which AI initiative to assess?
  • Gather baseline artifacts: architecture, data dictionaries, org charts
  • Schedule stakeholder interviews

Phase 2: Data Assessment (Weeks 2-3)

  • Complete data inventory across all sources
  • Run data quality analysis on samples (1K-10K records)
  • Test data joins and integrations
  • Conduct labeling feasibility test
  • Privacy audit: map PII, verify consent
  • Document findings in heat map

Phase 3: Technology Assessment (Weeks 2-4)

  • Catalog current infrastructure and platforms
  • Execute critical tech spikes (latency, cost, integration)
  • Evaluate vendor/platform options
  • Document findings and recommendations

Phase 4: People & Process (Weeks 3-4)

  • Skills inventory: current vs. required
  • Operating model maturity assessment
  • RACI definition and validation
  • Change management readiness
  • Document findings and mitigation plans
Phase 5: Legal & Compliance (Weeks 4-5)

  • Identify applicable regulations
  • Map data flows and risk levels
  • Conduct compliance gap analysis
  • Engage legal counsel for review
  • Develop remediation plan for gaps

Phase 6: Synthesis & Mitigation (Week 5)

  • Aggregate heat map across all dimensions
  • Prioritize gaps by severity and impact
  • Develop mitigation plans with owners
  • Assess overall readiness: Green/Yellow/Red
  • Create scenario analysis

Phase 7: Gating & Decision (Week 6)

  • Define phase gate criteria
  • Draft readiness assessment report
  • Present findings to sponsors
  • Obtain go/no-go decision
  • If go: Confirm mitigation commitments

Ongoing: Track & Update

  • Monitor mitigation progress weekly
  • Update heat map as gaps close
  • Escalate new risks immediately
  • Re-assess readiness at each phase gate
  • Incorporate learnings into next initiative

Key Takeaway: Readiness assessment is not a one-time gate but a continuous discipline. Honest evaluation early, backed by evidence from spikes and probes, prevents costly failures and builds organizational trust. The goal is not perfection but informed decision-making—knowing what's ready, what needs work, and what must be mitigated before proceeding.