Chapter 9 — Readiness & Feasibility Assessment
Overview
Assess data, technology, people/process, and regulatory readiness to reduce delivery risk and avoid costly dead-ends. This chapter provides frameworks for evaluating whether your organization is ready to execute on an AI opportunity.
Before committing significant resources to an AI initiative, organizations must honestly evaluate their readiness across multiple dimensions. A thorough feasibility assessment identifies gaps, constraints, and risks early—when they're cheapest to address.
The Four Dimensions of Readiness
A comprehensive readiness assessment examines four critical dimensions that must align for successful AI delivery:
```mermaid
graph TB
    A[Readiness Assessment] --> B[Data Readiness]
    A --> C[Technology Readiness]
    A --> D[People & Process]
    A --> E[Legal & Compliance]
    B --> B1[Quality: >80%]
    B --> B2[Volume: Sufficient]
    B --> B3[Access: Available]
    B --> B4[Privacy: Compliant]
    C --> C1[Infrastructure: Scalable]
    C --> C2[Platform: Deployed]
    C --> C3[Integration: Tested]
    C --> C4[Monitoring: Ready]
    D --> D1[Skills: Adequate]
    D --> D2[Process: Defined]
    D --> D3[Governance: Established]
    D --> D4[Change Mgmt: Planned]
    E --> E1[Data Rights: Secured]
    E --> E2[Regulations: Mapped]
    E --> E3[Audit Trail: Enabled]
    E --> E4[Risk: Assessed]
    style B fill:#e1f5ff
    style C fill:#fff3cd
    style D fill:#d4edda
    style E fill:#ffe1e1
```
1. Data Readiness
Assessment Framework:
| Aspect | Green (Ready) | Yellow (Needs Work) | Red (Blocker) |
|---|---|---|---|
| Availability | Data exists, accessible, documented | Scattered across systems | Missing or inaccessible |
| Quality | <5% errors, consistent schema | 5-20% issues, fixable | >20% errors, unreliable |
| Volume | 10K+ examples for supervised learning | 1K-10K, may need augmentation | <1K, requires synthetic data |
| Labeling | Pre-labeled or easy to label | Requires SME time | No labels, expensive |
| Privacy | Anonymized, consent obtained | Needs anonymization | PII exposure, legal blocks |
| Freshness | Real-time or near real-time | Batch updates, acceptable lag | Stale data, significant delays |
Data Assessment Process:
```mermaid
flowchart TD
    A[Start Data Assessment] --> B[Catalog Data Sources]
    B --> C[Sample Data Quality]
    C --> D{Quality >80%?}
    D -->|Yes| E[Check Volume]
    D -->|No| F[Estimate Cleaning Effort]
    E --> G{Sufficient Volume?}
    G -->|Yes| H[Verify Access Rights]
    G -->|No| I[Explore Augmentation]
    F --> J[Build Quality Plan]
    H --> K{Privacy Compliant?}
    K -->|Yes| L[Data Ready - Green]
    K -->|No| M[Address Privacy Issues]
    I --> N[Data Volume - Yellow]
    J --> O[Data Quality - Yellow/Red]
    M --> P[Privacy - Red]
    style L fill:#d4edda
    style N fill:#fff3cd
    style O fill:#ffe1e1
    style P fill:#ffe1e1
```
Practical Steps:
- Data Inventory: List all potential sources (internal databases, external APIs, unstructured documents)
- Sample Analysis: Pull 1,000-10,000 records and check completeness, consistency, and accuracy (a minimal sketch of this and the labeling test follows this list)
- Labeling Test: Have SMEs label 100 examples, measure inter-rater agreement (kappa >0.7)
- Privacy Audit: Map PII fields, verify consent, identify anonymization requirements
- Integration Test: Attempt sample joins, measure latency, test pipeline reliability
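The sample-analysis and labeling-test steps lend themselves to a short script. A minimal sketch, assuming a pandas sample exported from one candidate source and a file of ~100 examples labeled by two SMEs; the file names, column names, and thresholds are illustrative, not part of the framework:

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Sample Analysis: profile a 1K-10K record sample for completeness and duplicates.
sample = pd.read_csv("customer_sample.csv")        # illustrative export from one source
completeness = 1 - sample.isna().mean()            # non-null share per column
duplicate_rate = sample.duplicated().mean()        # exact-duplicate rows
print("Completeness by column:\n", completeness.round(3))
print(f"Duplicate rate: {duplicate_rate:.1%}")

# Flag columns likely to drag overall quality below the ~80% bar.
weak_columns = completeness[completeness < 0.80]
print("Columns needing cleanup:", list(weak_columns.index))

# Labeling Test: inter-rater agreement on ~100 SME-labeled examples.
labels = pd.read_csv("sme_labels.csv")             # illustrative columns: rater_a, rater_b
kappa = cohen_kappa_score(labels["rater_a"], labels["rater_b"])
print(f"Cohen's kappa: {kappa:.2f} ->",
      "Green" if kappa > 0.7 else "Yellow/Red: refine the labeling guide")
```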
2. Technology Readiness
Infrastructure Assessment:
| Component | Ready State | Gap Analysis | Action Required |
|---|---|---|---|
| Compute | GPU/TPU available, auto-scaling | On-demand only | Procure dedicated resources |
| Storage | Scalable data lake, <100ms queries | Siloed databases | Implement unified storage |
| ML Platform | MLOps deployed (MLflow, SageMaker) | Basic tools, manual workflows | Deploy ML platform |
| APIs | REST/GraphQL available, documented | Legacy systems | Build API layer |
| Monitoring | Observability stack running | Basic logging only | Deploy APM tools |
| Security | Zero-trust, secrets management | Perimeter security | Implement RBAC, vault |
Technology Spike Protocol:
```mermaid
sequenceDiagram
    participant PM as Product Manager
    participant Eng as Engineer
    participant Infra as Infrastructure
    participant Vendor as Vendor/Platform
    PM->>Eng: Define spike objectives
    Eng->>Eng: Build minimal POC
    Eng->>Infra: Request test environment
    Infra->>Eng: Provision resources
    Eng->>Vendor: Test API integration
    Vendor-->>Eng: Return performance data
    Eng->>Eng: Measure latency/cost
    Eng->>PM: Report findings + recommendation
    PM->>PM: Update feasibility assessment
```
Critical Tech Spikes:
| Spike Type | Objective | Success Criteria | Timeline | Go/No-Go Threshold |
|---|---|---|---|---|
| Latency Test | Verify response time meets UX requirements | p95 latency <500ms (user-facing) | 3 days | If >2s, redesign needed |
| Cost Probe | Validate unit economics at scale | Cost per transaction <target margin | 3 days | If >2x budget, rethink approach |
| Integration POC | Confirm systems can communicate | End-to-end <5s, 99.9% reliable | 2 weeks | If >10s latency, go async |
| Platform Limits | Identify technical constraints | Document actual vs. claimed capabilities | 1 week | If <50% claimed, change vendor |
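Spikes like the latency test and cost probe are usually a few dozen lines of harness code. A minimal sketch, assuming a stubbed call_model() that you would replace with the real vendor or platform call; the prices and thresholds are placeholders for the ones in your own spike plan:

```python
import random
import statistics
import time

# Stub for the API/platform under test; replace with the real vendor SDK call.
def call_model(prompt: str) -> dict:
    time.sleep(random.uniform(0.2, 0.6))                 # simulated inference time
    return {"input_tokens": 1200, "output_tokens": 400}  # simulated token usage

PRICE_IN, PRICE_OUT = 0.002, 0.006   # illustrative $ per 1K tokens; substitute real pricing

latencies, costs = [], []
for prompt in ["representative request"] * 50:            # sample of realistic traffic
    start = time.perf_counter()
    usage = call_model(prompt)
    latencies.append(time.perf_counter() - start)
    costs.append(usage["input_tokens"] / 1000 * PRICE_IN
                 + usage["output_tokens"] / 1000 * PRICE_OUT)

p95 = statistics.quantiles(latencies, n=20)[18]           # ~95th percentile
print(f"p95 latency: {p95:.2f}s (target <0.5s user-facing; >2s means redesign)")
print(f"avg cost per request: ${statistics.mean(costs):.4f}")
```

Run it against realistic prompts, then carry the measured numbers straight into the feasibility assessment, as in the spike protocol above.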
3. People & Process Readiness
Operating Model Maturity:
| Capability | Level 1 (Ad Hoc) | Level 2 (Repeatable) | Level 3 (Defined) | Level 4 (Managed) | Level 5 (Optimizing) |
|---|---|---|---|---|---|
| Data Science | No DS team | 1-2 data scientists | DS team established | Cross-functional squads | Centers of excellence |
| ML Engineering | Notebooks only | Basic deployment scripts | CI/CD for models | Automated MLOps | Self-service platforms |
| Governance | No process | Project-specific | Standard review | Automated checks | Continuous monitoring |
| Change Mgmt | No training | Ad hoc sessions | Formal program | Embedded support | Community of practice |
| Incident Response | Firefighting | On-call rotation | Defined runbooks | Automated remediation | Predictive prevention |
Process Readiness Checklist:
- Decision Rights: Clear RACI for model approval, data access, deployment
- SLAs Defined: Response time, uptime, accuracy thresholds documented
- Change Control: Process for model updates, rollback procedures in place
- Vendor Management: Contract terms, SLAs, exit clauses reviewed
- Knowledge Transfer: Documentation standards, onboarding process established
- Stakeholder Engagement: Communication plan, feedback loops defined
4. Legal & Compliance Readiness
Regulatory Landscape:
| Industry | Key Regulations | AI-Specific Requirements | Readiness Actions |
|---|---|---|---|
| Financial Services | GDPR, SOX, FCRA | Model explainability, bias testing, audit trails | Implement model cards, bias dashboards |
| Healthcare | HIPAA, FDA | PHI protection, clinical validation, safety | De-identify data, pursue FDA approval |
| Retail/E-commerce | GDPR, CCPA, PCI-DSS | Consent management, right to deletion | Consent platform, deletion workflows |
| Public Sector | EU AI Act, OMB guidance | High-risk system requirements, transparency | Impact assessments, explainability tools |
Compliance Assessment Flow:
```mermaid
graph TD
    A[Identify Regulations] --> B[Map Data Flows]
    B --> C[Classify Risk Level]
    C --> D{High Risk?}
    D -->|Yes| E[Enhanced Requirements]
    D -->|No| F[Standard Requirements]
    E --> G[Impact Assessment]
    G --> H[External Audit]
    F --> I[Internal Review]
    H --> J[Implement Controls]
    I --> J
    J --> K[Documentation Package]
    K --> L[Legal Sign-off]
    L --> M{Approved?}
    M -->|Yes| N[Compliant - Green]
    M -->|No| O[Remediation - Red]
    style N fill:#d4edda
    style O fill:#ffe1e1
```
Critical Compliance Questions:
- Data Rights: Do we have legal basis to use this data for AI? (Consent, contract, legitimate interest)
- Model Governance: What level of explainability is required? Are there bias/fairness requirements?
- Auditability: Can we reconstruct model decisions for regulatory review? What are the retention requirements? (See the logging sketch after this list.)
- Safety & Security: What are acceptable failure modes? How do we handle adversarial inputs?
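On the auditability question, the simplest enabler is to log every model decision with enough context to reconstruct it later. A minimal sketch, assuming an append-only JSON-lines log; the field names, hashing choice, and log_decision() helper are illustrative, not a prescribed schema:

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "model_decisions.jsonl"   # illustrative path; use append-only storage in practice

def log_decision(model_version: str, features: dict, prediction: str, explanation: dict) -> None:
    """Append one reconstructable record per inference."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,      # ties the decision to a specific model artifact
        "input_hash": hashlib.sha256(        # reference the inputs without storing raw PII
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "prediction": prediction,
        "explanation": explanation,          # e.g., top feature attributions or rationale
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a decision so it can be replayed during a regulatory review.
log_decision("risk-model-1.4.2",
             {"income": 52000, "tenure_months": 18},
             "approve",
             {"top_factors": ["income", "tenure_months"]})
```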
Readiness Heat Map
Create a visual dashboard showing readiness across all dimensions:
| Dimension | Sub-Component | Status | Confidence | Owner | Mitigation Plan | Target Date |
|---|---|---|---|---|---|---|
| Data | Availability | 🟢 Green | High | Data Eng | N/A | - |
| Data | Quality | 🟡 Yellow | Medium | Data Eng | Cleaning pipeline Q2 | Jun 30 |
| Data | Volume | 🟢 Green | High | Data Eng | N/A | - |
| Data | Labeling | 🟡 Yellow | Medium | Product | SME labeling sprint | May 15 |
| Data | Privacy | 🟢 Green | High | Legal | N/A | - |
| Technology | Compute | 🟢 Green | High | Infra | N/A | - |
| Technology | ML Platform | 🟡 Yellow | Medium | ML Eng | Deploy MLflow | Apr 30 |
| Technology | Integration | 🔴 Red | Low | Eng Lead | API development | Jul 31 |
| Technology | Monitoring | 🟡 Yellow | Medium | SRE | Observability stack | May 31 |
| People | Skills | 🟡 Yellow | Medium | Eng Mgr | Hire 2 ML engineers | Jun 30 |
| People | Process | 🟡 Yellow | Medium | PM | Define governance | May 15 |
| People | Change Mgmt | 🔴 Red | Low | Org Dev | Training program | Aug 31 |
| Compliance | Data Rights | 🟢 Green | High | Legal | N/A | - |
| Compliance | Model Gov | 🟡 Yellow | Medium | Compliance | Bias testing protocol | Jun 15 |
| Compliance | Audit Trail | 🟡 Yellow | Medium | Security | Logging enhancement | May 31 |
Confidence Scoring:
- High: Evidence-based assessment, verified through testing
- Medium: Reasoned estimate, some validation performed
- Low: Assumption-based, requires further investigation
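The heat map stays current far more easily as structured data than as a slide. A minimal sketch, assuming a small Python record per sub-component using the statuses and confidence levels defined above; the ReadinessItem fields and roll-up rule are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ReadinessItem:
    dimension: str
    component: str
    status: str        # "Green", "Yellow", or "Red"
    confidence: str    # "High", "Medium", or "Low" per the scoring above
    owner: str
    mitigation: str = "N/A"
    target_date: str = "-"

heat_map = [
    ReadinessItem("Data", "Quality", "Yellow", "Medium", "Data Eng",
                  "Cleaning pipeline Q2", "Jun 30"),
    ReadinessItem("Technology", "Integration", "Red", "Low", "Eng Lead",
                  "API development", "Jul 31"),
    ReadinessItem("Compliance", "Data Rights", "Green", "High", "Legal"),
]

# Simple roll-up: the weakest sub-component sets the overall readiness call.
severity = {"Green": 0, "Yellow": 1, "Red": 2}
worst = max(heat_map, key=lambda item: severity[item.status])
print(f"Overall readiness gated by {worst.dimension}/{worst.component}: {worst.status}")
```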
Mitigation Planning
For each identified gap, develop a specific mitigation plan:
```mermaid
flowchart LR
    A[Identify Gap] --> B{Severity?}
    B -->|Critical| C[Block Project]
    B -->|High| D[Mitigation Required]
    B -->|Medium| E[Monitor & Plan]
    B -->|Low| F[Accept Risk]
    C --> G[Escalate to Sponsors]
    D --> H[Develop Mitigation]
    H --> I{Can Close Gap?}
    I -->|Yes| J[Execute Plan]
    I -->|No| K[Adjust Scope]
    E --> L[Set Review Milestone]
    F --> M[Document in Risk Register]
    J --> N[Track Progress]
    K --> O[Redefine Success]
    L --> N
    M --> N
```
Mitigation Strategies by Gap Type:
| Gap Type | Mitigation Options | Example | Timeline | Cost |
|---|---|---|---|---|
| Data Quality | Cleaning pipeline, source fixes, augmentation | Poor addresses → validation service | 8 weeks | $80K |
| Data Volume | Transfer learning, augmentation, active learning | Only 500 images → pre-trained model | 6 weeks | $40K |
| Skills Gap | Hiring, training, consulting, vendor solutions | No LLM expertise → vendor partnership | 12 weeks | $150K |
| Tech Constraints | Platform upgrade, redesign, vendor alternatives | Latency too high → batch processing | 10 weeks | $100K |
| Compliance | Legal remediation, privacy tech, data minimization | GDPR consent → consent platform | 16 weeks | $120K |
Case Study: Real-Time Personalization Assessment
Context: E-commerce company wants real-time product recommendations using LLMs to personalize descriptions based on user behavior.
Initial Vision:
- User browses → System generates personalized descriptions in real-time
- Expected: <1s page load; LLM serving cost within the $15K/month budget (≈$0.0005 per page view at full scale)
- Scale: 100K daily users × 10 page views each = 1M requests/day
Readiness Assessment:
Data Readiness: 🟡 Yellow
- ✅ Availability: User behavior data in clickstream (Green)
- ⚠️ Quality: Product catalog 15% missing attributes (Yellow)
- ✅ Volume: 50K products, 1M users, sufficient (Green)
- 🔴 Real-time: 2-second lag in event stream (Red for <1s target)
Technology Readiness: 🔴 Red
Latency Spike Results:
- LLM inference (1K token context): 3.2s (p95)
- Product catalog lookup: 0.15s (p95)
- Total end-to-end: 3.5s vs. 1s target ❌

Cost Spike Results:
- Average tokens: 1,200 input + 400 output
- Cost: $0.002/1K input, $0.006/1K output = $0.0048/request
- At 1M requests/day: $4,800/day = $144K/month ❌ (Budget: $15K/month)
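The cost spike's arithmetic, reproduced as a quick check using the figures above:

```python
# Unit-economics check for the real-time approach, using the spike's figures.
input_tokens, output_tokens = 1200, 400
price_in, price_out = 0.002, 0.006        # $ per 1K tokens
requests_per_day = 100_000 * 10           # 100K daily users x 10 page views each

cost_per_request = (input_tokens / 1000) * price_in + (output_tokens / 1000) * price_out
daily_cost = cost_per_request * requests_per_day
monthly_cost = daily_cost * 30

print(f"${cost_per_request:.4f}/request -> ${daily_cost:,.0f}/day -> ${monthly_cost:,.0f}/month")
# Prints $0.0048/request -> $4,800/day -> $144,000/month, far above the $15K/month budget.
```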
People Readiness: 🟡 Yellow
- Team has 1 ML engineer, needs 2 more for real-time systems
- No experience with real-time LLM serving
- Product team trained on AI concepts (Green)
Compliance Readiness: 🟢 Green
- Personalization consent already obtained
- No new PII collected
- Existing privacy controls sufficient
Mitigation Analysis:
```mermaid
graph TD
    A[Real-time Latency Issue] --> B{Solutions?}
    B --> C[Faster Model]
    B --> D[Edge Deployment]
    B --> E[Architecture Change]
    C --> F[Test GPT-3.5 vs GPT-4]
    F --> G[Still 2.1s - Not Enough]
    D --> H[Edge Inference POC]
    H --> I[Complexity Too High]
    E --> J[Shift to Batch Processing]
    J --> K{Acceptable?}
    K -->|Yes| L[Pre-generate for Top 1000]
    A2[Cost Issue] --> B2{Solutions?}
    B2 --> C2[Reduce Tokens]
    B2 --> D2[Cheaper Model]
    B2 --> E2[Batch Processing]
    C2 --> F2[Optimize Prompts]
    F2 --> G2[Still $90K/month]
    E2 --> J
    J --> M[Daily Batch: $3K/month ✅]
    style M fill:#d4edda
    style G fill:#ffe1e1
    style I fill:#ffe1e1
    style G2 fill:#fff3cd
```
Revised Approach (Feasible):
- Batch Recommendations: Pre-generate personalized descriptions daily for the top 1,000 products per user segment (sketched below)
  - Latency: Instantaneous (pre-computed)
  - Cost: ~$3K/month (48x cheaper)
  - Coverage: 80% of traffic (top products)
- Phase 2 (Future): Add real-time generation for the remaining 20% once faster models are available
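A minimal sketch of the daily batch job behind the revised approach, assuming hypothetical top_products() and generate_description() helpers standing in for the real catalog query, segmentation logic, and LLM call:

```python
import json

# Hypothetical helpers; replace with the real catalog query, segmentation, and LLM call.
def top_products(segment: str, n: int = 1000) -> list[dict]:
    return [{"id": i, "name": f"product-{i}"} for i in range(n)]     # stub catalog rows

def generate_description(product: dict, segment: str) -> str:
    return f"{product['name']} description tailored for {segment}"   # stub for the LLM call

SEGMENTS = ["bargain_hunters", "brand_loyal", "new_visitors"]         # illustrative segments

def run_daily_batch() -> None:
    """Pre-generate personalized descriptions for the top products in each segment."""
    cache = {}
    for segment in SEGMENTS:
        for product in top_products(segment):
            cache[f"{segment}:{product['id']}"] = generate_description(product, segment)
    # Serving reads from this store, so page loads use pre-computed text (no inference latency).
    with open("personalized_descriptions.json", "w") as f:
        json.dump(cache, f)

if __name__ == "__main__":
    run_daily_batch()
```

Serving then becomes a cache lookup, which is why latency drops to effectively zero and cost scales with the catalog and segment count rather than with traffic.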
Key Decisions:
- Scope Adjusted: Real-time → Batch processing
- Timeline: Launch in 6 weeks vs. 12+ weeks for real-time
- Risk Reduced: Avoid expensive failure, prove value first
- Sponsor Approval: Gained with honest assessment
Outcome:
- ✅ Batch system launched successfully
- ✅ 18% increase in conversion for personalized products
- ✅ Cost stayed within budget
- ✅ Built credibility for Phase 2 expansion
Lessons Learned:
- Early spikes prevented mistakes: Latency/cost issues found in week 2, not month 6
- Scope flexibility is power: Adjusting ambition to capability delivered faster value
- Evidence builds trust: Showing spike data convinced sponsors
- Mitigation isn't failure: Finding viable alternative is success
Deliverables
1. Readiness Assessment Report
Structure:
```markdown
# AI Readiness Assessment: [Project Name]

## Executive Summary
- Overall readiness: [Red/Yellow/Green]
- Key blockers: [List critical issues]
- Recommended path: [Go/Modify Scope/No-Go]

## Assessment by Dimension

### Data Readiness: [Status]
[Findings, evidence, metrics]

### Technology Readiness: [Status]
[Findings, evidence, metrics]

### People & Process Readiness: [Status]
[Findings, evidence, metrics]

### Legal & Compliance Readiness: [Status]
[Findings, evidence, metrics]

## Mitigation Plans
- [Gap 1]: [Plan, owner, timeline]
- [Gap 2]: [Plan, owner, timeline]

## Gating Criteria
Proceed when:
- [ ] All red items mitigated to yellow/green
- [ ] Critical spikes completed successfully
- [ ] Sponsor approval obtained
```
2. Gating Criteria Document
Gate Template:
```markdown
## Phase Gate: Discovery → Build

### Must-Have (Red Gate)
- [ ] Data quality >85% in production sources
- [ ] Legal sign-off on data usage and privacy
- [ ] Key tech spike successful (latency/cost in budget)
- [ ] Core team hired (ML engineer, data scientist)

### Should-Have (Yellow Gate)
- [ ] MLOps platform deployed or approved
- [ ] Model governance process defined
- [ ] Integration architecture validated

### Decision
- **Green**: Proceed to build
- **Yellow**: Proceed with risk acknowledgment
- **Red**: Do not proceed until must-haves met

**Sponsor Sign-off**: _________________ Date: _______
```
Implementation Checklist
Phase 1: Planning (Week 1)
- Identify assessment team (data, tech, process, legal leads)
- Define scope: Which AI initiative to assess?
- Gather baseline artifacts: architecture, data dictionaries, org charts
- Schedule stakeholder interviews
Phase 2: Data Assessment (Weeks 2-3)
- Complete data inventory across all sources
- Run data quality analysis on samples (1K-10K records)
- Test data joins and integrations
- Conduct labeling feasibility test
- Privacy audit: map PII, verify consent
- Document findings in heat map
Phase 3: Technology Assessment (Weeks 2-4)
- Catalog current infrastructure and platforms
- Execute critical tech spikes (latency, cost, integration)
- Evaluate vendor/platform options
- Document findings and recommendations
Phase 4: People & Process (Weeks 3-4)
- Skills inventory: current vs. required
- Operating model maturity assessment
- RACI definition and validation
- Change management readiness
- Document findings and mitigation plans
Phase 5: Legal & Compliance (Weeks 3-4)
- Identify applicable regulations
- Map data flows and risk levels
- Conduct compliance gap analysis
- Engage legal counsel for review
- Develop remediation plan for gaps
Phase 6: Synthesis & Mitigation (Week 5)
- Aggregate heat map across all dimensions
- Prioritize gaps by severity and impact
- Develop mitigation plans with owners
- Assess overall readiness: Green/Yellow/Red
- Create scenario analysis
Phase 7: Gating & Decision (Week 6)
- Define phase gate criteria
- Draft readiness assessment report
- Present findings to sponsors
- Obtain go/no-go decision
- If go: Confirm mitigation commitments
Ongoing: Track & Update
- Monitor mitigation progress weekly
- Update heat map as gaps close
- Escalate new risks immediately
- Re-assess readiness at each phase gate
- Incorporate learnings into next initiative
Key Takeaway: Readiness assessment is not a one-time gate but a continuous discipline. Honest evaluation early, backed by evidence from spikes and probes, prevents costly failures and builds organizational trust. The goal is not perfection but informed decision-making—knowing what's ready, what needs work, and what must be mitigated before proceeding.