Part 6: Solution Patterns (Classical & Applied AI)
Chapter 36 — Optimization & Decisioning
Overview
Operationalize optimization and simulation to drive business decisions through intelligent automation and decision-support UIs. This chapter covers operations research (OR) techniques, heuristic approaches, and simulation methods that transform predictions into actionable decisions while maintaining human oversight and trust.
Optimization Architecture Patterns
End-to-End Optimization Pipeline
graph TB
    A[Business Problem] --> B[Problem Formulation]
    B --> C[Decision Variables]
    B --> D[Constraints]
    B --> E[Objective Function]
    C --> F{Problem Size & Type}
    D --> F
    E --> F
    F -->|Small Linear| G[LP/MILP Solver<br/>Exact Solution]
    F -->|Large/Non-linear| H[Heuristic/Metaheuristic<br/>Approximate]
    F -->|Uncertain| I[Simulation + Optimization<br/>Robust]
    G --> J[Solution Validation]
    H --> J
    I --> J
    J --> K{Feasible & Optimal?}
    K -->|No| L[Relax Constraints<br/>or Adjust Objective]
    K -->|Yes| M[Decision Support UI]
    L --> F
    M --> N[Human Review]
    N --> O{Approve?}
    O -->|No| P[Manual Override]
    O -->|Yes| Q[Execute Decision]
    P --> F
    Q --> R[Monitor Outcomes]
    R -.Feedback Loop.-> B
    style G fill:#c8e6c9
    style H fill:#fff3e0
    style I fill:#e1f5fe
    style M fill:#f3e5f5
Method Selection Framework
graph TD
    A[Optimization Problem] --> B{Problem Size}
    B -->|Small <1000 vars| C{Linear Constraints?}
    B -->|Large >1M vars| D[Heuristics/Metaheuristics]
    C -->|Yes| E{Integer Variables?}
    C -->|No| F[Nonlinear Programming]
    E -->|No| G[Linear Programming - LP]
    E -->|Yes| H{Mostly Continuous?}
    H -->|Yes| I[Mixed Integer LP - MILP]
    H -->|No| J[Integer Programming - IP]
    G --> K[Exact Optimal Solution<br/>Seconds]
    I --> K
    J --> L[Exact Optimal<br/>Minutes-Hours]
    F --> M[Approximate<br/>Minutes]
    D --> N[Near-Optimal<br/>Configurable]
    style K fill:#c8e6c9
    style L fill:#fff3e0
    style M fill:#ffccbc
    style N fill:#e1f5fe
Optimization Method Comparison
| Method | Problem Size | Variables | Solution Quality | Speed | Best For |
|---|---|---|---|---|---|
| Linear Programming (LP) | <100K | Continuous | Optimal | Seconds | Resource allocation, blending |
| Mixed Integer LP (MILP) | <10K | Mixed | Optimal | Minutes | Scheduling, routing |
| Constraint Programming | <1K | Discrete | Optimal | Hours | Complex constraints, timetabling |
| Genetic Algorithms | Any | Any | Near-optimal (95-99%) | Minutes | Non-convex, multi-objective |
| Simulated Annealing | Any | Any | Near-optimal (90-98%) | Minutes | Combinatorial, TSP |
| Greedy Heuristics | Any | Any | Approximate (70-90%) | Milliseconds | Real-time, simple rules |
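The comparison above can be captured as a small triage helper for a first-pass method choice. The sketch below is illustrative only: the ProblemSpec fields and the thresholds are assumptions drawn loosely from the table, not hard rules:
import numpy as np  # not required here, but matches the rest of the chapter
from dataclasses import dataclass

@dataclass
class ProblemSpec:
    n_vars: int          # number of decision variables
    linear: bool         # objective and constraints are linear
    has_integers: bool   # any integer/binary decision variables
    realtime: bool       # must answer in milliseconds

def suggest_method(spec: ProblemSpec) -> str:
    """Map a problem description to a solver family (illustrative thresholds)."""
    if spec.realtime:
        return "greedy heuristic"
    if not spec.linear:
        return "nonlinear programming or metaheuristic"
    if spec.n_vars > 100_000:
        return "metaheuristic (GA / simulated annealing)"
    return "MILP" if spec.has_integers else "LP"

# Example: a mid-size scheduling problem with binary assignments
print(suggest_method(ProblemSpec(n_vars=5_000, linear=True, has_integers=True, realtime=False)))
# -> "MILP"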
Linear & Mixed-Integer Programming
LP/MILP Implementation Patterns
Production Optimization (PuLP):
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value, LpStatus
# Define problem
prob = LpProblem("Production", LpMaximize)
# Decision variables
products = ['A', 'B', 'C']
x = LpVariable.dicts("produce", products, lowBound=0)
# Objective: maximize profit
profits = {'A': 40, 'B': 30, 'C': 50}
prob += lpSum([x[p] * profits[p] for p in products])
# Constraints: resource limits
prob += 10*x['A'] + 8*x['B'] + 12*x['C'] <= 1000 # labor hours
prob += 5*x['A'] + 6*x['B'] + 8*x['C'] <= 800 # materials
prob += 8*x['A'] + 4*x['B'] + 10*x['C'] <= 600 # machine hours
# Solve
prob.solve()
# Results
print(f"Status: {prob.status}")
print(f"Profit: ${value(prob.objective):.2f}")
for p in products:
print(f"{p}: {x[p].varValue:.2f} units")
Workforce Scheduling (MILP):
# Assumes prob, employees, shifts, preferences, min_staff, max_shifts are defined
# Binary decision variables: employee e works shift s
x = LpVariable.dicts("assign",
                     [(e, s) for e in employees for s in shifts],
                     cat='Binary')
# Objective: maximize satisfaction
prob += lpSum([x[e, s] * preferences[e][s]
               for e in employees for s in shifts])
# Constraints: minimum staffing per shift, maximum shifts per employee
for shift in shifts:
    prob += lpSum([x[e, shift] for e in employees]) >= min_staff[shift]
for employee in employees:
    prob += lpSum([x[employee, s] for s in shifts]) <= max_shifts
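The snippet above assumes the model and data objects already exist. A minimal, hypothetical setup (defined before the snippet; all names and values are illustrative) makes it runnable end to end:
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

employees = ['Ana', 'Ben', 'Cho']             # illustrative roster
shifts = ['Mon_AM', 'Mon_PM', 'Tue_AM']       # illustrative shift codes
preferences = {e: {s: 1 for s in shifts} for e in employees}  # 1 = indifferent
min_staff = {s: 1 for s in shifts}            # at least one employee per shift
max_shifts = 2                                # cap per employee

prob = LpProblem("Scheduling", LpMaximize)
With these in place, the variable definitions and constraints above can be added unchanged and solved with prob.solve(), exactly as in the production example.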
Heuristics & Metaheuristics
Algorithm Comparison
| Algorithm | Convergence | Hyperparameters | Solution Quality | Use Case |
|---|---|---|---|---|
| Genetic Algorithm | Slow (100+ gens) | Pop size, mutation, crossover | 95-99% optimal | Multi-objective, complex |
| Simulated Annealing | Medium (50-100 iters) | Temp schedule, cooling rate | 90-98% optimal | Combinatorial (TSP, scheduling) |
| Particle Swarm | Fast (20-50 iters) | Swarm size, inertia, cognitive | 92-97% optimal | Continuous optimization |
| Tabu Search | Medium (50-100 iters) | Tabu tenure, aspiration | 93-98% optimal | Neighborhood search |
| Greedy/Local Search | Very fast (<10 iters) | None | 70-85% optimal | Real-time, simple problems |
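Several of these map to compact implementations. The genetic algorithm gets a minimal version below; simulated annealing follows a similar shape, so here is a comparable sketch for a generic minimization problem (the neighbor function, temperature, and cooling schedule are tuning assumptions, not recommended defaults):
import math
import random

def simulated_annealing(cost_func, neighbor_func, initial, temp=1.0, cooling=0.995, iters=10_000):
    """Generic simulated annealing for minimization (illustrative defaults)."""
    current, current_cost = initial, cost_func(initial)
    best, best_cost = current, current_cost
    for _ in range(iters):
        candidate = neighbor_func(current)
        candidate_cost = cost_func(candidate)
        delta = candidate_cost - current_cost
        # Accept improvements always; accept worse moves with probability exp(-delta/temp)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        temp *= cooling  # geometric cooling
    return best, best_cost

# Example neighbor for TSP-style problems: swap two cities in the tour
def swap_neighbor(tour):
    i, j = random.sample(range(len(tour)), 2)
    new = list(tour)
    new[i], new[j] = new[j], new[i]
    return new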
Genetic Algorithm (Minimal):
import numpy as np

def genetic_algorithm(fitness_func, n_vars, generations=100):
    population = np.random.rand(100, n_vars)  # 100 individuals
    for gen in range(generations):
        # Evaluate
        fitness = np.array([fitness_func(ind) for ind in population])
        # Select top 20% (elitism)
        elite_idx = np.argsort(fitness)[-20:]
        elite = population[elite_idx]
        # Create next generation
        next_pop = elite.copy()
        while len(next_pop) < 100:
            # Crossover: average two distinct elite parents
            p1, p2 = elite[np.random.choice(20, 2, replace=False)]
            child = (p1 + p2) / 2
            # Mutate
            if np.random.rand() < 0.1:
                child += np.random.randn(n_vars) * 0.1
            next_pop = np.vstack([next_pop, child])
        population = next_pop[:100]
    best_idx = np.argmax([fitness_func(ind) for ind in population])
    return population[best_idx]
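Usage on a toy objective, reusing the function and numpy import above. The GA maximizes, so we negate a squared-distance function whose known optimum is 0.5 in every dimension; purely illustrative:
# Toy fitness: optimum at 0.5 in every dimension, within the initial [0, 1] population range
def fitness(ind):
    return -np.sum((ind - 0.5) ** 2)

best = genetic_algorithm(fitness, n_vars=5, generations=50)
print("best solution:", np.round(best, 3))
print("best fitness:", fitness(best))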
Simulation & Policy Evaluation
Monte Carlo Simulation (Minimal):
import numpy as np

def monte_carlo_simulation(policy_func, scenario_gen, eval_func, n=10000):
    results = []
    for _ in range(n):
        scenario = scenario_gen()           # sample one uncertain scenario
        decision = policy_func(scenario)    # apply the policy being evaluated
        outcome = eval_func(scenario, decision)
        results.append(outcome)
    return {
        'mean': np.mean(results),
        'std': np.std(results),
        'p10': np.percentile(results, 10),
        'p50': np.percentile(results, 50),
        'p90': np.percentile(results, 90)
    }
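As a usage sketch, here is the same function evaluating a fixed-order-quantity inventory policy under demand uncertainty. All numbers (Poisson demand, margin, holding cost) are hypothetical placeholders:
rng = np.random.default_rng(42)

def scenario_gen():
    return {"demand": rng.poisson(100)}        # uncertain daily demand

def policy_func(scenario):
    return {"order_qty": 110}                   # fixed-quantity policy under evaluation

def eval_func(scenario, decision):
    sold = min(scenario["demand"], decision["order_qty"])
    revenue = 5.0 * sold                        # $5 margin per unit sold
    holding = 0.5 * max(decision["order_qty"] - scenario["demand"], 0)  # $0.50 per leftover unit
    return revenue - holding

print(monte_carlo_simulation(policy_func, scenario_gen, eval_func, n=10_000))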
Decision Support UI
API Integration Pattern:
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class OptimizationRequest(BaseModel):
    products: list
    resources: list
    constraints: dict

@app.post("/optimize")
async def optimize(request: OptimizationRequest):
    # Run optimization
    result = solve_optimization(
        request.products,
        request.resources,
        request.constraints
    )
    return {
        "status": "optimal",
        "objective_value": result.value,
        "solution": result.variables,
        "sensitivity": result.shadow_prices
    }
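A hypothetical client call against the endpoint above, using the common `requests` library; the field values are placeholders matching the earlier production example, and `solve_optimization` remains whatever model the service wraps:
import requests

payload = {
    "products": ["A", "B", "C"],
    "resources": ["labor", "materials", "machine"],
    "constraints": {"labor": 1000, "materials": 800, "machine": 600},
}

response = requests.post("http://localhost:8000/optimize", json=payload, timeout=30)
response.raise_for_status()
result = response.json()
print(result["status"], result["objective_value"])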
Case Study: Last-Mile Delivery Route Optimization
Business Context
- Industry: Last-mile delivery logistics
- Scale: 200 vehicles, 5,000 deliveries/day, 50 distribution centers
- Problem: $8.50/delivery cost, 87% on-time rate, 4-hour manual planning
- Goal: Reduce costs, improve on-time delivery, automate routing
- Constraints: Vehicle capacity, driver hours, customer time windows, traffic
Solution Architecture (VRP with Hybrid Optimization)
graph TB
    A[Order Intake<br/>5000/day] --> B[Order Clustering<br/>by Zone]
    B --> C{Problem Size}
    C -->|Small <100| D[Exact MILP<br/>Google OR-Tools]
    C -->|Large >100| E[Heuristic<br/>Genetic Algorithm]
    D --> F[Route Candidates]
    E --> F
    F --> G[Monte Carlo Simulation<br/>Traffic + Demand Variance]
    G --> H[Cost & Risk Assessment]
    H --> I[Decision Support Dashboard]
    I --> J[Planner Review<br/>& Override]
    J --> K{Approve?}
    K -->|Yes| L[Dispatch to Drivers]
    K -->|No| M[Manual Adjustments]
    M --> F
    L --> N[Real-time Tracking<br/>GPS + Updates]
    N --> O[Post-Delivery Analytics]
    O -.Daily Retrain.-> E
    style D fill:#c8e6c9
    style E fill:#fff3e0
    style G fill:#e1f5fe
    style I fill:#f3e5f5
Implementation & Results
Technical Stack:
optimization:
  exact_solver: Google OR-Tools MILP
  heuristic: Genetic Algorithm (pop=200, gens=100)
  clustering: K-means by delivery density
  constraints: capacity, hours, time_windows, traffic
simulation:
  method: Monte Carlo (10K scenarios)
  variables: demand_variance, traffic_patterns
  metrics: cost, time, utilization
infrastructure:
  api: FastAPI
  optimization_engine: Python + OR-Tools
  ui: React dashboard
  monitoring: Real-time GPS tracking
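To ground the routing layer named in the stack, here is a minimal capacitated-VRP sketch using OR-Tools' routing solver. The distance matrix, demands, and vehicle capacities are toy placeholders, and this is an illustration of the library's entry point rather than the production model described above:
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

# Toy data: node 0 is the depot
distance = [[0, 9, 7, 8], [9, 0, 5, 6], [7, 5, 0, 4], [8, 6, 4, 0]]
demands = [0, 3, 4, 2]
capacities = [6, 6]  # two vehicles

manager = pywrapcp.RoutingIndexManager(len(distance), len(capacities), 0)
routing = pywrapcp.RoutingModel(manager)

def distance_cb(from_index, to_index):
    return distance[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

transit = routing.RegisterTransitCallback(distance_cb)
routing.SetArcCostEvaluatorOfAllVehicles(transit)

def demand_cb(from_index):
    return demands[manager.IndexToNode(from_index)]

demand_idx = routing.RegisterUnaryTransitCallback(demand_cb)
routing.AddDimensionWithVehicleCapacity(demand_idx, 0, capacities, True, "Capacity")

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC

solution = routing.SolveWithParameters(params)
if solution:
    for v in range(len(capacities)):
        index, route = routing.Start(v), []
        while not routing.IsEnd(index):
            route.append(manager.IndexToNode(index))
            index = solution.Value(routing.NextVar(index))
        print(f"vehicle {v}: {route + [0]}")  # route back to depot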
Performance Results:
| Metric | Before | After | Improvement |
|---|---|---|---|
| Avg Delivery Cost | $8.50 | $6.20 | -27% ($2.3M/year) |
| On-time Delivery | 87% | 96% | +10% |
| Vehicle Utilization | 68% | 85% | +25% |
| Miles Driven/Day | 12,500 | 9,800 | -22% |
| Planning Time | 4 hours | 30 min | -88% |
| CO2 Emissions/Day | 3.2 tons | 2.5 tons | -22% |
| Solution Quality | 70% optimal | 94% optimal | +34% |
ROI Analysis:
investment:
  development: $180,000 (6 months, 2 engineers)
  infrastructure: $5,000/month
  total_first_year: $240,000
annual_savings:
  delivery_cost_reduction: $2,300,000
  fuel_savings: $450,000
  labor_efficiency: $320,000
  total_benefit: $3,070,000
roi: 1,179%
payback_period: 28 days
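The ROI and payback figures follow directly from the line items; a quick arithmetic check:
investment = 240_000                              # total first-year cost
annual_benefit = 2_300_000 + 450_000 + 320_000    # = 3,070,000

roi = (annual_benefit - investment) / investment
payback_days = investment / (annual_benefit / 365)

print(f"ROI: {roi:.0%}")                    # 1179%
print(f"Payback: {payback_days:.0f} days")  # ~28-29 days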
Key Learnings
- Hybrid optimization wins: Exact MILP for <100 stops (optimal); GA for >100 stops (94% optimal, 10x faster)
- UI drives adoption: Planners override 12% of routes; trust built through transparency and control
- Real-time adaptation critical: Re-optimizing every 30 minutes with live traffic data reduced delays by 18%
- Constraint relaxation needed: Soft time windows (±15 min grace with a lateness penalty) instead of hard constraints improved feasibility by 40% (a minimal sketch follows this list)
- Simulation validates solutions: Monte Carlo revealed 5% of "optimal" routes failed under variance; robust solutions prioritized
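A minimal PuLP-style sketch of the soft time window idea from the learnings above: lateness beyond the window plus a 15-minute grace period becomes a penalized slack variable rather than a hard constraint. Stop names, window values, the sequencing stub, and the penalty coefficient are illustrative assumptions, not the production formulation:
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

stops = ["s1", "s2", "s3"]
window_end = {"s1": 60, "s2": 120, "s3": 180}  # minutes from dispatch (toy values)
PENALTY = 5                                     # cost per minute late beyond the grace period

prob = LpProblem("SoftTimeWindows", LpMinimize)
arrival = LpVariable.dicts("arrival", stops, lowBound=0)  # arrival time at each stop
late = LpVariable.dicts("late", stops, lowBound=0)        # minutes beyond window + grace

# Sequencing stub: force arrivals at least 70 minutes apart so some windows bind
prob += arrival["s1"] >= 70
prob += arrival["s2"] >= arrival["s1"] + 70
prob += arrival["s3"] >= arrival["s2"] + 70

# Soft time windows: lateness is allowed, but only through the penalized slack variable
for s in stops:
    prob += arrival[s] - late[s] <= window_end[s] + 15

# Objective: minimize total lateness penalty (a real model adds travel cost here)
prob += PENALTY * lpSum([late[s] for s in stops])
prob.solve()
for s in stops:
    print(s, arrival[s].varValue, late[s].varValue)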
Implementation Checklist
Phase 1: Problem Formulation (Week 1)
- Define objective function (minimize cost, maximize profit/throughput)
- Identify decision variables (what to optimize)
- Document all constraints (hard: capacity, hours; soft: preferences)
- Establish baseline (current manual/heuristic solution)
- Define success metrics (cost, time, quality, utilization)
Phase 2: Method Selection (Week 2)
- Assess problem size (roughly <10K vars → exact methods; larger → heuristics)
- Choose optimization method (LP, MILP, GA, SA)
- Prototype on small dataset (10% of real data)
- Validate feasibility (all constraints satisfiable?)
- Benchmark runtime (must meet SLA)
Phase 3: Implementation (Weeks 3-4)
- Implement optimization model (PuLP, OR-Tools, custom)
- Add constraint handling (soft penalties for violations)
- Test on realistic scenarios (edge cases, peak load)
- Optimize performance (parallel, caching, warm starts)
- Build API/service interface (REST/gRPC)
Phase 4: Decision Support UI (Weeks 5-6)
- Design dashboard (visualize solution, constraints, trade-offs)
- Add what-if analysis (adjust params, re-optimize)
- Implement override mechanisms (manual adjustments)
- Show sensitivity analysis (which constraints binding?)
- User acceptance testing with domain experts
Phase 5: Deployment (Weeks 7-8)
- Deploy in shadow mode (compare to baseline)
- Monitor solution quality (optimality gap, runtime)
- Collect user feedback (overrides, pain points)
- A/B test (optimized vs baseline)
- Document runbooks and troubleshooting
Common Pitfalls & Solutions
| Pitfall | Symptom | Solution | Prevention |
|---|---|---|---|
| Over-constrained | No feasible solution | Relax soft constraints, prioritize | Start with minimal constraints |
| Ignoring uncertainty | Optimal but brittle in practice | Robust optimization, simulation | Monte Carlo validation |
| Black-box decisions | Low user trust, resistance | Explainable UI, override control | Co-design with users |
| Unrealistic assumptions | Solutions fail in production | Validate with domain experts early | Iterative prototyping |
| Computational limits | Timeout, no solution | Heuristics, problem decomposition | Benchmark early |
| Static models | Performance degrades over time | Retrain, adapt to changing conditions | Monitor drift |
Key Takeaways
- Method selection critical: Exact (LP/MILP) for <10K vars; heuristics (GA, SA) for >10K; hybrid for best of both
- Human-in-the-loop wins: Decision support (transparency + override) beats full automation; 85% adoption vs 40%
- Simulation prevents failures: Robust solutions under uncertainty outperform brittle "optimal" solutions
- Constraints define feasibility: Model real-world limits accurately; relax soft constraints with penalties
- UI drives trust: Visualization, what-if analysis, and overrides turn skeptics into advocates
- Iterate with domain experts: Optimization models evolve; involve stakeholders from day one