Making AI work at enterprise scale doesn’t happen by accident.
It depends on a set of capabilities that work together: the systems, structures, and behaviours that make AI reliable, scalable, and valuable in the real world.
This model sets out the 14 foundational capabilities that underpin successful enterprise AI adoption and delivery. These aren’t maturity stages or checklists. They’re the building blocks of a real, working system.
Where This Comes From
This capability model wasn’t built from surveys or academic research. It comes from hands-on work with real enterprise teams, navigating delivery challenges, adoption barriers, and organisational friction.
It reflects observed patterns, repeated pain points, and what actually helps AI succeed beyond pilots.
As the field matures, this model should too. It is a practitioner’s map, open to testing, refinement, and contribution.
What We Mean by Capability
The term “capability model” is often misunderstood, especially in large enterprises where it can trigger outdated mental models of tool rollouts, transformation checklists, and one-off training plans.
That is not what this is. This model defines capabilities as active system conditions: ways of working, delivering, learning, and adapting that make AI successful at scale.
These are not artefacts to install. They are ongoing behaviours and structures to observe, support, and evolve. This framing aligns with how capability is treated in practice-driven models like Accelerate (Forsgren et al.), not static maturity schemes.
In short: do not mistake naming a capability for enabling it. This model is about what works, not what looks complete.
Explore the Capabilities
Jump to any of the 14 core capabilities:
- Data Foundations
- Model & Algorithm Development
- Reusable Components & Services
- Model Infrastructure & Lifecycle Ops
- System Design & Architecture
- Workflow & Interface Integration
- AI Performance Engineering
- Security & Trust Engineering
- Governance, Risk & Compliance
- Knowledge & Feedback Loops
- Delivery Discipline & Operating Model
- People & Organisational Capability
- Use Case Framing & Validation
- Strategy, Value & Change Enablement
🟦 Core Foundations
1. Data Foundations
Clean, high-quality, and accessible data pipelines, storage, and observability layers. Includes data warehouses, metadata standards, data versioning, governance policies, privacy protections, access protocols, and lineage tracking. Enables model training, retrieval, monitoring, and reliable enterprise-scale AI operations.
2. Model & Algorithm Development
Design and development of machine learning models, LLMs, and hybrid architectures. Covers experimentation, feature engineering, prompt engineering, model evaluation, explainability, documentation (e.g. model cards), and performance benchmarking.
3. Reusable Components & Services
Creation and sharing of modular AI components such as inference APIs, prompt libraries, RAG retrievers, embedding generators, and orchestration templates. Supports reusability, consistency, and composability across teams and use cases.
🟩 Infrastructure & Integration
4. Model Infrastructure & Lifecycle Ops
Tooling and operational practices that support the full AI model lifecycle, from experimentation and training to deployment, rollback, evaluation, and continuous improvement. Includes runtime environments, CI/CD for models, A/B testing, reproducibility mechanisms, and LLMOps/MLOps platforms.
5. System Design & Architecture
Architectural principles and frameworks that ensure AI systems are modular, explainable, auditable, and interoperable. Includes use of APIs, microservices, event-driven frameworks, and design for governance, extensibility, and integration with legacy systems.
6. Workflow & Interface Integration
Embedding AI into operational workflows and end-user interfaces. Includes API integration into business apps, real-time vs batch processing, human-in-the-loop flows, user prompt interfaces, and adoption-sensitive design. Crucial for actualising business impact and usage.
7. AI Performance Engineering
Designing, testing, and tuning AI systems for efficiency, speed, cost-effectiveness, and scale. Includes latency profiling, caching, token budgeting, usage observability, fallback logic, and throughput tuning. Enables sustainable AI operations in production and ensures systems remain responsive and viable as adoption grows.
🟨 Governance & Resilience
8. Security & Trust Engineering
Systems and controls that ensure AI systems are secure, resilient, and trustworthy. Includes threat modelling, encryption, secure data pipelines, access control, secrets management, and sandboxed deployment. Also covers protections against prompt injection, auditability of inference, and technical safeguards required for regulatory compliance (e.g. GDPR, healthcare data).
9. Governance, Risk & Compliance
Structures and policies that ensure AI systems are ethical, legally compliant, and aligned with organisational risk frameworks. Includes model monitoring, fairness audits, regulatory alignment (e.g. GDPR, EU AI Act, sector-specific laws), internal ethics boards, and artefacts for explainability, accountability, and audit readiness.
10. Knowledge & Feedback Loops
Mechanisms for capturing, tracking, and applying real-world feedback to improve AI system behaviour and value. Includes prompt versioning, user feedback logging, performance tracing (e.g. RAG quality), error correction workflows, and institutional learning systems that support reuse, safety, and iteration.
🟧 Delivery & Organisational Capability
11. Delivery Discipline & Operating Model
Delivery structures and practices tailored for AI-specific risks and coordination needs. Includes agile and product methods adapted for model validation, portfolio alignment, risk gates, and cross-functional accountability. Supports predictable, compliant delivery at scale and ensures governance and feedback mechanisms are operationalised rather than just defined.
12. People & Organisational Capability
Teams, roles, skills, and learning structures required to support AI at scale. Includes AI product managers, MLOps engineers, ethics officers, communities of practice, upskilling programmes, and organisational design for cross-functional collaboration and accountability.
🟥 Strategy & Alignment
13. Use Case Framing & Validation
Structured process for identifying, articulating, and validating AI use cases. Includes problem definition, AI role specification, feasibility assessment (technical, legal, operational), success metrics, and ROI analysis. Prevents solutionism and aligns AI efforts with real needs.
14. Strategy, Value & Change Enablement
Enterprise-level alignment of AI with strategic priorities, value pathways, and organisational readiness. Includes AI roadmaps, investment framing, change management, adoption scaffolding, success metrics, and leadership engagement to ensure long-term value is realised and sustained.
Version: 1.2
Last updated: July 2025
Author: Scott Cable