AI Maturity Model
Five Stages From Chatting to Operating
Most organizations don't know where they stand with AI. Not because they haven't tried, but because they've been measuring the wrong things. Tool adoption rates and pilot counts don't tell you much. The question that actually matters is behavioral: what is your organization doing with AI right now, and what would it take to do more?
An AI maturity model, sometimes called an AI capability model, answers that by describing observable stages of AI adoption from individual tool use to full organizational AI operations. Without one, organizations make predictable navigation errors: buying enterprise platforms before their teams have working personal systems, funding training before anyone has accountability for applying it, setting organizational goals before a single team has proved the methodology holds.
The Five AI Maturity Levels
Each stage has a behavioral signature you can observe rather than just report. The stages are sequential because each one builds the capability required for the next. Skipping stages doesn't accelerate progress; it creates the gap most AI initiatives fall into.
Stage 1: Chatting with AI
At Stage 1, people on your team use AI tools individually, inconsistently, and without systems. Someone opens ChatGPT to draft an email. Someone else uses it to summarize a document. The tab closes. Nothing persists.
There are no shared prompts, no documented workflows, no AI systems that run without someone initiating them fresh each time. The productivity gains are real but they don't compound. Each person starts from zero on every task. The capability lives in individual habits, not in the organization.
Tell-tale signs
- AI use varies person to person with no shared standard
- No prompt libraries, templates, or documented AI workflows exist
- When the person who uses AI leaves, the capability leaves with them
Stage 2: Automate Myself
At Stage 2, an individual has moved from using AI tools to building personal AI systems. The difference is persistence. A Stage 2 practitioner has workflows that run without starting from scratch, that improve over time, and that compound their output in ways a single session cannot.
This is where methodology begins. The practitioner has documented what works, built repeatable processes, and started treating AI as infrastructure rather than a search engine. Their output looks different from their peers. The gap widens over time because the system is getting better while everyone else resets each morning.
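The specifics of a Stage 2 system vary, but the shape is consistent: a versioned prompt, a repeatable entry point, and a run log. Below is a minimal sketch in Python under those assumptions; `complete()` is a placeholder for whichever model client you actually use, and the file paths are illustrative, not a prescribed layout.

```python
"""A minimal sketch of a Stage 2 workflow: a daily brief generator.

Illustrative only. The prompt lives in a versioned file, every run is
logged, and complete() stands in for a real model call.
"""
from datetime import date
from pathlib import Path

PROMPT_FILE = Path("prompts/daily_brief.txt")  # the prompt is an artifact, not a habit
LOG_FILE = Path("logs/daily_brief.log")

DEFAULT_TEMPLATE = "Summarize what changed since yesterday. Date: {today}\n"


def complete(prompt: str) -> str:
    """Placeholder for whatever model client you use (API, local, etc.)."""
    return f"[model output for a {len(prompt)}-character prompt]"


def run_daily_brief() -> str:
    # First run bootstraps the template file; after that, you improve
    # the file, not your memory. That is what lets the gains compound.
    if not PROMPT_FILE.exists():
        PROMPT_FILE.parent.mkdir(parents=True, exist_ok=True)
        PROMPT_FILE.write_text(DEFAULT_TEMPLATE, encoding="utf-8")
    prompt = PROMPT_FILE.read_text(encoding="utf-8").format(today=date.today())
    output = complete(prompt)
    # A run log turns one-off sessions into an inspectable, improvable system.
    LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
    with LOG_FILE.open("a", encoding="utf-8") as log:
        log.write(f"{date.today()}\t{len(output)} chars\n")
    return output


if __name__ == "__main__":
    print(run_daily_brief())
```

Put a script like this on a scheduler and the workflow runs whether or not anyone remembers to start it; edit the prompt file once and every future run improves.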
Tell-tale signs
- One or more AI workflows run consistently without daily setup
- The practitioner has documented their system well enough that someone else could follow it
- Productivity gains are measurable and growing, not just situational
Stage 3: Automate My Team
At Stage 3, the individual methodology extends to a team. There is one standard. Everyone follows the same approach, which means capability is now transferable: when someone joins, they adopt the methodology; when someone leaves, the methodology stays.
This stage requires something Stage 2 doesn't: agreement. The team has to decide on shared tools, shared prompts, shared processes, and shared accountability. The methodology isn't personal anymore; it's institutional. That transition almost always meets friction from people who haven't built their own systems and don't yet see the value of adopting someone else's.
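A shared prompt library is the most concrete form that agreement takes. Here is one hedged sketch of what enforcing the standard can look like in Python; the file format, directory name, and metadata fields are illustrative assumptions, not an established convention.

```python
"""A minimal sketch of a shared team prompt library.

Assumes prompts are files in one version-controlled directory, each
with a small metadata header separated from the template by '---'.
"""
from dataclasses import dataclass
from pathlib import Path

LIBRARY_DIR = Path("team_prompts")  # one shared location, under version control


@dataclass
class Prompt:
    name: str
    owner: str      # who is accountable for this prompt's quality
    purpose: str    # what task it serves, so newcomers can find it
    template: str


def load_prompt(path: Path) -> Prompt:
    """Parse a prompt file: a metadata header, then the template body."""
    header, _, body = path.read_text(encoding="utf-8").partition("---\n")
    meta = dict(
        line.split(":", 1) for line in header.strip().splitlines() if ":" in line
    )
    # Enforcing required metadata is what makes the library institutional:
    # a prompt nobody owns or can explain doesn't belong in the standard.
    for field in ("owner", "purpose"):
        if field not in meta:
            raise ValueError(f"{path.name} is missing required field: {field}")
    return Prompt(
        name=path.stem,
        owner=meta["owner"].strip(),
        purpose=meta["purpose"].strip(),
        template=body.strip(),
    )


def load_library() -> dict[str, Prompt]:
    return {p.stem: load_prompt(p) for p in sorted(LIBRARY_DIR.glob("*.prompt"))}
```

The enforcement matters more than the format: a prompt without an owner and a stated purpose is a personal habit, not a team standard.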
Tell-tale signs
- New team members reach full AI productivity within a defined onboarding period
- The team uses a shared prompt library and documented AI workflows
- Output quality is consistent across team members, not dependent on who runs the task
Stage 4: Automate My Company
At Stage 4, AI operations moves from a team practice to an organizational one. There is executive ownership, a defined investment path, and AI governance that applies across departments. Organizational AI maturity at this stage means AI is part of how the company operates, not a project running alongside operations.
This is where most enterprise AI strategies are aimed. It's also where most of them stall. Organizational AI operations requires a different skill set than team AI operations. The methodology has to scale without losing fidelity. The governance has to hold without becoming a bottleneck. The executive sponsor has to own outcomes, not just communications.
Tell-tale signs
- A specific executive owns AI operations with accountability for outcomes, not just oversight
- AI governance policies exist and are actually followed, not filed
- AI investment decisions follow a defined framework rather than ad hoc approval
Stage 5: Automate Any Company
At Stage 5, the practitioner or organization has built enough depth in AI operations methodology that they can deploy it inside other organizations. This is the certified practitioner and licensed partner layer: people who have internalized the methodology well enough to teach it, adapt it to unfamiliar contexts, and certify others in it.
Stage 5 isn't advanced Stage 4. It requires a different orientation: toward teaching rather than doing, toward replicability rather than performance, toward building a methodology that survives context changes rather than one optimized for a single environment. Most organizations never make this transition. The ones that do have either a commercial reason or a mission reason. Both are legitimate.
Tell-tale signs
- The practitioner can deploy the methodology inside an unfamiliar organization and produce results
- Others have been trained and certified using the methodology
- The methodology is documented at a level that transfers without the originator present
How to Assess Your Current Stage
Before choosing a program or planning a next step, locate yourself honestly. These questions are designed to cut through aspirational self-assessment:
- Do you have any AI systems running in production that don't require your daily involvement to initiate?
- Does your team follow a shared AI methodology, or does each person operate independently?
- Have you deployed AI outside of writing and summarization tasks?
- If your most AI-capable team member left tomorrow, would the capability stay?
- Mostly no across all four: Stage 1.
- One or two yes at the individual level: Stage 2.
- Yes to all four at the individual level but not at the team level: Stage 2 approaching Stage 3.
- Yes to all four at the team level: Stage 3 or beyond.
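To make the rubric unambiguous, here is the same logic as a small Python function. This is an illustration of the rules above, not the scoring the free assessment actually uses; `yes_count` is the number of yes answers to the four questions, and `at_team_level` records whether those answers hold for the whole team rather than a single person.

```python
def assess_stage(yes_count: int, at_team_level: bool) -> str:
    """Map the four yes/no answers above to a maturity stage.

    Thresholds follow the rubric in the text. The prose doesn't address
    exactly three yeses; this sketch groups that case with "one or two".
    """
    if yes_count == 0:
        return "Stage 1"
    if yes_count == 4:
        return "Stage 3 or beyond" if at_team_level else "Stage 2 approaching Stage 3"
    return "Stage 2"


# Example: systems run unattended and AI is used beyond writing, but only
# for one person -> two yes answers at the individual level.
print(assess_stage(yes_count=2, at_team_level=False))  # Stage 2
```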
For a precise reading, the free AI maturity assessment takes about ten minutes and returns a stage score with a recommended next step.
What Changes at Each Stage Transition
The transitions between stages are not about tools. They're about behavior, structure, and methodology. New software doesn't move you between stages; new operating discipline does.
Stage 1 to Stage 2
Requires one person committing to documentation. The shift is from using AI to building with AI. It takes discipline more than skill. The practitioner has to resist the pull of the next quick output and instead invest time in capturing what works so it compounds.
Stage 2 to Stage 3
Requires convincing others. The shift is from personal methodology to shared standard. It almost always meets resistance, because people who haven't built their own systems don't see the value of adopting someone else's. The transition succeeds when the Stage 2 practitioner can demonstrate results, not just describe them.
Stage 3 to Stage 4
Requires executive sponsorship and governance design. Without a named owner and a defined AI governance structure, the methodology doesn't cross departmental lines. It stays with the team that built it and gets reinvented elsewhere.
Stage 4 to Stage 5
Requires a decision to externalize. The shift is from operating the methodology to teaching it. It requires documenting the methodology at a level of fidelity that survives without its authors. Most organizations have never had to do that. The ones that get there find that the documentation process itself sharpens the methodology.
Know Your Stage Before You Pick a Next Step
The free assessment gives you a precise diagnosis in ten minutes. Already know your stage? Browse the programs built for where you are.