65 Activities · 8 Phases
A complete AI engagement runs from the first phone call through ongoing support after launch. That arc contains sixty-five distinct activities across eight phases. AI contributes to every single one. Some it drives. Some it supports. Some require both human and AI to be present, pushing through iterative toil until something true emerges. We call that space shaping. And some require the human to do the hard work of learning through AI so they can perform solo when it matters. We call that educates.
Three questions run simultaneously at every step: What must the humans on the solution side protect? Where does AI actually sit on the maturity curve? What does the client owe the engagement? Each uses a different unit of measure. Each reveals something the other two miss. Asked together, they show the full shape of the work that produces lasting outcomes.
Where Engagements Begin
Work arrives through three doors, and each one carries a different kind of trust.
Channel partners identify opportunities and bring them to us. A management consultant hears a client describe a process that's bleeding time. An accountant sees a firm drowning in manual reconciliation. The introduction arrives with trust already attached. Someone the client already relies on said this was worth their time.
Our team works the network directly. Following up on conversations, reconnecting with contacts whose situation has changed. The trust is warm, built on history.
And we run marketing outreach. LinkedIn, cold email, reaching people who don't know us yet. Trust starts at zero.
Three entry points, three starting levels of trust. But once a prospect is real, the same preparation kicks in. AI takes over company research, competitive landscape, firmographic signals, the pre-meeting briefing. Three of the first six activities are AI-driven, producing these at a speed that humans working from browser tabs and memory can't match.
The human role here is selection. Which prospects are ready. Whether the timing is right. Filtering AI research for what's real versus marketing language. And whether to walk toward this or step back. That decision requires reading what doesn't appear in data: timing, cultural fit, whether this company is ready to do the work or just curious.
The client brings whatever they've gathered so far. Assumptions that may or may not be true. Things they've heard, things they've tried. Expectations that range from cautious to wildly optimistic. Sometimes the person leading the AI initiative internally isn't the right fit for what this work actually requires. Our first job is to bring clarity, so their mental model about this path is based on what's actually ahead of them.
Earning the Foundation
The discovery meeting is where the engagement earns its foundation or doesn't.
AI can't produce the moment when a client stops describing their process in general terms and names the specific frustration they feel on a specific day of the week. That moment requires presence, patience, and the skill to stay in the question long enough for the real problem to surface underneath the presented one.
But the meeting itself can be designed. We've built several meeting experiences for teams of different sizes, and we found that when human judgment and AI-powered structure collaborate in how the room works, clients reach clarity faster. They can make a real decision sooner: move forward or stop. Either answer is good. What matters is they can make the call. The human reads the energy, adapts in real time, knows when to push and when to let silence work. The experience they're guiding the client through was shaped collaboratively. Neither the structure alone nor the person alone produces the outcome.
Then the meeting ends, and the distribution shifts. AI processes the transcript. Extracts key quotes. Structures raw discovery into an intelligence profile. But that profile has to match what was sensed in the conversation. The structured output has to match what was felt. If it doesn't, if the profile is technically accurate but emotionally wrong, everything downstream inherits the distortion. That verification, the back and forth between what the system produced and what the human experienced, is shaping work.
The client's contribution here becomes load-bearing for the first time. Send the person who does the work, not the one who describes it. Correct any misunderstandings before they flow downstream. Signal whether there's real budget, real authority, real urgency. Everything that follows inherits the quality of what happens in this phase.
When human judgment and AI-powered structure collaborate in how the room works, clients reach clarity faster. They can make a real decision sooner: move forward or stop. Either answer is good. What matters is they can make the call.
The Collaborative Stretch
Now the engagement enters its most collaborative stretch. The solution takes shape. And the dynamic that first appeared in the discovery meeting becomes the dominant mode of work.
AI proposes workers from business intelligence. Humans challenge whether each one earns its place in the lineup. AI estimates build hours. Humans cross-check against what delivery actually took on similar work. AI categorizes pain points by value driver. Humans adjust for how this specific client talks about value, which is rarely how the system categorizes it.
Some of this work is AI-driven: estimate hours, calculate ROI, produce architecture diagrams, package blueprints, build business cases with payback periods. The system is productive and fast, and the outputs are real. But four activities in this phase are shaping work. Naming workers so they resonate with this client's world. Translating system output into language the client would actually use. Writing Now/After narratives. Creating mock outputs that look like something useful in the client's daily work.
The Now/After cycle is the clearest example of what shaping looks like. AI drafts a version. The delivery team rejects it. AI adjusts. The team rejects it again. This cycle repeats until something emerges that sounds like what the client actually said, in language they would use, describing a frustration they would recognize as theirs. Multiple rounds to get right. Neither the instrument nor the musician alone would produce what comes out the other side.
This is the same dynamic from the discovery meeting, now running at higher frequency. It's not unique to this phase. It's the pattern wherever the work requires both parties to be present and push until something true emerges. The toil is real. Learning to play the instrument well takes effort. But what it produces can't be produced any other way.
The client's role expands from informant to participant. Name which problems are most urgent, and don't let everything be priority one. Recognize your own frustration in the description of your current state, or say so if you don't. Describe what a useful output actually looks like in your daily work, not what sounds impressive in a demo. If the client is passive through this phase, the solution reflects what the team thinks they need. If they engage, it reflects what they actually need. The ceiling lives here.
Neither the instrument nor the musician alone would produce what comes out the other side. The toil is real. But what it produces can't be produced any other way.
Human Territory
The engagement shifts into human-heavy territory.
Pre-sales conversations. Presenting the roadmap to client leadership. Creating an offer compelling enough to close. Setting expectations before the contract is signed. The execution of these activities is human: reading reactions in real time, handling objections that weren't in the deck, knowing when to push and when to let silence work.
But one activity here is shaping work. Creating an irresistible offer is neither pure art nor pure system. It requires both: AI-generated business cases and ROI frameworks shaped through human judgment about what will actually move this specific client. The offer that closes isn't the one the system drafted. It's the one that emerged from rounds of shaping.
Set expectations honestly. Name what's realistic given what they've committed to spend and what they expect to receive. This conversation prevents the frustration that kills projects later. It's a conversation about limits, delivered with enough care that the relationship survives the honesty. The words have to come from a person the client trusts, and they have to land in real time.
The client's role sharpens: make a real decision with real authority, and don't defer to absent stakeholders. Bring the people who feel the pain, not just the people with budget. Hear honest expectations and adjust rather than insisting on the original vision.
The same engagement that was shaping-heavy during solution design is now mostly human in execution. Same project. Same people. Completely different distribution of work.
The Cascade
A signed contract triggers a cascade of system work. Extract scope from the legal language. Produce a delivery brief for the build team. Write acceptance criteria for every feature. Build a sprint plan aligned to contractual milestones. Identify blockers before build starts. Draft a kickoff email.
AI drives most of the handoff. Six of seven activities. The system is fast, structured, and accurate. The delivery team catches what the system can't: vague acceptance language that will cause disputes later, verbal commitments that don't match the contract's technical language, the gap between ideal sequencing and this client's actual availability rhythm. Important work, but the system does the heavy lifting.
Then build starts. And the distribution fragments.
Where Build Fragments
Develop, test, and deploy AI workers. This is shaping work at the code level. AI accelerates development, but humans interpret specs when reality contradicts the assumptions that seemed sound two weeks ago. The developer and AI iterate back and forth: the system produces, the human redirects, the system adjusts, the human catches what the system can't see. The client provides sample data, edge cases, and access to the people who know the exceptions everyone else forgot. Three parties, each contributing something the others can't.
Detect project health patterns from available signals. AI reads data, flags anomalies, tracks velocity against the plan. The human senses what the dashboard misses: champion fatigue, a political shift in the client's organization, quiet disengagement that presents as polite meeting attendance and zero follow-through. AI recognizes nine documented failure patterns. The human catches the tenth.
Then there's the other list: five activities during build where the human performs solo. These deserve to be named individually.
They share qualities. They require presence. They can't be templated. They happen in real time between people who are deciding, consciously or not, whether this project is worth their continued effort. But the human who walks into these moments did real work beforehand: recognizing the gap in their own capability, curating the right sources, extracting the right frameworks from AI, building skills from the best training available, and practicing until the frameworks transferred. The execution is solo. The learning was not.
The client's contribution during build is the most demanding and the least visible. Show up prepared to meetings. Raise concerns in the meeting, not after it ends. Champion the project internally and help the vendor get what they need. Be honest in status updates, even when honesty is uncomfortable. Treat the project as yours. Protect the calendar time. The note on this last one: "One of the hardest things in delivery."
The execution is solo. The learning was not. The human recognized the gap, built the skill using AI, and practiced until the frameworks transferred. That effort is invisible in the room. It's everything that makes the room work.
Done, or Called Done
The project is done. Or it's called done, which is a different thing. Now the work shifts register.
Completion brief. Punch list. Client-facing summary. Testimonial recovery plan. Baseline measurements. Production verification. Documentation. Maintenance scope definition. Operational runbook. And the conversation about what the ongoing relationship looks like, which is the conversation that determines everything that follows.
AI drives the structured close-out: briefs, punch lists, runbooks, deployment verification. The system is productive here. But the delivery team's column reads differently than it did in earlier phases. The verbs change. Earlier in the engagement: filter, challenge, translate, verify. Now: celebrate without overpromising. Acknowledge without blaming. Read champion engagement level honestly. Stay committed when the urgency of launch fades into the grind of maintenance.
The emotional register shifts. The work becomes about honesty, care, and the long view.
Two activities are educates work. Re-engage a disengaged champion: the human studied re-engagement patterns using AI, but earning back attention through visible results requires showing up and delivering. Set expectations for the ongoing relationship: the human learned boundary framing through AI, but the conversation that determines whether the relationship survives the project has to come from a person the client trusts.
One activity sits on the frontier. Business rules shift quietly after launch. Users adjust their workflows in ways nobody anticipated. Edge cases emerge from real usage that testing never found. AI can monitor for anomalies. The human diagnoses the cause. This is emerging territory, and it's where the maturity curve will move next.
The client's final contribution: own your punch list items and don't let them age without response. If the experience was good, say so without making the vendor chase you for it. Provide real before-and-after numbers without inflating them for politeness. Accept the maintenance boundary. Value the relationship enough to fund its continuation.
The engagement started with one party working alone, researching a company in the dark. It ends with three parties whose sustained contributions determine whether the project survives its own launch.
Sixty-five activities. Twenty-one where AI drives the output. Fifteen where it supports human-led work. Eleven where human and AI shape the outcome together through iterative toil. Twelve where the human did the hard work of learning through AI and then performed solo. Six where the capability is still emerging. And zero where AI has no role at all.
That last number is the one worth sitting with. The original version of this map had thirteen activities marked "no role." The honest answer, tested against how this work actually gets done, is zero. AI contributes to everything. The question was never whether AI belongs in the work. It was always how: by driving, supporting, shaping, or educating.
The work is wider and deeper than what people see. It always was. AI just made the full shape visible.
We map the complete lifecycle before we build anything. Start with a conversation about your workflow.