Emil Wu

#08

Mindset 1: From Tools to Workflows

Scattered tools evolving into a connected workflow pipeline
From scattered tools to a coherent workflow

Mindset 1: The Evolution from Tools to Workflows

The first seven articles broke down the technical foundations of AI Agent systems:

  1. Model ≠ Runtime
  2. The five-layer architecture: Command → Agent → Tool + Skill → Context
  3. Context Engineering: JIT, Token budgets, Progressive Disclosure
  4. The Skill ecosystem competition
  5. Skill vs Subagent: Context flow determines everything
  6. Combination patterns: Explore → Decide → Execute
  7. Agent Team: isolated thinking, connected communication, shared state

These are background knowledge — the foundation. You don’t need to fully understand every technical detail. You could even hand all seven articles to an AI and have it follow them as working principles while you start building. That said, you do need to understand the concepts well enough that when something goes wrong, you know what to adjust and how to break the problem down.

As I mentioned at the start: from this article on, we’re no longer talking about technology or principles. We’re talking about mindset — because once you’ve learned the moves, you need the mindset to turn those techniques into an actual way of working.

One thing to keep in mind: a mindset isn’t a fixed rule. It’s a direction. What it tells you is: how to produce a workflow, how to form a closed loop — and then how to keep evolving inside that loop.


Phase Zero: Break Your Work into Components, Then Make Them AI-Powered

I imagine most of you have already been through this phase — especially those of you at companies that have been using AI tools for over a year. I’ll keep it brief. Before you can build a workflow, you need “components”: the discrete, independently describable task units that make up your day-to-day work. Writing a piece of code is a component. Organizing meeting notes is a component. Generating a report is a component.

What you do in this phase is intuitive: break your work apart, then try using AI to handle each piece. Some tasks AI nails on the first try. Others require you to iterate on the prompt. Some you’ll find AI simply can’t do yet — and that’s all fine. The point isn’t to “AI-ify everything.” The point is that through this process, you start to understand what AI is good at, what it struggles with, and how you need to describe something for AI to correctly grasp your intent. That ability to describe — clearly and precisely — is the foundation for everything that follows.

Once you have a collection of independently functioning AI components, the next question naturally surfaces: how do you connect them?

Phase One: From A, B, C to A→B→C

Now you have A (writing code), B (writing tests), and C (writing documentation) — three independent AI components, each working well on its own. But you’re still the one deciding: after A finishes, do I run B or C next? Should I check which files changed in between? After the test results come in, do I go back and update the docs?

Those “check → decide → hand off the result” steps are the glue between components — and right now, you’re still the one applying it.

Left: hands manually gluing modules A, B, C together; Right: organic vines automatically connecting the same modules
Handing the glue to AI: from manual wiring to automatic connection

The goal of this phase is to hand that glue to AI too.

This isn’t a big undertaking. You might just add a line to a Skill: “After writing the code, automatically check which files changed and determine which tests need to be added or updated.” Or add a rule to CLAUDE.md: “Before committing, confirm that documentation is in sync with the code.”

As you do this, the Agents, Skills, and Rules you’ve already built will go through repeated cycles of splitting and merging, constantly evolving. What were once three separate modules each handling A, B, and C independently will become a coherent workflow handling A+B+C together. And that workflow will start making its own judgments — automatically choosing A→C→B, A→B→C, or just B and C, depending on the situation.

Anthropic’s 2026 Agentic Coding report [10] confirms this as the right starting strategy: developers are using AI for about 60% of their work, but only 0–20% of tasks can be fully delegated to an AI Agent.

60% of work uses AI assistance, but only 0–20% of tasks can be fully delegated — glue work is the entry point for expanding delegation
Engineers tend to delegate tasks that are “easy to verify for correctness” or low-risk first, then gradually expand. Handing the glue to AI is exactly that first step of incremental delegation.


Phase Two: Design the Context Handoff

Once you’ve connected your workflow, the next question comes up almost immediately:

When AI hands off to AI, what Context does the receiving agent need to continue the work?

This brings us back to the core of the first seven articles — Subagents, Skills, and Context flow.

If your workflow is “investigate → design → execute,” how does the Agent from the investigation phase pass its findings to the design phase Agent? What’s critical and must be preserved? What’s intermediate process that can be discarded?

This is the Subagent Return Contract from Article 3: explore deeply, return shallowly. The combination pattern in Article 6 addresses the same problem. A Google Developers Blog post on context-aware multi-agent frameworks [22] makes the same point: context handoff is the most central design decision in a multi-agent architecture, and you need to explicitly control how much context is passed at each handoff (full vs. none mode).
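"Explore deeply, return shallowly" can be sketched as a small data contract. The field names below (`findings`, `references`, `open_questions`) are illustrative assumptions, not a spec from the articles: the point is only that the investigation agent's full transcript stays internal, while the handoff carries a compact, self-contained report.

```python
# Hypothetical sketch of a Subagent Return Contract: the investigation agent
# works with a large internal context but hands off only a compact summary.

from dataclasses import dataclass, field

@dataclass
class HandoffReport:
    findings: list[str]        # conclusions the next agent must keep
    references: list[str]      # file paths / links that must survive the session
    open_questions: list[str] = field(default_factory=list)

def investigate() -> HandoffReport:
    transcript = []  # deep: every tool call, every dead end stays here
    transcript.append("read src/auth.py (400 lines)")
    transcript.append("grep for 'token_refresh' across repo")
    transcript.append("dead end: token_refresh unused in v2 path")
    # shallow: return only what the design-phase agent needs to continue
    return HandoffReport(
        findings=["v2 auth path never refreshes tokens"],
        references=["src/auth.py:112"],
        open_questions=["is the v1 path still deployed?"],
    )
```

Deciding what goes into `references` is exactly where the next section's trap appears: the agent "knows" the file locations, so left to itself it may omit them.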

But there’s a trap here that’s very easy to miss — I call it the Context Blind Spot.

The Trap: Agent Context Blind Spots

Agents have a habit: they treat information that’s already in their Context as a given, and then leave it out of their output.

Two notebooks separated by a fog gap, with question marks appearing in the right notebook — context lost between sessions
The Context Blind Spot: once the session ends, information the Agent took for granted simply disappears

In a conversation this usually isn’t a problem — because you share the same Context with the Agent, and you already know that information. But when the Agent is writing documentation, preparing a handoff, or producing a design document, that’s when things go wrong.

Here’s a concrete example: you ask an Agent to document some reference materials. It will produce an index for you — but it may not include links to the actual file locations, because it “knows” where those files are inside its Context and doesn’t think it needs to write them down.

But what happens after the session ends? What happens when another Agent takes over? That “known” information is gone.

The fix is straightforward: ask the Agent to do a self-check before the handoff.

Tell it: “If you were to write this document now, is there any information that would disappear once the session is cleared? Are there any links, file paths, or references you currently know but haven’t written down? Please include all of them.”

This step seems small, but it prevents a large number of handoff breakdowns.
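If your handoffs run through code rather than a chat window, the self-check can be made mechanical. This is a minimal sketch under the assumption that you have some `ask_agent` callable wrapping your Agent runtime; the function name and prompt wording are illustrative, not a prescribed API.

```python
# Hypothetical sketch: appending a blind-spot self-check before any handoff
# document is finalized. ask_agent is a stand-in for your Agent call.

SELF_CHECK = (
    "Before finalizing this document: is there any information that would "
    "disappear once this session is cleared? List every link, file path, or "
    "reference you currently know but have not written down, and include "
    "all of them."
)

def write_handoff(ask_agent, draft_request: str) -> str:
    draft = ask_agent(draft_request)
    # second pass: force the agent to surface what it took for granted
    return ask_agent(f"{draft}\n\n{SELF_CHECK}")
```

The two-pass structure matters more than the exact wording: the first call produces the document, and the second makes the agent re-read it as a stranger who has none of its session context.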

In the next article, we’ll talk about the most critical phase after a workflow is established — the refinement cycle. Iterative improvement sounds wonderful in theory, but in practice it’s a double-edged sword.


References:

[10] Anthropic, “2026 Agentic Coding Trends Report” — 60% of work uses AI, 0–20% can be fully delegated https://resources.anthropic.com/2026-agentic-coding-trends-report

[22] Google Developers Blog, “Context-Aware Multi-Agent Framework” — context handoff is the most central design decision in a multi-agent architecture https://developers.googleblog.com/architecting-efficient-context-aware-multi-agent-framework-for-production/

Support This Series

If these articles have been helpful, consider buying me a coffee