The Next Shift: From Using AI to Orchestrating It

The interface has moved from a chat window to something closer to a Bloomberg Terminal — a team of specialised agents coordinating with each other while you watch, intervene, and architect.

Damien Healy

For a long time, the picture of working with AI looked the same. You sit at your desk. You open a chat. You type something. The AI responds. You type something else. Back and forth. You're the engine. The AI is the tool you're driving.

Even when I described the most advanced setups in this series, the picture held. I was the one in the loop. Choosing the model. Loading the context. Running the research. Bringing the results back. The work moved fast, but it moved through me.

That's increasingly not how I work anymore.


I wrote in my last article about reducing the friction between AI and your output. About building bridges so the work doesn't slow down at the boundaries. And I made the point that sometimes the sharper question isn't how to make a crossing smoother. It's whether the crossing needs to exist at all.

It turns out that question applies to me too.

For most of this year, my main interface has been a coding environment where I sit with one AI and direct it through complex work. Building software. Writing documents. Producing presentations. Running projects. It's powerful. The models are remarkable. But I'm still the bottleneck. Every task starts with me and waits for me. Every handoff between steps is me making a decision and typing a prompt.

What I've shifted to over recent weeks is something different. My main interface now looks more like a Bloomberg Terminal than a chat window. Indicators, activity feeds, status panels, conversations between agents, jobs moving through stages. I'm sitting in front of it watching a team of specialised agents work alongside each other. Each one has a defined role. A product manager. A CTO. A developer. A QA agent. A security agent. A privacy officer. A CMO. A handful of others, depending on what I'm building. They wake up on schedules. They pick up open jobs. They pass work to each other when something needs another perspective. Most of the time, I'm not telling them what to do. I'm watching them do it, occasionally stepping in to set direction or break a tie.

I'm not using AI anymore. I'm running a team.


If that sounds like science fiction, it isn't. The technology to build this has arrived in the last few months, and it's getting better quickly. Agent harnesses, the systems that let multiple AI agents coordinate with each other, are now accessible enough that one person can stand up a small autonomous team for a real project. Not in a research lab. On a laptop, this week.

The mental model that helps is to stop thinking of an agent as a smarter chatbot and start thinking of it as a colleague with a specific job. A real colleague has a role, a remit, opinions about what good looks like in their area, and the ability to take initiative inside their patch. They also know when to bring something to someone else. Modern agents can do all of that. You give one a role, a set of tools, access to the project context, and a way to talk to the other agents on the team. From there, it operates.

The product manager agent reviews the backlog and decides what should come next. It writes a brief and hands it to the developer agent. The developer builds it and passes it to the QA agent for testing. The QA agent finds an issue and sends it back. The security agent flags a concern about how user data is being handled and the privacy officer weighs in. Most of that happens without me. I see it on the dashboard. I read the conversations. I make a call when something genuinely needs my judgment.
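That handoff pattern is easy to express as routing: each role does its step and names the next role, and QA can send work back to the developer. This is a toy sketch, with every function body standing in for what a real agent would do:

```python
# Toy sketch of the handoff flow: each role processes the work item and
# returns the next role. Function bodies stand in for real agent calls.

def product_manager(item):
    item["brief"] = f"Brief: {item['request']}"
    return "developer"

def developer(item):
    if "finding" in item:                 # QA sent it back with an issue
        item["fix"] = "patched: " + item["finding"]
    item["build"] = f"implementation of {item['request']}"
    return "qa"

def qa(item):
    if "fix" not in item:                 # first pass: raise a finding
        item["finding"] = "edge case fails on empty input"
        return "developer"
    return "done"                         # second pass: issue addressed

ROUTES = {"product_manager": product_manager, "developer": developer, "qa": qa}

def run(item, start="product_manager"):
    """Route the item between roles until a role marks it done."""
    role, hops = start, []
    while role != "done":
        hops.append(role)
        role = ROUTES[role](item)
    return hops

item = {"request": "export-to-CSV"}
hops = run(item)
```

In the real thing, the routing decision comes from the agents themselves rather than hard-coded returns, but the topology is the same: work flows role to role, with loops back when a review fails.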

This isn't theoretical. I am using these tools now.


The reason this matters is the same reason every previous shift in this series has mattered. The boundary of what one person can do keeps moving outward, and each new capability lands on a base that's already grown.

When I started this series, the multiplier was AI as a faster brain. Then it was AI as a partner that holds context. Then it was working across multiple models for different strengths. Then it was reducing friction at the boundaries. Each one added genuine speed. Each one expanded the territory I could cover alone. And now this. I've gone from being a person who works with AI to being a person who directs a small organisation of AI workers. That's not a metaphor. Functionally, that's what's happening.

The work I'm getting through right now would have required a real team with real coordination overhead even six months ago. Not the production speed. The breadth. I'm running engineering, product, marketing, security, and compliance threads in parallel on multiple projects, and the threads don't slow each other down because they're not all routing through me.


Here's the part that I find genuinely interesting, though. The skill required to do this well is not the skill of using AI. It's the skill of running a team.

The questions I find myself asking are the same questions I've asked in 25 years of leading transformations and teams. What's the right structure? Who owns what? Where are the handoff points? How do I make sure the developer doesn't just do whatever the product manager says without pushing back when something doesn't make sense? How do I set up the right tension between security and speed? How do I make sure the team has clear objectives and the autonomy to pursue them?

The parallels run deeper than I expected. Most of the problems I hit in the early weeks weren't about agent capability. They were coordination problems. Handoffs that lost context. Work routed to the wrong agent. Tasks that fell between two roles because neither owned them clearly. Exactly the kind of thing I've spent decades fixing in human teams. The agents were doing their individual jobs well. The team wasn't working as a team yet. That's not a technology problem. That's an operating model problem, and the fix comes from the same playbook you'd use anywhere else.

These are management questions. Operating-model questions. The agents can do extraordinary work, but they can't tell you how to organise them. That part is on you, and it draws on everything you already know about how teams actually function. The people who'll get the most out of this aren't necessarily the most technical. They're the ones who understand how to design a team, set objectives, and stay out of the way.

That's worth sitting with for a moment. The most valuable layer just got pushed up another level. It used to be prompting. Then it was context. Then it was orchestration across models. Now it's organisational design. Each shift has moved the human contribution further from the production work and closer to the judgment work.


There's one more move that takes this further, and it's the one I find genuinely remarkable.

I've put an agent in charge of continuous improvement. Its job is to monitor how the rest of the team is performing across a set of operational metrics, and every few cycles, propose changes. Sometimes it updates the instructions of an existing agent because it's noticed a pattern of mistakes. Sometimes it changes how jobs are routed between agents because the current flow is creating bottlenecks. Sometimes it recommends adding a new agent for a role that's been quietly underserved. It takes a range of actions, and I've been measuring the results. The improvements are real and they compound.
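A sketch of what that loop does each cycle: review per-role operational metrics and propose operating-model changes. The metric names and thresholds below are assumptions made for the example, not real harness settings.

```python
# Illustrative sketch of the continuous-improvement loop: review per-role
# metrics and propose changes. Metric names and thresholds are assumed.

def propose_changes(metrics: dict) -> list[str]:
    proposals = []
    for role, m in metrics.items():
        if m["error_rate"] > 0.2:
            proposals.append(f"Rewrite {role} instructions: recurring mistakes")
        if m["queue_wait_minutes"] > 30:
            proposals.append(f"Reroute work around {role}: it's a bottleneck")
    return proposals or ["No changes this cycle"]

metrics = {
    "developer": {"error_rate": 0.05, "queue_wait_minutes": 45},
    "qa":        {"error_rate": 0.30, "queue_wait_minutes": 5},
}
proposals = propose_changes(metrics)
```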

Think about what that actually means. The team is improving itself. I'm not just running an organisation of AI workers. I'm running one that's continuously redesigning its own operating model in response to what's working and what isn't. The agent that runs that loop is doing the thing a good operations leader does, except it's doing it constantly and it never stops paying attention.

That's the layer above orchestration. Self-improving orchestration. Once you have it running, the system gets better while you sleep.


I should be honest about the cost, though, because I'm feeling it.

Running a team of agents is cognitively heavier than I expected. When I was doing the work myself, even at AI speed, the load was bounded by my own bandwidth. One thing at a time. Now I'm running multiple projects in parallel, each with its own team of agents, and at any given moment something is happening on one of them. A job finishes. A handoff needs a decision. A QA agent finds an issue worth my attention. The terminal is always alive. I'm constantly context-shifting between projects, holding the state of each in my head, deciding when to intervene and when to leave things alone.

This is not unique to me. I'm seeing other practitioners working at this layer describe the same thing. The shift from doing the work to orchestrating the work doesn't reduce mental effort. It changes where the effort goes. You're no longer taxed by the production. You're taxed by the monitoring, the prioritisation, the constant active judgment about what needs you and what doesn't. It's the difference between being an individual contributor and being a manager, except the team never sleeps and the cycles are measured in minutes.

I'm not raising this to put anyone off. I'm raising it because the picture I painted earlier in this article, of the boundary moving outward and one person doing the work of a team, is true. But it's not free. The multiplier comes with a different kind of demand on you, and pretending otherwise would be dishonest.


There's a second cost I should be honest about, and it's more concrete. The amount I'm spending on intelligence is climbing fast.

I'm subscribed to the highest tier of every major AI provider I use, and I'm also paying API costs on top of that for the agent teams. Even the largest plans are not enough on their own. Running a self-improving team of agents around the clock consumes tokens at a rate that bears no resemblance to what an individual chat user spends in a month. I'm not going to give exact numbers because they'll be out of date in weeks, but I'll say plainly that my monthly intelligence bill is the fastest-growing line item in my business and it's not close.

This isn't a complaint. The return on that spend is enormous. I'm getting work done that would otherwise require salaries an order of magnitude larger. But it's a real shift in how the maths works, and anyone moving into heavy AI use is going to walk this same path eventually. The free tier gets you started. The paid tier covers most individual use. The frontier of what I'm describing in this article costs real money, and it's only going to grow as the agents get more capable and the workloads get larger.

The implication for businesses is significant and underappreciated. If you're planning AI investment by buying a single seat per employee, you're budgeting for the previous era. The cost of intelligence per highly leveraged employee is going to climb meaningfully, because the people getting the most out of AI aren't using one tool occasionally. They're running serious workloads across multiple providers continuously. That's a different cost structure, and it needs to be factored into how organisations think about both budgets and ROI. The good news is that the productivity gains comfortably outweigh the spend. The bad news is that the spend isn't a rounding error anymore.


I'm not going to pretend this is for everyone today. If you haven't done the foundational work I've described in earlier articles, this is the wrong place to start. You need to be fluent with single agents before you can run teams of them. You need to understand context and friction before you can design around them. The reason I'm writing about it now is not because I think most readers should rush into it tomorrow. It's because the trajectory is unmistakable, and the people who built fluency early are about to get another large multiplier.

The software to run this is still quite technical. It takes real effort to set up, and most people working with AI today wouldn't know where to start. But that's exactly where Claude Code was a year ago, before Cowork made the same kind of capability accessible to anyone with a Mac. Agent orchestration is on the same trajectory. The harnesses I'm using today will look like developer tools in hindsight. The versions that arrive in the next twelve months will be designed for everyone else, and the gap between "this is for engineers" and "this is for any serious knowledge worker" will close fast.

If you're earlier in the journey, the message is the same as it's been throughout this series. Keep moving. The base you're building right now is what makes the next layer accessible when you reach it. The people running agent teams six months from now are the people running single agents well today.

If you're already operating at the front, this is the next move. Find a harness. Set up a small team for a real project. Resist the temptation to micromanage them and see what happens when you let them coordinate with each other. The first time you watch your QA agent push back on your developer agent without you in the conversation, something shifts. You realise the centre of gravity has moved. You're not the engine anymore. You're the architect.

Your move, human.


Damien Healy is the founder of Qanara, an Australian AI consultancy helping businesses accelerate from strategy to impact. He writes about AI-native workflows, frontier AI capabilities, and practical transformation.

My LinkedIn articles are available via my post history.
