Agentic AI in Martech: The new org chart

A friend recently asked me what a marketing ops manager actually does when AI agents are handling campaign execution, optimization, and conflict resolution. It's a fair question. If systems are making billions of decisions autonomously, what's left for us mortal humans?
My first instinct was to give the standard consultant answer about "strategic oversight" and "creative direction." But the more I thought about it, the more I realized that's probably wrong. The interesting human roles in an agentic world are likely to be things we haven't fully invented yet.
Beyond the "Human in the Loop" fiction
Most discussions about AI and jobs fall into two camps: either AI will replace everyone, or humans will always be "in the loop" providing oversight and approval. Both seem misguided when you consider the math we covered in Part 1.
If your AI is making 200 billion decisions per week, you're not in the loop. You can't be. That's more than 300,000 decisions every second; the loop is moving too fast and at too large a scale for meaningful human participation in individual decisions.

But that doesn't mean humans become irrelevant. It means human work evolves to operate at different scales and on different timeframes than the AI's work.
The emerging roles
Based on conversations with teams experimenting with agentic systems, I'm seeing several new categories of human work emerge. The job titles below are suggestive rather than definitive, but like anything new, we have to start somewhere and see where it leads:
- Context Architects design the frameworks within which agents operate. They don't make individual campaign decisions, but they define what good outcomes look like and create the measurement systems that help agents learn. They're essentially building the environment that shapes AI behavior rather than controlling specific AI actions.
- Agent Whisperers specialize in understanding how individual AI systems "think" and helping them work together more effectively. When agents conflict or produce unexpected results, these people can interpret what the system was optimizing for and adjust its parameters or training data accordingly.
- Outcome Guardians focus on monitoring the cumulative effects of billions of micro-decisions. They're less concerned with why the AI chose to send a particular email and more focused on whether the overall pattern of decisions is moving business metrics in the right direction.
- Exception Handlers deal with the edge cases and novel situations that fall outside the AI's training. When the system encounters something genuinely new, these people provide judgment and help the AI learn how to handle similar situations in the future.
The pattern recognition shift
What strikes me about these roles is how they're all about pattern recognition rather than task execution. Humans become responsible for recognizing patterns that operate above the level of individual decisions: meta-patterns about how the AI is learning, how different systems are interacting, and how customer behavior is evolving.

This requires a different kind of analytical thinking than traditional marketing ops. Instead of optimizing campaigns, you're optimizing the systems that optimize campaigns. Instead of segmenting customers, you're designing the frameworks that help AI discover better segmentation strategies.
It's a bit like the difference between playing chess and coaching a chess player. The coach doesn't make moves during the game, but they shape how the player thinks about strategy, help them learn from mistakes, and prepare them for situations they haven't encountered before.
Delegation design
One of the most interesting challenges is figuring out what to delegate and what to keep. It's a business design problem that requires understanding both AI capabilities and organizational dynamics.
Some teams are experimenting with progressive delegation, where they gradually expand the scope of decisions they're comfortable outsourcing to AI. They might start with email send time optimization, then move to subject line testing, then to audience selection, then to offer personalization.
Others are taking a domain-based approach, where certain business areas (like customer retention) become fully autonomous while others (like brand messaging) remain human-controlled.
But perhaps most intriguingly, some teams are designing what I think of as "collaborative domains" where AI handles execution while humans handle strategy and exception management. The AI runs the day-to-day optimization, but humans design the experiments, interpret unusual results, and adjust the overall direction.
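To make this concrete, here is a minimal sketch of what writing down those delegation boundaries could look like. Everything in it is an assumption for illustration: the decision names, the autonomy levels, and the `can_auto_execute` helper are hypothetical, just one way a team might encode which decisions the AI may take on its own and which still route to a human.

```python
# Hypothetical sketch of a delegation policy: which decision types the AI
# may execute autonomously, which need human sign-off, and which stay human-only.
from enum import Enum

class Autonomy(Enum):
    AI_AUTONOMOUS = "ai_autonomous"      # AI decides and executes
    HUMAN_APPROVAL = "human_approval"    # AI proposes, a human approves
    HUMAN_ONLY = "human_only"            # humans decide, AI may assist

# Progressive delegation: scope expands one decision type at a time.
DELEGATION_POLICY = {
    "email_send_time": Autonomy.AI_AUTONOMOUS,      # delegated first
    "subject_line_test": Autonomy.AI_AUTONOMOUS,    # delegated next
    "audience_selection": Autonomy.HUMAN_APPROVAL,  # trust still being earned
    "offer_personalization": Autonomy.HUMAN_APPROVAL,
    "brand_messaging": Autonomy.HUMAN_ONLY,         # domain kept human-controlled
}

def can_auto_execute(decision_type: str) -> bool:
    """Return True only for decisions the AI is trusted to execute on its own."""
    return DELEGATION_POLICY.get(decision_type, Autonomy.HUMAN_ONLY) is Autonomy.AI_AUTONOMOUS

if __name__ == "__main__":
    for decision in ("email_send_time", "audience_selection", "brand_messaging"):
        print(decision, "->", "auto" if can_auto_execute(decision) else "route to human")
```

The point isn't the code. It's that the delegation boundary becomes an explicit, reviewable artifact instead of an unspoken assumption about what the AI is "allowed" to do.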
Calibrating trust
All of these new roles share a common challenge: calibrating trust appropriately. You need to trust the AI enough to let it operate autonomously, but not so much that you stop paying attention to what it's doing.

This is harder than it sounds because AI systems can be simultaneously incredibly reliable and completely unpredictable. They might optimize email campaigns flawlessly for months and then suddenly start recommending bizarre send times because they detected a pattern in the data that doesn't actually exist.
The humans who work well with agentic systems seem to develop an intuitive sense for when AI behavior feels "off" even when the metrics look fine. They're not monitoring individual decisions, but they're monitoring the system's overall behavior patterns for signs of drift or confusion.
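For the Outcome Guardian style of monitoring, the question isn't "was this individual decision right?" but "does this week's pattern of decisions still look like the pattern we trusted last quarter?" Here's a hedged sketch of that idea: comparing the current distribution of AI-chosen send hours against a trusted baseline and flagging drift when they diverge. The metric and the threshold are illustrative assumptions, not a recommendation.

```python
# Hypothetical drift check: compare the distribution of AI-chosen send hours
# in a recent window against a baseline period we already trust.
from collections import Counter

def hour_distribution(send_hours: list[int]) -> list[float]:
    """Share of sends that landed in each hour of the day (0-23)."""
    counts = Counter(send_hours)
    total = len(send_hours) or 1
    return [counts.get(h, 0) / total for h in range(24)]

def total_variation_distance(p: list[float], q: list[float]) -> float:
    """0.0 means identical behavior patterns; 1.0 means completely different."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def looks_off(baseline_hours: list[int], recent_hours: list[int], threshold: float = 0.25) -> bool:
    """Flag for human review when the recent pattern drifts too far from baseline."""
    drift = total_variation_distance(hour_distribution(baseline_hours),
                                     hour_distribution(recent_hours))
    return drift > threshold

if __name__ == "__main__":
    baseline = [9, 10, 10, 11, 14, 15] * 100   # months of sensible send times
    recent = [2, 3, 3, 4, 10, 11] * 100        # suddenly favoring 3 a.m.
    print("Needs a human look:", looks_off(baseline, recent))
```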
Skills translation for us humans
Traditional marketing ops skills don't disappear in an agentic world, but they do translate differently. Understanding segmentation logic helps you design better frameworks for AI-driven personalization. Knowing how email deliverability works helps you set appropriate guardrails for AI-generated send strategies.
But there are also entirely new skills emerging. People need to understand how to interpret AI confidence scores, how to design effective feedback loops, how to debug systems that learn and adapt autonomously.
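"Interpreting confidence scores" sounds abstract, so here's one hedged sketch of a common pattern: high-confidence decisions execute automatically, borderline ones are executed but logged for review, and low-confidence or genuinely novel situations go straight to an Exception Handler. The thresholds and field names below are assumptions for illustration only.

```python
# Hypothetical routing of agent decisions based on the confidence the agent reports.
from dataclasses import dataclass

@dataclass
class AgentDecision:
    action: str            # e.g. "send_winback_offer"
    confidence: float      # agent-reported score between 0.0 and 1.0
    novel_situation: bool  # True if the input falls outside known patterns

def route(decision: AgentDecision) -> str:
    """Decide whether to auto-execute, log for review, or escalate to a human."""
    if decision.novel_situation:
        return "escalate_to_exception_handler"  # genuinely new: human judgment needed
    if decision.confidence >= 0.90:
        return "auto_execute"                   # trusted zone
    if decision.confidence >= 0.60:
        return "execute_and_log_for_review"     # fine, but feed back into learning
    return "escalate_to_exception_handler"      # too uncertain to act alone

if __name__ == "__main__":
    print(route(AgentDecision("send_winback_offer", 0.95, False)))
    print(route(AgentDecision("change_discount_depth", 0.70, False)))
    print(route(AgentDecision("respond_to_viral_complaint", 0.88, True)))
```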
I'm convinced that some of the most valuable marketing professionals in the coming years will be those who can bridge traditional marketing intuition and AI system behavior. They'll understand both what good marketing feels like and how to shape AI systems to consistently deliver that feeling.
The collaboration evolution
Perhaps the biggest change is that working with agentic AI starts to feel less like using tools and more like managing a very strange team. You can't micromanage AI agents any more than you can micromanage talented employees. But you can set clear expectations, provide good feedback, and create environments that encourage the behavior you want.
This requires developing new management intuitions. How do you motivate a system that doesn't have emotions? How do you provide feedback to something that learns differently than humans? How do you build trust with a team member that makes decisions at superhuman speed but sometimes for incomprehensible reasons?
These questions don't have established answers yet, which makes this an interesting time to be experimenting with agentic systems. The teams that figure out effective human-AI collaboration patterns early will have developed valuable organizational capabilities that are hard for competitors to replicate.
Of course, all of this assumes you can actually trust systems you can't fully understand, which brings us to an even more fundamental challenge about building confidence in autonomous agents.
What new roles are emerging in your organization as AI takes on more execution? Are you seeing different patterns of human-AI collaboration? I'm especially curious about how teams are handling the transition from direct control to outcome-focused oversight.
Read the Agentic AI in Martech series:
- Part 1: The billion decision problem
- Part 2: From complicated to complex
- Part 3: When agents disagree
- Part 5: Trust without understanding
- Part 6: The Handoff
👥 Connect with me on LinkedIn
📲 Follow Martech Therapy on:
- Instagram: https://www.instagram.com/martechtherapy/
- YouTube: https://www.youtube.com/@martechtherapy
- Bluesky: https://bsky.app/profile/martechtherapy.bsky.social
Be the first to know when Part 5 drops. Subscribe for free today 👇🏻