
Agentic Workflows: It Is Not Just a Technology Problem

Everyone is talking about AI agents. Few are talking about what it actually takes to incorporate agentic workflows into an organization -- the people, process, and data foundations that determine success or failure.

By Sean McInerney · AI Strategy, Digital Transformation

The AI industry has a new favorite word: agentic -- autonomous AI agents that can reason, plan, use tools, and execute multi-step workflows without constant human supervision. The technology is real, it is improving rapidly, and it will fundamentally change how knowledge work gets done.

But here is what twenty-five years of watching technology transformations has taught me: the technology is rarely the hard part. The hard part is everything around it -- the people who need to trust it, the processes that need to accommodate it, the data that needs to feed it, and the organizational culture that needs to accept a fundamentally different way of working.

[Image: Team planning workflow changes]

The Four Dimensions of Agentic Readiness

When I work with clients on AI integration strategy, I use a framework that goes beyond the technology conversation. Before we talk about which agent framework to use or which LLM to deploy, we need to assess readiness across four dimensions: people, process, tools, and data.

People

This is where most agentic initiatives stall -- not because the technology does not work, but because the humans in the system are not ready for how their work is about to change.

An AI agent that can draft customer communications, route support tickets, or generate financial reports is only useful if the people who currently do those tasks are willing to shift from doing to overseeing. That is a profound psychological change. People derive professional identity and job security from their ability to execute specific tasks. Telling them that an AI will now handle the execution while they handle the judgment and exceptions is not inherently reassuring -- even when it is true.

Organizations that succeed at agentic adoption invest in change management before they invest in technology. They identify the roles that will be most affected, have honest conversations about how those roles will evolve, and provide training that focuses on the new skills -- prompt engineering, output evaluation, exception handling, and quality assurance of AI-generated work.

The worst approach is to introduce agents quietly and hope people figure it out. That breeds resentment, shadow workarounds, and ultimately rejection of tools that could genuinely make people's work more interesting and impactful.

Process

Every agentic workflow sits inside a broader business process, and that process was designed for humans. The approval chains, the handoff points, the escalation paths, the quality checkpoints -- all of it assumes that a person is doing the work and another person is reviewing it.

When an agent takes over part of that process, you cannot just slot it in and leave everything else unchanged. The process needs to be redesigned around the new reality:

Where does human judgment still matter? Not every step needs human oversight, but some absolutely do. Identifying these critical control points and designing appropriate review mechanisms is essential. An agent that drafts a customer email might not need approval for routine responses, but a response to a complaint from a major account probably should route to a human.
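That kind of critical control point can be expressed directly in routing logic. The sketch below is illustrative only -- the `DraftedReply` shape, the tier names, and the confidence threshold are assumptions, not any particular product's API:

```python
from dataclasses import dataclass

@dataclass
class DraftedReply:
    account_tier: str   # e.g. "major" or "standard" (illustrative tiers)
    is_complaint: bool
    confidence: float   # agent's self-reported confidence, 0..1

def needs_human_review(reply: DraftedReply, confidence_floor: float = 0.8) -> bool:
    """Route to a human when the stakes are high or the agent is unsure."""
    if reply.is_complaint and reply.account_tier == "major":
        return True  # critical control point: always review
    # Low-confidence drafts get a second look even on routine tickets.
    return reply.confidence < confidence_floor
```

Routine, high-confidence responses flow straight through; anything touching a major account's complaint always stops at a person.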

What happens when the agent fails? Every agent will produce bad output sometimes. The process needs graceful fallback paths that do not require someone to manually reconstruct what the agent was supposed to do. This means maintaining enough process documentation and institutional knowledge that human takeover is always possible.
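A graceful fallback path can be as simple as a wrapper that preserves the full task context for human takeover. This is a minimal sketch; `run_agent` and the queue structure are hypothetical stand-ins for whatever your stack uses:

```python
def generate_report(task: dict, run_agent, human_queue: list) -> str:
    """Try the agent first; on failure, hand the full task context to a human."""
    try:
        output = run_agent(task)
        if not output:  # treat empty output as a failure too
            raise ValueError("empty agent output")
        return output
    except Exception as exc:
        # Graceful fallback: preserve everything a person needs to take over,
        # so nobody has to reconstruct what the agent was supposed to do.
        human_queue.append({"task": task, "error": str(exc)})
        return "escalated-to-human"
```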

How do you measure quality? The metrics that work for human-executed processes may not work for agent-executed ones. A human customer service representative might handle 30 tickets per day with a 92 percent satisfaction rate. An agent might handle 300 tickets per day -- but how do you measure satisfaction when the customer does not know they are talking to an agent? You need new quality frameworks.
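One common-sense building block for a new quality framework is blind sampling: pull a reproducible random slice of agent-handled tickets for human QA scoring, rather than relying on customer surveys alone. A small sketch, with all names illustrative:

```python
import random

def qa_sample(ticket_ids: list, rate: float = 0.05, seed: int = 0) -> list:
    """Pick a reproducible random sample of agent-handled tickets for human QA.

    A fixed seed makes the sample auditable: the same inputs always
    yield the same review set.
    """
    rng = random.Random(seed)
    k = max(1, int(len(ticket_ids) * rate))  # always review at least one
    return rng.sample(ticket_ids, k)
```

At 300 tickets a day and a 5 percent rate, humans score 15 tickets -- enough to track a satisfaction-proxy metric over time without reviewing everything.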

Tools

The tools dimension is where most organizations start, and ironically where they need the least help. The agent frameworks, LLM APIs, orchestration platforms, and integration tools are evolving rapidly and becoming increasingly accessible. The technology is not the bottleneck.

What matters more is the tool ecosystem around the agents:

Observability. You need to see what agents are doing, why they are making specific decisions, and where they are struggling. This is not just logging -- it is interpretable auditability that lets a human understand the agent's reasoning chain after the fact.
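In practice this means emitting a structured record per decision, not just free-text logs. The field names below are assumptions, meant only to show the shape of an auditable reasoning-chain entry:

```python
import json
import time

def log_decision(agent: str, step: str, inputs: dict,
                 rationale: str, action: str) -> str:
    """Emit one auditable record per decision so a human can replay the chain."""
    record = {
        "ts": time.time(),       # when the decision happened
        "agent": agent,          # which agent made it
        "step": step,            # where in the workflow it sits
        "inputs": inputs,        # what the agent saw
        "rationale": rationale,  # why it chose this action
        "action": action,        # what it actually did
    }
    return json.dumps(record)
```

Because each entry is machine-readable, you can later query for patterns -- every low-confidence step, every decision touching a given account -- instead of grepping prose.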

Guardrails. Agents need boundaries. What systems can they access? What actions can they take autonomously versus what requires approval? What topics or decisions are out of scope? These guardrails need to be configurable without redeploying the agent.
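"Configurable without redeploying" usually means the policy lives in data (a config file reloaded at runtime) rather than in code. A default-deny sketch, where the policy shape is an assumption rather than any standard:

```python
# Policy would normally be loaded from a YAML/JSON file at runtime so that
# tightening an agent's scope never requires a redeploy.
POLICY = {
    "allowed_systems": {"crm", "ticketing"},
    "autonomous_actions": {"draft_reply", "tag_ticket"},
    "approval_required": {"issue_refund"},
}

def check_action(policy: dict, system: str, action: str) -> str:
    """Return 'allow', 'require_approval', or 'deny' for a proposed action."""
    if system not in policy["allowed_systems"]:
        return "deny"  # out-of-scope system, full stop
    if action in policy["autonomous_actions"]:
        return "allow"
    if action in policy["approval_required"]:
        return "require_approval"
    return "deny"  # default-deny anything unlisted
```

Default-deny is the important design choice: an action the policy has never heard of is blocked, not waved through.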

Integration. The most valuable agents are the ones that can interact with the systems your organization already uses -- your CRM, your project management tools, your communication platforms, your data warehouses. API access, authentication, and data flow architecture matter more than which LLM you choose.

Data

Data is the foundation that everything else rests on, and it is usually the ugliest dimension to address. Agentic workflows are only as good as the data they consume and produce.

Data quality. An agent that pulls from a CRM full of duplicates, outdated records, and inconsistent formatting will produce unreliable outputs. The old rule applies with even more force: garbage in, garbage out -- but now at machine speed.
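A cheap defense is a pre-flight hygiene pass before records ever reach the agent. The field names below assume a simple CRM export and are purely illustrative:

```python
def clean_contacts(records: list[dict]) -> list[dict]:
    """Drop rows with missing emails and collapse case-insensitive duplicates."""
    seen = set()
    cleaned = []
    for rec in records:
        email = (rec.get("email") or "").strip().lower()
        if not email or email in seen:
            continue  # skip blanks and duplicates rather than feed them onward
        seen.add(email)
        cleaned.append({**rec, "email": email})
    return cleaned
```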

Data access and governance. Agents need access to data to be useful, but that access needs to be governed. An agent handling customer inquiries needs access to account information but should not have access to internal financial data. Role-based access control needs to extend to AI agents, not just human users.
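Extending role-based access to agents can start as simply as giving each agent identity an explicit scope set, checked on every data request. Scope names here are illustrative assumptions:

```python
# Each agent identity maps to the scopes its role explicitly grants.
AGENT_SCOPES = {
    "support-agent": {"accounts:read", "tickets:read", "tickets:write"},
}

def authorize(agent_id: str, scope: str) -> bool:
    """Default-deny: an agent only reaches data its role explicitly grants."""
    return scope in AGENT_SCOPES.get(agent_id, set())
```

The support agent can read account data, but a request for financial scopes -- or a request from an unregistered agent -- fails closed.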

Data feedback loops. The most effective agentic systems learn from their outputs. This requires structured data collection about what the agent did, what the outcome was, and where human corrections were needed. Without this feedback loop, the agent never improves beyond its initial capability.

Start Small, Learn Fast

The organizations I see succeeding with agentic workflows are not the ones making the biggest bets. They are the ones starting with contained, well-defined use cases where the four dimensions are already relatively strong:

  • A team with good data hygiene
  • A process with clear decision criteria
  • People who are curious rather than threatened
  • Existing tools with solid API access

They run the agent alongside the human process for a period, compare outputs, identify gaps, and iterate. They build organizational muscle and institutional knowledge about how to work with agents before they scale.

The organizations that struggle are the ones that treat agentic AI as a technology deployment rather than an organizational transformation. They buy the platform, build the agents, and wonder why adoption stalls at 15 percent.

The Advisory Gap

This is where I see the biggest unmet need in the market. There are plenty of companies that can build you an AI agent. There are very few that can help you prepare your organization to actually use one effectively.

The work of assessing readiness across people, process, tools, and data -- and then sequencing the changes so that each dimension supports the others -- is fundamentally strategic work. It requires understanding both the technology and the organizational dynamics, and being honest about which dimension needs attention first.

Usually, it is not the technology.
