On Building Agents & Agentic Workflows (Part 1)

Greg Ainbinder, Co-founder and CTO
January 19, 2026

The Knowledge Problem: The Hidden Reason Smart Systems Fail

We keep building smarter systems.

Bigger data platforms. More sophisticated dashboards. LLMs trained on vast swaths of human knowledge. Agentic workflows that promise autonomy, reasoning, and coordination.

And yet, in practice, something keeps breaking.

Clients don’t trust the dashboards. Engineers override “automated” decisions. Security teams ignore alerts they know are wrong. And the more intelligent the system becomes, the more brittle it often feels.

This is not a failure of compute. It’s not a failure of modeling. And it’s not a temporary maturity issue.

It’s a knowledge problem.

Nearly a century ago, Friedrich Hayek articulated this problem with striking clarity. Although he was writing about economies and central planning, his theory of knowledge is a remarkably useful lens for understanding why modern data platforms, AI systems, and emerging agentic architectures struggle in real organizations.

The Assumption We Keep Making

Modern data and AI systems often rest on a quiet assumption:

If we can collect and centralize enough data, and reason over it, systems can act with increasing autonomy at a scale humans cannot sustain.

This assumption rarely appears as an explicit design goal. It shows up indirectly - in architecture decisions and operating models:

  • “Single source of truth” data platforms
  • Treating data lakes as a prerequisite for “intelligence”
  • Pushing for end-to-end automation in domains that remain deeply contextual

Hayek’s insight was that this assumption breaks, not because we lack intelligence, but because of the nature of knowledge itself.

Hayek’s Core Insight: Knowledge is Dispersed

Hayek’s core claim is simple, but deeply unsettling for system designers:

The knowledge required to run a complex system is dispersed.

It is spread across individuals, teams, and moments in time. No single actor ever possesses more than a fragment of what matters.

In real organizations, this isn't abstract. It looks like:

  • A frontline engineer who knows which workaround actually works, despite what the documentation says
  • A security engineer who knows that a “Critical” vulnerability doesn't matter on this specific server because it's air-gapped
  • A team that knows the ownership tag is wrong in the CMDB, but right in the Slack channel description

This knowledge is local, contextual, temporal, and unevenly distributed.

By the time it is centralized, normalized, and modeled into a "golden record," much of its meaning has already decayed.

The Hard Part: Tacit Knowledge

Even worse, Hayek emphasized that some of the most important knowledge is tacit.

Tacit knowledge is knowledge people use but cannot fully articulate. You see it when:

  • Someone “just knows” an alert isn’t worth escalating
  • A team “senses” that a remediation will break something downstream
  • An experienced operator notices that “this incident feels different”

Tacit knowledge resists formalization. That’s why adding more rules, schemas, or reasoning layers can make systems worse.

These systems don’t fail loudly. They fail quietly by replacing judgment with false certainty.

Systems begin to execute where they should escalate, and to act decisively in situations where judgment, not speed, is the scarce resource.
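
To make the "escalate rather than execute" idea concrete, here is a minimal, illustrative sketch in Python of an action guard: the agent acts only when it is both confident and aware of no missing context, and otherwise hands the decision to a human along with what it knows it doesn't know. The names here (Proposal, decide, the 0.9 threshold) are hypothetical choices for the example, not a reference implementation.

  from dataclasses import dataclass, field

  @dataclass
  class Proposal:
      action: str                 # what the agent wants to do
      confidence: float           # the agent's own estimate, 0.0-1.0
      missing_context: list[str] = field(default_factory=list)  # known unknowns

  def decide(proposal: Proposal, threshold: float = 0.9):
      """Execute only when confidence is high and no known gaps remain;
      otherwise escalate to a human, with the gaps spelled out."""
      if proposal.confidence >= threshold and not proposal.missing_context:
          return ("execute", proposal.action)
      return ("escalate", {
          "proposed": proposal.action,
          "confidence": proposal.confidence,
          "unknowns": proposal.missing_context or ["unstated / tacit factors"],
      })

  # A remediation the agent cannot fully justify on its own:
  print(decide(Proposal(
      action="reboot prod-db-03 to apply patch",
      confidence=0.72,
      missing_context=["is this host air-gapped?", "downstream consumers"],
  )))
  # -> ('escalate', {...})  judgment, not speed, is the scarce resource here

The point is not the particular threshold but where the default lies: when the agent's picture of the world is incomplete, the safe behavior is to surface the gap, not to paper over it with false certainty.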

Why This Matters for Agents

Agentic AI is often framed as a leap toward autonomy and intelligence.

But autonomy built on the illusion of complete understanding is dangerous.

If knowledge is fragmented, contextual, tacit, and constantly evolving, then agents designed around global understanding are guaranteed to be brittle.

The question is no longer: “How do we make agents smarter?”

It’s: “How do we design agents that operate effectively despite never knowing enough?”

That is the real design challenge - and the starting point for this series on building agents and agentic workflows.

Where This Series Is Going

Hayek didn't just diagnose the problem; he hinted at the solution. He observed that complex systems like markets work not through global understanding, but through signals.

This distinction between global omniscience and signal-based coordination is the key to building agents that actually work.

In the next few posts, we will explore:

  • The Myth of the Omniscient Agent: Why "one smart brain" is the wrong mental model
  • Signals Over State: How to design agents that coordinate without shared world models
  • Tacit Knowledge: Why human judgment is a permanent design element, not a temporary crutch

The goal is not to romanticize human intuition. It’s to design systems that work with the grain of reality, rather than fighting it.

Next: The Myth of the Omniscient Agent - Why “One Smart Agent” is the Wrong Mental Model.

Greg Ainbinder

Greg is a technology leader with over 20 years in development and R&D, specializing in artificial intelligence, big data, and cloud computing. He founded the AI unit in 8200, led cloud and big data R&D, and delivered core data and multilingual AI systems. He holds a BSc from the Technion and an MBA from Tel Aviv University.
