
We keep building smarter systems.
Bigger data platforms. More sophisticated dashboards. LLMs trained on vast swaths of human knowledge. Agentic workflows that promise autonomy, reasoning, and coordination.
And yet, in practice, something keeps breaking.
Clients don’t trust the dashboards. Engineers override “automated” decisions. Security teams ignore alerts they know are wrong. And the more intelligent the system becomes, the more brittle it often feels.
This is not a failure of compute. It’s not a failure of modeling. And it’s not a temporary maturity issue.
It’s a knowledge problem.
Nearly a century ago, Friedrich Hayek articulated this problem with striking clarity. Although he was writing about economies and central planning, his theory of knowledge is a remarkably useful lens for understanding why modern data platforms, AI systems, and emerging agentic architectures struggle in real organizations.
Modern data and AI systems often rest on a quiet assumption:
If we can collect and centralize enough data, and reason over it, systems can act with increasing autonomy at a scale humans cannot sustain.
This assumption rarely appears as an explicit design goal. It shows up indirectly - in architecture decisions and operating models.
Hayek’s insight was that this assumption breaks, not because we lack intelligence, but because of the nature of knowledge itself.
Hayek’s core claim is simple, but deeply unsettling for system designers:
The knowledge required to run a complex system is dispersed.
It is spread across individuals, teams, and moments in time. No single actor ever possesses more than a fragment of what matters.
In real organizations, this isn't abstract. It is the engineer who knows when an "automated" decision should be overridden, the security analyst who knows which alerts to ignore, the team that knows what the dashboard is quietly getting wrong.
This knowledge is local, contextual, temporal, and unevenly distributed.
By the time it is centralized, normalized, and modeled into a "golden record," much of its meaning has already decayed.
Even worse, Hayek emphasized that some of the most important knowledge is tacit.
Tacit knowledge is knowledge people use but cannot fully articulate. You see it when an engineer overrides a decision they can't fully justify on paper, or when an analyst dismisses an alert they simply know is wrong.
Tacit knowledge resists formalization. That’s why adding more rules, schemas, or reasoning layers can make systems worse.
These systems don’t fail loudly. They fail quietly by replacing judgment with false certainty.
Systems begin to execute where they should escalate, and to act decisively in situations where judgment, not speed, is the scarce resource.
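To make the alternative concrete, here is a minimal, hypothetical sketch. The Decision type, the act_or_escalate function, and the 0.8 threshold are illustrative assumptions, not a prescribed design: an agent executes only when its own confidence clears a bar, and otherwise escalates to a human along with the context it saw.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # what the agent wants to do
    confidence: float  # the agent's own estimate, 0.0 to 1.0
    context: dict      # the local evidence behind the decision

def act_or_escalate(decision: Decision, threshold: float = 0.8) -> str:
    """Execute only when confidence clears the bar; otherwise hand the
    decision back to a human, together with the context that produced it."""
    if decision.confidence >= threshold:
        return f"EXECUTE: {decision.action}"
    # Escalation preserves the local context a central model would discard.
    return f"ESCALATE: {decision.action} (confidence={decision.confidence:.2f})"

# Example: a remediation the agent is unsure about goes to a person.
print(act_or_escalate(Decision("restart payment service", 0.55, {"open_alerts": 3})))
```

The point is not the particular threshold; it is that escalation is a first-class outcome of the design, not a failure mode.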
Agentic AI is often framed as a leap toward autonomy and intelligence.
But autonomy built on the illusion of complete understanding is dangerous.
If knowledge is fragmented, contextual, tacit, and constantly evolving, then agents designed around global understanding are guaranteed to be brittle.
The question is no longer: “How do we make agents smarter?”
It’s: “How do we design agents that operate effectively despite never knowing enough?”
That is the real design challenge - and the starting point for this series on building agents and agentic workflows.
Hayek didn't just diagnose the problem; he hinted at the solution. He observed that complex systems like markets work not through global understanding, but through signals.
This distinction between global omniscience and signal-based coordination is the key to building agents that actually work.
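As a rough illustration only, with every name (SignalBus, LocalAgent) invented for the example, here is what coordination through signals rather than shared global state might look like in code: each agent compresses its local context into a small signal, and peers decide using the signal alone.

```python
from collections import defaultdict

class SignalBus:
    """Agents coordinate through compact signals (like prices),
    never by sharing their full local state."""
    def __init__(self):
        self.signals = defaultdict(float)

    def publish(self, name: str, value: float) -> None:
        self.signals[name] = value

    def read(self, name: str) -> float:
        return self.signals[name]

class LocalAgent:
    def __init__(self, name: str, bus: SignalBus):
        self.name, self.bus = name, bus

    def observe_and_signal(self, local_load: float) -> None:
        # The agent summarizes its local, possibly tacit context into one number.
        self.bus.publish(f"{self.name}.load", local_load)

    def decide(self, peer: str) -> str:
        # Decisions use the peer's signal, not the peer's raw data.
        return "defer work to peer" if self.bus.read(f"{peer}.load") < 0.5 else "keep work local"

bus = SignalBus()
ingest, enrich = LocalAgent("ingest", bus), LocalAgent("enrich", bus)
enrich.observe_and_signal(0.3)   # enrich reports spare capacity
print(ingest.decide("enrich"))   # ingest coordinates on the signal alone
```

Neither agent ever sees the other's raw data; like a price, the signal carries just enough information to coordinate on.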
In the next few posts, we will explore how to design agents and agentic workflows around signals, escalation, and local knowledge rather than global understanding.
The goal is not to romanticize human intuition. It's to design systems that work with the grain of reality, rather than fighting it.
Next: The Myth of the Omniscient Agent - Why “One Smart Agent” is the Wrong Mental Model.

Greg is a technology leader with over 20 years in development and R&D, specializing in artificial intelligence, big data, and cloud computing. He founded the AI unit in 8200, led cloud and big data R&D, and delivered core data and multilingual AI systems. He holds a BSc from the Technion and an MBA from Tel Aviv University.