
In Part 1, we used Hayek’s theory of knowledge to establish a fundamental constraint: the knowledge required to run complex systems is dispersed, local, and often tacit. It exists in fragments embedded in people, processes, and environments, and cannot be fully centralized without losing the context that makes it meaningful.
Yet, when agentic systems struggle due to missing data or context, our instinct is rarely to decentralize. Instead, we try to build a smarter center.
We reach for the Omniscient Agent: a single, high-fidelity reasoning engine expected to ingest global state, resolve conflicting data, and act autonomously across the entire system.
Conceptually appealing, it nonetheless collapses under the weight of enterprise complexity.
Omniscient agents eventually break against a hard wall: intelligence does not scale linearly with scope. As scope expands, dependencies multiply. Interactions become nonlinear. Edge cases explode.
Mathematically, the complexity is staggering:
Ten components with just three possible states each already create 3¹⁰ (59,049) possible system configurations.
A thousand such components? 3¹⁰⁰⁰ configurations, vastly more than the roughly 10⁸⁰ atoms in the observable universe.
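The arithmetic above is easy to verify. A minimal sketch (the `config_count` helper is illustrative, not from any library):

```python
import math

def config_count(components: int, states_per_component: int) -> int:
    """Total configurations of a system whose components each take
    one of a fixed number of discrete states: states ** components."""
    return states_per_component ** components

# Ten three-state components: 3**10 possible configurations.
print(config_count(10, 3))  # 59049

# A thousand three-state components: the count has ~478 digits,
# dwarfing the ~10**80 atoms estimated in the observable universe.
huge = config_count(1000, 3)
print(int(math.log10(huge)) + 1)  # number of digits
print(huge > 10**80)              # True
```

The point is not the exact figure but the growth rate: each added component multiplies the configuration space, so the space grows exponentially while any central reasoner's capacity grows at best linearly.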
This isn't a hardware limitation solved by more compute. It is a structural bottleneck where the problem space expands faster than any central reasoning process can cover.
Organizations learned this lesson long ago: the sheer volume and resolution of localized data outpace the bandwidth of a single decision-maker. Rather than attempting to process every granular event from the top down, resilient organizations rely on distributed authority to overcome the inherent bottleneck of centralized reasoning.
In software, we recognize this as the “God Object” anti-pattern: a component that tries to know and control everything. It is not just hard to maintain; it is impossible to complete.
Complex systems, whether they are agents or microservices, do not maintain effectiveness and robustness through global understanding.
They maintain both through local adaptation.
This means the agent closest to a specific problem uses its immediate, contextual knowledge to react to changes, rather than waiting for a central authority to process a global map that will always be incomplete.
If global visibility is unattainable, what could replace it?
Hayek’s answer wasn’t to demand more intelligence from the center, but to rely on a different form of coordination: Signals.
Signals are changes that matter enough to trigger action: a shift in priority, a rising risk, a missed response, a threshold crossed.
Participants in a system, whether they are human experts or AI agents, do not need to understand the whole system to respond coherently to these changes. Effective agentic systems don’t ask agents to understand everything; they ask them to respond intelligently to what has changed within a clearly defined scope.
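This scoped, signal-driven pattern can be sketched in a few lines. The example below is a hypothetical illustration (the `Signal` fields, scope names, and handler rule are all invented for demonstration): an agent subscribes to a narrow scope, reacts with local rules, and simply ignores everything outside it, with no global state consulted.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Signal:
    name: str    # what changed, e.g. "queue_depth_exceeded"
    scope: str   # the subsystem it concerns, e.g. "billing"
    value: float # the measurement that crossed a threshold

class LocalAgent:
    """An agent that responds only to signals within its own scope."""

    def __init__(self, scope: str):
        self.scope = scope
        self.handlers: dict[str, Callable[[Signal], str]] = {}

    def on(self, name: str, handler: Callable[[Signal], str]) -> None:
        self.handlers[name] = handler

    def handle(self, signal: Signal) -> Optional[str]:
        # Out-of-scope or unknown signals are ignored: no global map needed.
        if signal.scope != self.scope or signal.name not in self.handlers:
            return None
        return self.handlers[signal.name](signal)

agent = LocalAgent(scope="billing")
agent.on("queue_depth_exceeded",
         lambda s: f"scale workers (depth={s.value:.0f})")

print(agent.handle(Signal("queue_depth_exceeded", "billing", 1200)))
print(agent.handle(Signal("queue_depth_exceeded", "checkout", 900)))  # None
```

The agent never asks "what is the state of the whole system?" It asks only "did something within my scope change enough to matter?", which is exactly the coordination mode the signal concept describes.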
The goal is not to build an all-knowing machine.
It is to build a network of agents that can function effectively despite never knowing enough.
Up Next
In Part 3, we will go deeper into Signals.
What makes a good signal?
How do you design agents that coordinate without a shared state?
And why do signal-driven systems consistently outperform systems that try to model the entire world?
Check out Part 1: On Building Agents & Agentic Workflows.