
A Note on the Series
This series explores the messy reality of building agentic workflows in the wild, moving past clean demos to address the volatile data, shifting contexts, and high-stakes consequences of real-world organizations.
The Story So Far
Part 1: We established that knowledge is dispersed, messy, and contextual - it can’t be centralized without losing its vital context.
Part 2: We dismantled the "Omniscient Agent" myth, proving that a central brain cannot reason through the infinite complexities of a real organization.
The alternative to the "Omniscient Agent" isn't a smarter central brain; it’s a better nervous system.
To build resilient agentic workflows that don't break in production, we must abandon the pursuit of Global State and embrace the precision of Signals.
State is the massive, comprehensive record of everything in your environment—like a sprawling enterprise data lake or an interconnected IT service management graph. Since we already established that centralizing all this knowledge is fundamentally flawed, forcing an agent to constantly parse this global database of stale and incomplete data just to detect a relevant state change guarantees context bloat, blind spots, and hallucinations.
A Signal, by contrast, is a precise, actionable trigger. It’s a tap on the shoulder saying, "something changed." Instead of forcing an agent to pull and process a mountain of data to figure out if it needs to work, a signal pushes the exact context required for an immediate decision and action. It strips away the noise and delivers only the delta—what was found, what threshold was crossed, or what action is required next.
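The contrast can be sketched in a few lines of Python. This is a minimal illustration, not a fixed schema: the field names (source, kind, payload) and the registry contents are assumptions for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# State: a sprawling snapshot an agent would have to mine end to end
# just to discover whether anything relevant changed.
state_snapshot = {
    "servers": {
        f"srv-{i:03d}": {"packages": ["openssl-3.0.13", "nginx-1.25"],
                         "last_audit": "2024-04-01"}
        for i in range(500)
    },
    # ...thousands more keys, most of them stale or irrelevant
}

# Signal: only the delta, carrying the exact context needed to act now.
@dataclass
class Signal:
    source: str   # which agent or sensor emitted it
    kind: str     # e.g. "finding", "threshold_crossed"
    payload: dict # what changed, and nothing else
    emitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

signal = Signal(
    source="vuln-scanner",
    kind="finding",
    payload={"cve": "CVE-2024-3400", "asset": "Edge Firewall",
             "severity": "critical"},
)
```

The receiving agent reads a handful of fields instead of parsing five hundred server records; that asymmetry is the whole point of the signal.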
To coordinate independent agents without a shared brain, they need a clear vocabulary.
The golden rule of this vocabulary is: Signals are insights, not telemetry.
A stream of audit logs or a raw JSON dump of every software package installed on a server is just telemetry: a continuous, noisy stream of State, not a signal.
A Signal, however, must carry an actionable intent. In a decentralized agentic system, this intent takes four forms:
A Directive is a strict command to execute a specific outcome. When a Security Agent issues a directive like Action Required: Apply Patch KB5036335 to Database_Cluster_Alpha, it dictates what must happen, but completely delegates the how.
It doesn’t need to know how to safely patch the database or run backups - it leaves that domain expertise entirely to the specialist database agent.
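A directive can be sketched as a message that names the outcome while the receiving specialist owns the procedure. The message shape and the patching steps below are illustrative assumptions, not a prescribed format:

```python
# The directive dictates WHAT must happen...
directive = {
    "type": "directive",
    "from": "security-agent",
    "to": "database-agent",
    "action": "apply_patch",
    "patch": "KB5036335",
    "target": "Database_Cluster_Alpha",
}

class DatabaseAgent:
    """Specialist owning the HOW: backups, draining, patch order."""

    def handle(self, msg: dict) -> list[str]:
        if msg.get("type") == "directive" and msg.get("action") == "apply_patch":
            # The sender never sees these steps; they are this
            # agent's domain expertise, invisible to the issuer.
            return [
                f"snapshot {msg['target']}",
                "drain connections",
                f"install {msg['patch']}",
                "verify replication",
            ]
        return []

steps = DatabaseAgent().handle(directive)
```

Notice that the security agent's message contains no database knowledge at all; swapping the database team's procedure changes nothing on the sender's side.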
The Response is the critical closing loop of the agentic handshake. After receiving a directive, an agent must reply with a definitive callback, such as Task Complete: Patch Applied, System Stable or Task Blocked: Critical batch workload in progress.
Without this explicit signal, the sending agent might assume success and execute dependent downstream actions prematurely, leading to cascading failures.
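One way to sketch this closing loop: the sender gates all downstream work on an explicit callback rather than assumed success. The response shape and outcome names are assumptions for illustration:

```python
from enum import Enum

class Outcome(Enum):
    COMPLETE = "task_complete"
    BLOCKED = "task_blocked"

def respond(outcome: Outcome, detail: str) -> dict:
    # The definitive callback that closes the handshake.
    return {"type": "response", "outcome": outcome.value, "detail": detail}

def on_response(resp: dict) -> str:
    # Downstream actions fire ONLY on an explicit success signal,
    # never on silence or assumption.
    if resp["outcome"] == Outcome.COMPLETE.value:
        return "proceed: run post-patch compliance scan"
    return f"halt downstream work: {resp['detail']}"

blocked = respond(Outcome.BLOCKED, "Critical batch workload in progress")
done = respond(Outcome.COMPLETE, "Patch Applied, System Stable")
```

The cascade-failure scenario is exactly what the `on_response` guard prevents: without it, the compliance scan would run against an unpatched cluster.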
An Inquiry allows an agent to surgically pull missing context without parsing the entire organizational state. Instead of mining a global directory, an agent lacking a specific parameter simply issues a targeted request, such as Context Request: Who is the business owner of Database_Cluster_Alpha for downtime approval?
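An inquiry retrieves one missing parameter rather than a whole directory. In this sketch the ownership registry, its contents, and the response shape are all hypothetical:

```python
# Hypothetical ownership lookup held by whichever agent owns this context.
ownership_registry = {
    "Database_Cluster_Alpha": {"business_owner": "finance-ops@example.com"},
}

def inquire(asset: str, fieldname: str) -> dict:
    record = ownership_registry.get(asset, {})
    # The reply carries only the requested parameter, not the full record
    # and certainly not the full registry.
    return {
        "type": "inquiry_response",
        "asset": asset,
        fieldname: record.get(fieldname),
    }

answer = inquire("Database_Cluster_Alpha", "business_owner")
```

The asking agent gets back exactly one field it was missing; the global organizational state never enters its context window.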
The Broadcast is the ultimate insight refined from raw telemetry. The sender simply announces a significant change into the environment without knowing or caring who is listening.
If a vulnerability scanner broadcasts New Vulnerability Disclosed: Critical CVE-2024-3400 found on Edge Firewall, the need-to-know agents (say, a patching agent and a risk-assessment agent) catch it and react simultaneously.
Both agents execute their specific jobs based on the exact same insight, without a central controller orchestrating their moves.
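A minimal topic-based bus shows the pattern: the broadcaster never knows its listeners, and each subscriber reacts independently to the same event. The topic name and the two subscribing agents are illustrative:

```python
from collections import defaultdict
from typing import Callable

# Minimal publish/subscribe bus; no central controller in sight.
subscribers: defaultdict[str, list[Callable[[dict], str]]] = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], str]) -> None:
    subscribers[topic].append(handler)

def broadcast(topic: str, event: dict) -> list[str]:
    # Every need-to-know agent reacts to the same insight, in its own way.
    return [handler(event) for handler in subscribers[topic]]

subscribe("vulnerability",
          lambda e: f"patch-agent: schedule fix for {e['cve']}")
subscribe("vulnerability",
          lambda e: f"risk-agent: reassess exposure of {e['asset']}")

reactions = broadcast("vulnerability",
                      {"cve": "CVE-2024-3400", "asset": "Edge Firewall"})
```

Adding a third listener tomorrow (a compliance agent, say) requires one `subscribe` call and zero changes to the scanner.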
This architecture creates systems that consistently outperform centralized world models for two key reasons.
First, it turns "ignorance" into a feature. Narrow scope creates stability and enables highly specialized, capable domain-specific agents. Because these specialists are isolated by signals, they aren't crushed by the cognitive load of understanding the entire organization. They only need to know their specific triggers, and they excel at their specific jobs.
Second, it generates true anti-fragility through decoupling. The intelligence of the system lives in the protocol - the signals themselves - not in a shared brain or a specific vendor's black box. This means your workflow logic is independent of your underlying tooling. You can upgrade your patching agent or evolve your risk assessment logic tomorrow without risking the stability of the entire system. As long as the new agents adhere to the established signal vocabulary, the orchestration remains intact and the workflow never breaks.
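This decoupling claim can be made concrete with a structural interface: the orchestration depends only on the signal contract, so any agent that speaks it slots in unchanged. The agent classes and message fields here are hypothetical:

```python
from typing import Protocol

class SignalHandler(Protocol):
    """The contract lives in the signal vocabulary, not in any one agent."""
    def handle(self, signal: dict) -> dict: ...

class PatchAgentV1:
    def handle(self, signal: dict) -> dict:
        return {"type": "response", "outcome": "task_complete", "agent": "v1"}

class PatchAgentV2:
    # A total rewrite: new vendor, new internals, same vocabulary.
    def handle(self, signal: dict) -> dict:
        return {"type": "response", "outcome": "task_complete", "agent": "v2"}

def orchestrate(agent: SignalHandler) -> str:
    # Orchestration sees only the protocol; either agent is interchangeable.
    resp = agent.handle({"type": "directive", "action": "apply_patch"})
    return resp["outcome"]
```

Swapping `PatchAgentV1` for `PatchAgentV2` requires no change to `orchestrate`; the workflow survives the upgrade because the contract, not the implementation, is what the system depends on.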
Signals solve the mechanical problem of coordination without the need for omniscience. But even the best signals rely on clear, logical inputs.
What happens when a signal is ambiguous? What happens when an expert "just knows" that a server shouldn't be touched today, even if the data looks perfect? In our next post (Part 4), we’ll tackle the ultimate ghost in the machine: Tacit Knowledge, and why human judgment is a permanent design requirement, not a temporary bug.
Next in the series
Part 4, Tacit Knowledge - Why Human Judgment is a Permanent Design Element

Greg is a technology leader with over 20 years in development and R&D, specializing in artificial intelligence, big data, and cloud computing. He founded the AI unit in 8200, led cloud and big data R&D, and delivered core data and multilingual AI systems. He holds a BSc from the Technion and an MBA from Tel Aviv University.