On Building Agents & Agentic Workflows (Part 4)

Greg Ainbinder, Co-founder and CTO
March 19, 2026

Tacit Knowledge: Why Human Judgment is a Design Element, Not a Bug

If you’ve ever tried to prompt a senior engineer’s "gut feeling," you’ve already hit the hardest limit in AI. Decades before the LLM boom, Michael Polanyi captured this frustration in a single sentence: "We know more than we can tell."

It’s known as the Polanyi Paradox. Think about riding a bike or picking a friend’s face out of a crowded terminal. You do these things perfectly every day, yet you couldn’t write down the exact physics of your balance or describe the geometry of a friend’s features. You have the skill, but you can’t "export" the instructions. You just know.

In the world of agents, this is a fatal blind spot. If an expert cannot articulate the "how" or the "why" behind their decision, you cannot turn it into a rule, a prompt, or a database record. Tacit knowledge is the unwritten wisdom gathered over years of "surviving the system," and it is exactly what your agents are missing.

Most agentic workflows are built on explicit knowledge - the stuff that is written down in manuals, logs, and policy docs. But rules without context lead to "correct but disastrous" decisions. The agent does exactly what it was programmed to do, even when it’s the wrong call for the situation. It flags a security anomaly because the rule says so, even though a human knows, tacitly, that it’s just the dev team doing a scheduled stress test that never made it onto the formal calendar.

The Collision: Why Autonomy Requires a "Help Me" Button

The instinct is to solve this "blindness" by pulling back and keeping the agent on a very short leash. But if you build an agent that asks for permission before every action, you haven't built an autonomous worker - you’ve built a bottleneck with an API.

The goal of agentic design isn't to create a "Mother, may I?" loop; it is to maximize autonomy while ensuring the agent stops only when explicit logic is exhausted.

We call this point the signal collision. It’s the precise moment where an agent’s internal instructions point in opposite directions. The agent recognizes the conflict and realizes: "My rules are exhausted; I need Tacit Knowledge to break the tie." This is the only moment the "Help me" button should be pressed - not because the AI is "dumb," but because the logic has run out of road.
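The collision check itself can be mechanical. Here is a minimal sketch of the idea, assuming a hypothetical agent loop where each directive is modeled as a signal demanding an action (the names and fields are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One directive the agent must honor. Fields are illustrative."""
    name: str
    action: str    # the action this signal demands, e.g. "patch_now"
    priority: str  # e.g. "critical"

def next_step(signals: list[Signal]) -> str:
    """Act autonomously while signals agree; escalate on a collision."""
    demanded = {s.action for s in signals}
    if len(demanded) <= 1:
        # Explicit logic is sufficient: every signal points the same way.
        return demanded.pop() if demanded else "no_op"
    # Signal collision: the rules point in opposite directions, so this
    # is the one moment the "Help me" button gets pressed.
    return "escalate_to_human"
```

The point of the sketch is the asymmetry: agreement is handled at machine speed, and only a genuine tie between explicit rules reaches a human.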

The Scenario: The Vulnerability Patch

Imagine an autonomous security agent monitoring a Tier-0 Payment Gateway. It’s hit with two competing, high-priority signals:

Signal A - Security Policy: "Critical vulnerabilities must be patched within 12 hours."

Signal B - Business Priority: "This asset is Tier-0. Unauthorized downtime is strictly prohibited."

Left to its own devices, the agent tries to "reason" its way through. It calculates that the risk of a breach outweighs the cost of potential downtime and executes the patch. The vulnerability is fixed, but the company loses millions in transaction volume during peak hours. The agent was "correct" according to the manual, but the decision was a business disaster.

The missing piece wasn't in the "security policy." It lived in the senior engineer’s head: "I know the policy says patch, but I also know that this specific legacy cluster has a configuration that makes it resilient to this specific exploit. It’s not worth the downtime risk right now." That insight isn't in a database. It's an unwritten pattern recognized by a human expert who knows the "Friday quirks" of the system.
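To make the failure mode concrete, here is a hypothetical sketch of the naive agent from the scenario, which collapses two incommensurable policies into a single score it has no business computing alone (the function and threshold are illustrative):

```python
def naive_agent(breach_risk: float, downtime_cost: float) -> str:
    """The failure mode from the scenario: instead of recognizing that
    Signal A (patch within 12 hours) and Signal B (no unauthorized
    downtime on a Tier-0 asset) collide, the agent invents a weighing
    and acts on it. The tacit fact that this legacy cluster is
    resilient to this exploit never enters the calculation."""
    if breach_risk > downtime_cost:
        return "patch_now"   # "correct" per the manual, disastrous in context
    return "hold"
```

The bug is not in the arithmetic. It is that no arithmetic over explicit signals can recover the unwritten pattern the senior engineer holds in their head.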

The Anatomy of an Intelligent Escalation

We need to stop viewing an agent "asking for help" as a failure. In a high-stakes environment, a well-timed escalation isn't a glitch - it’s a high-value output.

However, the agent shouldn’t ask, "What should I do?" That’s just passing the buck and forcing the human to do the agent's research. Instead, it should surface a context-rich briefing that identifies exactly which signals are colliding, which policies are at odds, and why the logic has run out of road. This enables the human to provide the final, tie-breaking judgment.
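One way to picture such a briefing is as a structured payload rather than free text. The sketch below, with field names that are illustrative assumptions rather than a prescribed schema, encodes the vulnerability-patch collision from the scenario:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class EscalationBriefing:
    """A context-rich escalation instead of a bare 'What should I do?'.
    All field names here are illustrative."""
    asset: str
    colliding_signals: list[str]
    conflict: str
    options: list[str]
    why_logic_ran_out: str

briefing = EscalationBriefing(
    asset="payment-gateway (Tier-0)",
    colliding_signals=[
        "Security Policy: critical vulnerabilities patched within 12 hours",
        "Business Priority: unauthorized downtime strictly prohibited",
    ],
    conflict="Patching now violates the downtime prohibition; "
             "waiting violates the patch SLA.",
    options=["patch_now", "defer_to_maintenance_window"],
    why_logic_ran_out="Both policies are top priority; no explicit "
                      "rule ranks one above the other.",
)

# Render the briefing for the human who must break the tie.
print(json.dumps(asdict(briefing), indent=2))
```

Handed this, the human spends their time on the one thing only they can do: supplying the tacit judgment, not redoing the agent's research.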

By designing for the collision rather than the permission, you get an agent that executes at scale when the path is clear but knows exactly when to surface the conflict for a human to settle.

Conclusion: Routing, Not Replacing

We’ve spent decades trying to automate the human out of the system. But in the world of agents, the human is a core design element - the only component capable of handling the Polanyi Paradox.

Signals (Part 3) solve the mechanical coordination of a complex agentic system.

Human judgment (Part 4) provides the tacit "glue" that prevents the system from falling apart.

In the next chapter, we will uncover why Knowledge Graphs are the Enterprise Blueprint your agents need to ensure their judgment is grounded in the messy, interconnected reality of your organization - where a single automated action can hit the bottom line in seconds.

Greg Ainbinder

Greg is a technology leader with over 20 years in development and R&D, specializing in artificial intelligence, big data, and cloud computing. He founded the AI unit in 8200, led cloud and big data R&D, and delivered core data and multilingual AI systems. He holds a BSc from the Technion and an MBA from Tel Aviv University.