
Unified Context Management (UCM): A Reference Architecture for Reliable Knowledge in Agentic AI

A proposed architecture for improving reliability in enterprise agent knowledge systems.

Summer Shaw·March 2026·9 min read

S&P Global reports that 42% of companies abandoned most AI initiatives in 2025, up from 21% the prior year. The pattern is consistent: models work, but the knowledge they operate on does not. Enterprise knowledge degrades silently: facts go stale at different rates, contradictions accumulate across sources, and when something changes, nothing indicates which downstream decisions are now unreliable.

Reliable agents require three properties simultaneously: (1) calibrated trust: every fact carries a confidence score grounded in evidence; (2) change awareness: when a fact changes, the system knows what else is affected; (3) targeted revalidation: maintenance is focused on what actually changed. No current approach delivers all three.

Why Current Approaches Fail

RAG ranks by vector similarity, which measures topical relevance rather than factual reliability. A three-year-old code comment and yesterday’s API spec can score identically. There is no mechanism for confidence calibration, staleness detection, or selective revalidation.

Scheduled rebuilds apply uniform cadence to non-uniform change. Google’s TAP system (2B LOC, ~1 commit/sec) found that 50% of targets changed fewer than 14 times per month, while a volatile minority changed hundreds of times. Nightly rebuilds reprocess the stable majority and miss the volatile subset.

Longer context windows amplify the problem. Models ignore mid-context information (“lost in the middle”), and TAP showed that beyond 10 dependency hops, testing produces a 99:1 noise-to-signal ratio. More tokens mean more sources of confidently wrong answers.

What UCM Proposes

UCM is a proposed reference architecture, not shipped software. It comprises three production-validated techniques from adjacent domains. No integrated pipeline has been built or tested on enterprise data. The contribution is the composition argument and the mapping from each research foundation to its enterprise analog.

Layer 1: Calibrated Trust

This layer draws on Google’s Knowledge Vault (KDD 2014), which extracted facts using independent methods and fused evidence so that multi-source agreement compounds confidence. Enterprise mapping: an agent can quantify agreement across CRM data, API responses, support tickets, and internal docs, while down-weighting claims supported by a single stale source.
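To make the fusion idea concrete, here is a minimal sketch of multi-source evidence combination using a noisy-OR model. This is an illustrative assumption, not Knowledge Vault's actual fusion model, and the source names and scores are hypothetical: independent supporting sources compound confidence, so agreement raises trust while a claim backed by a single stale source stays weak.

```python
def fuse_confidence(source_scores):
    """Noisy-OR fusion: combined belief rises as independent sources agree.

    Each score is a per-source support probability for the same fact.
    (Illustrative model; Knowledge Vault's actual fusion is more elaborate.)
    """
    disbelief = 1.0
    for p in source_scores:
        disbelief *= (1.0 - p)  # probability that every source is wrong
    return 1.0 - disbelief

# A fact backed only by one stale internal doc stays weak...
print(round(fuse_confidence([0.4]), 3))            # 0.4
# ...while CRM + API + support-ticket agreement compounds it.
print(round(fuse_confidence([0.4, 0.7, 0.6]), 3))  # 0.928
```

The down-weighting of single-source claims falls out of the model for free: with one source, the fused score is just that source's score, and only corroboration can lift it.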

Layer 2: Change Awareness

This layer draws on Meta’s predictive test selection research (ICSE-SEIP 2019), which showed that shortest-path distance in the dependency graph is a strong predictor of which tests fail after a change. Enterprise mapping: a changed API contract should trigger confidence reductions proportional to graph distance across dependent artifacts.
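A hedged sketch of that mapping: breadth-first search from the changed artifact assigns each dependent a confidence multiplier that decays with hop distance. The decay factor, hop cap, and dependency graph below are illustrative assumptions, not values from Meta's paper.

```python
from collections import deque

def propagate_change(graph, changed, decay=0.5, max_hops=3):
    """Return {artifact: confidence multiplier} for everything downstream
    of `changed`, decaying by shortest-path distance (BFS)."""
    dist = {changed: 0}
    queue = deque([changed])
    while queue:
        node = queue.popleft()
        if dist[node] >= max_hops:
            continue  # beyond the cap, impact is treated as noise
        for dep in graph.get(node, []):
            if dep not in dist:  # BFS guarantees shortest distance
                dist[dep] = dist[node] + 1
                queue.append(dep)
    return {n: decay ** d for n, d in dist.items() if d > 0}

# Hypothetical graph: a billing API contract changes; direct consumers
# take the largest confidence hit, transitive ones a smaller hit.
deps = {
    "billing-api": ["invoice-service", "crm-sync"],
    "invoice-service": ["monthly-report"],
}
print(propagate_change(deps, "billing-api"))
```

The hop cap echoes the TAP finding cited above: far enough out in the graph, revalidation signal drowns in noise, so impact propagation should stop rather than decay forever.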

Layer 3: Targeted Revalidation

This layer draws on the Salsa engine used in Rust tooling, where inputs are classified by expected volatility and only dependent computations re-run on change. Enterprise mapping: business rules, API contracts, and feature flags each receive revalidation cadence matched to observed volatility instead of blanket schedules.
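The Salsa pattern can be sketched in a few lines. This is an illustrative toy, not the Salsa API: inputs carry revision counters, and a derived artifact re-runs only when an input it read has advanced since it was last computed, so stable inputs never trigger rework.

```python
class KnowledgeBase:
    """Toy Salsa-style incremental store (hypothetical names throughout)."""

    def __init__(self):
        self.values = {}      # input name -> current value
        self.revisions = {}   # input name -> revision counter
        self.cache = {}       # derived name -> (result, input revisions seen)
        self.recomputed = []  # log of derived artifacts actually re-run

    def set_input(self, name, value):
        if self.values.get(name) != value:  # no-op writes bump nothing
            self.values[name] = value
            self.revisions[name] = self.revisions.get(name, 0) + 1

    def derived(self, name, inputs, compute):
        seen = {i: self.revisions.get(i, 0) for i in inputs}
        cached = self.cache.get(name)
        if cached and cached[1] == seen:
            return cached[0]  # nothing this artifact read has changed
        result = compute(*(self.values[i] for i in inputs))
        self.cache[name] = (result, seen)
        self.recomputed.append(name)
        return result

kb = KnowledgeBase()
kb.set_input("api_contract", "v1")
kb.set_input("business_rule", "net-30")
kb.derived("invoice_doc", ["api_contract"], lambda c: f"doc for {c}")
kb.derived("policy_memo", ["business_rule"], lambda r: f"memo: {r}")

kb.set_input("api_contract", "v2")  # only the contract changed
kb.derived("invoice_doc", ["api_contract"], lambda c: f"doc for {c}")
kb.derived("policy_memo", ["business_rule"], lambda r: f"memo: {r}")
print(kb.recomputed)  # invoice_doc re-runs twice; policy_memo only once
```

Volatility-matched cadence is the same idea applied to polling: inputs observed to change often get their revisions refreshed frequently, stable ones rarely, and derived work follows automatically.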

Why Composition Matters

Knowledge Vault calibrates trust but does not track temporal change. Meta’s approach propagates impact but does not assign confidence scores. Salsa revalidates efficiently but does not reason about data quality. UCM’s thesis is that these primitives are orthogonal and composable. The interaction topology—sequential pipeline, event-driven, or hybrid—remains an open design decision.
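Since the topology is open, the following is only one possible sequential composition, with hypothetical fact names and an arbitrary threshold: a change event scales fused confidences by the Layer 2 impact multipliers, and any fact that drops below the threshold is queued for Layer 3 revalidation.

```python
def on_change(confidences, multipliers, threshold=0.5):
    """Apply change-impact multipliers to fused confidences and return
    (updated confidences, facts queued for revalidation)."""
    updated = {
        fact: conf * multipliers.get(fact, 1.0)  # untouched facts keep their score
        for fact, conf in confidences.items()
    }
    revalidate = sorted(f for f, c in updated.items() if c < threshold)
    return updated, revalidate

# Layer 1 output (fused confidences) meets Layer 2 output (impact multipliers).
fused = {"invoice-format": 0.9, "tax-rule": 0.8, "crm-field": 0.95}
impact = {"invoice-format": 0.5, "tax-rule": 0.5}
updated, queue = on_change(fused, impact)
print(queue)  # only the impacted facts fall below threshold
```

An event-driven or hybrid topology would reorder or parallelize these steps; the point of the sketch is only that the layers exchange small, well-typed values (scores and multipliers), which is what makes them composable.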

Current Status and Open Questions

UCM is a design specification. The layers have not been implemented together, tested on enterprise data, or benchmarked for latency. Source research was validated in non-enterprise domains (web facts, code, compiler inputs). Open questions include latency overhead, RAG integration strategy, cost tradeoffs of continuous revalidation, multi-agent consistency mechanisms, and enterprise validation on compliance docs, API specs, and policy memos.

For CTOs, VPs of Engineering, and technical leaders evaluating knowledge architectures for agentic AI.

Attian AI, March 2026