⬡ Fronesis Labs · Technical White Paper

Leibniz Layer™

A Cryptographic Verification Protocol
for AI Agent Decisions

VERSION: 1.0
DATE: March 2026
AUTHOR: Fronesis Labs
STATUS: Patent Pending
PRODUCT: DCL Evaluator™ v1.2.0

⬡ Abstract

We introduce Leibniz Layer™ — a deterministic cryptographic protocol for sealing, verifying, and auditing AI agent decisions. Every action an agent takes is committed into a tamper-evident SHA-256 hash chain. Any modification to past records invalidates the entire chain — providing mathematical proof of integrity without relying on trusted third parties. We describe the protocol architecture, its formal properties, and DCL Evaluator™ — the first production implementation, deployed and operational since March 2026.

⬡ 01

The Problem: AI Agents Without Accountability

AI agents are making consequential decisions at scale. They approve transactions, generate medical recommendations, execute trades, and interact with millions of users. Yet the infrastructure for verifying that these decisions were made correctly, consistently, and without tampering does not exist in any standardised form.

Three failure modes define the current state:

1.1 The Black Box Problem

Most AI systems produce outputs without any cryptographically verifiable record of how a decision was reached. Log files exist, but they are mutable — any sufficiently privileged actor can alter them. This is insufficient for regulated environments where audit trails must be tamper-evident by design.

1.2 The Non-Determinism Problem

Probabilistic LLM outputs mean that running the same input twice may produce different decisions. Without a committed record of what was decided at time T, there is no verifiable way to prove what an agent actually did — only what it would do if asked again. These are not equivalent.

1.3 The Drift Problem

AI agent behaviour shifts over time — through model updates, prompt changes, or subtle environmental factors. By the time a compliance team investigates, the agent that made the original decision may no longer exist in the same form. Retroactive verification is impossible without a committed audit trail.

"The question is not whether an AI agent made a decision. The question is whether you can prove it — cryptographically, irrefutably, without relying on anyone's word."

Leibniz Layer™ addresses all three failure modes through a single primitive: the deterministic commitment chain.

⬡ 02

The Leibniz Layer™ Protocol

2.1 Core Primitive: The Commitment Entry

Every agent action produces a CommitmentEntry — an immutable record containing the action type, payload, timestamp, agent identifier, and a cryptographic hash linking it to all previous entries.

// CommitmentEntry structure
{
  index:      integer,     // position in chain
  timestamp:  unix_float,  // when the decision was made
  agent_id:   string,      // which agent
  action:     string,      // what was decided
  payload:    object,      // decision parameters
  prev_hash:  sha256_hex,  // links to the previous entry
  entry_hash: sha256_hex   // seals this entry
}

The entry_hash is computed as SHA-256 over the canonical JSON serialisation of all other fields. This means any modification — to any field, in any entry — produces a different hash, which breaks the chain at that point and all subsequent entries.
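As a concrete illustration, the sealing step can be sketched in a few lines of Python. The exact canonicalisation used by DCL Evaluator™ is not specified here, so the sorted-keys, compact-separator JSON below is an assumption, as are the example field values.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Hash every field except entry_hash itself, over a canonical JSON form.
    # Canonicalisation assumed: sorted keys, compact separators.
    body = {k: v for k, v in entry.items() if k != "entry_hash"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

entry = {
    "index": 0,
    "timestamp": 1772316000.0,
    "agent_id": "agent-01",
    "action": "approve_transaction",
    "payload": {"amount": 250, "currency": "EUR"},
    "prev_hash": "0" * 64,  # genesis value for the first entry
}
entry["entry_hash"] = entry_hash(entry)
```

Because the hash covers every other field, flipping a single payload byte yields a different entry_hash, which is exactly the tamper-evidence property the chain relies on.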

2.2 The Hash Chain

Each entry's prev_hash references the entry_hash of the immediately preceding entry. The first entry references a known genesis value ("000...0", 64 zeros). This creates a linked sequence where integrity is verifiable in O(n) time by recomputing each hash and checking the chain links.

CHAIN STRUCTURE

GENESIS (prev: 000...0) → ENTRY 0 (hash: a3f9...) → ENTRY 1 (hash: b7c2...) → ... → ENTRY N (hash: e1d4...)

Modify any entry → its hash changes → all subsequent prev_hash values become invalid → chain breaks.
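The O(n) verification described above can be sketched as follows. The helper names (seal, append_entry, verify) and the fixed timestamp are illustrative, not the production API, and the JSON canonicalisation is an assumption.

```python
import hashlib
import json

GENESIS = "0" * 64  # 64 zeros, per the protocol description

def seal(entry: dict) -> dict:
    # Recompute entry_hash over all other fields (canonical JSON, assumed form).
    body = {k: v for k, v in entry.items() if k != "entry_hash"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    entry["entry_hash"] = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return entry

def append_entry(chain: list, agent_id: str, action: str, payload: dict) -> None:
    prev = chain[-1]["entry_hash"] if chain else GENESIS
    chain.append(seal({
        "index": len(chain),
        "timestamp": 0.0,  # fixed here so the sketch stays deterministic
        "agent_id": agent_id,
        "action": action,
        "payload": payload,
        "prev_hash": prev,
    }))

def verify(chain: list) -> bool:
    # O(n): recompute every entry_hash and check every prev_hash link.
    prev = GENESIS
    for entry in chain:
        if entry["prev_hash"] != prev:
            return False
        if seal(dict(entry))["entry_hash"] != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

Tampering with any committed field makes verify return False from that entry onward, using no oracle beyond SHA-256 itself.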

2.3 The Merkle Extension

DCL Evaluator v1.2.0 extends the basic chain with a Merkle tree over each session's entries. The Merkle root hash covers the entire session, enabling efficient proof that any single entry belongs to a committed session without transmitting the full chain. This is essential for compliance reporting: a regulator can verify a specific decision without accessing all agent logs.
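A minimal membership proof over a session's entry hashes might look like the sketch below. The pairing convention (duplicating the last node on odd levels, concatenating hex digests left-to-right) is one common choice and an assumption here, not the paper's specification.

```python
import hashlib

def h(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

def merkle_root(leaves: list) -> str:
    # Root over the session's entry hashes; odd levels duplicate the last node.
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    # Sibling hashes (with their side) needed to rebuild the root from one leaf.
    proof, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = i ^ 1
        proof.append(("L" if sib < i else "R", level[sib]))
        level = [h((level[j] + level[j + 1]).encode())
                 for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify_proof(leaf: str, proof: list, root: str) -> bool:
    acc = leaf
    for side, sib in proof:
        acc = h(((sib + acc) if side == "L" else (acc + sib)).encode())
    return acc == root
```

A regulator holding only one entry hash, its proof (a logarithmic number of sibling hashes), and the session root can confirm membership without seeing any other entry.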

2.4 The Policy Engine

Before any entry is committed, the payload is evaluated against a deterministic policy. A policy is a set of rules that produce a binary COMMIT / NO_COMMIT decision. Policies are versioned, hash-committed themselves, and referenced in each entry — so it is always possible to prove which policy governed a given decision.
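A toy policy evaluator with the binary COMMIT / NO_COMMIT contract could look like this. The rule format and field names are hypothetical, not the shipped template schema; the point is that the policy is deterministic and itself hash-committed.

```python
import hashlib
import json

# Hypothetical rule format: (field, operator, limit).
POLICY = {
    "version": "finance-v1",
    "rules": [
        ("amount", "<=", 10_000),
        ("currency", "in", ["EUR", "USD"]),
    ],
}

# The policy is hash-committed, so each entry can reference the exact
# rules that governed it.
POLICY_HASH = hashlib.sha256(
    json.dumps(POLICY, sort_keys=True).encode("utf-8")).hexdigest()

def evaluate(payload: dict) -> str:
    # Deterministic, binary decision: COMMIT or NO_COMMIT.
    for field, op, limit in POLICY["rules"]:
        value = payload.get(field)
        ok = (value <= limit) if op == "<=" else (value in limit)
        if not ok:
            return "NO_COMMIT"
    return "COMMIT"
```

Because both the policy hash and the decision are committed into the entry, the same payload replayed against the same policy version must reproduce the same verdict.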

⬡ 03

Formal Properties

P1

Tamper-Evidence

Any modification to any committed entry — including the payload, timestamp, or agent_id — changes the entry_hash, which cascades to invalidate all subsequent prev_hash references. Detection is O(n) and requires no external oracle.

P2

Determinism

Given identical inputs and policy version, the protocol always produces identical CommitmentEntries. This is in explicit contrast to probabilistic LLM outputs — the commitment layer is deterministic even when the generating model is not.

P3

Append-Only

The chain is append-only by construction. There is no delete operation. Past entries cannot be modified without breaking the chain. Historical decisions are permanent once committed.

P4

Verifiability

Any party with access to the chain can independently verify its integrity using only SHA-256 — no trusted third party, no network access, no special hardware required. Verification is fully offline-capable.

P5

Policy Traceability

Every entry references the hash of the policy that governed it. It is always possible to prove which rules applied to a given decision — and to prove those rules have not been retroactively changed.

P6

Model-Agnosticism

The protocol operates at the output layer — it commits what an agent decided, regardless of which model produced it. Claude, GPT-4, Grok, Gemini, Ollama, or any OpenAI-compatible endpoint produces the same commitment structure.

⬡ 04

System Architecture

4.1 Layer Position

Leibniz Layer™ sits between the application layer (your AI agent) and the model layer (the LLM provider). It intercepts agent outputs, evaluates them against policy, and commits the decision before it is acted upon. This placement is deliberate: the commitment happens at the decision point, not retrospectively.

// Layer position in the stack
05 · APPLICATION    Your AI Agent
04 · VERIFICATION   ⬡ Leibniz Layer™  ← commitment happens here
03 · PROTOCOL       MCP / OpenAI API
02 · MODEL          LLM Provider (Claude, GPT-4, Grok...)
01 · COMPUTE        Infrastructure

4.2 MCP Integration

The reference implementation exposes Leibniz Layer™ as a Model Context Protocol (MCP) server — the open standard now maintained by the Linux Foundation's Agentic AI Foundation, supported by Anthropic, Google, Microsoft, OpenAI, and AWS among others.

This means any MCP-compatible agent — Claude, Cursor, GitHub Copilot, or any custom implementation — can connect to Leibniz Layer™ with a single configuration block:

// Connect any MCP-compatible agent
{
  "mcpServers": {
    "dcl-evaluator": {
      "url": "https://mcp.fronesislabs.com/sse",
      "headers": { "x-api-key": "your-key" }
    }
  }
}

4.3 Drift Detection

Beyond static verification, Leibniz Layer™ includes a statistical drift monitor. A Z-test runs continuously over the commit/no-commit ratio of each agent. When behaviour deviates beyond a configurable threshold, the system escalates through four levels: NORMAL → WARNING → ESCALATION → BLOCK. This catches behavioural drift before it becomes a compliance failure.
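The escalation logic can be illustrated with a one-proportion Z-test over a window of decisions. The thresholds below are placeholders, since the white paper states the real values are configurable.

```python
import math

# Illustrative thresholds only; DCL Evaluator's values are configurable.
LEVELS = [(1.5, "NORMAL"), (2.5, "WARNING"), (3.5, "ESCALATION")]

def drift_level(baseline_rate: float, commits: int, total: int) -> str:
    # One-proportion Z-test: observed commit ratio vs. the agent's baseline.
    observed = commits / total
    stderr = math.sqrt(baseline_rate * (1 - baseline_rate) / total)
    z = abs(observed - baseline_rate) / stderr
    for threshold, level in LEVELS:
        if z < threshold:
            return level
    return "BLOCK"
```

For example, with a 0.9 baseline commit rate over a 200-decision window, 178 commits stays NORMAL, while 150 commits is many standard errors out and would be blocked.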

⬡ 05

DCL Evaluator™ — Reference Implementation

DCL Evaluator™ is the first production implementation of Leibniz Layer™. It is a desktop-native application built on Wails (Go + React), designed to run fully offline — zero data leaves the user's infrastructure.

Current capabilities in v1.2.0:

| Capability        | Description                                          | Status  |
|-------------------|------------------------------------------------------|---------|
| SHA-256 hash chain | Every agent action sealed, tamper-evident by design | ✓ v1.0+ |
| Merkle tree audit | Session-level root hash, efficient membership proofs | ✓ v1.2  |
| Policy engine     | 6 built-in templates + custom YAML policies          | ✓ v1.0+ |
| Drift monitor     | Z-test, 4-level escalation, configurable thresholds  | ✓ v1.1+ |
| MCP server        | SSE transport, API key auth, 4 tools                 | ✓ v1.2  |
| CEF export        | Common Event Format for SIEM integration             | ✓ v1.2  |
| Compliance PDF    | Tamper-evident report with integrity hash            | ✓ v1.0+ |
| Multi-LLM support | Claude, GPT-4, Grok, Gemini, DeepSeek, Ollama        | ✓ v1.0+ |
| Offline mode      | Full functionality with local Ollama models          | ✓ v1.0+ |

The MCP server is live at https://mcp.fronesislabs.com and accessible via API key. The desktop application is available at github.com/Fronesis-Labs/dcl-evaluator.

⬡ 06

Regulatory Compliance Applications

The shift toward AI regulation is accelerating. Leibniz Layer™ is designed to provide the evidence layer these regulations require.

EU AI Act (2026)

High-risk AI systems must maintain logs "to the extent necessary to identify situations that may give rise to a risk." Leibniz Layer™ provides cryptographically verifiable logs that satisfy this requirement — and goes further by making those logs tamper-evident by construction, not merely by policy.

Financial Services (AML / MiFID II)

AI agents operating in trading, credit scoring, and AML screening require audit trails that can survive regulatory scrutiny. The immutability and verifiability properties of Leibniz Layer™ provide this directly. The CEF export format integrates with existing SIEM infrastructure.

Healthcare (HIPAA / MDR)

Medical AI systems must demonstrate that decisions were made according to validated protocols. Leibniz Layer™ commits both the decision and the policy that governed it — enabling proof that an agent operated within its defined parameters at a given point in time.

Built-in Policy Templates

DCL Evaluator™ ships with six policy templates covering the most common compliance scenarios: DEFAULT, EU AI ACT, GDPR, FINANCE, MEDICAL, and RED TEAM. Custom policies are defined in YAML and hash-committed at deployment.
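Hash-committing a custom policy at deployment reduces to hashing its bytes; the YAML fields below are purely illustrative and not the shipped template schema.

```python
import hashlib

# Hypothetical custom policy in YAML; field names are illustrative.
CUSTOM_POLICY_YAML = """\
name: finance-custom
version: 1
rules:
  - field: amount
    max: 10000
  - field: currency
    allowed: [EUR, USD]
"""

# The digest is recorded in every entry this policy governs, so any later
# edit to the policy file is detectable.
policy_hash = hashlib.sha256(CUSTOM_POLICY_YAML.encode("utf-8")).hexdigest()
```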

⬡ 07

Comparison with Existing Approaches

| Approach                              | Tamper-evident             | Offline | MCP native | Compliance reports | Price point   |
|---------------------------------------|----------------------------|---------|------------|--------------------|---------------|
| Leibniz Layer™ / DCL Evaluator™       | ✓ by construction          | ✓ full  | ✓          | PDF + CEF          | Free / $99/yr |
| Cloud audit logging (AWS, Azure)      | Partial (mutable by admin) | –       | –          | Limited            | $$$           |
| LLM guardrails (Guardrails AI, etc.)  | –                          | Partial | –          | –                  | $$            |
| Blockchain audit trails               | ✓                          | –       | –          | –                  | $$$+          |
| Enterprise AI governance platforms    | Partial                    | –       | –          | ✓                  | $$$$          |

The key differentiator is the combination of cryptographic tamper-evidence, offline capability, MCP compatibility, and accessibility. Enterprise governance platforms provide compliance reports but are cloud-dependent, vendor-locked, and priced for large organisations. Leibniz Layer™ is accessible to any team building AI agents — from individual developers to regulated enterprises.

⬡ 08

Roadmap

Near-term (2026)

- Apify Actor publication for no-code integration.
- Bybit MCP integration for verifiable trading agents.
- Russian-language compliance templates (Federal Law 152-FZ, FSTEC Order No. 117) for the Russian market.
- Byzantine consensus protocol for multi-agent verification pipelines.

Medium-term (2026–2027)

- PCT patent filing across multiple jurisdictions.
- Hardware-accelerated verification using the Trusted Execution Environments (TEEs) available in modern CPUs.
- SDK for embedding Leibniz Layer™ directly in agent frameworks (LangGraph, AutoGen, CrewAI).
- Platform licensing for cloud providers.

Long-term (2027+)

- Leibniz Layer™ as an industry standard, analogous to TLS for web communications.
- Collaboration with standards bodies on AI audit trail specifications.
- Hardware-level integration with AI inference chips for attestation at the silicon layer.

"Every agent economy needs a verification layer. Leibniz Layer™ is that layer — open protocol, proven implementation, patent-pending architecture."

⬡ 09

Conclusion

AI agents are making real decisions with real consequences. The infrastructure to verify those decisions — cryptographically, irrefutably, offline — has not existed in accessible form. Until now.

Leibniz Layer™ provides a deterministic commitment protocol that seals every agent decision into a tamper-evident chain. DCL Evaluator™ is the first production implementation: deployed, operational, and accessible today.

The protocol is named after Gottfried Wilhelm Leibniz — mathematician, logician, and philosopher who argued that the universe itself operates according to deterministic, verifiable principles. We believe AI systems should operate the same way.

Every decision, every action: deterministically sealed, tamper-evident, auditable.