Blog

Technical deep-dives on AI verification, security audits, and trust infrastructure.

🇪🇺 EU AI Act Compliance with Epistemic Verification

Article-by-article mapping: how multi-model verification satisfies Art. 9, 13, 14, and 43 — with code examples and an honest assessment of limitations.

EU AI Act · Compliance · High-Risk AI · March 5, 2026 · 12 min

🛡️ The 4 Layers of AI Agent Security

Discovery → Orchestration → Verification → Trust. Most teams stop at Layer 1. The real risk lives in Layer 2.

Security · Framework · March 3, 2026 · 8 min

🔍 From CVEs to Semgrep Rules: Building AI Agent Security Scanners

How we turned 20+ real security findings into automated detection rules — and what chasing down false positives taught us about AI security tooling.

Security · Semgrep · Tooling · March 3, 2026 · 10 min

🔬 Auditing MCP Servers: A Methodology

How to systematically audit Model Context Protocol servers for prompt injection, authority bypass, and trust boundary violations.

MCP · Security · Methodology · March 2, 2026 · 9 min

📦 Epistemic Blocks: The Atomic Unit of AI Verification

Claim → Perspectives → Synthesis → Hash. Everything else builds on this: receipts, trust scores, cross-agent verification.

Architecture · Core Concepts · February 28, 2026 · 7 min
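As a taste of what the article covers, here is a minimal sketch of the Claim → Perspectives → Synthesis → Hash shape as a hashable record. All names (`EpistemicBlock`, `digest`, the field layout) and the canonical-JSON/SHA-256 choices are illustrative assumptions, not the actual ThoughtProof schema:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class EpistemicBlock:
    claim: str
    perspectives: dict[str, str]  # model name -> that model's assessment
    synthesis: str

    def digest(self) -> str:
        # Serialize with sorted keys so the same content always
        # produces the same hash, regardless of insertion order.
        payload = json.dumps(
            {
                "claim": self.claim,
                "perspectives": self.perspectives,
                "synthesis": self.synthesis,
            },
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

block = EpistemicBlock(
    claim="The capital of Australia is Canberra.",
    perspectives={"model-a": "agree", "model-b": "agree"},
    synthesis="Verified: consensus across perspectives.",
)
print(block.digest())
```

Hashing a canonical serialization is what makes the block tamper-evident: change the claim, any perspective, or the synthesis and the digest changes — the property that receipts and cross-agent verification can build on.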

⚖️ Dissent Preservation Ratio: Why Disagreement Matters

When models disagree, most systems pick the majority. We preserve the dissent — because the minority opinion is often right.

Metrics · Architecture · February 26, 2026 · 6 min
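To make the metric concrete: here is one plausible way a dissent preservation ratio could be computed — what fraction of minority verdicts survive into the final record instead of being discarded by majority vote. The function name and exact definition are assumptions for illustration, not the article's formula:

```python
from collections import Counter

def dissent_preservation_ratio(verdicts: list[str], preserved: list[str]) -> float:
    """Fraction of dissenting verdicts carried into the final record.

    verdicts:  one verdict per model
    preserved: the dissenting verdicts kept alongside the synthesis
    """
    counts = Counter(verdicts)
    majority, _ = counts.most_common(1)[0]
    dissents = [v for v in verdicts if v != majority]
    if not dissents:
        return 1.0  # unanimous: nothing to preserve, trivially perfect
    return len(preserved) / len(dissents)

# Three models vote; one dissents, and we keep its opinion on record.
print(dissent_preservation_ratio(["agree", "agree", "disagree"], ["disagree"]))  # 1.0
```

A system that simply takes the majority vote scores 0.0 here — the dissent existed but was dropped — which is exactly the failure the article argues against.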

🔄 Orchestration vs. Verification: Why They're Different Problems

Perplexity, LangChain, and CrewAI solve orchestration (Layer 1). ThoughtProof solves verification (Layer 2). Here's why the distinction matters.

Architecture · Positioning · February 25, 2026 · 5 min

🤖 Grok vs. PoT Pipeline: Single Model vs. Multi-Model

We asked Grok to verify its own output, then ran the same claim through the full pipeline. The results are instructive.

Benchmarks · February 24, 2026 · 5 min

🪞 Can Grok Audit Itself? A Self-Verification Experiment

Spoiler: No. But the failure mode is fascinating and reveals exactly why multi-model verification exists.

Experiments · Benchmarks · February 23, 2026 · 6 min

🔗 AI Supply Chain Auditing with Epistemic Blocks

When AI agents call other agents, how do you verify the chain? Epistemic blocks create a verifiable provenance trail.

Architecture · Trust · February 22, 2026 · 7 min

⚡ Verification Latency: How Fast Can Multi-Model Consensus Be?

Parallel execution, smart routing, and when to skip verification entirely. Latency numbers from production.

Performance · Architecture · February 21, 2026 · 5 min
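The parallel-execution point can be sketched in a few lines: fan out to all models at once and wall time tracks the slowest model rather than the sum. Model names and latencies below are simulated stand-ins, not the article's production numbers:

```python
import asyncio
import random
import time

async def query_model(name: str) -> str:
    # Stand-in for a network call to one model; latency is simulated.
    await asyncio.sleep(random.uniform(0.1, 0.3))
    return f"{name}: agree"

async def verify_parallel(models: list[str]) -> list[str]:
    # Fan out concurrently: total wall time ~= slowest model, not the sum.
    return await asyncio.gather(*(query_model(m) for m in models))

start = time.perf_counter()
results = asyncio.run(verify_parallel(["model-a", "model-b", "model-c"]))
elapsed = time.perf_counter() - start
print(len(results), round(elapsed, 2))
```

Run sequentially, three 0.1–0.3 s calls cost up to 0.9 s; run concurrently they cost at most ~0.3 s — the gap only widens with more models, which is why parallel fan-out is the first latency lever.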

📦 pot-sdk v0.1: First Public Release

The first npm package for epistemic verification. What's in it, how to use it, and what's coming next.

Release · SDK · February 20, 2026 · 4 min