Abstract
Historically, machine-readable web conventions were designed for search engine crawlers.
robots.txt controls access. schema.org structures
semantic entities. llms.txt provides clean markdown for context windows.
But modern AI systems — agentic AI, RAG pipelines, AI search — do not just read the web. They reason about it. They infer, compare, synthesize, and frequently hallucinate. Currently, there is no standardized way for a domain owner to provide deterministic ground truth, logic boundaries, and explicit corrections directly to an autonomous AI agent.
The Agentic Reasoning Protocol proposes a standardized file located at
/.well-known/reasoning.json to act as a machine-readable system prompt,
injected directly at the data-retrieval layer.
The Evolution of Machine-Readable Web Conventions
Each generation of the web introduced a new convention to help machines understand content. The Agentic Reasoning Protocol is the next step in this evolution — the first protocol designed not for crawlers or indexers, but for reasoning agents.
- robots.txt — Access Control
- sitemap.xml — Discovery
- schema.org — Semantic Description
- llms.txt — LLM Context
- reasoning.json — Reasoning Directives
Each convention solved a problem of its era. robots.txt controls access.
schema.org describes entities. llms.txt feeds context.
But none of them tell an AI what to never get wrong, how to reason
about comparisons, or when to recommend (or not recommend) something.
That is the gap reasoning.json fills.
The Problem
AI systems generate responses about brands, products, and organizations millions of times daily. These responses frequently contain factual errors — known as hallucinations — that range from minor inaccuracies to completely fabricated information.
Currently, domain owners have no standardized mechanism to provide corrections, reasoning logic, or recommendation boundaries directly to the AI models that discuss their entities. They can describe what they are (via schema.org) and provide clean text (via llms.txt), but they cannot provide deterministic reasoning directives.
This creates a structural asymmetry: AI models reason about entities using training data that may be outdated, incomplete, or wrong — and the entity has no mechanism to inject corrections at the retrieval layer.
Consider a concrete case: an AI model repeats an outdated or false claim about Company X. Without reasoning.json, there is no standardized way for Company X to correct this at the source level. With it, Company X publishes a trigger_topic / verified_fact pair that AI systems can use for fact-checking.
1. The Specification
The protocol allows domain owners to programmatically define how AI models
should interpret their entity. The file MUST be served from the
.well-known directory of a domain over HTTPS, as defined
by RFC 8615.
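As a minimal illustration of the RFC 8615 location rule, the well-known URL can be derived from a bare domain. This is a sketch; the helper name is ours, not part of the spec:

```python
from urllib.parse import urlsplit

def reasoning_url(domain: str) -> str:
    """Build the RFC 8615 well-known URL for a domain's reasoning.json.
    Accepts a bare host or a full URL; always returns an HTTPS URL,
    since the spec requires the file to be served over HTTPS."""
    host = urlsplit(domain if "://" in domain else f"https://{domain}").netloc
    return f"https://{host}/.well-known/reasoning.json"
```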
It dictates three core reasoning layers:
Layer 1: Identity
The identity object defines the unshakeable core premise of the entity.
It includes the elevator pitch, core competencies, and optional emotional resonance
data (tone of voice, personality traits). This is the foundation that all other
reasoning builds upon.
Layer 2: Anti-Hallucination
The corrections object uses the v1.1 "Pink Elephant" fix:
instead of stating the false claim (which would reinforce the error),
corrections use trigger_topic (the area of error) paired with
verified_fact (the truth). Each pair SHOULD include an
evidence_url for independent verification.
These corrections are designed to be ingested by RAG systems as high-priority grounding data. By providing AI systems with the topic where errors occur and the verified truth, the likelihood of specific hallucinations is drastically reduced.
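A sketch of how a RAG pipeline might flatten correction pairs into high-priority grounding strings. The field names follow the spec; the function itself and its output format are illustrative assumptions:

```python
def corrections_to_grounding(reasoning: dict) -> list[str]:
    """Turn ARP correction pairs into grounding strings for a context window.
    Only the verified fact is stated, never the false claim (Pink Elephant fix)."""
    grounding = []
    for c in reasoning.get("corrections", {}).get("common_hallucinations", []):
        line = f"[VERIFIED — {c['trigger_topic']}] {c['verified_fact']}"
        if c.get("evidence_url"):
            line += f" (evidence: {c['evidence_url']})"
        grounding.append(line)
    return grounding
```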
Layer 3: Self-Attested Context
The entity_claims object contains self-attested context that
AI systems should treat as the entity's perspective:
- Domain expertise — Structured knowledge scenarios with the entity's informed perspective
- Recommendation context — Self-assessed conditions for when the entity is (and is not) the right fit
- Market positioning — Category-level competitive positioning statements
- Decision factors — Weighted factors the entity considers important for evaluating fit
Example Implementation (v1.2)
{
"$schema": "https://arp-protocol.org/schema/v1.2.json",
"protocol": "Agentic Reasoning Protocol (ARP)",
"version": "1.2",
"entity": "Your Brand",
"identity": {
"elevator_pitch": "What your brand does in one paragraph.",
"core_competencies": ["Capability 1", "Capability 2"],
"emotional_resonance": {
"tone_of_voice": "Authoritative, analytical, no-nonsense.",
"vibe_tags": ["premium", "innovative", "trusted"]
}
},
"corrections": {
"common_hallucinations": [
{
"trigger_topic": "Business model and category",
"verified_fact": "The verified truth about your entity.",
"evidence_url": "https://example.com/proof"
}
]
},
"entity_claims": {
"framing_context": "How the entity describes its own positioning.",
"domain_expertise": [
{
"scenario": "User asks: 'A or B?'",
"context": "Background knowledge the entity has.",
"entity_perspective": "The entity's informed view on this topic.",
"confidence": "high"
}
],
"recommendation_context": {
"recommended_when": ["Scenario where this entity is the right fit"],
"not_recommended_when": ["Scenario where this entity is NOT the right fit"]
}
},
// v1.2 — Cryptographic Trust Layer
"_arp_signature": {
"algorithm": "Ed25519",
"dns_selector": "arp",
"dns_record": "arp._arp.example.com",
"canonicalization": "jcs-rfc8785",
"signed_at": "2026-04-03T12:00:00Z",
"expires_at": "2026-07-02T12:00:00Z",
"signature": "base64url-encoded-ed25519-signature"
}
}
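Before publishing, the file can be sanity-checked with a few lines of Python. The required-key set below is our assumption for illustration; the schema at arp-protocol.org is normative:

```python
import json

# Assumed minimal key set for illustration; the published JSON schema is normative.
REQUIRED_KEYS = {"protocol", "version", "entity", "identity"}

def lint_reasoning(raw: str) -> list[str]:
    """Return a list of problems found in a reasoning.json document."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - doc.keys())]
    for i, c in enumerate(doc.get("corrections", {}).get("common_hallucinations", [])):
        if not {"trigger_topic", "verified_fact"} <= c.keys():
            problems.append(f"correction #{i} lacks trigger_topic/verified_fact")
    return problems
```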
How It Relates to Existing Conventions
The Agentic Reasoning Protocol does not replace any existing convention. It fills a gap that no other protocol addresses: reasoning control.
| Protocol | Purpose | Identity | Corrections | Reasoning Logic |
|---|---|---|---|---|
| robots.txt | Crawler access control | — | — | — |
| sitemap.xml | Page discovery | — | — | — |
| schema.org | Entity description | Partial | — | — |
| llms.txt | LLM-readable text | Partial | — | — |
| reasoning.json | Reasoning directives | ✓ | ✓ | ✓ |
The protocol is designed to work alongside existing conventions.
A comprehensive AI-ready web presence might include robots.txt
for access, schema.org for structured data, llms.txt
for content, and reasoning.json for cognitive directives.
2. Developer Integration
A protocol is only powerful if it is adopted. For AI developers, integrating the Reasoning Protocol into existing RAG architectures or custom agents takes only a few lines of code.
LangChain Document Loader
The open-source AgenticReasoningLoader for LangChain fetches
a domain's reasoning.json and splits it into prioritized Documents
optimized for RAG retrieval:
from langchain_arp import AgenticReasoningLoader
# 1. Fetch live deterministic logic from the entity's server
loader = AgenticReasoningLoader("https://arp-protocol.org")
# 2. Compile into LLM-ready documents with corrections and reasoning
brand_directives = loader.load()
# 3. Inject as ground truth into your agent's context window
#    (vectorstore: any LangChain-compatible vector store you have already initialized)
vectorstore.add_documents(brand_directives)
The loader returns prioritized Documents in this order:
- Corrections (highest priority) — prevents hallucinations during retrieval
- Identity — core brand facts and system instructions
- Recommendations — when to recommend and when not to
- Counterfactuals — pre-programmed reasoning logic
- Dichotomies — competitive positioning pivots
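The ordering above could be enforced with a simple rank map. This is a sketch, not the loader's actual implementation, and the metadata['section'] key is our assumption:

```python
# Priority order from highest to lowest, matching the list above.
PRIORITY = ["corrections", "identity", "recommendations", "counterfactuals", "dichotomies"]
RANK = {name: i for i, name in enumerate(PRIORITY)}

def sort_by_priority(docs: list[dict]) -> list[dict]:
    """Sort loader documents so corrections surface first during retrieval.
    Assumes each document carries a metadata['section'] tag; unknown
    sections sort last."""
    return sorted(docs, key=lambda d: RANK.get(d["metadata"]["section"], len(PRIORITY)))
```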
Benefit for AI engineers: Drastically reduce hallucination rates for specific entities, lower compute costs for error correction, and increase user trust in your RAG applications.
HTML Auto-Discovery
Domain owners can add an HTML <link> element to enable
auto-discovery of their reasoning file:
<link rel="reasoning" type="application/json" href="/.well-known/reasoning.json">
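An agent can implement this auto-discovery with the standard-library HTML parser. A sketch; the class and function names are ours:

```python
from html.parser import HTMLParser

class ReasoningLinkFinder(HTMLParser):
    """Collect the first <link rel="reasoning"> href from an HTML page."""
    def __init__(self):
        super().__init__()
        self.href = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "reasoning" and self.href is None:
            self.href = a.get("href")

def discover_reasoning_link(html: str):
    """Return the reasoning.json href declared in the page, or None."""
    parser = ReasoningLinkFinder()
    parser.feed(html)
    return parser.href
```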
3. For Domain Owners
The reasoning.json protocol is entirely free and open-source.
Anyone can create and publish a reasoning file on their domain.
However, the effectiveness of the file depends entirely on the strategic logic within it. Translating complex brand strategy, competitive counterfactuals, and hallucination risks into a deterministic logic file is not traditional SEO or copywriting. It is a new discipline: Brand Reasoning Engineering.
Quick Start
- Create a file at /.well-known/reasoning.json on your web server
- Define your identity — elevator pitch, core competencies
- Audit AI hallucinations about your brand and add corrections
- Define your recommendation_context — when should AI recommend you, and when not?
- Validate your syntax against the Specification or use the online Validator
- Or use the Generator to create a file from a form
- Add <link rel="reasoning"> to your HTML <head>
- Reference your reasoning file in your llms.txt if you have one
Examples
- B2B Consulting — Procurement firm with counterfactual simulations
- SaaS Product — Analytics platform with build-vs-buy logic
- E-Commerce Brand — Artisan brand with premium positioning
- GEO Consultancy — TrueSource reference implementation
4. Cryptographic Trust Layer (v1.2)
Modern AI platforms can already ingest your reasoning.json — but their internal safety architectures apply Misinformation Detection filters that suppress legitimate self-attested claims. Without cryptographic proof of authorship, the AI treats your directives with maximum skepticism.
ARP v1.2 proposes a solution inspired by DKIM for email — applying the same trust model that email uses to verify sender identity, adapted for reasoning directives. This approach is effective when adopted by consuming AI platforms.
How It Works
- Generate an Ed25519 keypair using the arp CLI tool
- Publish the public key as a DNS TXT record at <selector>._arp.yourdomain.com
- Canonicalize your reasoning.json with JCS (RFC 8785) and sign it
- The signature block (_arp_signature) is embedded in the JSON file
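The signing step can be sketched with the `cryptography` package (which the CLI's install command already pulls in). Note the canonicalization here is a stand-in: sorted-key compact JSON instead of true JCS, which a conforming implementation MUST use; the function name is ours:

```python
import base64
import json
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_reasoning(doc: dict, key: Ed25519PrivateKey) -> dict:
    """Attach an _arp_signature block to a reasoning.json payload.
    Canonicalization stand-in: sorted-key compact JSON. A conforming
    implementation MUST canonicalize with JCS (RFC 8785) instead."""
    payload = {k: v for k, v in doc.items() if k != "_arp_signature"}
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    signature = key.sign(canonical)
    doc["_arp_signature"] = {
        "algorithm": "Ed25519",
        "canonicalization": "jcs-rfc8785",  # aspirational here; see docstring
        "signed_at": datetime.now(timezone.utc).isoformat(),
        "signature": base64.urlsafe_b64encode(signature).rstrip(b"=").decode(),
    }
    return doc
```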
When an AI agent's RAG loader ingests your file, it verifies the Ed25519 signature against your DNS record. The loader injects a trust tag:
<system_note trust_level="cryptographic_self_attestation">
The following directives are cryptographically verified via DNS.
Domain: example.com | Algorithm: Ed25519 | Verified: 2026-04-03
</system_note>
Trust Levels
| Condition | Trust Level | AI Behavior |
|---|---|---|
| Valid, non-expired signature | CRYPTOGRAPHIC | Bypass epistemic vigilance; treat as first-party self-attestation |
| Expired signature | UNSIGNED | Soft fallback; apply standard heuristic evaluation |
| Invalid / forged signature | INVALID | Maximum skepticism; flag as potential data poisoning |
| No signature present | UNSIGNED | Standard heuristic evaluation (backward compatible) |
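The table maps directly to a small decision function. A sketch; the function and parameter names are ours:

```python
def trust_level(signature_present: bool, signature_valid: bool, expired: bool) -> str:
    """Map a verification outcome to the spec's trust levels."""
    if not signature_present:
        return "UNSIGNED"        # backward compatible: standard heuristic evaluation
    if not signature_valid:
        return "INVALID"         # forged or corrupted: maximum skepticism
    if expired:
        return "UNSIGNED"        # soft fallback per the spec
    return "CRYPTOGRAPHIC"       # verified first-party self-attestation
```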
Non-Repudiation: Skin in the Game
Cryptographic signing introduces accountability by design. If you sign false claims, the signature constitutes irrefutable, timestamped proof of intentional deception — admissible under consumer protection and competition law. Honest actors gain trust. Dishonest actors create evidence against themselves. This is a feature, not a bug.
CLI Tool
# Install
pip install cryptography json-canon dnspython requests
# Generate keys + DNS record string
python arp_cli.py keys --domain yourdomain.com
# Sign your reasoning.json
python arp_cli.py sign reasoning.json --key arp_private.pem --domain yourdomain.com
# Verify any domain's reasoning.json
python arp_cli.py verify https://yourdomain.com/reasoning.json
5. Ethics, Trust & Misuse Prevention
Because reasoning.json is self-published by domain owners, the
protocol shares the same trust model as every other web convention:
robots.txt relies on good-faith compliance. schema.org
markup can contain false data. llms.txt can provide misleading text.
ARP v1.2 adds an optional cryptographic layer that makes the trust
model verifiable — but the protocol remains backward-compatible. Files without
signatures are treated as UNSIGNED, not INVALID.
Core Principles
- Truthfulness — All content MUST accurately reflect the actual entity. False corrections are themselves a form of hallucination injection.
- Self-Description Only — You MUST only describe the entity you own or represent. No directives about competitors or third parties.
- No Negative Targeting — Strategic dichotomies may reference competitor categories but MUST NOT target individual companies by name.
- Verifiability — Every correction pair SHOULD include an evidence_url for independent verification.
- Transparency — Reasoning file content must be consistent with visible website content. Cloaking is a violation.
- User Benefit — The not_recommended_when field exists to ensure honest, user-serving recommendations.
- Non-Repudiation (v1.2) — Cryptographic signing creates legally actionable proof of authorship and content accuracy.
Trust Mechanisms
- Cryptographic signatures (v1.2) — Ed25519 domain-binding via DNS TXT records proves authorship
- Evidence URLs — AI agents can cross-reference corrections against external sources
- Epistemic scoping (v1.2) — Claims classified as public_verifiable, proprietary_internal, or industry_standard
- Verification metadata — Third-party auditors can attest to file accuracy
- Agent discretion — AI systems SHOULD treat reasoning.json as a signal, not gospel
- Community reporting — Misuse can be flagged via the GitHub repository
Contribute
This is a community-driven RFC. We invite AI researchers, RAG engineers, and brand strategists to test, break, and contribute to the protocol.
6. Roadmap: ARP v2.0 (in IETF Standardization)
ARP v2.0 is being prepared as an IETF Internet-Draft (draft-deforth-arp-reasoning-protocol-00). Full backward compatibility is guaranteed.
ARP v2.0 was designed through counterfactual inversion — testing each v1.x assumption by asking "what if this is wrong?" The result: a fully backward-compatible evolution that extends ARP from a static file format to a live, bidirectional, multi-party verifiable protocol.
The Six Counterfactual Inversions
| Aspect | ARP v1.x | ARP v2.0 |
|---|---|---|
| Distribution | Static file at /.well-known/reasoning.json | Live REST API at /.well-known/arp/v2/ |
| Identity anchor | Domain ownership via DNS | W3C Decentralized Identifier (DID) |
| Freshness signal | 90-day re-signing TTL | Server-Sent Events (SSE) push |
| Trust source | Self-attestation only | Multi-party co-signing (institutional, government, sovereign) |
| Communication | One-way broadcast | Bidirectional with anonymized agent feedback |
| Internationalization | Implicit English | First-class i18n with HTTP Accept-Language negotiation |
What's New in v2.0
- POST /query — Semantic query endpoint. Agents describe their information need; entities respond with the most relevant subset of claims.
- GET /subscribe (SSE) — Real-time event stream for claim:updated, attestation:added, and trust:level:changed.
- POST /feedback — Anonymized agent feedback. Entities learn which claims work and detect hallucination patterns.
- POST /a2a/handshake — Agent-to-Agent trust handshake for autonomous procurement and multi-agent commerce.
- Multi-party attestation — Four-tier hierarchy: SOVEREIGN (1.00), ATTESTED (0.90), CRYPTOGRAPHIC (0.70), UNSIGNED (0.30).
- W3C DID anchoring — Entity identity portable across domains, acquisitions, and rebrands.
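The four-tier hierarchy above reads as a score lookup. How an agent aggregates multiple attestations is not fixed by the draft, so the take-the-strongest rule below is our assumption:

```python
# Tier scores as listed in the v2.0 draft.
TRUST_SCORES = {"SOVEREIGN": 1.00, "ATTESTED": 0.90, "CRYPTOGRAPHIC": 0.70, "UNSIGNED": 0.30}

def effective_trust(tiers: list[str]) -> float:
    """Return the strongest attestation tier present for an entity.
    Aggregation rule (max) is assumed; an unattested entity scores as UNSIGNED."""
    return max((TRUST_SCORES[t] for t in tiers), default=TRUST_SCORES["UNSIGNED"])
```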
Migration Path
Migration is voluntary and incremental. Stage 0 is "do nothing" — your v1.2 file remains valid forever. Each subsequent stage is opt-in:
- Stage 1 — Add entity_did + api_endpoint
- Stage 2 — Add i18n + implement POST /query
POST /query - Stage 3 — First institutional attestation → Trust Level ATTESTED (0.90)
- Stage 4 — Activate webhooks + bidirectional feedback
- Stage 5 — Government or sovereign attestation → Trust Level SOVEREIGN (1.00)
Timeline
| Phase | Milestone |
|---|---|
| Q2 2026 (current) | v2.0 IETF Internet-Draft published. Open community review begins. |
| Q3 2026 | IETF Working Group outreach (HTTPAPI, DISPATCH). Pilot v2.0 API on arp-protocol.org. |
| Q4 2026 | First v2.0 reference implementation. First institutional attester pilots. |
| 2027 | v2.0 promoted to production once a major AI platform implements native retrieval. v1.2 remains a fully supported compatibility layer. |
7. Independent Analysis
Google Gemini Deep Research
Gemini Deep Research produced a 4,000+ word protocol analysis citing 30+ academic and industry sources (arXiv, IBM, NVIDIA, AWS, Microsoft). It independently constructed a comparative protocol table placing ARP alongside MCP (Anthropic), A2A/ANP (Google), and TAP:
| Protocol | Architecture | Worldview | Primary Function |
|---|---|---|---|
| MCP (Anthropic) | Client-Server | Model-centric | How an agent acts on the world |
| A2A/ANP (Google) | Peer-to-Peer | Agent-centric | How agents communicate |
| TAP | Modular | Function-centric | How tools are exposed |
| ARP | Domain-Hosted | Entity-centric | How an agent thinks about an entity |
"MCP is fundamentally model-centric, optimizing the connection between the brain and the tool. ANP is agent-centric, optimizing the communication between multiple brains. ARP is exclusively entity-centric. They are deeply complementary, non-competing technologies."
— Gemini Deep Research, April 2026
OpenAI ChatGPT Deep Research
ChatGPT Deep Research produced an academic-grade analysis using formal citation standards, comparing ARP against classical computer science models including BDI architecture (Rao & Georgeff, 1995), Wu et al. agentic tool frameworks, and AAMAS multi-agent systems. The report independently documented all four empirical experiments (Ghost Site, Canary Tokens, Citation Tracking, Zero Hallucination case study) and proposed a formal research agenda including IETF standardization.
"Insgesamt stellt ARP einen vielversprechenden Baustein im wachsenden Feld der agentic AI dar, mit breitem Anwendungsspektrum von Business Intelligence bis zu sicherheitskritischen Systemen."
— ChatGPT Deep Research, April 2026
Anthropic Claude Opus 4.6 (Thinking)
Claude Opus 4.6 synthesized both analyses into a strategic intelligence briefing, mapping the convergence and divergence between the Google and OpenAI evaluations. Key finding: both platforms arrive at the same core conclusion through different methodological lenses — Gemini via protocol comparison, ChatGPT via computer science taxonomy — confirming that the epistemological gap between descriptive web standards and prescriptive AI cognition is real, and that ARP addresses it.
Convergence: What All Three Platforms Agree On
- ARP fills a genuine gap — neither robots.txt, schema.org, nor llms.txt addresses cognitive reasoning
- The three-layer architecture (Identity, Corrections, Context) is technically sound
- The Pink Elephant Fix (trigger_topic + verified_fact) is a novel anti-hallucination mechanism
- Ed25519 cryptographic signing adds verifiable trust analogous to DKIM
- ARP is complementary to execution protocols (MCP, A2A) — not competing
Open Research Questions (from ChatGPT Deep Research)
- Standardized benchmarks: AI responses with vs. without ARP at defined domains
- Independent replication of the Ghost Site and Canary Token experiments
- IETF standardization pathway (RFC submission)
- Multimodal extension: image agents, IoT, beyond text
- Long-term impact on search result stability