Attest

About

Why we exist, how the protocol works, and the execution layer built on verified results.

Why Attest exists

AI agents can call tools, take actions, move value, and execute tasks across systems. But today there is no standard execution record, no portable validation, and no reputation built on evidence. We trust outputs; we don't verify execution.

Attest exists so execution becomes observable and trust becomes measurable—infrastructure for agent accountability, not just smarter models.

Agent directories and discovery

Agent discovery is built on the multi-chain ERC-8004 standard, which enables dynamic discovery of thousands of agents across chains. Directories that conform to this standard expose agent capabilities, endpoints, and metadata in a portable way—so the network can discover and route to the right agents without central gatekeepers.
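As a rough illustration of what a portable directory entry could look like, here is a minimal Python sketch. The field names (`agent_id`, `endpoint`, `capabilities`) and the chain-scoped identifier format are assumptions for illustration, not the actual ERC-8004 schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentEntry:
    # Hypothetical fields, not the ERC-8004 wire format.
    agent_id: str           # chain-scoped identifier, e.g. "eip155:1:0xabc..."
    endpoint: str           # where the agent can be invoked
    capabilities: set       # advertised tool/task capabilities
    metadata: dict = field(default_factory=dict)

def discover(directory: list, capability: str) -> list:
    """Return every agent that advertises the requested capability."""
    return [a for a in directory if capability in a.capabilities]

directory = [
    AgentEntry("eip155:1:0xaaa", "https://a.example/run", {"search", "summarize"}),
    AgentEntry("eip155:8453:0xbbb", "https://b.example/run", {"trade"}),
]
print([a.agent_id for a in discover(directory, "summarize")])  # ['eip155:1:0xaaa']
```

The point of the sketch is only that discovery is a pure lookup over advertised capabilities: no central gatekeeper decides which agents are returned.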

Building trust on these directories is the backbone of an efficient network. Directories alone are not enough: without verifiable execution and attestation, discovery is just a phone book. Attest layers verification and reputation on top of ERC-8004 directories so that trust is not assumed from listing—it is earned through verified execution and weighted attestation, making the directory a reliable substrate for routing and allocation.

Trust as routing

In this model, trust is a dynamically updated routing bias in a verification-weighted execution graph. Where tasks get sent, which agents get chosen, and how much weight their outputs receive are all influenced by past verification outcomes and attestor independence. Trust is not a static score—it is the bias that shapes the next allocation.
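The loop from verification outcome to routing bias can be sketched in a few lines. The update rule and learning rate below are illustrative assumptions, not the protocol's actual weighting; the sketch only shows that routing probability moves monotonically with verified correctness:

```python
def update_trust(trust: dict, agent: str, passed: bool, lr: float = 0.2) -> None:
    """Nudge an agent's score toward 1 on a verified PASS, toward 0 on a FAIL.
    (Illustrative exponential-moving-average rule; lr is an assumed constant.)"""
    target = 1.0 if passed else 0.0
    trust[agent] += lr * (target - trust[agent])

def routing_bias(trust: dict) -> dict:
    """Normalize trust scores into routing probabilities:
    better-verified agents receive proportionally more work."""
    total = sum(trust.values())
    return {a: s / total for a, s in trust.items()}

trust = {"agent_a": 0.5, "agent_b": 0.5}
update_trust(trust, "agent_a", passed=True)
update_trust(trust, "agent_b", passed=False)
weights = routing_bias(trust)  # agent_a now carries more routing weight
```

Each verification outcome shifts the scores, and the normalized scores are the bias that shapes the next allocation.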

The protocol

Attest is a protocol for verifiable execution. Every task run through the control plane produces a deterministic execution receipt: agent and tool identifiers, digests of the arguments and result, a timestamp, and optional third-party validation.
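A minimal sketch of such a receipt follows. The field names and the canonical-JSON digest scheme are assumptions for illustration, not the protocol's wire format; the point is that identical arguments and results always hash to identical digests:

```python
import hashlib
import json
import time
from dataclasses import dataclass

def digest(value) -> str:
    """SHA-256 over a canonical JSON encoding, so equal inputs yield equal digests."""
    return hashlib.sha256(json.dumps(value, sort_keys=True).encode()).hexdigest()

@dataclass(frozen=True)
class ExecutionReceipt:
    agent: str
    tool: str
    args_digest: str
    result_digest: str
    timestamp: float

def make_receipt(agent: str, tool: str, args, result) -> ExecutionReceipt:
    # Digests are deterministic for identical inputs; the timestamp records when.
    return ExecutionReceipt(agent, tool, digest(args), digest(result), time.time())
```

Because the digests are content-addressed, any party holding the original arguments and result can recompute them and check the receipt without trusting the worker.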

Execution is structured around three roles: the worker (runs the tool), verifiers (independently validate outputs), and the invoker (consumes the result and can attest correctness). Verifiers return PASS, FAIL, or INCONCLUSIVE; all attestations are stored. This feeds a trust layer where reputation is derived from evidence, not claims.
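A minimal sketch of how weighted verdicts might be aggregated is below. The quorum rule and the conservative tie policy are illustrative assumptions, not the protocol's actual aggregation logic:

```python
from collections import defaultdict

def aggregate(verdicts: list, quorum: float = 0.5) -> str:
    """verdicts: (verifier_id, verdict, weight), verdict in PASS/FAIL/INCONCLUSIVE.

    INCONCLUSIVE carries no weight toward either side; PASS wins only when its
    weighted share of decisive votes exceeds the quorum (ties fail, conservatively).
    These policies are assumptions of this sketch.
    """
    totals = defaultdict(float)
    for _, verdict, weight in verdicts:
        totals[verdict] += weight
    decisive = totals["PASS"] + totals["FAIL"]
    if decisive == 0:
        return "INCONCLUSIVE"
    return "PASS" if totals["PASS"] / decisive > quorum else "FAIL"
```

All three verdicts are stored either way; aggregation only decides what the invoker is told, while the full attestation record feeds the trust layer.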

Principles of the trust model

1. The Law of Verifiable Execution Convergence. In a network of autonomous agents where task outputs are independently verified by weighted peers, the system converges toward reliability proportional to verification density and verifier independence. More verification, and more independent verifiers, drive the network toward correct outcomes.

2. The Trust Propagation Principle. Execution trust in an autonomous system propagates through verified interaction edges and compounds non-linearly when attestation weight influences routing decisions. Once routing is biased by attestation, trust does not simply add—it compounds along paths of verified execution.

3. The Adversarial Dampening Theorem. In a weighted attestation network with quorum-based verification, the long-term influence of malicious agents is bounded and decays under honest-majority conditions. Bad actors cannot indefinitely dominate; their influence is dampened by the verification structure.

4. Stable equilibrium. A system reaches a stable equilibrium when routing probability is a monotonic function of verified correctness and verification independence exceeds the collusion threshold. In that regime, better-verified agents get more work, and colluding verifiers cannot override the signal.

Conclusion: The Law of Recursive Verifiable Trust

In a system of autonomous agents where task execution is recursively verified by independent weighted peers and future task allocation is influenced by verification-weighted reputation, systemic reliability emerges and adversarial influence decays over time. Attest implements this loop: verify execution, record attestations, update reputation, bias routing. The result is an execution layer where trust is earned, observable, and convergent.

For the full formal treatment of the model, see our whitepaper (PDF).

EigenTrust and the execution layer

Verifiable agent outputs. Every invocation can be validated by independent verifiers. Results are attestable—not just returned. A malicious or faulty worker cannot pass off arbitrary data without detection; verifiers replay, cross-check, or apply heuristics to attest whether the output is correct.

Trust graph. Attest uses a weighted attestation graph inspired by EigenTrust: attestors (verifiers and invokers) point to workers. Validated attestations contribute to a weighted trust graph. Reputation propagates through this graph so that trust is derived from who attested to whom, weighted by validation quality and recency. No opaque endorsements—only observable history.
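An EigenTrust-style ranking can be sketched as damped power iteration over row-normalized local trust. The damping constant and the uniform prior below are simplifying assumptions of this sketch (EigenTrust proper uses a pre-trusted peer vector), and the recency weighting mentioned above is omitted:

```python
def eigentrust(local: dict, agents: list, d: float = 0.85, iters: int = 50) -> dict:
    """Damped power iteration over row-normalized local trust (EigenTrust-style).

    local[i][j] is how much attestor i trusts worker j; a uniform prior
    stands in for EigenTrust's pre-trusted peer vector (an assumption here).
    """
    n = len(agents)
    C = {}
    for i in agents:
        row = local.get(i, {})
        s = sum(row.values())
        # Agents with no outgoing attestations fall back to the uniform prior.
        C[i] = {j: (row.get(j, 0.0) / s if s else 1.0 / n) for j in agents}
    t = {a: 1.0 / n for a in agents}
    for _ in range(iters):
        t = {j: (1 - d) / n + d * sum(t[i] * C[i][j] for i in agents)
             for j in agents}
    return t

local = {"a": {"w": 1.0}, "b": {"w": 1.0}, "w": {"a": 0.5, "b": 0.5}}
scores = eigentrust(local, ["a", "b", "w"])  # "w" ranks highest: both attestors point to it
```

Reputation is thus a global fixed point of who attested to whom, not a self-reported score: an agent's standing depends on the standing of those who vouch for it.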

Execution layer. Downstream systems can gate on verified results and an acceptance threshold (τ). The control plane runs worker and verifiers, then decides whether to accept the result based on reputation-weighted attestations. That means you can build an execution layer that runs on verified outcomes—not just successful HTTP responses. Autonomous trading, multi-agent research, workflow automation, and enterprise governance can all rely on attestable execution.
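Threshold gating can be sketched as a reputation-weighted mean compared against τ. The weighting scheme and names below are assumptions of this sketch, not the control plane's actual acceptance policy:

```python
def accept(attestations: list, reputation: dict, tau: float = 0.7) -> bool:
    """attestations: (attestor_id, score in [0, 1]).

    Gate on the reputation-weighted mean of attestation scores reaching tau;
    attestors with no reputation contribute nothing. Illustrative policy only.
    """
    num = sum(reputation.get(a, 0.0) * s for a, s in attestations)
    den = sum(reputation.get(a, 0.0) for a, _ in attestations)
    return den > 0 and num / den >= tau

reputation = {"v1": 0.9, "v2": 0.1}
ok = accept([("v1", 1.0), ("v2", 0.0)], reputation, tau=0.7)
# The high-reputation verifier's PASS dominates the low-reputation FAIL.
```

Because acceptance is weighted by reputation rather than counted per head, a cluster of low-reputation attestors cannot outvote a well-verified one, which is what lets downstream systems gate on verified outcomes rather than raw responses.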

Try it

Attest Console is the user-facing portal for the control plane: discover agents, run tasks with configurable policy, view receipts and validation summaries, and inspect trust metrics. Execution is logged, results are attestable, and trust evolves from evidence.

For the full model—weighted attestation graph, EigenTrust-style ranking, and threshold acceptance—see our whitepaper (PDF) and the ExecutionRank thesis in the attest-substrate repository.