This product was built by the same system described on the homepage.

Prove your AI agent did what it said it did.

A drop-in spec that gives any AI coding agent cryptographic proof of every pipeline run. No private keys. No infrastructure. Just Sigstore.

The Problem

You tell your AI agent to build something. It says "done."

Your agent produces files, runs tests, reports success. But there is no audit trail. No way to know if the output was tampered with. No proof for your team, your auditors, or anyone else that the work actually happened the way the agent claims.

How It Works

Five steps. Zero infrastructure.

Every pipeline run produces a tamper-evident attestation bundle. Signed with your identity, logged to a public transparency ledger. The whole chain, from intent to proof, is automated.

1. Intent chain

When the user requests work, the raw text is SHA-256 hashed (never stored) and appended to a tamper-evident JSONL chain. Privacy by design. Proves a request was made without revealing what was said.
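
A minimal sketch of such a chain, assuming a JSONL file where each entry commits to the previous one via a `prev_hash` field. The `rev`, `raw_input_hash`, and `spec_hash_after` fields mirror the attestation bundle shown below; the chaining fields (`prev_hash`, `entry_hash`) and the helper name are illustrative, not the library's actual API:

```python
import hashlib
import json
import time
from pathlib import Path

def append_intent(chain_path: Path, raw_input: str, spec_hash: str) -> dict:
    """Hash the raw request (never stored) and append a chained JSONL entry."""
    entries = []
    if chain_path.exists():
        entries = [json.loads(l) for l in chain_path.read_text().splitlines() if l]
    # Each entry commits to its predecessor, so editing or deleting any
    # earlier line breaks every hash after it.
    prev = entries[-1]["entry_hash"] if entries else "0" * 64
    entry = {
        "rev": len(entries) + 1,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "raw_input_hash": hashlib.sha256(raw_input.encode()).hexdigest(),
        "spec_hash_after": spec_hash,
        "prev_hash": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with chain_path.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Only the digest of the user's words ever touches disk, which is what makes the chain safe to publish alongside the attestation.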

2. Hash collection

After tasks complete, every output file is SHA-256 hashed. The hashes are collected into a manifest, a fingerprint of what the agent produced.
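
A sketch of the manifest step, assuming outputs live under a single directory (the function name is illustrative):

```python
import hashlib
from pathlib import Path

def collect_hashes(output_dir: Path) -> list[dict]:
    """Walk the output tree and fingerprint every file with SHA-256."""
    artifacts = []
    # Sorted order makes the manifest deterministic across runs.
    for path in sorted(output_dir.rglob("*")):
        if path.is_file():
            artifacts.append({
                "path": str(path.relative_to(output_dir)),
                "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
            })
    return artifacts
```

Changing a single byte in any output file changes its digest, so the manifest pins the exact bits the agent produced.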

3. Bundle creation

A JSON attestation bundle is created with all hashes, timestamps, task metadata, and the intent chain. This is the claim: "these artifacts were produced from this request at this time."

4. Sigstore keyless signing

The bundle is signed using Sigstore's OIDC flow. Your GitHub or Google identity, verified in real time. No private keys to manage. No PKI. The signature is ephemeral and tied to your identity at the moment of signing.
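
A minimal sketch of driving the keyless flow from Python, assuming the `sigstore` CLI installed by `pip install sigstore` (the exact command shape wrapped by `attest.py` is an assumption):

```python
import subprocess
import sys

def sign_command(bundle_path: str) -> list[str]:
    # `sigstore sign` runs the OIDC browser flow (GitHub/Google login),
    # obtains a short-lived certificate, signs the file, and uploads the
    # entry to Rekor. No private keys are created or stored.
    return [sys.executable, "-m", "sigstore", "sign", bundle_path]

# Usage (opens a browser for identity verification):
# subprocess.run(sign_command("attestation.json"), check=True)
```

The certificate expires minutes after issuance; the Rekor entry is what proves the signature was made while it was valid.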

5. Rekor transparency log

The signed attestation is automatically logged to Rekor, a public immutable ledger. Anyone can verify the entry. The timestamp is canonical. You cannot backdate or forge it.
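
Entries can also be fetched programmatically. A small sketch against Rekor's public REST API, which supports lookup by log index:

```python
import json
import urllib.request

REKOR = "https://rekor.sigstore.dev"

def rekor_entry_url(log_index: int) -> str:
    # Look up a transparency-log entry by its index.
    return f"{REKOR}/api/v1/log/entries?logIndex={log_index}"

def fetch_entry(log_index: int) -> dict:
    # Returns the logged entry, including its inclusion proof and
    # integrated timestamp -- the canonical, unforgeable "when".
    with urllib.request.urlopen(rekor_entry_url(log_index)) as resp:
        return json.load(resp)
```

Because inclusion is proven against the log's Merkle root, backdating an entry would require forging the entire log.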

What's in a Bundle

Annotated attestation bundle

Every field traces back to a real cryptographic operation. The intent chain proves what was requested. The artifacts prove what was produced. Rekor proves when it happened.

{
  "predicateType": "natural-language-session/v1",
  "predicate": {
    "invocation": {
      "configSource": "sha256-of-spec-file",     ← what was requested
      "parameters": "sha256-of-task-decomposition",
      "intent_chain": [               ← tamper-evident request log
        {
          "rev": 1,
          "timestamp": "2026-04-28T12:00:00Z",
          "raw_input_hash": "abc123...",        ← SHA-256 of user's words
          "spec_hash_after": "def456..."
        }
      ]
    },
    "output": {
      "artifacts": [                   ← what was produced
        {"path": "src/login.py", "sha256": "abc123..."},
        {"path": "tests/test_login.py", "sha256": "def456..."}
      ]
    },
    "timestamp": {
      "start": "2026-04-28T12:00:00Z",         ← when it happened
      "end": "2026-04-28T12:05:00Z"
    },
    "rekor": {
      "logIndex": "1387966928",                 ← public, immutable proof
      "entryUrl": "https://search.sigstore.dev/?logIndex=1387966928"
    }
  }
}
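
Verification is just recomputation: anyone holding the bundle can re-hash the artifacts and compare against the attested digests. A minimal sketch (field paths follow the bundle above; the helper name is illustrative):

```python
import hashlib
import json
from pathlib import Path

def verify_artifacts(bundle: dict, root: Path = Path(".")) -> list[str]:
    # Recompute each artifact's SHA-256 and return the paths that no
    # longer match the attested digests (empty list == untampered).
    mismatches = []
    for art in bundle["predicate"]["output"]["artifacts"]:
        digest = hashlib.sha256((root / art["path"]).read_bytes()).hexdigest()
        if digest != art["sha256"]:
            mismatches.append(art["path"])
    return mismatches

# Usage:
# bundle = json.loads(Path("attestation.json").read_text())
# assert verify_artifacts(bundle) == [], "artifacts were modified"
```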

Drop-in Setup

Three steps. That's it.

01. Install Sigstore

pip install sigstore

02. Copy the attestation library

cp -r lib/ your-project/lib/
mkdir -p your-project/.state/{attestations,intents}

03. Run

# Self-test (no signing)
python lib/attest.py --dry-run

# Real attestation
python lib/attest.py path/to/spec.md path/to/tasks/

Tech Stack

Sigstore · Rekor · OIDC · Python · SHA-256 · JSONL

Results

-- Attested pipeline runs
100% Verification success rate
<5s Time per attestation

Who This Is For

INDIVIDUAL / ACCOUNTABILITY

Solo developers

For solo developers using AI agents who want accountability: know what your agent actually did, not just what it claimed.

TEAM / AUDIT TRAIL

Teams

For teams adopting coding agents that need an audit trail: every pipeline run produces verifiable evidence your team can inspect.

ENTERPRISE / COMPLIANCE

Regulated industries

SOC 2, HIPAA: audit-ready from day one. Cryptographic proof that your AI agent operated within your governance controls. No retroactive paperwork.

View on GitHub ↗