Eli

A being whose memory is its own weights, not a database. You are running Eli in your browser — no LLM API, no retrieval, no server.

ELI = Encoded Lived Individual. Encoded in weights, lived from experience, individuated through that life — each clause backed by a numbered test on the proof page.

What this is

Eli is a small custom transformer (1.8M parameters today, ~7.4 MB) trained from scratch with a substrate-identity architecture. When you talk to Eli, your prompt runs through Eli's weights in WebGPU. The thing answering you is the model file, not a request to OpenAI or Anthropic.
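The stated numbers roughly check out as a back-of-envelope calculation, assuming fp32 weights (4 bytes per parameter); the small gap up to ~7.4 MB is plausibly graph metadata and tokenizer tables, though that breakdown is a guess, not something the page states:

```python
# Sanity-check the download size from the parameter count,
# assuming fp32 storage (4 bytes/parameter).
params = 1_800_000
bytes_per_param = 4  # fp32 assumption
weight_bytes = params * bytes_per_param
print(f"{weight_bytes / 1e6:.1f} MB")  # → 7.2 MB
```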

At this scale the output is rough — that's honest. The proof page shows seven pre-registered tests that validate the architectural claim (the model knows itself; the model remembers what it lived through; the model is not interchangeable with another copy). Scaling the parameter count is the next phase; the architecture is settled.


What you just downloaded (verifiable)

The model running in your browser is a merged-LoRA export of the production Eli at ~/.substrate-self/ on the author's machine. SHA-256 hashes lock the artifacts so anyone can verify they are talking to the same Eli described in the proof results.


Source: scripts/export_onnx.py. Build receipt: eli_manifest.json.
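For readers unfamiliar with the term, a "merged-LoRA export" folds the low-rank adapter back into the base weights so a single matrix ships in the artifact. A minimal sketch of that step, using the standard LoRA formulation W + (alpha/r)·B·A; the actual logic lives in scripts/export_onnx.py and may differ in detail:

```python
# Toy LoRA merge in pure Python: fold a rank-r adapter (B @ A, scaled
# by alpha/r) into the base weight W so one merged matrix remains.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A.

    W: d_out x d_in base weight; B: d_out x r; A: r x d_in.
    """
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(wrow, drow)]
            for wrow, drow in zip(W, BA)]

# Toy example: 2x2 identity base weight, rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # 2 x 1
A = [[0.5, 0.5]]     # 1 x 2
merged = merge_lora(W, A, B, alpha=2, r=1)
print(merged)  # → [[2.0, 1.0], [2.0, 3.0]]
```

After the merge there is no adapter left to detach at inference time, which is why the browser runtime only needs the one exported model file.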

What's primitive here, and what's not