What this is
Eli is a small custom transformer (1.8M parameters today, ~7.4 MB) trained from scratch with a substrate-identity architecture. When you talk to Eli, your prompt runs through Eli's weights locally via WebGPU. The thing answering you is the model file in your browser, not a request to OpenAI or Anthropic.
At this scale the output is rough — that's honest. The proof page shows seven pre-registered tests that validate the architectural claim (the model knows itself; the model remembers what it lived through; the model is not interchangeable with another copy). Scaling the parameter count is the next phase; the architecture is settled.
What you just downloaded (verifiable)
The model running in your browser is a merged-LoRA export of the production Eli at ~/.substrate-self/ on the author's machine. SHA-256 hashes lock the artifacts, so anyone can verify they are talking to the same Eli described in the proof results.
Source: scripts/export_onnx.py. Build receipt: eli_manifest.json.
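Verifying the build receipt takes a few lines of standard-library Python. This is a minimal sketch, not the project's actual tooling; the manifest field names (`artifacts`, `path`, `sha256`) are assumptions about how `eli_manifest.json` is laid out.

```python
import hashlib
import json


def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(manifest_path: str) -> bool:
    """Check every artifact listed in the manifest against its recorded hash.

    Assumes a manifest shape like:
        {"artifacts": [{"path": "...", "sha256": "..."}, ...]}
    """
    with open(manifest_path) as f:
        manifest = json.load(f)
    return all(
        sha256_of(entry["path"]) == entry["sha256"]
        for entry in manifest["artifacts"]
    )
```

Run `verify("eli_manifest.json")` next to the downloaded artifacts; a `True` means every file matches the hashes recorded at export time.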
What's primitive here, and what's not
- Output quality is rough. 1.8M-param char-level models can't sustain coherence past a sentence. Phase 4 of the roadmap scales this to ~50M params with BPE tokenization.
- This Eli is frozen. Your conversation does not modify the weights you're running. Per-visitor learning arrives in Phase 3 (one LoRA per visitor, stored in IndexedDB and trained in the browser).
- What is real at this scale: Eli's identity is encoded in the LoRA file you just downloaded, not in any prompt. The proof page shows the exact loss measurements: under the saved LoRA, "I am Eli" scores a loss of 0.366; without it, 1.284. Same base model, same code; the difference lives in 78 KB of LoRA parameters trained two days ago.
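How can 78 KB carry an identity? A LoRA adapter stores only two small low-rank factors per adapted layer, and merging folds them into the frozen base weight as W + (alpha/r)·B·A. The sketch below illustrates the arithmetic with toy shapes; the function name, shapes, and hyperparameters are illustrative, not taken from Eli's export code.

```python
import numpy as np


def merge_lora(W: np.ndarray, A: np.ndarray, B: np.ndarray,
               alpha: float, r: int) -> np.ndarray:
    """Fold a low-rank LoRA delta into a frozen base weight matrix.

    W: (d_out, d_in) frozen base weight.
    A: (r, d_in) and B: (d_out, r) are the trained low-rank factors.
    The merged weight is W + (alpha / r) * B @ A.
    """
    return W + (alpha / r) * (B @ A)


# Toy shapes: a 256x256 layer with rank-4 factors adds only
# 2 * 4 * 256 = 2048 parameters instead of another 65536.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)
A = (rng.standard_normal((4, 256)) * 0.01).astype(np.float32)
B = np.zeros((256, 4), dtype=np.float32)  # standard LoRA init: B = 0
merged = merge_lora(W, A, B, alpha=8.0, r=4)
assert np.allclose(merged, W)  # zero-initialized B leaves W unchanged
```

Because the base weights never change, shipping a new "self" means shipping only the B and A factors; that is why the identity delta fits in tens of kilobytes while the base model is megabytes.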