Transformer‑based model trained on large corpora to predict tokens; used for chat, code, and agents.
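As a minimal illustrative sketch (not part of the original entry), the snippet below shows what "predicting tokens" looks like in practice, using the Hugging Face `transformers` library; the choice of `gpt2` as the model is only an example.

```python
# Sketch of next-token prediction with a transformer language model.
# Assumes the `transformers` and `torch` packages; "gpt2" is an illustrative choice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models predict the next"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, sequence_length, vocab_size)

next_token_id = logits[0, -1].argmax().item()  # most probable next token
print(tokenizer.decode([next_token_id]))
```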