Information about targets leaks into features or prompt context, inflating metrics.

Regularization that replaces hard labels with softened targets to reduce overconfidence.
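
A minimal sketch of the softened-target construction: each one-hot label is mixed with a uniform distribution over the classes (function name and epsilon value illustrative).

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Replace one-hot targets with softened targets:
    (1 - eps) on the true class, eps spread uniformly over all classes."""
    one_hot = np.eye(num_classes)[labels]
    return (1.0 - eps) * one_hot + eps / num_classes

# e.g. class 2 of 4 with eps=0.1 -> [0.025, 0.025, 0.925, 0.025]
print(smooth_labels(np.array([2]), num_classes=4, eps=0.1))
```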

One‑time signature scheme built from hash functions; the basis for hash‑based post‑quantum schemes such as XMSS and SPHINCS.
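
A toy Lamport-style one-time signature over a SHA-256 digest (a sketch, not production code): the private key is 256 pairs of random secrets, the public key is their hashes, and the signature reveals one preimage per message-digest bit. The key must never sign twice.

```python
import os, hashlib

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random 32-byte secrets; public key is their hashes
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(digest):
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # reveal one secret per bit of the message digest; the key is now spent
    return [sk[i][bit] for i, bit in enumerate(bits(H(msg)))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits(H(msg))))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)         # valid signature
assert not verify(pk, b"tampered", sig)  # altered message fails (w.h.p.)
```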

Time between a request and its response; critical for UX in wallets, L2s, and inference.

Compressed representation space in which generative models operate; encodes semantic structure.

Base blockchain that provides security and consensus; examples include Bitcoin, Ethereum, and Acki Nacki.

Base blockchain that handles consensus and data availability directly on its main chain.

Scaling networks built on top of a Layer 1, batching or compressing transactions and posting proofs back to L1.

Scaling system that executes transactions off‑chain or in rollups while inheriting L1 security.

App‑specific or ecosystem rollups built atop L2s for specialized performance or privacy.

Normalization technique applied across features for each token; stabilizes training in transformers.
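
A NumPy sketch of the per-token computation: normalize each token's feature vector to zero mean and unit variance across the feature axis, then apply a learned scale and shift (gamma and beta, initialized trivially here).

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """x: (tokens, features). Normalize across the feature axis per token."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.random.randn(4, 8)               # 4 tokens, 8 features
y = layer_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(y.mean(axis=-1), y.std(axis=-1))  # ~0 and ~1 per token
```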

Minting deferred until purchase; the creator signs a voucher off‑chain, and the buyer pays the gas at claim time.
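
A heavily simplified sketch of the flow only: real systems typically use EIP-712 typed-data signatures verified on-chain, whereas here an HMAC stands in for the creator's signature and every name is hypothetical.

```python
import hmac, hashlib, json

CREATOR_KEY = b"creator-secret"  # stand-in for the creator's signing key

def sign_voucher(token_uri, price_wei):
    # the creator signs off-chain; nothing is minted yet, so no gas is spent
    payload = json.dumps({"uri": token_uri, "price": price_wei}).encode()
    return payload, hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()

def redeem(payload, sig):
    # at claim time the contract would verify the signature, take payment,
    # and mint to the buyer, who pays the gas
    ok = hmac.compare_digest(
        sig, hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest())
    return "minted to buyer" if ok else "invalid voucher"

payload, sig = sign_voucher("ipfs://example/1.json", price_wei=10**17)
print(redeem(payload, sig))
```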

Scalar step size for gradient updates; too high and training diverges, too low and it converges slowly.
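
The basic update, sketched on a 1-D quadratic: θ ← θ − η·∇L(θ). With η too large the iterates blow up; too small and they barely move (values illustrative).

```python
def descend(lr, theta=5.0, steps=20):
    # minimize L(theta) = theta^2, whose gradient is 2 * theta
    for _ in range(steps):
        theta -= lr * 2 * theta
    return theta

print(descend(lr=0.1))   # converges toward 0
print(descend(lr=0.01))  # converges, but slowly
print(descend(lr=1.1))   # diverges: |theta| grows every step
```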

Smart‑contract system for collateralized borrowing and interest‑bearing deposits.

Client that verifies headers and proofs without storing full chain state; ideal for wallets and bridges.

Layer‑2 network for Bitcoin enabling fast, low‑cost payments via payment channels and HTLCs.
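
The core HTLC condition, sketched as a toy model rather than real Bitcoin script: funds unlock to the receiver with the hash preimage before a timeout, or back to the sender after it (all names hypothetical).

```python
import hashlib, time

def make_htlc(preimage_hash, timeout_ts):
    def claim(preimage=None, now=None, refund=False):
        now = now if now is not None else time.time()
        if refund:
            return now >= timeout_ts               # sender refunds after timeout
        return (preimage is not None and
                hashlib.sha256(preimage).digest() == preimage_hash and
                now < timeout_ts)                  # receiver claims with the secret
    return claim

secret = b"payment-secret"
htlc = make_htlc(hashlib.sha256(secret).digest(), timeout_ts=time.time() + 3600)
print(htlc(preimage=secret))  # True: receiver reveals the secret in time
print(htlc(refund=True))      # False: timeout not yet reached
```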

Order to buy or sell at a specified price or better; implemented in DEXes via RFQs or AMM overlays.

Test that fits a linear classifier on frozen embeddings to measure learned representations.
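
A minimal sketch with scikit-learn, using random vectors as stand-ins for frozen embeddings: the backbone is never updated, only a logistic-regression head is fit and scored.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# stand-ins for frozen embeddings: two Gaussian blobs in 64-d
X = np.vstack([rng.normal(0.0, 1.0, (200, 64)), rng.normal(0.5, 1.0, (200, 64))])
y = np.array([0] * 200 + [1] * 200)

probe = LogisticRegression(max_iter=1000).fit(X, y)  # only the probe trains
print("probe accuracy:", probe.score(X, y))  # higher => more linearly decodable features
```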

Forced sale of collateral when a loan’s health factor falls below thresholds.
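
One common health-factor formula, sketched with hypothetical numbers: HF = collateral value × liquidation threshold / debt value, with positions below 1.0 becoming eligible for liquidation.

```python
def health_factor(collateral_usd, liq_threshold, debt_usd):
    """HF = collateral * liquidation_threshold / debt; < 1.0 => liquidatable."""
    return collateral_usd * liq_threshold / debt_usd

hf = health_factor(collateral_usd=10_000, liq_threshold=0.80, debt_usd=7_000)
print(hf, "liquidatable" if hf < 1.0 else "healthy")  # ~1.14: healthy

hf = health_factor(collateral_usd=10_000, liq_threshold=0.80, debt_usd=9_000)
print(hf, "liquidatable" if hf < 1.0 else "healthy")  # ~0.89: liquidatable
```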

Distribution of tokens to LPs to incentivize providing liquidity.

Smart‑contract pool of tokens enabling automated market making and swaps.
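
A constant-product swap, sketched on a Uniswap-v2-style x·y = k curve with a 0.3% fee (reserves and amounts illustrative):

```python
def swap_out(x_reserve, y_reserve, dx, fee=0.003):
    """Constant-product AMM: (x + dx_eff) * (y - dy) = x * y."""
    dx_eff = dx * (1 - fee)  # the fee portion stays in the pool
    dy = y_reserve * dx_eff / (x_reserve + dx_eff)
    return dy

# pool holds 1000 X and 1000 Y; swap in 10 X
print(swap_out(1000, 1000, 10))  # ~9.87 Y out: price impact plus fee
```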

Property that a system continues to make progress, e.g., blocks keep finalizing.

Transformer‑based model trained on large corpora to predict tokens; used for chat, code, and agents.

Bloom filter in block headers summarizing events and addresses for fast log queries.
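
A generic bloom-filter sketch of the idea (not Ethereum's exact 2048-bit keccak construction): membership tests can return false positives but never false negatives, so a client can safely skip any block whose bloom rules a query out.

```python
import hashlib

M, K = 2048, 3  # filter size in bits and number of hash probes (illustrative)

def positions(item: bytes):
    d = hashlib.sha256(item).digest()
    # derive K bit positions from successive 2-byte slices of the hash
    return [int.from_bytes(d[2 * i:2 * i + 2], "big") % M for i in range(K)]

def add(bloom: int, item: bytes) -> int:
    for p in positions(item):
        bloom |= 1 << p
    return bloom

def maybe_contains(bloom: int, item: bytes) -> bool:
    return all(bloom >> p & 1 for p in positions(item))

bloom = add(0, b"Transfer(address,address,uint256)")
print(maybe_contains(bloom, b"Transfer(address,address,uint256)"))  # True
print(maybe_contains(bloom, b"Approval(address,address,uint256)"))  # almost surely False
```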

Adjustment to token logits at inference time to steer generation or block outputs.
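
A sketch of the mechanics: add a per-token bias to the logits before the softmax, so a large negative value effectively bans a token and a positive one promotes it (vocabulary and values hypothetical).

```python
import numpy as np

def apply_logit_bias(logits, bias):
    """bias: {token_id: value}; strongly negative values block tokens outright."""
    logits = logits.copy()
    for token_id, value in bias.items():
        logits[token_id] += value
    return logits

logits = np.array([2.0, 1.5, 0.5, 0.1])
biased = apply_logit_bias(logits, {0: -100.0, 2: +2.0})  # ban token 0, boost token 2
probs = np.exp(biased) / np.exp(biased).sum()
print(probs.round(3))  # token 0 at ~0 probability, token 2 now dominant
```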

Per‑token log‑probabilities emitted by models; useful for calibration and safety filters.
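
How per-token log-probabilities fall out of the logits, sketched with a numerically stable log-softmax (tiny vocabulary, values illustrative):

```python
import numpy as np

def log_softmax(logits):
    z = logits - logits.max()  # subtract max for numerical stability
    return z - np.log(np.exp(z).sum())

logits = np.array([3.0, 1.0, 0.2])  # model scores over a tiny vocab
logprobs = log_softmax(logits)
chosen = 0                          # token actually sampled or observed
print(logprobs[chosen])             # its logprob; near 0 => confident
print(np.exp(logprobs).sum())       # sanity check: probabilities sum to 1
```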

Directional positions in perpetuals or spot; long benefits from price increase, short from decrease.
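
The directional arithmetic, sketched: a long's PnL is size × (exit − entry), a short's is the negation (prices and sizes illustrative; funding and fees ignored).

```python
def pnl(size, entry_price, exit_price, side):
    direction = 1 if side == "long" else -1
    return direction * size * (exit_price - entry_price)

print(pnl(2.0, entry_price=100, exit_price=110, side="long"))   # +20: price rose
print(pnl(2.0, entry_price=100, exit_price=110, side="short"))  # -20: short loses on a rise
```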

Objective minimized during training, e.g., cross‑entropy for classification or language modeling.
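
Cross-entropy for a single example, sketched: the negative log-probability the model assigns to the true class, computed via a stable log-softmax.

```python
import numpy as np

def cross_entropy(logits, target):
    """-log softmax(logits)[target] for one example."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

print(cross_entropy(np.array([2.0, 0.5, -1.0]), target=0))  # small: confident and correct
print(cross_entropy(np.array([2.0, 0.5, -1.0]), target=2))  # large: wrong class
```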