Testnet
Public or private blockchain for experimentation with free tokens and relaxed economics before deploying to mainnet; often has faucets and faster reset cycles.
Token generation event (TGE)
Milestone when a new token is created and becomes transferable or claimable; distribution may follow a vesting schedule and regulatory constraints.
Token limit
Maximum number of tokens a model can attend to in one request, constraining prompt size, tool outputs, and in‑context examples.
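One common way to cope with this limit is to trim conversation history until the prompt fits. The sketch below is illustrative only: `fit_to_context` is a hypothetical helper, and whitespace splitting stands in for a real tokenizer (e.g. tiktoken), which would count tokens differently.

```python
def fit_to_context(system, history, max_tokens, count=lambda s: len(s.split())):
    """Drop the oldest history turns until system prompt + history fit the
    model's token limit. `count` is a crude word-count stand-in for a real
    tokenizer, which is an assumption for this sketch."""
    budget = max_tokens - count(system)
    kept = []
    # Walk history newest-first so the most recent turns survive.
    for turn in reversed(history):
        cost = count(turn)
        if cost > budget:
            break
        kept.append(turn)
        budget -= cost
    return [system] + list(reversed(kept))

prompt = fit_to_context("sys", ["a b", "c d e", "f"], max_tokens=5)
```

Keeping the newest turns (rather than the oldest) is the usual choice because recent context matters most for the next model response.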
Tracing
Telemetry that records model calls, prompts, tool invocations, and latencies for debugging, evals, and compliance in production AI apps.
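A minimal sketch of this idea, assuming an in-process log list; production systems would instead export spans to a tracing backend such as an OpenTelemetry collector:

```python
import functools
import time

TRACE_LOG = []  # stand-in for a real tracing backend (assumption for this sketch)

def traced(fn):
    """Decorator that records the call name, arguments, and latency of each
    model or tool invocation so runs can be debugged and evaluated later."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE_LOG.append({
            "call": fn.__name__,
            "args": args,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@traced
def model_call(prompt):
    # Placeholder for a real LLM call.
    return prompt[::-1]
```

The decorator pattern keeps tracing orthogonal to application logic, so the same wrapper can cover model calls and tool calls alike.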
Tool use
Agentic pattern where models call external tools or APIs (search, code, DB) via function‑calling or adapters to extend capabilities beyond pure text.
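The dispatch side of this pattern can be sketched as a registry keyed by tool name. The registry and `dispatch` helper here are hypothetical; real frameworks (e.g. OpenAI function calling) define their own schemas for tool calls.

```python
import json

# Hypothetical tool registry: maps a tool name to a callable taking an args dict.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
    "upper": lambda args: args["text"].upper(),
}

def dispatch(tool_call_json):
    """Parse a model-emitted tool call (name + arguments as JSON) and run
    the matching registered function, returning its result to the model."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(call["arguments"])
```

In a full agent loop the result would be appended to the conversation so the model can continue reasoning with the tool's output.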
Tree of Thoughts
Reasoning strategy that explores multiple solution branches with reflection or scoring, guiding the model to better final answers than single‑pass decoding.
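The search skeleton behind this strategy resembles beam search over "thoughts". This is a simplified sketch: `expand` and `score` are assumed callbacks (in practice both would be LLM calls that propose and evaluate partial solutions).

```python
def tree_of_thoughts(expand, score, root, beam=2, depth=3):
    """Level-by-level exploration: expand each candidate thought into
    successors, score every branch, and keep only the best `beam` per level."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for node in frontier for c in expand(node)]
        if not candidates:
            break
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

# Toy illustration: thoughts are numbers, scoring favors larger values.
best = tree_of_thoughts(expand=lambda n: [n + 1, n + 2],
                        score=lambda n: n, root=0)
```

Pruning to a fixed beam is what keeps the branch count tractable; the reflection step in the paper's formulation corresponds to the `score` callback.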
Test‑time augmentation
Technique that applies multiple transformations at inference and aggregates outputs to improve robustness, common in vision and sometimes LLM prompts.
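A minimal numeric sketch of the aggregation step; in vision the transforms would be flips or crops of an image and the model a classifier, and averaging is only one choice of aggregation (majority voting is also common).

```python
def tta_predict(model, x, transforms):
    """Run the model on several transformed copies of the input and
    average the resulting scores (simple mean aggregation)."""
    scores = [model(t(x)) for t in transforms]
    return sum(scores) / len(scores)

# Toy stand-ins: a "model" that doubles its input, and shift "transforms".
toy_model = lambda v: v * 2
shifts = [lambda v: v, lambda v: v + 1, lambda v: v + 2]
result = tta_predict(toy_model, 1, shifts)
```

Averaging over transformed views smooths out sensitivity to any single presentation of the input, which is the source of the robustness gain.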
Teacher forcing
Training technique where the model is fed ground‑truth tokens for the next step instead of its own predictions, speeding convergence but risking exposure bias.
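The contrast between training-time and inference-time inputs can be shown with a toy bigram model (an assumption for this sketch; real training would compute a per-step loss with a neural network):

```python
# Toy bigram "model" standing in for a learned next-token predictor.
BIGRAMS = {"<s>": "the", "the": "cat", "cat": "sat"}

def teacher_forced_steps(sequence):
    """Teacher forcing: at every step the input is the ground-truth previous
    token from the training sequence, never the model's own prediction."""
    return list(zip(sequence[:-1], sequence[1:]))

def free_running_steps(model, start, n):
    """Free running (inference): the model consumes its own outputs; the
    train/inference mismatch this creates is the exposure bias risk."""
    out = [start]
    for _ in range(n):
        out.append(model(out[-1]))
    return out

model = lambda tok: BIGRAMS.get(tok, "<unk>")
```

Under teacher forcing every step is conditioned on clean inputs, which is why convergence is faster; free running is where early mistakes compound.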
Training run
Execution of model training with a specific dataset, objective, and hyperparameters, tracked for reproducibility and evaluation across checkpoints.
Training data
Corpus used to fit model parameters; quality, diversity, and licensing shape capabilities and risks of memorization or bias.
Transfer learning
Reusing knowledge from a pretrained model and adapting it to a new task or domain via fine‑tuning or prompting strategies.