Logprobs

Per‑token log‑probabilities emitted by models; useful for calibration and safety filters.
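
A minimal sketch of where those values come from, assuming PyTorch; the logits are toy stand-ins for real model outputs:

```python
import torch
import torch.nn.functional as F

# Toy logits for a 5-token vocabulary at one generation step.
logits = torch.tensor([2.0, 1.0, 0.5, -1.0, -2.0])

# log_softmax converts raw logits into per-token log-probabilities.
logprobs = F.log_softmax(logits, dim=-1)

# Log-probability of the chosen token (say, token id 0).
chosen = 0
print(f"logprob of token {chosen}: {logprobs[chosen].item():.4f}")
```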

Logit Bias

Additive adjustment to specific token logits at inference time, used to steer generation toward some tokens or suppress others entirely.
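
A sketch of the mechanism, again assuming PyTorch; the token ids are illustrative, not from a real tokenizer:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([2.0, 1.0, 0.5, -1.0, -2.0])  # toy logits

# Bias map: boost token 1, ban token 3 outright.
logit_bias = {1: 5.0, 3: -float("inf")}

# Apply the additive shift before sampling.
for token_id, bias in logit_bias.items():
    logits[token_id] += bias

probs = F.softmax(logits, dim=-1)
print(probs)  # token 3 now has probability 0; token 1 dominates
```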

Linear Probing

Diagnostic that fits a linear classifier on frozen embeddings to assess the quality of learned representations.
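
A minimal sketch with scikit-learn; the random matrix stands in for embeddings extracted from a frozen pretrained model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for frozen embeddings plus synthetic labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))          # 1000 examples, 64-dim embeddings
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels linearly decodable from X

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The probe: a linear classifier trained on top of the frozen features.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.3f}")
```

High probe accuracy suggests the information is linearly accessible in the embeddings; it does not by itself prove the model uses it.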

Label Leakage

Failure mode in which information about the targets leaks into the features or prompt context, inflating evaluation metrics.
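
A contrived scikit-learn example of the failure: the leaked feature is computed from the target itself, so accuracy looks near-perfect even though the real features carry no signal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.integers(0, 2, size=500)  # labels unrelated to X

# The leak: a feature derived from the target sneaks into the inputs.
leaky = y + rng.normal(scale=0.1, size=500)
X_leaky = np.column_stack([X, leaky])

X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Near-perfect accuracy, despite the labels being pure noise.
print(f"accuracy with leaked feature: {clf.score(X_te, y_te):.3f}")
```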

Label Smoothing

Regularization that replaces hard labels with softened targets to reduce overconfidence.
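
A minimal comparison, assuming PyTorch, which exposes smoothing directly on `CrossEntropyLoss`:

```python
import torch
import torch.nn as nn

logits = torch.tensor([[3.0, 0.5, -1.0]])  # one example, 3 classes
target = torch.tensor([0])

# With smoothing eps over K classes, every class gets eps/K mass
# and the true class keeps 1 - eps + eps/K, instead of a hard 1.0.
hard = nn.CrossEntropyLoss()
smooth = nn.CrossEntropyLoss(label_smoothing=0.1)

print(f"hard-label loss: {hard(logits, target).item():.4f}")
print(f"smoothed loss:   {smooth(logits, target).item():.4f}")
```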

Latent Space

Compressed representation space in which generative models operate; distances and directions in it tend to encode semantic structure.
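
A toy autoencoder sketch in PyTorch makes the idea concrete; the dimensions are illustrative and the networks untrained:

```python
import torch
import torch.nn as nn

# 784-dim inputs compressed to a 16-dim latent space.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 16))
decoder = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 784))

x = torch.randn(4, 784)      # batch of 4 fake inputs
z = encoder(x)               # points in the 16-dim latent space
x_hat = decoder(z)           # reconstruction from the latent code
print(z.shape, x_hat.shape)  # torch.Size([4, 16]) torch.Size([4, 784])
```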

Learning Rate

Scalar step size for gradient updates; too high causes divergence, too low slows convergence.
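
Both failure modes show up even on a one-dimensional quadratic; this plain-Python sketch runs gradient descent on f(w) = w² with three step sizes:

```python
# Gradient descent on f(w) = w**2, whose gradient is 2w.
def descend(lr: float, steps: int = 10, w: float = 1.0) -> float:
    for _ in range(steps):
        w -= lr * 2 * w  # w <- w - lr * grad
    return w

print(descend(0.1))    # ~0.107: converging toward the minimum at 0
print(descend(1.5))    # 1024.0: |w| doubles each step, training diverges
print(descend(0.001))  # ~0.980: barely moved, too slow
```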

Layer Normalization

Normalization technique applied across features for each token; stabilizes training in transformers.
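
A minimal check in PyTorch, computing the normalization by hand and comparing to `nn.LayerNorm` (whose learnable scale and shift start at 1 and 0):

```python
import torch
import torch.nn as nn

x = torch.randn(2, 5, 8)  # (batch, tokens, features)

# By hand: normalize across the feature dimension for each token.
mean = x.mean(dim=-1, keepdim=True)
var = x.var(dim=-1, keepdim=True, unbiased=False)
manual = (x - mean) / torch.sqrt(var + 1e-5)

ln = nn.LayerNorm(8)
print(torch.allclose(manual, ln(x), atol=1e-5))  # True
```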

Loss Function

Objective minimized during training, e.g., cross‑entropy for classification or language modeling.
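
For the cross-entropy case, the loss is the negative log-probability of the correct class; a PyTorch sketch with toy values:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0]])  # one example, 3 classes
target = torch.tensor([0])

# Cross-entropy = -log p(correct class).
manual = -F.log_softmax(logits, dim=-1)[0, target]
builtin = nn.CrossEntropyLoss()(logits, target)
print(manual.item(), builtin.item())  # identical values
```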