LoRA (Low‑Rank Adaptation)
Parameter‑efficient fine‑tuning that freezes pretrained weights and learns low‑rank update matrices added alongside them.
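A minimal NumPy sketch of the low‑rank update: the pretrained weight `W` stays frozen while small factors `A` and `B` (rank `r`, with `r` far below the layer dimensions) are trained; all dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 4  # hypothetical layer dims; rank r << d, k

W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

x = rng.normal(size=(k,))
# Adapted forward pass: W x + B (A x); only A and B receive gradients
y = W @ x + B @ (A @ x)

# With B at zero, the adapted layer reproduces the base model exactly
assert np.allclose(y, W @ x)
# The adapter trains far fewer parameters than the full weight
assert A.size + B.size < W.size
```

Zero-initializing `B` means training starts from the unmodified base model, which is the standard LoRA initialization.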
LLM (Large Language Model)
Transformer‑based model trained on large corpora to predict tokens; used for chat, code, and agents.
Long / Short
Directional positions in perpetuals or spot; long benefits from price increase, short from decrease.
Limit Order
Order to buy or sell at a specified price or better; implemented in DEXes via RFQs or AMM overlays.
Liquidity Bootstrapping Pool (LBP)
Pool that gradually shifts weights to discover price while limiting whale dominance.
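A sketch of the weight-shift mechanic, assuming a linear schedule from a 96/4 to a 50/50 split and a Balancer-style spot price; the start/end weights and durations are illustrative, not from any specific pool.

```python
def lbp_weight(t, t_end, w_start=0.96, w_end=0.50):
    """Linearly interpolate the sale token's weight over the auction."""
    frac = min(max(t / t_end, 0.0), 1.0)
    return w_start + (w_end - w_start) * frac

def spot_price(token_bal, token_w, quote_bal, quote_w):
    """Weighted-pool spot price of the sale token in quote-asset units."""
    return (quote_bal / quote_w) / (token_bal / token_w)

# Holding balances fixed, the falling token weight alone pushes price down,
# so buyers are not rewarded for rushing in at launch.
w0, w1 = lbp_weight(0, 100), lbp_weight(100, 100)
p0 = spot_price(1_000_000, w0, 50_000, 1 - w0)
p1 = spot_price(1_000_000, w1, 50_000, 1 - w1)
assert p1 < p0
```

The downward price drift is what discourages early sniping: without buy pressure the price only falls, so participants can wait for a level they consider fair.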
LP Token
Token representing a share of a liquidity pool; accrues fees and can be staked.
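The pool-share arithmetic can be sketched as a proportional mint: depositing some fraction of the pool's balance mints the same fraction of the existing LP supply, so every holder's share is preserved. The bootstrap rule for an empty pool is a simplifying assumption.

```python
def lp_tokens_minted(deposit, pool_balance, lp_supply):
    """Proportional mint: preserve each existing holder's pool share."""
    if lp_supply == 0:
        return deposit  # hypothetical bootstrap rule for the first depositor
    return deposit * lp_supply / pool_balance

# Depositing 10% of the pool's balance mints 10% of the prior LP supply
minted = lp_tokens_minted(100, 1_000, 5_000)
assert minted == 500
```

Redemption runs the same ratio in reverse: burning LP tokens returns the matching fraction of the pool, including any fees that accrued to it.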