DQN (Deep Q‑Network)
Value‑based RL method that uses a neural network to approximate Q‑values for actions.
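A minimal sketch of the core DQN update step, assuming PyTorch is available; the network size, the batch of random transitions, and the hyperparameters are illustrative stand-ins for a real replay buffer and environment, not part of the entry above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per discrete action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

state_dim, n_actions, gamma = 4, 2, 0.99
q_net = QNetwork(state_dim, n_actions)
target_net = QNetwork(state_dim, n_actions)
target_net.load_state_dict(q_net.state_dict())  # periodically synced copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Dummy transition batch (s, a, r, s', done) standing in for a replay buffer.
s = torch.randn(32, state_dim)
a = torch.randint(0, n_actions, (32,))
r = torch.randn(32)
s_next = torch.randn(32, state_dim)
done = torch.zeros(32)

# TD target: y = r + gamma * max_a' Q_target(s', a') for non-terminal states.
with torch.no_grad():
    y = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values

q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) for the actions taken
loss = F.mse_loss(q_sa, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```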
Dropout
Regularization technique that randomly disables neurons during training to prevent overfitting.
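A minimal NumPy sketch of inverted dropout: during training each unit is zeroed with probability p and the survivors are rescaled by 1/(1 - p), so the layer needs no scaling at inference time; the layer size and drop probability are illustrative.

```python
import numpy as np

def dropout(activations: np.ndarray, p: float, training: bool, rng) -> np.ndarray:
    """Zero each unit with probability p during training, scaling the
    survivors by 1/(1 - p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return activations  # at inference time the layer is a no-op
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(0)
h = rng.standard_normal((2, 8))  # pretend hidden-layer activations
print(dropout(h, p=0.5, training=True, rng=rng))
```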
Deep Reinforcement Learning
Combines neural networks with reinforcement learning so that agents acting in an environment learn to maximize cumulative reward.
Decision Boundary
Surface in feature space that separates classes according to a model’s predictions.
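A minimal sketch for the linear case: with hypothetical weights w and bias b, the boundary is the hyperplane where w·x + b = 0, and the predicted class flips as a point crosses it. The numbers are made up for illustration.

```python
import numpy as np

w = np.array([1.5, -2.0])  # hypothetical learned weights
b = 0.25                   # hypothetical learned bias

def predict(x: np.ndarray) -> int:
    """Class is decided by which side of the hyperplane w·x + b = 0 the point falls on."""
    return int(np.dot(w, x) + b > 0)

# Points satisfying w[0]*x0 + w[1]*x1 + b = 0 lie exactly on the boundary.
x0 = 1.0
x1_on_boundary = (-b - w[0] * x0) / w[1]
print(predict(np.array([x0, x1_on_boundary + 0.1])),  # just above the boundary line
      predict(np.array([x0, x1_on_boundary - 0.1])))  # just below it
```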
Sequence‑to‑sequence architecture with bidirectional encoders and autoregressive decoders, used in translation.
Decoder‑only Transformer
Architecture used in LLMs that autoregressively predicts the next token with causal attention.
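A minimal NumPy sketch of the causal self‑attention step that gives decoder‑only models their autoregressive behavior; the tensor sizes are illustrative, and this single head omits projections, multi‑head splitting, and positional encoding.

```python
import numpy as np

def causal_self_attention(query, key, value):
    """Each position may only attend to itself and earlier positions."""
    t, d = query.shape
    scores = query @ key.T / np.sqrt(d)                # (t, t) attention logits
    mask = np.triu(np.ones((t, t), dtype=bool), k=1)   # True above the diagonal
    scores = np.where(mask, -np.inf, scores)           # block attention to future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ value

rng = np.random.default_rng(0)
t, d = 5, 8                                            # 5 tokens, 8-dim head
q, k, v = (rng.standard_normal((t, d)) for _ in range(3))
print(causal_self_attention(q, k, v).shape)            # (5, 8): one vector per token
```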
DID Method
Specific scheme, such as did:ethr or did:key, that defines how DIDs are created and resolved.
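A minimal sketch of how a resolver might split a DID into its method and method‑specific identifier; the parser is simplified (it ignores DID URL paths, queries, and fragments) and the example identifiers are only illustrative.

```python
def parse_did(did: str) -> tuple[str, str]:
    """Split did:<method>:<method-specific-id> into its parts."""
    scheme, method, method_specific_id = did.split(":", 2)
    if scheme != "did" or not method or not method_specific_id:
        raise ValueError(f"not a valid DID: {did!r}")
    return method, method_specific_id

# The method (e.g. "ethr" or "key") tells a resolver which rules to apply.
print(parse_did("did:ethr:0xb9c5714089478a327f09197987f16f9e5d936e8a"))
print(parse_did("did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK"))
```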
Deterministic Wallet (HD Wallet)
Wallet where all addresses are derived from a single seed using a standardized derivation‑path scheme.
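A minimal sketch that parses a BIP‑44 style derivation path into the raw child‑key indices a BIP‑32 wallet would use; it performs no actual key derivation, and the example path simply follows the common Ethereum convention.

```python
HARDENED_OFFSET = 0x80000000  # indices >= 2^31 are "hardened", written with '

def parse_derivation_path(path: str) -> list[int]:
    """Turn e.g. m/44'/60'/0'/0/0 into the list of raw child-key indices."""
    parts = path.split("/")
    if parts[0] != "m":
        raise ValueError("path must start with the master key marker 'm'")
    indices = []
    for part in parts[1:]:
        hardened = part.endswith("'")
        value = int(part.rstrip("'"))
        indices.append(value + HARDENED_OFFSET if hardened else value)
    return indices

# purpose' / coin_type' / account' / change / address_index
print(parse_derivation_path("m/44'/60'/0'/0/0"))
```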
Derivatives
Contracts whose value derives from an underlying reference, such as perpetuals, futures, and options.
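Two textbook payoff formulas, sketched with made‑up prices for illustration: a long futures position settles linearly against its entry price, while a long call option pays max(S - K, 0) less the premium paid.

```python
def long_futures_payoff(spot_at_settlement: float, entry_price: float) -> float:
    """Linear payoff: profit is the move in the underlying since entry."""
    return spot_at_settlement - entry_price

def long_call_payoff(spot_at_expiry: float, strike: float, premium: float) -> float:
    """Option payoff at expiry: max(S - K, 0), minus the premium paid up front."""
    return max(spot_at_expiry - strike, 0.0) - premium

print(long_futures_payoff(105.0, 100.0))    # 5.0
print(long_call_payoff(105.0, 100.0, 2.0))  # 3.0
```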
DeFi (Decentralized Finance)
Financial services built on smart contracts, including lending, exchanges, derivatives, and asset management.
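A minimal sketch of the constant‑product pricing rule (x · y = k) behind many on‑chain exchanges, one concrete slice of DeFi; the reserves and the 0.3% fee are illustrative and not taken from any particular protocol.

```python
def constant_product_swap(amount_in: float, reserve_in: float,
                          reserve_out: float, fee: float = 0.003) -> float:
    """Given x * y = k, compute how much of the output token a trade receives."""
    amount_in_after_fee = amount_in * (1.0 - fee)
    new_reserve_in = reserve_in + amount_in_after_fee
    # Output keeps the product of reserves constant: (x + dx) * (y - dy) = x * y
    return reserve_out - (reserve_in * reserve_out) / new_reserve_in

# Swap 1 000 of token A into a pool holding 100 000 A and 50 000 B.
print(round(constant_product_swap(1_000, 100_000, 50_000), 2))
```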