Attention kernels that reduce the memory and/or time cost of self-attention, e.g. via tiling (as in FlashAttention) or linearized attention variants.
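A minimal NumPy sketch of the tiling idea, not an actual fused GPU kernel: it streams over key/value blocks while keeping running softmax statistics per query, so the full N x N attention matrix is never materialized. The function and parameter names (`tiled_attention`, `block_size`) are illustrative, not from any particular library.

```python
import numpy as np

def tiled_attention(Q, K, V, block_size=64):
    """Single-head attention computed over key/value tiles.

    Maintains a running max and normalizer per query (online softmax),
    which is the core trick behind FlashAttention-style kernels.
    """
    n_q, d = Q.shape
    n_k = K.shape[0]
    scale = 1.0 / np.sqrt(d)

    out = np.zeros((n_q, V.shape[1]))
    row_max = np.full(n_q, -np.inf)   # running max of scores per query
    row_sum = np.zeros(n_q)           # running softmax normalizer per query

    for start in range(0, n_k, block_size):
        end = min(start + block_size, n_k)
        scores = (Q @ K[start:end].T) * scale          # (n_q, block)

        block_max = scores.max(axis=1)
        new_max = np.maximum(row_max, block_max)

        # Rescale previously accumulated output and normalizer to the new max.
        correction = np.exp(row_max - new_max)
        probs = np.exp(scores - new_max[:, None])      # unnormalized probabilities

        out = out * correction[:, None] + probs @ V[start:end]
        row_sum = row_sum * correction + probs.sum(axis=1)
        row_max = new_max

    return out / row_sum[:, None]

# Quick check against the naive O(N^2)-memory implementation.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
scores = Q @ K.T / np.sqrt(32)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
ref = (weights / weights.sum(axis=1, keepdims=True)) @ V
assert np.allclose(tiled_attention(Q, K, V), ref)
```

The result matches standard softmax attention exactly; the savings come from never holding more than one `n_q × block_size` score tile in memory at a time.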