PR #1148 (open)

Record: 11L Muon Legal TTT + Entropy-Adaptive Epochs (8×H100) — val_bpb 1.1179 (3-seed mean)

by aamodbhatt
val_bpb: 1.1179
Architecture: Transformer
Optimizer: Muon
Artifact Size: 15.9 MB

Training Techniques

Architecture
BigramHash
Bigram hash embedding used in the base stack.
parameters: {"size":1536}
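A minimal sketch of how a bigram hash embedding with a table of size 1536 (matching the recorded parameters) might index its table. The mixing constant and lookup scheme here are hypothetical; the record does not specify the hash used.

```python
TABLE_SIZE = 1536  # from the recorded parameters {"size": 1536}

def bigram_hash(prev_token: int, token: int, table_size: int = TABLE_SIZE) -> int:
    """Mix the two token ids and map the pair into the embedding table.
    The multiplicative mix below is an illustrative choice, not the
    run's actual hash."""
    h = (prev_token * 1000003 + token) & 0xFFFFFFFF
    return h % table_size

# Each position would then add table[bigram_hash(prev, cur)] to its
# ordinary token embedding.
idx = bigram_hash(42, 7)
```

Any stable, order-sensitive hash works; the point is that (prev, cur) and (cur, prev) map to different rows so word order is preserved.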
XSA
XSA applied to the last 4 layers of the model.
parameters: {"last_n_layers":4}
MLP3x
Three-times wider MLP stack with LeakyReLU activation.
parameters: null
LeakyReLU
LeakyReLU activation used in the MLP.
parameters: {"slope":0.5}
Partial RoPE
Rotary position embeddings applied to a 16-dimension subset of each head; the remaining dimensions are left unrotated.
parameters: {"dimensions":16}
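A pure-Python sketch of partial RoPE on a single head vector, rotating only the first 16 dimensions (the recorded `dimensions` value) and passing the rest through. The frequency base of 10000 is the conventional RoPE default, assumed here rather than taken from the record.

```python
import math

ROPE_DIMS = 16  # from {"dimensions": 16}: only these dims get rotated

def partial_rope(x, pos, base=10000.0, rope_dims=ROPE_DIMS):
    """Rotate the first `rope_dims` entries of head vector `x` (a list of
    floats) in pairs by position-dependent angles; leave the tail as-is."""
    out = list(x)
    for i in range(0, rope_dims, 2):
        theta = pos / (base ** (i / rope_dims))
        c, s = math.cos(theta), math.sin(theta)
        a, b = x[i], x[i + 1]
        out[i] = a * c - b * s
        out[i + 1] = a * s + b * c
    return out
```

At position 0 the rotation is the identity, and dimensions 16 and up never change regardless of position.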
VE128
Value residual enhancement on selected layers.
parameters: {"layers":[9,10],"dimension":128}
Regularization
LN scale
Per-layer LayerNorm scaling following 1/sqrt(layer+1).
parameters: {"formula":"1/sqrt(layer+1)"}
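The recorded formula is a one-liner; a sketch, assuming zero-based layer indexing, so layer 0 gets gain 1.0 and deeper layers are progressively damped:

```python
import math

def ln_scale(layer: int) -> float:
    """Per-layer LayerNorm gain from the recorded formula 1/sqrt(layer+1),
    with `layer` assumed zero-based."""
    return 1.0 / math.sqrt(layer + 1)

# e.g. layers 0..3 -> gains 1.0, 0.707..., 0.577..., 0.5
```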
Weight Averaging
EMA + SWA
parameters: {"ema_decay":0.997,"swa_every":50}
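A minimal sketch of the two averaging rules with the recorded hyperparameters (EMA decay 0.997, SWA snapshot every 50 steps). Weights are flat lists of floats for illustration; how the two averages are combined at the end is not specified in the record.

```python
def ema_update(ema, weights, decay=0.997):
    """One exponential-moving-average step (decay from the record)."""
    return [decay * e + (1 - decay) * w for e, w in zip(ema, weights)]

class SWA:
    """Equal-weight running average of snapshots taken every `every` steps."""
    def __init__(self, every=50):
        self.every, self.n, self.avg = every, 0, None

    def maybe_update(self, step, weights):
        if step % self.every != 0:
            return  # not a snapshot step
        self.n += 1
        if self.avg is None:
            self.avg = list(weights)
        else:
            # Incremental mean: avg += (w - avg) / n
            self.avg = [a + (w - a) / self.n for a, w in zip(self.avg, weights)]
```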
Quantization
late QAT
bits: null
scope: model
Compression
lzma
level: 7
Evaluation
sliding window eval
parameters: {"stride":64}
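A sketch of the window bookkeeping behind stride-64 sliding-window evaluation: the window advances by `stride` and only the tokens new to each window are scored, so every token is scored exactly once with near-maximal left context. The (context_start, score_start, score_end) tuple layout is illustrative, not from the record.

```python
def sliding_window_spans(n_tokens: int, window: int, stride: int = 64):
    """Enumerate eval windows as (ctx_start, score_start, score_end):
    tokens in [score_start, score_end) are scored, conditioned on
    everything from ctx_start, with score_end - ctx_start <= window."""
    spans = []
    start = 0
    while start < n_tokens:
        score_end = min(start + stride, n_tokens)
        ctx_start = max(0, score_end - window)
        spans.append((ctx_start, start, score_end))
        start += stride
    return spans
```

With the recorded window of 32768 and stride 64, each 64-token span is scored with up to 32704 tokens of left context, at the cost of many more forward passes than disjoint-chunk evaluation.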
Test-Time Training
score-first TTT
parameters: {"learning_rate":0.002,"epochs":[2,3,4],"chunk_tokens":32768,"entropy_adaptive":true}
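A sketch of the entropy-adaptive epoch allocation: each 32k-token chunk gets 2, 3, or 4 TTT epochs from the recorded `epochs` list according to its uncertainty. The two-threshold banding and the threshold values are assumptions; the record only gives `epochs: [2,3,4]` and `entropy_adaptive: true`.

```python
def epochs_for_chunk(entropy: float, low: float, high: float,
                     epochs=(2, 3, 4)) -> int:
    """Map a chunk's mean token entropy to a TTT epoch budget:
    confident chunks get the fewest epochs, uncertain ones the most.
    Thresholds `low`/`high` are hypothetical tuning knobs."""
    if entropy < low:
        return epochs[0]
    if entropy < high:
        return epochs[1]
    return epochs[2]
```

The intuition is to spend extra adaptation compute only where the frozen model is unsure, keeping the total TTT cost close to the 2-epoch baseline.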
Optimizer
Muon
weight_decay: 0.04
momentum: 0.99
other_params: {"ns_steps":3,"warmup_momentum_start":0.92,"warmup_steps":1500}
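The recorded `warmup_momentum_start: 0.92` and `warmup_steps: 1500` suggest the Muon momentum is warmed up from 0.92 to its final 0.99. A sketch assuming a linear schedule (the exact interpolation is not stated in the record):

```python
def muon_momentum(step: int, start=0.92, end=0.99, warmup_steps=1500) -> float:
    """Linearly interpolate momentum from `start` to `end` over
    `warmup_steps`, then hold it at `end`. Linear ramp is an assumption."""
    if step >= warmup_steps:
        return end
    return start + (end - start) * step / warmup_steps
```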
LR Schedule
cosine decay
parameters: {"warmdown_steps":3500}
Sequence Length
train_length: 32768
eval_length: 32768

Novel Contributions

  • Muon-style Newton-Schulz orthogonalized updates applied inside the test-time training loop
  • Entropy-adaptive test-time training epochs that allocate 2/3/4 epochs per chunk based on chunk uncertainty
  • Score-first legal TTT with global NLL synchronization across DDP ranks to avoid collective mismatch
  • Improved SOTA validation score with a 3-seed mean of 1.1179
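The third bullet can be illustrated with a toy simulation of the rank synchronization, assuming a sum all-reduce of per-rank NLL totals and token counts (as `torch.distributed.all_reduce` would do in a real DDP run) followed by a shared gating decision. The threshold gate is a hypothetical stand-in; the record only states that the global NLL sync keeps control flow identical across ranks so collectives never mismatch.

```python
def global_nll_gate(rank_nll_sums, rank_token_counts, threshold):
    """Simulate the cross-rank sync: every rank reduces the same totals,
    derives the same global per-token NLL, and therefore takes the same
    branch, so no rank can skip a collective the others enter."""
    total_nll = sum(rank_nll_sums)        # stand-in for all_reduce(SUM)
    total_tokens = sum(rank_token_counts)  # stand-in for all_reduce(SUM)
    global_nll = total_nll / total_tokens
    # Identical decision on every rank, computed from identical inputs.
    return [global_nll < threshold for _ in rank_nll_sums]
```

If each rank instead gated on its local NLL, ranks could disagree and hang inside a collective; reducing first makes the decision deterministic and rank-uniform.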