PR #1448

open

Add non-record 16MB submission: FlashMuon LinearScaleInit XSA5LastGated RReLU2 Int6AWQ

by shram86
val_bpb
1.1834
Architecture
Transformer
Optimizer
Muon
Artifact Size
15,361,671 bytes

Training Techniques

Architecture
XSA
Enabled XSA on the last 5 layers, with only the final XSA layer gated.
parameters: {"layers":5,"gated_layers":1}
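The per-layer layout implied by these parameters can be sketched as a flag table: XSA enabled on the last 5 layers, with a learnable gate attached only to the final one. The function name and the (enabled, gated) representation are illustrative, not from the submission's code.

```python
def build_xsa_flags(n_layers, xsa_layers=5, gated_layers=1):
    """Per-layer (enabled, gated) flags: XSA is active on the last
    `xsa_layers` layers, and a gate is attached only to the last
    `gated_layers` of those (here: just the final layer)."""
    flags = []
    for i in range(n_layers):
        enabled = i >= n_layers - xsa_layers
        gated = enabled and i >= n_layers - gated_layers
        flags.append((enabled, gated))
    return flags
```

For a 12-layer model this enables XSA on layers 7–11 and gates only layer 11.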
ReLU²
Used RReLU2 MLP activation.
parameters: null
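Assuming RReLU2 denotes the squared-ReLU (ReLU²) family named in the header, the MLP activation is a minimal elementwise op; the exact variant the submission uses may differ.

```python
import numpy as np

def relu2(x):
    """ReLU² activation: max(x, 0) squared, applied elementwise in the MLP."""
    return np.square(np.maximum(x, 0.0))
```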
Optimizer
Muon
weight_decay: 0.01
momentum: null
other_params: null
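Muon's defining step is orthogonalizing the momentum-smoothed update matrix via a Newton–Schulz iteration. A sketch following the public Muon reference coefficients (the submission's own momentum and step settings are not given above):

```python
import numpy as np

def newton_schulz_orth(G, steps=5):
    """Approximately orthogonalize update matrix G (Muon-style) with a
    quintic Newton-Schulz iteration; coefficients from the public
    Muon reference implementation."""
    a, b, c = 3.4445, -4.7750, 2.0315
    transposed = G.shape[0] > G.shape[1]
    X = G / (np.linalg.norm(G) + 1e-7)   # scale so singular values <= 1
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X
```

The result has singular values pushed toward 1, so the update direction is preserved while its conditioning is flattened.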
Quantization
int6_awq
bits: 6
scope: all
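A minimal sketch of the symmetric int6 round-trip (range [-31, 31]). AWQ proper additionally rescales salient channels using activation statistics before quantizing; that calibration step is omitted here, and the per-channel axis choice is an assumption.

```python
import numpy as np

def quantize_int6(w, axis=0):
    """Symmetric per-channel int6 quantization to the range [-31, 31]."""
    qmax = 31  # 2**(6-1) - 1
    scale = np.max(np.abs(w), axis=axis, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero channels
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale
```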
Compression
lzma
level: null
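With `level: null` the preset is unspecified; a plausible export step is standard-library LZMA over the packed int6 payload, e.g. at maximum compression:

```python
import lzma

def compress_weights(payload: bytes) -> bytes:
    """LZMA-compress the packed quantized-weight payload for the artifact."""
    return lzma.compress(payload, preset=9 | lzma.PRESET_EXTREME)
```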
Weight Averaging
EMA
parameters: {"late_start":true}
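A sketch of the `late_start` EMA behavior: the shadow copy simply tracks the raw weights until a start fraction of training, then begins exponential averaging. The decay and start fraction shown are illustrative; the submission only states `late_start: true` and that candidates are compared post-train.

```python
class LateEMA:
    """EMA of parameters that only starts accumulating after
    `start_frac` of training; before that it mirrors the raw weights."""
    def __init__(self, decay=0.999, start_frac=0.8):
        self.decay, self.start_frac = decay, start_frac
        self.shadow = None
    def update(self, params, progress):
        if self.shadow is None or progress < self.start_frac:
            self.shadow = [p.copy() for p in params]  # mirror until late start
            return
        for s, p in zip(self.shadow, params):
            s *= self.decay
            s += (1.0 - self.decay) * p
```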
Evaluation
sliding window eval
parameters: {"stride":64}
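One common reading of stride-64 sliding-window evaluation: each window advances by the stride, and only the final stride's tokens are scored, so every scored position gets near-full left context. The submission's exact windowing convention isn't specified; `nll_fn` is a stand-in for a model call returning per-token negative log-likelihoods.

```python
import numpy as np

def sliding_window_nll(nll_fn, tokens, window=2048, stride=64):
    """Average NLL where each window of `window` tokens advances by
    `stride` and only its last `stride` positions are scored."""
    total, count, pos = 0.0, 0, 0
    while pos + window <= len(tokens):
        nlls = nll_fn(tokens[pos:pos + window])  # per-token NLLs
        total += float(np.sum(nlls[-stride:]))   # score only fresh tokens
        count += stride
        pos += stride
    return total / max(count, 1)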
Initialization
linear-by-depth scale init
Depth-aware constant initialization for attn_scale and mlp_scale, with stronger scales in later layers.
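The stated "stronger scales in later layers" with linear-by-depth constants can be sketched as an interpolation over layer index; the endpoint values here are placeholders, not the submission's.

```python
def depth_scales(n_layers, lo=0.5, hi=1.5):
    """Linear-by-depth constants for attn_scale / mlp_scale:
    smaller in early layers, larger in later ones."""
    if n_layers == 1:
        return [hi]
    return [lo + (hi - lo) * i / (n_layers - 1) for i in range(n_layers)]
```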
LR Schedule
warmdown
parameters: {"start_progress":0.75,"progress_based":true}
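A progress-based warmdown matching these parameters: hold the base LR until 75% of training, then decay linearly to zero. The linear decay shape is an assumption; only the start point and progress-based trigger are stated above.

```python
def warmdown_lr(progress, base_lr=1.0, start_progress=0.75):
    """Constant LR until `start_progress` of training, then linear
    decay to 0 at progress = 1.0."""
    if progress < start_progress:
        return base_lr
    frac = (progress - start_progress) / (1.0 - start_progress)
    return base_lr * (1.0 - frac)
```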
Sequence Length
sequence_length
train_length: 2048
eval_length: null
Regularization
weight decay
parameters: {"value":0.01}

Novel Contributions

  • XSA enabled on the last 5 layers with only the final XSA layer gated
  • RReLU2 MLP activation
  • int6 AWQ with lzma export
  • val-tail calibration for quantization
  • late EMA with post-train candidate selection
  • depth-aware constant initialization for attn_scale and mlp_scale
  • stride-64 sliding-window evaluation