PR #529

closed

Record: GPTQ + Legal TTT (3-seed mean val_bpb=1.1195)

by EthanYangTW
val_bpb: 1.1195
Architecture: Transformer
Optimizer: AdamW
Artifact Size: 15.96 MB

Training Techniques

Quantization
  • QAT (bits: 6, scope: all)
  • GPTQ (bits: 6, scope: all)
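To make the QAT entry concrete, here is a minimal sketch (assumed, not the PR's code) of symmetric fake int6 quantization with percentile clipping; during QAT the backward pass would pass gradients straight through (straight-through estimator), so only the forward math is shown.

```python
# Sketch of symmetric fake int6 quantization with percentile clipping.
# Illustrative only; the record's actual QAT implementation may differ.

def percentile(values, q):
    """Nearest-rank percentile of a list of magnitudes (q in [0, 1])."""
    s = sorted(values)
    idx = min(int(q * len(s)), len(s) - 1)
    return s[idx]

def fake_quant_int6(weights, clip_percentile=0.9995):
    """Quantize-dequantize to the int6 grid (-32..31) over a clipped range.

    Clipping at a high percentile of |w| keeps rare outliers from
    inflating the quantization step for all other weights.
    """
    clip = percentile([abs(w) for w in weights], clip_percentile)
    scale = clip / 31.0  # map [-clip, clip] onto the int6 grid
    out = []
    for w in weights:
        q = round(max(-clip, min(clip, w)) / scale)
        q = max(-32, min(31, q))  # clamp to representable int6 values
        out.append(q * scale)     # dequantize back to float
    return out
```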
Architecture
  • XSA applied to all 11 layers (layers: 11)
  • Partial RoPE: partial rotary positional embeddings (dimensions: 16/64)
  • SmearGate with OrthoInit
  • BigramHash feature with shared VE128 in later layers (size: 2048)
  • Grouped-query attention with 8 attention heads and 4 KV heads
  • MLP3x: MLP with 3x relu²
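The grouped-query attention entry can be illustrated with a small sketch (ours, not the PR's code): with 8 query heads and 4 KV heads, each pair of query heads shares one KV head.

```python
# Sketch of the head-sharing pattern in grouped-query attention (GQA).
# With heads=8 and kv_heads=4 (as in this record), query heads are
# grouped in pairs and each pair attends with one shared KV head.

def kv_head_for_query_head(h, heads=8, kv_heads=4):
    """Map a query-head index to the KV head it attends with."""
    group_size = heads // kv_heads  # 2 query heads per KV head here
    return h // group_size
```

Because only `kv_heads` key/value projections are stored, the KV cache here is half the size of full multi-head attention with 8 KV heads.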
Optimizer
  • AdamW (weight_decay: 0, learning_rate: 0.0001)
Weight Averaging
  • EMA (decay: 0.997)
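A minimal sketch of the EMA weight averaging (decay 0.997 as recorded; function names are ours): after each optimizer step, shadow weights track a geometric average of the training weights, and evaluation uses the shadow copy.

```python
# Sketch of exponential moving average (EMA) of model weights.
# shadow <- decay * shadow + (1 - decay) * weights, applied elementwise
# after every optimizer step; decay=0.997 matches this record.

def ema_update(shadow, weights, decay=0.997):
    """Return the updated shadow weights (flat lists for illustration)."""
    return [decay * s + (1.0 - decay) * w for s, w in zip(shadow, weights)]
```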
Compression
  • zstd (level: 22)
Evaluation
  • Sliding-window eval (stride: 32)
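The sliding-window evaluation can be sketched as follows (only the stride of 32 comes from the record; the window length is an assumed parameter): each window is scored but only its last `stride` tokens contribute, so every token is predicted with long left context.

```python
# Sketch of sliding-window evaluation spans. Each span (start, end,
# score_from) means: run the model on tokens [start, end) but count the
# loss only for tokens [score_from, end). With a small stride, almost
# every scored token sees close to a full window of left context.

def sliding_windows(n_tokens, window=1024, stride=32):
    """Return (start, end, score_from) spans covering [0, n_tokens)."""
    spans = []
    pos = 0
    while pos < n_tokens:
        start = max(0, pos + stride - window)
        end = min(pos + stride, n_tokens)
        spans.append((start, end, pos))  # score tokens [pos, end)
        pos = end
    return spans
```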
Test-Time Training
  • Score-first TTT (epochs_per_chunk: 3, chunk_size: 131072, stride: 32, learning_rate: 0.0001, weight_decay: 0)
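The "score-first" discipline that keeps TTT legal can be sketched as below (`score` and `train_step` are hypothetical stand-ins for real model calls): every chunk is scored with the current weights before any gradient step uses it, so no token's score ever depends on a model that has already seen that token.

```python
# Sketch of legal score-first test-time training. The callables are
# placeholders: `score(chunk)` returns the loss under current weights,
# `train_step(chunk)` performs one gradient update on that chunk.

def score_first_ttt(chunks, score, train_step, epochs_per_chunk=3):
    """Return per-chunk losses; weights adapt only after scoring."""
    losses = []
    for chunk in chunks:
        losses.append(score(chunk))        # evaluate first (legal)
        for _ in range(epochs_per_chunk):  # then adapt on the same chunk
            train_step(chunk)
    return losses
```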
Initialization
  • OrthoInit: orthogonal initialization used with SmearGate
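A minimal sketch of orthogonal initialization (a Gram-Schmidt variant; the actual OrthoInit implementation may differ): rows of a random Gaussian matrix are orthonormalized so the initial linear map preserves norms.

```python
import random

# Sketch of orthogonal initialization via Gram-Schmidt: draw Gaussian
# rows, subtract their projections onto earlier rows, and normalize,
# yielding a matrix with orthonormal rows.

def ortho_init(n, seed=0):
    """Return an n x n matrix (list of lists) with orthonormal rows."""
    rng = random.Random(seed)
    rows = []
    while len(rows) < n:
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        for u in rows:  # remove components along earlier rows
            dot = sum(a * b for a, b in zip(v, u))
            v = [a - dot * b for a, b in zip(v, u)]
        norm = sum(a * a for a in v) ** 0.5
        if norm > 1e-8:  # skip (rare) near-degenerate draws
            rows.append([a / norm for a in v])
    return rows
```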
Regularization
  • Layerwise LN scale
Sequence Length
  • train_length: 131072 (eval_length not specified)
Other
  • Early QAT with threshold 0.5 using fake int6 STE and percentile clipping before GPTQ (threshold: 0.5, clip_percentile: 0.9995)
  • Manual gradient all_reduce without a DDP wrapper
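The manual all_reduce can be illustrated with a single-process simulation (real multi-GPU code would call torch.distributed.all_reduce on each gradient tensor after backward): gradients are summed across ranks and divided by world size, which is what DDP's wrapper would otherwise do automatically.

```python
# Single-process simulation of a manual gradient all_reduce (mean).
# Each inner list stands in for one rank's flattened gradients; the
# result is the averaged gradient every rank would apply.

def all_reduce_mean(grads_per_rank):
    """Average matching gradient entries across ranks."""
    world_size = len(grads_per_rank)
    return [sum(g) / world_size for g in zip(*grads_per_rank)]
```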

Novel Contributions

  • GPTQ quantization with Hessian-aware error compensation, column reordering, and 256-sample calibration
  • Early QAT with threshold 0.5 and longer adaptation to quantization noise
  • EMA tuned to 0.997
  • Legal score-first TTT where each token is scored before any gradient update using it
  • Manual gradient all_reduce without a DDP wrapper
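To make the error-compensation idea in the GPTQ bullet concrete, here is a heavily simplified 1-D sketch (no Hessian weighting or column reordering, both of which the real GPTQ uses): each weight's rounding error is folded into the weights not yet quantized, so later positions absorb earlier positions' error instead of errors accumulating independently.

```python
# Greatly simplified sketch of quantization with error compensation:
# quantize weights one at a time, carrying each rounding error forward
# into the next unquantized weight. GPTQ does this per column with
# inverse-Hessian weighting; this sketch only shows the carry idea.

def quantize_with_compensation(weights, grid):
    """Quantize each weight to `grid`, pushing its error onto the next."""
    snap = lambda w: min(grid, key=lambda g: abs(g - w))
    out, carry = [], 0.0
    for w in weights:
        q = snap(w + carry)       # quantize the error-compensated value
        carry = (w + carry) - q   # residual error, passed along
        out.append(q)
    return out
```

On [0.3, 0.3, 0.3] with grid {0.0, 0.5}, naive rounding gives a total of 1.5 versus the true 0.9, while the compensated version lands much closer.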