val_bpb: 1.1722
Architecture: —
Optimizer: —
Artifact Size: 15.3 MB

Training Techniques
  Quantization: int8 (bits: 8, scope: all)
  Compression: lzma (level: 6)
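The pipeline above (symmetric int8 quantization of all weights, then LZMA at level 6) can be sketched with the Python standard library alone. This is a minimal illustration, not the entry's actual code: the weight data is synthetic, and the helper names and the choice of per-tensor symmetric scaling are assumptions.

```python
import lzma
import random
from array import array

random.seed(1337)  # matches the seed reported in the entry

# Synthetic stand-in for model weights (the real artifact's weights are not shown).
weights = [random.gauss(0.0, 0.05) for _ in range(4096)]

# Symmetric int8 quantization: scale so the largest |w| maps to 127.
scale = max(abs(w) for w in weights) / 127.0
quantized = array("b", [round(w / scale) for w in weights])

# Serialize the int8 tensor and compress with LZMA preset 6 (the "level: 6" above).
raw = quantized.tobytes()
compressed = lzma.compress(raw, preset=6)

# Dequantization for inference is just q * scale.
restored = [q * scale for q in quantized]

print(f"raw int8 bytes: {len(raw)}, lzma-compressed: {len(compressed)}")
print(f"max quantization error: {max(abs(w - r) for w, r in zip(weights, restored)):.6f}")
```

The round-trip error of symmetric quantization is bounded by half the scale step, which is why an int8 artifact can stay within budget on bits-per-byte metrics while LZMA squeezes out the remaining redundancy in the byte stream.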
Novel Contributions
- Adds an lzma-level-6 record for the 10min_16mb track
- Uses an int8-quantized model with LZMA compression
- Stdlib-only artifact-size ablation on the target hardware
- Includes a seed-1337 result