IS MORE DATA WORTH THE COST? DATASET SCALING LAWS IN A TINY ATTENTION-ONLY DECODER
Anonymous authors
Paper under double-blind review
ABSTRACT
Training Transformer language models is expensive, as performance typically improves with increasing dataset size and computational budget. Although scaling laws describe this trend at large scale, their implications in controlled, smaller-scale settings remain less explored. In this work, we isolate dataset-size effects using a strongly reduced attention-only decoder architecture. By training on progressively larger power-of-two subsets, we observe smooth performance improvements accompanied by clear diminishing returns, consistent with scaling-law behavior. Using only about 30% of the training data is sufficient to reach approximately 90% of the full-data validation token-level accuracy. These results provide actionable insights into dataset scaling in a controlled, component-isolated setting and offer practical guidance for balancing dataset size against computational cost under tight compute budgets, such as in small research labs and exploratory model development.
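The following is a minimal, hypothetical Python sketch (using NumPy/SciPy) of the kind of analysis the abstract describes: generating power-of-two training subsets and fitting a saturating curve to validation accuracy to estimate how much data is needed for a target fraction of full-data performance. All sizes, constants, and the `saturating` functional form are illustrative assumptions, not the paper's actual setup or results.

```python
# Hypothetical sketch (not the authors' code): build power-of-two subset sizes
# and fit a saturating curve to validation accuracy versus dataset size.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)


def power_of_two_subsets(dataset_size: int, smallest: int = 1024):
    """Yield subset sizes smallest, 2*smallest, ... up to the full dataset."""
    size = smallest
    while size < dataset_size:
        yield size
        size *= 2
    yield dataset_size  # always include the full dataset


# Placeholder "results": synthetic accuracies standing in for the validation
# token-level accuracy obtained from real training runs on each subset.
full_size = 2_000_000
sizes = np.array(list(power_of_two_subsets(full_size)), dtype=float)
acc = 0.62 * (1.0 - np.exp(-sizes / 4e5)) + rng.normal(0.0, 0.005, sizes.shape)


def saturating(n, a, b):
    """Simple saturating scaling form: accuracy approaches a as n grows."""
    return a * (1.0 - np.exp(-n / b))


(a_hat, b_hat), _ = curve_fit(saturating, sizes, acc, p0=(0.6, 1e5))

# Smallest dataset size reaching 90% of the fitted full-data accuracy.
target = 0.9 * saturating(full_size, a_hat, b_hat)
needed = -b_hat * np.log(1.0 - target / a_hat)
print(f"~{needed / full_size:.0%} of the data reaches 90% of full-data accuracy")
```

Under these synthetic numbers the fit recovers a fraction in the same spirit as the abstract's ~30% figure, but the exact value depends entirely on the assumed constants.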