Empowering sustainable AI and DS projects with Renku. May 7th 2026, from 9.00 to 12.30. Circle Convention Center, Zurich Airport

Join the 2026 IEEE Swiss Conference on Data Science and attend the three-hour Renku workshop on May 7th. Learn how to integrate multiple resources for sustainable data science and AI, connect different data and code, and set up compute environments for collaborative projects!

#sds2026


Is More Data Worth the Cost? Dataset Scaling Laws in a Tiny Attention-Only Decoder

Anonymous authors
Paper under double-blind review

ABSTRACT
Training Transformer language models is expensive, as performance typically improves with increasing dataset size and computational budget. Although scaling laws describe this trend at large scale, their implications in controlled, smaller-scale settings remain less explored. In this work, we isolate dataset-size effects using a strongly reduced attention-only decoder architecture. By training on progressively larger power-of-two subsets, we observe smooth performance improvements accompanied by clear diminishing returns, consistent with scaling-law behavior. Using only about 30% of the training data is sufficient to reach approximately 90% of the full-data validation token-level accuracy. These results provide actionable insights into dataset scaling in a controlled, component-isolated setting and offer practical guidance for balancing dataset size and computational cost in compute-restricted settings, such as small research labs and exploratory model development.
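The abstract's protocol — training on progressively larger power-of-two subsets and locating the smallest subset that reaches 90% of full-data accuracy — can be sketched with a saturating power law of the form acc(N) = a_max − b·N^(−c), a common functional form for scaling-law fits. The constants `a_max`, `b`, and `c` below are illustrative assumptions, not values fitted to the paper's data, so the printed fraction will not match the paper's ~30% figure.

```python
import numpy as np

# Illustrative constants for a saturating power law; NOT fitted to the paper.
a_max, b, c = 0.60, 0.97, 0.10

def token_accuracy(n_tokens):
    """Hypothetical validation token-level accuracy for a dataset of n_tokens."""
    return a_max - b * n_tokens ** (-c)

# Power-of-two subset sizes, mirroring the paper's subset protocol.
sizes = 2 ** np.arange(10, 21)          # 1K .. 1M tokens
full = token_accuracy(sizes[-1])        # full-data accuracy

# Smallest subset whose predicted accuracy reaches 90% of the full-data value.
frac = sizes[token_accuracy(sizes) >= 0.9 * full][0] / sizes[-1]
print(f"{frac:.1%} of the data reaches 90% of full-data accuracy")
```

Under this toy curve the marginal gain per doubling shrinks geometrically, which is the "clear diminishing returns" pattern the abstract reports; only the fitted constants determine where the 90% threshold lands.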


Our #paper "Is More Data Worth the Cost? Dataset Scaling Laws in a Tiny Attention-Only Decoder"

was accepted to the #SDS2026 #Conference and will also be presented at the #ICLR2026 #DATA-FM Workshop!

I am excited to discuss the paper and have this work finally published! 🥳


📊🤖 SDS2026 | Data, Impact & Value

📅 6–7 May 2026

🎤 Keynotes, workshops & best-practice use cases

🔬 IEEE research + real-world outcomes
Bridging science and business to turn data into value.

Register here 👉 https://lnkd.in/edu-Y349

#SDS2026 #DataScience #AI #IEEE
