Up to 26% off: The most affordable Serverless Inference on the market.
Easy container deployment with request-based auto-scaling, scale-to-zero, and pay-per-use:
→ Interruptible spot pricing (50% off)
→ B200, H200, and more
→ Multi-GPU support
Learn more: datacrunch.io/serverless-c...
Text-to-image generation without the "AI look"? 📸
Built by Black Forest Labs and Krea, FLUX.1 Krea brings exceptional realism with a distinct aesthetic and flexibility.
Available on DataCrunch for cost-efficient inference at scale for $0.020 / image ⬇️
datacrunch.io/managed-endp...
✅ LIMITED OFFER: Get a 20% bonus on all your top-ups until the end of this week.
We're offering cloud credits to thank you for building with the DataCrunch Cloud Platform.
Sign in to top up: cloud.datacrunch.io/signin?utm_s...
Or learn how from our docs: docs.datacrunch.io/welcome-to-d...
Additional 8x NVIDIA B200 SXM6 servers – now available on the DataCrunch Cloud Platform.
Self-service access without approvals – peak flexibility with unmatched prices:
→ Fixed pay-as-you-go: $4.49/h
→ Dynamic: $2.80/h
→ Spot: $1.40/h
Deploy now: cloud.datacrunch.io/signin?utm_s...
10% off H200 SXM5 141GB – from $2.90/h per GPU down to $2.61/h ✅
This pricing applies to:
→ NVLink instances (1x, 2x, 4x, and 8x)
→ InfiniBand clusters (16x–64x) with 1-day contracts
Deploy now: cloud.datacrunch.io/signin?utm_s...
🇫🇷 TOMORROW: Our side event for Raise Summit with Hugging Face and SemiAnalysis on #SovereignAI.
Join us alongside other AI engineers and founders from 18:00 to 21:00 at Station F ⬇️
Sign up on Luma: lu.ma/qx7ydhe6?utm...
Instant Clusters – now available at the same price per GPU as VMs: $2.90/h ✅
→ 16x-64x H200 SXM5 141GB with 3.2 Tb/s InfiniBand™ interconnect
→ Pre-installed Slurm for easy job scheduling
→ Self-service access without approvals
→ 1-day contracts
cloud.datacrunch.io/signin?utm_s...
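With Slurm pre-installed on Instant Clusters, launched processes can discover their placement from the standard `SLURM_*` environment variables. A minimal sketch (not DataCrunch-specific code, just the generic Slurm convention) of reading rank, world size, and local rank the way distributed training launchers typically do:

```python
import os

def slurm_rank_info(env=os.environ):
    """Read process placement from standard Slurm environment variables.

    Falls back to a single-process default when the variables are absent
    (e.g. when run outside of srun).
    """
    rank = int(env.get("SLURM_PROCID", 0))        # global rank of this task
    world_size = int(env.get("SLURM_NTASKS", 1))  # total tasks in the job
    local_rank = int(env.get("SLURM_LOCALID", 0)) # rank within this node
    return {"rank": rank, "world_size": world_size, "local_rank": local_rank}

if __name__ == "__main__":
    info = slurm_rank_info()
    print(f"rank {info['rank']} of {info['world_size']} (local {info['local_rank']})")
```

Frameworks such as PyTorch distributed can map these values directly onto their own rank/world-size initialization.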
🇫🇷 Join our exclusive event in Paris on July 8 at 18:00-22:00 with Hugging Face and SemiAnalysis.
We'll explore Sovereign AI and the software-hardware stack making it a reality in regulated industries such as defense and healthcare.
Save your spot ⬇️
lu.ma/qx7ydhe6?utm...
We tested the NVIDIA #GH200 system, where the GPU and the CPU operate on unified memory.
The NVLink-C2C connection offers a total bandwidth of 900 GB/s (450 GB/s per direction).
That is roughly 7 times the bandwidth of a conventional PCIe Gen5 x16 connection.
Read more ⬇️
datacrunch.io/blog/data-mo...
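The 7x figure checks out with quick arithmetic, taking ~128 GB/s as the bidirectional bandwidth of a PCIe Gen5 x16 link:

```python
# Back-of-the-envelope check of the "7x" claim.
nvlink_c2c_total = 900   # GB/s, NVLink-C2C total (450 GB/s per direction)
pcie_gen5_x16 = 128      # GB/s, PCIe Gen5 x16 bidirectional (~64 GB/s per direction)

ratio = nvlink_c2c_total / pcie_gen5_x16
print(f"NVLink-C2C is ~{ratio:.1f}x a PCIe Gen5 x16 link")  # ~7.0x
```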
Higher capacity = lower prices ✅
→ B200 SXM6 at $4.49/h
→ H200 SXM5 at $2.90/h
Both platforms are available on DataCrunch with self-service access and without approvals.
Deploy now: cloud.datacrunch.io/signin?utm_s...
🆕 Inference API for the open-weight FLUX.1 Kontext [dev] by Black Forest Labs
The new frontier of image editing, running on DataCrunch GPU infrastructure and inference services, with an additional efficiency boost from WaveSpeedAI.
$0.025 per image: datacrunch.io/managed-endp...
❗️ We just expanded our capacity of B200 SXM6 180GB servers – now available on the DataCrunch Cloud Platform.
The best thing is…
You can deploy the Blackwell platform without approvals.
Just sign in, select the instance type, and start your deployment:
cloud.datacrunch.io?utm_source=b...
Our step-by-step guide to integrating Pyxis and Enroot into distributed TorchTitan workloads for scalability and reproducibility.
Try it today with our Instant Clusters: 16x-64x H200 SXM5 141GB with InfiniBand interconnect:
datacrunch.io/blog/pyxis-a...
📢 CUSTOMER STORY: How Freepik scaled FLUX media generation to over 60 million requests per month with DataCrunch and WaveSpeedAI.
Read the full story, including ⬇️
- Our research into lossless optimizations
- Inference benchmarking
- Future predictions
datacrunch.io/blog/how-fre...
NVIDIA CEO, Jensen Huang, held his keynote today at NVIDIA GTC Paris 2025 and Viva Technology. As always, he gave an insightful presentation with numerous highlights!
One of ours was getting featured among the key European Cloud Service Providers!
What was yours?
We kicked off our summer at AaltoAI Hack 25.
It was amazing to see what 25 teams could build in 48 hours, with most teams running on cutting-edge hardware via the DataCrunch Cloud Platform.
We thank AaltoAI for this opportunity to support the next generation of AI builders in Finland 🇫🇮
🇫🇷 DataCrunch is coming to NVIDIA GTC Paris 2025 and Viva Technology.
📨 If you've been looking to get in touch, feel free to connect with our CTO, Arturs Polis.
📢 Stay tuned for FLUX.1 Kontext [dev] – an open-weight version coming soon to the DataCrunch Cloud Platform with the WaveSpeedAI inference engine.
🆕 Inference APIs for FLUX.1 Kontext [max] & [pro] are now available on DataCrunch!
We are an infrastructure partner of Black Forest Labs for Kontext, a suite of generative flow matching models for text-to-image and image-to-image editing.
Learn more: datacrunch.io/managed-endp...
🚨 Summer Inference by Symposium AI is happening next Wednesday, June 4, at 16:00-22:00.
🇫🇮 This event will bring together 250 AI engineers, researchers, and founders under one roof in Helsinki.
🔗 You can still grab one of the last remaining seats: lu.ma/x5hhj79x
📈 Due to high demand, we'll add more B200 SXM6 servers to our on-demand pool in early June.
⚡️ You'll have self-service access to more of this next-gen hardware without quotas, approvals, or sales calls.
🔗 Join the waitlist or reserve your capacity: datacrunch.io/b200#waitlist
We're in Taipei for Computex this week. Let's connect! You can reach out to Ruben, Jorge, and Anssi.
We also recommend you attend the after-hours meetup by SemiAnalysis on Wednesday.
lu.ma/b9bw7xxz
Great news about Instant Clusters:
1️⃣ We've lowered the minimum contract duration to 1 day
2️⃣ You can deploy 4x 8xH200 nodes right away* for $121.44/h
Get instant, self-serve access: cloud.datacrunch.io
*Due to high demand, it can be hard to catch more than 4x nodes in the wild.
If you need larger capacities, drop us a line at:
- datacrunch.io/contact
- support@datacrunch.io
What's the secret sauce for efficient transformer inference? 🥫
At least one of the ingredients is Multi-Head Latent Attention.
Check out our comparison of theoretical and practical performance between GQA vs. MHA vs. MLA ⬇️
datacrunch.io/blog/multi-h...
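The core trade-off behind that comparison is KV-cache size: GQA shrinks the cache by sharing KV heads across query heads, while MLA caches a single compressed latent instead of per-head K/V. A toy calculation (all dimensions are illustrative assumptions, not figures from the post) of the cache footprint per token per layer:

```python
def kv_cache_bytes_per_token(n_kv_heads, head_dim, bytes_per_elem=2):
    """KV-cache footprint per token per layer: one K and one V vector per KV head."""
    return 2 * n_kv_heads * head_dim * bytes_per_elem  # 2 = K + V

# Illustrative config: 32 query heads, head_dim 128, fp16 (2 bytes/element).
mha = kv_cache_bytes_per_token(n_kv_heads=32, head_dim=128)  # one KV head per query head
gqa = kv_cache_bytes_per_token(n_kv_heads=8, head_dim=128)   # query heads share KV in groups of 4
# MLA caches a compressed latent rather than per-head K/V;
# with an illustrative latent dimension of 512, that is just:
mla = 512 * 2  # bytes per token per layer (fp16)
print(mha, gqa, mla)  # 16384 4096 1024
```

Smaller caches mean longer contexts and larger batches fit in GPU memory, which is where the practical inference gains come from.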
⚡️ In collaboration with WaveSpeedAI, we set the benchmark for real-time image inference with SOTA diffusion in under 1 second.
⚙️ We optimized the FLUX-dev model on the NVIDIA B200 GPU, resulting in faster API responses and lower cost per image.
🔗 Read our report: datacrunch.io/blog/flux-on...
Note: We conducted this independent research with review from the SGLang team.
We thank them for their ongoing support and collaboration.
New blog post: Optimization techniques applied by the SGLang team for DeepSeek-V3 inference.
You'll find a comprehensive overview of the techniques, their benefits and implications, and our benchmarks.
datacrunch.io/blog/deepsee...
🚨 NVIDIA HGX B200: available NOW on DataCrunch!
Be among the first to gain instant access to 1x, 2x, 4x, and 8x B200 GPUs with our high-performance VMs.
Sign up and enjoy expert support and a secure service where performance meets sustainability.
🔗 cloud.datacrunch.io