
Up to 26% off: The most affordable Serverless Inference on the market.

Easy container deployment with request-based auto-scaling, scale-to-zero, and pay-per-use:

→ Interruptible spot pricing (50% off)
→ B200, H200, and more
→ Multi-GPU support
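The request-based auto-scaling described above (including scale-to-zero) can be sketched as a simple replica-count policy. The function and its parameters are illustrative, not the platform's actual API:

```python
import math

def desired_replicas(queued_requests: int,
                     requests_per_replica: int,
                     max_replicas: int) -> int:
    """Illustrative request-based auto-scaling policy.

    Scale-to-zero: with no queued requests, no replicas run,
    so an idle endpoint costs nothing under pay-per-use.
    """
    if queued_requests == 0:
        return 0  # scale to zero when idle
    # One replica per `requests_per_replica` in-flight requests,
    # capped at the configured maximum.
    return min(max_replicas, math.ceil(queued_requests / requests_per_replica))

print(desired_replicas(0, 32, 8))     # idle -> 0
print(desired_replicas(100, 32, 8))   # ceil(100/32) -> 4
print(desired_replicas(1000, 32, 8))  # capped -> 8
```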

Learn more: datacrunch.io/serverless-c...

7 months ago

Text-to-image generation without the "AI look"? 📸

Built by Black Forest Labs and Krea, FLUX.1 Krea brings exceptional realism with a distinct aesthetic and flexibility.

Available on DataCrunch for cost-efficient inference at scale, at $0.020 per image ⬇️

datacrunch.io/managed-endp...

8 months ago

✅ LIMITED OFFER: Get a 20% bonus on all your top-ups until the end of this week.

We're offering cloud credits to thank you for building with the DataCrunch Cloud Platform.

Sign in to top up: cloud.datacrunch.io/signin?utm_s...

Or learn how from our docs: docs.datacrunch.io/welcome-to-d...

8 months ago

Additional 8x NVIDIA B200 SXM6 instances – now available on the DataCrunch Cloud Platform.

Self-service access without approvals – peak flexibility with unmatched prices:

→ Fixed pay-as-you-go: $4.49/h
→ Dynamic: $2.80/h
→ Spot: $1.40/h

Deploy now: cloud.datacrunch.io/signin?utm_s...

8 months ago

10% off H200 SXM5 141GB – from $2.90/h per GPU down to $2.61/h ✅

This pricing applies to:
→ NVLink instances (1x, 2x, 4x, and 8x)
→ InfiniBand clusters (16x–64x) with 1-day contracts

Deploy now: cloud.datacrunch.io/signin?utm_s...

8 months ago

🇫🇷 TOMORROW: Our side event for Raise Summit with Hugging Face and SemiAnalysis on #SovereignAI.

Join us alongside other AI engineers and founders from 18:00 to 21:00 at Station F ⬇️

Sign up on Luma: lu.ma/qx7ydhe6?utm...

8 months ago

Instant Clusters – now available at the same price per GPU as VMs: $2.90/h ✅

→ 16x-64x H200 SXM5 141GB with 3.2 Tb/s InfiniBand™ interconnect
→ Pre-installed Slurm for easy job scheduling
→ Self-service access without approvals
→ 1-day contracts

cloud.datacrunch.io/signin?utm_s...

8 months ago
Towards Sovereign AI with Hugging Face, SemiAnalysis, and DataCrunch · Luma · Join us for an exclusive event dedicated to Sovereign AI and the software-hardware stack required to make it a reality. Get first-hand insights into AI…

🇫🇷 Join our exclusive event in Paris on July 8 at 18:00-22:00 with Hugging Face and SemiAnalysis.

We'll explore Sovereign AI and the software-hardware stack making it a reality in regulated industries such as defense and healthcare.

Save your spot ⬇️

lu.ma/qx7ydhe6?utm...

8 months ago

We tested the NVIDIA #GH200 system, where the GPU and CPU operate under a unified memory architecture.

The NVLink-C2C connection offers a total bandwidth of 900 GB/s (450 GB/s per direction).

That is 7 times the bandwidth of a conventional PCIe connection.
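As a sanity check, the 7x figure lines up with PCIe Gen5 x16, whose peak is roughly 64 GB/s per direction (the PCIe number is our assumption for comparison, not from the post):

```python
# Bandwidth figures in GB/s, per direction.
nvlink_c2c = 450            # from the post (900 GB/s total)
pcie_gen5_x16 = 64          # approximate PCIe Gen5 x16 peak (assumed)

ratio = nvlink_c2c / pcie_gen5_x16
print(round(ratio))         # -> 7
```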

Read more ⬇️

datacrunch.io/blog/data-mo...

9 months ago

Higher capacity = lower prices ✅

→ B200 SXM6 at $4.49/h
→ H200 SXM5 at $2.90/h

Both platforms are available on DataCrunch with self-service access and without approvals.

Deploy now: cloud.datacrunch.io/signin?utm_s...

9 months ago

🆕Inference API for the open-weight FLUX.1 Kontext [dev] by Black Forest Labs

The new frontier of image editing, running on DataCrunch GPU infrastructure and inference services with an additional efficiency boost from WaveSpeedAI.

$0.025 per image: datacrunch.io/managed-endp...

9 months ago

❗️ We just expanded our capacity of B200 SXM6 180GB servers – available on the DataCrunch Cloud Platform.

The best thing is…

You can deploy the Blackwell platform without approvals.

Just sign in, select the instance type, and start your deployment:

cloud.datacrunch.io?utm_source=b...

9 months ago
Pyxis and Enroot Integration for the DataCrunch Instant Clusters · Step-by-step configuration and testing of multi-node distributed workloads using TorchTitan, integrating with Pyxis and Enroot.

Our step-by-step guide for integrating Pyxis and Enroot into distributed workloads using TorchTitan, ensuring scalability and reproducibility.

Try it today with our Instant Clusters: 16x-64x H200 SXM5 141GB with InfiniBand interconnect:

datacrunch.io/blog/pyxis-a...

9 months ago
How Freepik scaled FLUX media generation to millions of requests per day with DataCrunch and WaveSpeed · Cost-efficient and low-latency image generation without compromising on model output, with high GPU utilization, elastic scaling, and near-zero cold starts.

📢 CUSTOMER STORY: How Freepik scaled FLUX media generation to over 60 million requests per month with DataCrunch and WaveSpeedAI.

Read the full story, covering ⬇️

- Our research into lossless optimizations
- Inference benchmarking
- Future predictions

datacrunch.io/blog/how-fre...

9 months ago

NVIDIA CEO Jensen Huang delivered his keynote today at NVIDIA GTC Paris 2025 and Viva Technology. As always, he gave an insightful presentation with numerous highlights!

One of our highlights was being featured among the key European Cloud Service Providers!

What was yours?

9 months ago

We kicked off our summer at AaltoAI Hack 25.

It was amazing to see what 25 teams could build in 48 hours, with most deploying cutting-edge hardware on the DataCrunch Cloud Platform.

We thank AaltoAI for this opportunity to support the next generation of AI builders in Finland 🇫🇮

9 months ago

🇫🇷 DataCrunch is coming to NVIDIA GTC Paris 2025 and Viva Technology.

📨 If you've been looking to get in touch, feel free to connect with our CTO, Arturs Polis.

9 months ago

📢 Stay tuned for FLUX.1 Kontext [dev] – an open-weight version coming soon to the DataCrunch Cloud Platform with the WaveSpeedAI inference engine.

10 months ago

🆕 Inference APIs for FLUX.1 Kontext [max] & [pro] are now available on DataCrunch!

We are an infrastructure partner of Black Forest Labs for Kontext, a suite of generative flow matching models for text-to-image and image-to-image editing.

Learn more: datacrunch.io/managed-endp...

10 months ago
Symposium AI - Summer Inference · Luma · Join 250 leading AI builders for an epic night in Helsinki! Symposium AI events bring together top AI talent, researchers, and engineers who are actively…

🚨 Summer Inference by Symposium AI is happening next Wednesday, June 4, at 16:00-22:00.

🇫🇮 This event will bring together 250 AI engineers, researchers, and founders under one roof in Helsinki.

🔗 You can still grab one of the last remaining seats: lu.ma/x5hhj79x

10 months ago

📈 Due to high demand, we'll add more B200 SXM6 servers to our on-demand pool in early June.

⚡️ You'll have self-service access to more of this next-gen hardware without quotas, approvals, or sales calls.

🔗 Join the waitlist or reserve your capacity: datacrunch.io/b200#waitlist

10 months ago

We're in Taipei for Computex this week. Let's connect! You can reach out to Ruben, Jorge, and Anssi.

We also recommend you attend the after-hours meetup by SemiAnalysis on Wednesday.

lu.ma/b9bw7xxz

10 months ago

*Due to high demand, it can be hard to catch more than 4x nodes in the wild.

If you need larger capacities, drop us a line at:
- datacrunch.io/contact
- support@datacrunch.io

10 months ago

Great news about Instant Clusters:

1️⃣ We've lowered the minimum contract duration to 1 day
2️⃣ You can deploy 4x 8xH200 nodes right away* for $121.44/h

Get instant, self-serve access: cloud.datacrunch.io
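The headline price works out per GPU as simple arithmetic on the numbers above:

```python
# Numbers taken directly from the post.
nodes = 4
gpus_per_node = 8                 # 8x H200 per node
cluster_price_per_hour = 121.44   # USD for the whole cluster

total_gpus = nodes * gpus_per_node
per_gpu_hour = cluster_price_per_hour / total_gpus
print(total_gpus)                 # -> 32
print(round(per_gpu_hour, 3))     # -> 3.795 USD per GPU-hour
```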

10 months ago
Multi-Head Latent Attention: Benefits in Memory and Computation · Multi-Head Latent Attention (MLA) vs. Group Query Attention (GQA): Transformer inference optimization in DeepSeek V3 with lower KV cache and higher FLOPs/s.

What's the secret sauce for efficient transformer inference? 🥫

At least one of the ingredients is Multi-Head Latent Attention.

Check out our comparison of theoretical and practical performance across MHA, GQA, and MLA ⬇️

datacrunch.io/blog/multi-h...
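The memory side of the comparison is easy to see from per-token KV-cache sizes. A rough sketch, with illustrative dimensions (loosely DeepSeek-V3-like, not exact model configs):

```python
# Per-token, per-layer KV-cache elements for one sequence.
# Dimensions are illustrative, not exact model configs.
n_heads = 128          # attention heads
head_dim = 128
n_kv_groups = 8        # shared KV heads under GQA
latent_dim = 512       # compressed KV latent under MLA

mha = 2 * n_heads * head_dim      # full K and V for every head
gqa = 2 * n_kv_groups * head_dim  # K and V shared across head groups
mla = latent_dim                  # one latent vector replaces K and V

print(mha, gqa, mla)              # -> 32768 2048 512
print(mha // mla)                 # -> 64x fewer cached elements than MHA
```

(MLA additionally caches a small decoupled positional key per token, omitted here for simplicity.)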

10 months ago
FLUX on B200 vs H100: Real-Time Image Inference with WaveSpeedAI · How WaveSpeedAI and DataCrunch achieved up to 6x faster image inference by optimizing FLUX-dev's latency and efficiency: NVIDIA B200 vs. H100 benchmark.

⚡️ In collaboration with WaveSpeedAI, we set the benchmark for real-time image inference, serving a SOTA diffusion model in under 1 second.

⚙️ We optimized the FLUX-dev model on the NVIDIA B200 GPU, resulting in faster API responses and lower cost per image.

🔗 Read our report: datacrunch.io/blog/flux-on...

11 months ago

Note: We've conducted this independent research with review from the SGLang team.

We thank them for their ongoing support and collaboration.

11 months ago
DeepSeek-V3 + SGLang: Inference Optimization · A comprehensive overview of optimization techniques applied by the SGLang team for DeepSeek-V3 inference, with GitHub commits, benchmarks, and results.

New blog post: Optimization techniques applied by the SGLang team for DeepSeek-V3 inference.

You'll find a comprehensive overview of the techniques, their benefits and implications, and our benchmarks.

datacrunch.io/blog/deepsee...

11 months ago

🚨 NVIDIA HGX B200: available NOW on DataCrunch!

Be among the first to gain instant access to 1x, 2x, 4x, and 8x B200 GPUs with our high-performance VMs.

Sign up and enjoy expert support and a secure service where performance meets sustainability.

🔗 cloud.datacrunch.io

1 year ago
The GPU Cloud ClusterMAX™ Rating System | How to Rent GPUs · The ClusterMAX™ Rating System and content within this article were prepared independently by SemiAnalysis.

Learn more about The GPU Cloud ClusterMAX™ Rating by SemiAnalysis: semianalysis.com/2025/03/26/t...

1 year ago