Operational data changes continuously.
Iceberg was built for batch commits.
Materialize’s Iceberg sink delivers transactionally consistent operational data into Iceberg without the memory and latency costs of batching.
If Kappa means compute once and serve everywhere, this is how. 🔗 bit.ly/4r4j9QI
Agents don’t fail in production because models are bad.
They fail because context is stale, fragmented, or too slow.
See how Day AI built an agentic CRM, with live context powered by Materialize 🔗 bit.ly/3Ytjr8e
Flare needed fresher, unified data as microservices bottlenecks slowed development.
With Materialize + dbt, they built a live data layer across all systems, enabling sub-second queries, unified case views, a reliable “My Clients” dashboard, and fast features for AI-driven matching. bit.ly/4iuQUs9
New from Materialize: Cloud M.1 Clusters
Run 3x larger workloads with the same low latency and predictable performance—thanks to intelligent data spilling and expanded capacity.
Learn more: bit.ly/3L12oH2
Not all operational data platforms are built alike.
We break down the trade-offs between Materialize and Palantir Foundry in a new white paper. 📖 bit.ly/46LTjsO
Vector databases need fresh context to be useful.
The challenge: keeping attributes up to date without burning compute or building brittle pipelines.
Materialize fixes this with incremental updates, giving you faster, cheaper, fresher vector search. bit.ly/3KddzMs
Welcome Frank McSherry @frankmcsherry.bsky.social to Sync Conf 2025. Pioneer of sync technology, inventor of Differential Dataflow, and founder of @materialize.com, Frank will trace the evolution of sync and stream processing.
We’ve released a major improvement to our memory spilling infrastructure:
Materialize now uses swap to scale SQL workloads beyond RAM.
✅ Faster hydration
✅ Efficient memory utilization
✅ Bigger workloads supported
Full post from antiguru.bsky.social → bit.ly/46EF2iJ
I wrote about the projects built at Materialize’s recent hackathon. Many very cool projects, including one that I worked on; take a read!
materialize.com/blog/spring_...
At our last on-site, the Materialize R&D team held a hackathon.
8 projects. 1.5 days. Highlights:
– SQL tutorial game
– WASM UDFs
– API endpoints from views
– S3 as a consensus layer
One shipped already. Others might next. Read the full recap → bit.ly/4lo4YmR
New white paper: Materialize vs ClickHouse
How to choose the right tool for real-time vs historical analytics — and why modern data platforms often need both.
Dive into architectural comparisons, use cases, and case studies: bit.ly/412qx5b
#DataInfrastructure #ClickHouse #Materialize #AIDataLayers
Imagine…
A live data layer built for apps *and* agents
That incrementally maintains views at the scale of >1M updates per second
While maintaining up-to-the-second freshness
With query response times in the single-digit milliseconds
That’s Materialize.
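A minimal sketch of what that looks like in SQL, with a hypothetical `orders` source and `customer_id`/`amount` columns assumed for illustration:

```sql
-- The view is maintained incrementally: each upstream change updates
-- the aggregates rather than recomputing them from scratch.
CREATE MATERIALIZED VIEW order_totals AS
SELECT customer_id,
       sum(amount) AS total_spent,
       count(*)    AS order_count
FROM orders
GROUP BY customer_id;

-- An index keeps the results in memory for millisecond point lookups.
CREATE INDEX order_totals_by_customer ON order_totals (customer_id);

-- Reads are plain SQL and reflect the latest upstream writes.
SELECT total_spent FROM order_totals WHERE customer_id = 42;
```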
Waiting for CI hurts. In July, we cut our CI runtime by up to 86%: from 23+ minute builds to under 2 minutes, with full runs in as little as 7 minutes.
Caching, parallelization, smarter builds, and a bit of libeatmydata magic.
How we did it 🔗 bit.ly/45yoOWM
AI agents need more than stale snapshots — they need a real-time model of the world.
Materialize powers digital twins: always-fresh, SQL-accessible representations of your business.
How to build them: bit.ly/46H97i7
Materialize can "push down" the filters in your query to its storage layer to fetch less data — and thanks to a few cool static analysis tricks, this works for more queries than you might expect. To see how it works, check out the blog: bit.ly/475FBCL
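A minimal sketch of the idea, with a hypothetical `events` table: a temporal filter bounds which data can be relevant, so storage-level statistics can rule out data that cannot match before it is fetched.

```sql
-- Hypothetical append-only source of events.
-- The mz_now() temporal filter bounds how old a relevant event can be,
-- letting the storage layer skip data whose timestamps fall entirely
-- outside the window instead of fetching and filtering it.
SELECT user_id, count(*) AS recent_events
FROM events
WHERE mz_now() <= event_ts + INTERVAL '1 hour'
GROUP BY user_id;
```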
Want live analytics on Bluesky itself? Pipe the public firehose into Materialize with a tiny JS script, then explore trends in SQL. Full walkthrough by @frankmcsherry.bsky.social → bit.ly/46OOwsa
We have a new blog post up at @materialize.com about analyzing the Bluesky firehose (Jetstream, really) through Materialize. You can grab a copy of the community edition of MZ and follow along, or invent your own ways of looking at the data, live!
materialize.com/blog/analyzi...
Untangling control vs. data paths 👉 Bigger SELECT results, smaller bottlenecks. Materialize now streams large query outputs out-of-band, so coordination stays snappy while data flies. Dive into the architecture shift and what it unlocks next → bit.ly/3Ub6GwI
SponsorCX went from 90-minute batch updates to ~1-second freshness by pointing Materialize at Postgres. No streaming specialists—just SQL. Real-time reporting shipped the same day. Check out the full story: bit.ly/4lM1k6Y
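A sketch of the setup, following the shape of Materialize’s Postgres CDC source; hostnames, database names, and the publication name here are placeholders:

```sql
-- Credentials live in a secret, not in the DDL.
CREATE SECRET pg_password AS '...';

CREATE CONNECTION pg TO POSTGRES (
    HOST 'db.example.com',
    DATABASE 'app',
    USER 'materialize',
    PASSWORD SECRET pg_password
);

-- Ingest change data from a Postgres publication; every replicated
-- table becomes queryable and stays in sync with upstream commits.
CREATE SOURCE app_db
  FROM POSTGRES CONNECTION pg (PUBLICATION 'mz_source')
  FOR ALL TABLES;
```

From there, any view or query over the replicated tables stays current as upstream transactions commit.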
Neo Financial now serves real-time features that are fresh and fast while saving 80% on infra. All SQL, no cluster babysitting. Case study → bit.ly/4lIP8E3
I refreshed a blog post draft on streaming the Bluesky firehose through @materialize.com. Some experience tidied up the examples, made things a bit more efficient, and told a different story (now with less clutter).
More in the near future, as we put a front end on it!
github.com/frankmcsherr...
Flink vs Materialize isn’t apples-to-apples.
Flink is a stream processor with external dependencies. Materialize is a unified platform: ingest, transform, and serve real-time data in SQL.
💡 50% faster deploys
💰 45% lower cost
📖 Read the guide: bit.ly/4eBNMc0
LLM agents that act need data that reacts.
If your data layer can’t reflect the consequences of an agent’s action in real time, it’s not just inefficient—it can lead to disaster.
🧠 Smarter agents need smarter data. bit.ly/4lz4hro
#AI #DigitalTwins #LLM #Materialize
Materialize 25.2 is here! New features include live freshness reports for all your views, 2.5x faster data product deployment times, and native SQL Server support.
See how these updates can help streamline your operations: bit.ly/44i2hg2
Big news: Materialize now connects directly to SQL Server.
We ingest CDC, maintain real-time views of your logic, and eliminate the pain of:
- Slow OLTP queries
- Stale dashboards
- Brittle pipelines
Just SQL. Just correct. Just live. 🔗 bit.ly/4mKbk1S
Just a normal day at work where a co-worker discovers a memory bug in Rust
The Materialize engineering team uncovered a rare concurrency bug 🪲in Rust’s 🦀 unbounded channels that could lead to double-free memory errors. After thorough debugging and working with the Rust and crossbeam communities, the fix is now part of @rust-lang.org 1.87.0.
🔗 bit.ly/3Fan1Om
AI is pushing data infrastructure to its limits.
MCP gives agents access to services—including databases—but most systems can’t handle the load. Materialize’s MCP server turns live data products into tools agents can use—without crushing your systems or overwhelming your team. bit.ly/4jYBrQU