Why the Zig programming language matters, when to use it, and why correctness is a system design problem, not a language problem. Interesting thoughts from @joran.tigerbeetle.com.
tigerbeetle.com/blog/2025-10...
#ziglang
Posts by Joran Dirk Greef
Join us tomorrow for the finale of #SD25
1000x: The Power of an Interface for Performance
@joran.tigerbeetle.com
10am PT / 1pm ET / 7pm CET
www.youtube.com/watch?v=yKgf...
In 2018, “Protocol-Aware Recovery” set the standard for Durability in ACID, making Prof. Ram Alagappan and Aishwarya Ganesan giants in the field.
TigerBeetle is grateful, not only to stand on their shoulders, but to support their ongoing work.
siebelschool.illinois.edu/news/Alagapp...
TripleZip (YC W25), now processing commercial rent transactions with TigerBeetle.
Congrats, Grayson and Yash, on going into production! We're cheering for you.
Yes, it was really interesting to learn that TigerBeetle was inspired by research out of UW–Madison on storage faults.
I’ve also heard great things about their database group.
Thoroughly enjoying the IronBeetle series on YouTube about how @tigerbeetle.com works. There are some very interesting ideas behind it and they are well explained
Coming soon to a screen near you...
Kicking off #SD25 online
𝚖𝚊𝚝𝚔𝚕𝚊𝚍 on building systems, simply
Tomorrow, July 21 at 10am PT / 1pm ET / 7pm CET
youtu.be/jVC4DP-8xLM
How to add rate limiting to your API using TigerBeetle, including how to capture/visualise with #grafana
Thanks for this great post, @mcadariu.bsky.social
dev.to/mcadariu/how...
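The pattern in the post, roughly: treat each API key as an account whose debits must not exceed a limit, debit one unit per request as a pending transfer, and let pending transfers time out to release capacity. Below is an in-memory Python sketch of that accounting idea; it is my own illustration (hypothetical `RateLimiter` class), not the post's actual code, which uses the real TigerBeetle client.

```python
# In-memory sketch of rate limiting via pending debits with a timeout.
# Each allowed request records a "pending debit" that expires after the
# rate window; a request is rejected if live debits would exceed the limit.

import time

class RateLimiter:
    def __init__(self, limit, window_s):
        self.limit = limit          # max requests per window per key
        self.window_s = window_s
        self.pending = {}           # key -> list of debit expiry timestamps

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        # Expired debits auto-void, releasing capacity (like a transfer timeout).
        live = [t for t in self.pending.get(key, []) if t > now]
        if len(live) >= self.limit:
            self.pending[key] = live
            return False            # would exceed the account limit: reject
        live.append(now + self.window_s)   # record a pending debit
        self.pending[key] = live
        return True

rl = RateLimiter(limit=3, window_s=1.0)
print([rl.allow("alice", now=0.0) for _ in range(4)])  # [True, True, True, False]
print(rl.allow("alice", now=1.5))                      # True (debits expired)
```

The nice property of doing this with real debit/credit accounts is that the limit check and the debit are one atomic operation, with no separate counter store to keep consistent.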
TigerBeetle 0.16.50 is released!
This release includes various cleanups prompted by the upgrade to Zig 0.14.1.
github.com/tigerbeetle/...
Haha! There was no side_letter.
As announced on the Systems Distributed 25 website, TigerBeetle has generously donated 8k USD to ZSF.
Thank you Joran & team for both the donation and for hosting a conference of rare quality.
The DBS should guarantee the permanence of Commit under the weakest possible assumptions about the correct operation of hardware, systems software, and application software. That is, it should be able to handle as wide a variety of errors as possible. At least, it should ensure that data written by committed transactions is not lost as a consequence of a computer or operating system failure that corrupts main memory but leaves disk storage unaffected.
This is from a 40 year old textbook:
The database should guarantee durability under the weakest possible system assumptions.
That includes hardware corruption, yet no mainstream database today cares about it. Most just assume hardware is sound. (TigerBeetle is an exception)
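One concrete consequence of taking corruption seriously: data at rest must be checksummed, so a corrupted block is at least detected rather than silently returned as committed data. A minimal generic sketch (my own illustration, not TigerBeetle's actual on-disk format or hash function):

```python
# Detecting disk corruption with a per-block checksum: each block stores
# a CRC32 of its payload, verified on every read.

import struct
import zlib

def write_block(payload: bytes) -> bytes:
    """Prefix the payload with a CRC32 so corruption is detectable on read."""
    return struct.pack("<I", zlib.crc32(payload)) + payload

def read_block(block: bytes) -> bytes:
    """Return the payload, or raise if the stored checksum does not match."""
    (stored,) = struct.unpack("<I", block[:4])
    payload = block[4:]
    if zlib.crc32(payload) != stored:
        raise IOError("checksum mismatch: block corrupted on disk")
    return payload

block = write_block(b"committed transaction data")
assert read_block(block) == b"committed transaction data"

# Flip one payload byte, as a misdirected or bit-rotted write would:
corrupted = block[:10] + bytes([block[10] ^ 0xFF]) + block[11:]
try:
    read_block(corrupted)
except IOError as e:
    print(e)   # checksum mismatch: block corrupted on disk
```

Detection is only the first step; the "Protocol-Aware Recovery" work cited above is about what a replicated system should then do to recover the data instead of just failing.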
TigerBeetlex 0.16.47 now includes structs and functions to decode TigerBeetle CDC events streamed on RabbitMQ.
Bonus material: a guide to create a pipeline to process them in ~50 LOC, powered by Broadway.
hexdocs.pm/tigerbeetlex...
#weBEAMTogether @elixir-lang.org #ziglang @tigerbeetle.com
The @tigerbeetle.com team runs a conference like they build their database — with artful craftsmanship, technical precision, and a sprinkle of magic.
Kudos to an amazing SD ‘25 🤩⚡️
Okay we're done with Benchmarking. Bringing up the whole Tigerbeetle team to take bows. Calling out the individual work each person did on the benchmarking demo and the conference.
Picture of the vs screen, though I think it loses a bit without the animations and the music
(oh hey the lights turned yellow because the last race is with DuckDB)
They plan to release the program so you can make your own database races
These guys employ a lot of artists
#sd25
Tigerbeetle also had the ability to do full replication and durability checks, since it was fast enough to afford to spend cycles on that.
That was at 10% contention. Now benchmarking 50% contention with the proprietary million-dollar Postgres cluster, which gets a 2-minute head start vs TigerBeetle.
I mean they had the potential to change color, they just didn't until the closing keynote.
To the beat of the synthwave.
That is playing.
What a conference, man
Aaaaaanyway that was a benchmark visualization. Postgres averaged 1k TPS; TigerBeetle averaged 300k TPS at the same latency levels.
Red and blue stage lights, they were yellow for EVERY OTHER TALK. Changing in time to the synthwave that's also playing
THE STAGE LIGHTS CHANGE COLOR
THEY CHANGED COLOR THIS WHOLE TIME
#sd25
THE SYNTHWAVE HAS STARTED
Client runs on a separate machine. Proprietary cluster users anonymized. Missed a bit of the other benchmark details.
"The time has come to race."
Gonna replay the real performance traces for us. Showing that now for psql, plus an analytics database.
Again, this ONLY does OLTP. No general-purpose processing, no custom data, no user passwords or workflow state machines: just accounts and debits/credits, nothing else. Specialization gives special interfaces, and special interfaces give power.
Benchmark time! Every db [except cluster?] runs on i8g.16xlarge...
TigerBeetle learns from history. While SQL has been the language of databases for 5 decades, DebitCredit has been the language of transactions for 5 centuries. Tailor the interface for that.
We can fit 8000+ transactions in 1 MB, getting a lot more work done per network round-trip.
#sd25
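Back-of-the-envelope check of the batching claim above, assuming TigerBeetle's fixed 128-byte transfer and treating the full 1 MB message as payload (real messages carry some header overhead, which I'm ignoring here):

```python
# Rough batching arithmetic: how many fixed-size transfers fit in one
# 1 MiB message, and how many network round-trips batching saves.

MESSAGE_BYTES = 1024 * 1024   # 1 MiB message
TRANSFER_BYTES = 128          # fixed-size debit/credit transfer

transfers_per_batch = MESSAGE_BYTES // TRANSFER_BYTES
print(transfers_per_batch)    # 8192 -> the "8000+ transactions in 1 MB"

# Sending 1,000,000 transfers one per round-trip vs. batched:
n = 1_000_000
round_trips_unbatched = n
round_trips_batched = -(-n // transfers_per_batch)   # ceiling division
print(round_trips_unbatched, round_trips_batched)    # 1000000 vs 123
```

With a 10 ms round-trip, that is the difference between hours of network waiting and about a second of it.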
tbf it's not fair, because it was provided as one giant batch. Also, since DuckDB is OLAP, it doesn't have any durability or high availability. Just shows that if we can avoid locks, we can process many more transactions.
Benchmark #4: Tigerbeetle (duh). True OLTP, nothing BUT transactions.
So the benchmark is adapted to look like an analytics workload, just for fun.
DuckDB hits 2k transactions/sec for 0% contention. 10% contention: 2k t/s. 90% contention: 2k t/s. 90% with 10ms roundtrip: 2k t/s.
No performance degradation because no row locks.
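A toy model of why this happens (my own illustration, not from the talk; the numbers are made up for scale): an interactive SQL transaction holds a row lock across the client round-trip, so transfers hitting a hot row serialize behind that lock, while lock-free in-memory execution is bounded only by per-transaction CPU cost.

```python
# Toy throughput model: row locks held across a network round-trip vs.
# lock-free execution, as write contention on one hot row increases.

def locked_tps(contention, round_trip_s):
    """Throughput ceiling when a fraction `contention` of transfers all
    target one hot row whose lock is held for a full round-trip."""
    hot_row_tps = 1.0 / round_trip_s   # the hot row is a serial bottleneck
    return hot_row_tps / contention    # total rate = hot-row rate / fraction

def lock_free_tps(cpu_per_txn_s):
    """No locks: throughput depends only on per-transaction CPU cost."""
    return 1.0 / cpu_per_txn_s

for p in (0.1, 0.5, 0.9):
    print(p, round(locked_tps(p, round_trip_s=0.010)))   # 1000, 200, 111
print(round(lock_free_tps(cpu_per_txn_s=0.0005)))        # 2000, regardless of p
```

The locked system collapses as contention rises; the lock-free one is flat, which matches DuckDB's (and TigerBeetle's) unchanged numbers across contention levels.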
Write contention is what separates OLTP from OLGP. At 10% contention, the proprietary cluster never goes above 12% cpu utilization.
What about stored procedures? Keep processing in a single in-process transaction without row locks!
Benchmark #3: DuckDB.
Kinda ill-suited, as DuckDB is OLAP
#sd25
They'll benchmark the "cluster" on 16 machines that cost 84k/month total, or 1m/year.
Instead of 12k transactions/sec under 0% contention, got 7.5k transactions/sec, a little better than half (for a million dollars a year). 10% contention: 570 t/s. 8 machines: 600 t/s. 1 machine: 800 t/s!