In data analytics, we're facing a paradox. AI agents can theoretically analyze anything, but without the right foundations, they're as likely to hallucinate a metric as to calculate it correctly. They can write SQL in seconds, but will it answer the right business question?
Posts by Mike Driscoll
DuckLake is a simpler, SQL-friendlier alternative to Iceberg.
“There are no Avro or JSON files. There is no additional catalog server or additional API to integrate with. It’s all just SQL.”
That said, choose your catalog database — a single point of failure — *very carefully*.
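A hedged sketch of what "all just SQL" looks like in practice — assuming DuckDB with the `ducklake` extension; the metadata file and data path names here are illustrative, so check the DuckLake docs for the exact `ATTACH` options:

```sql
-- Illustrative sketch: a lakehouse table with no catalog server,
-- just a metadata database and a directory of data files.
INSTALL ducklake;
LOAD ducklake;

ATTACH 'ducklake:metadata.ducklake' AS lake (DATA_PATH 'data_files/');

CREATE TABLE lake.events (id INTEGER, ts TIMESTAMP);
INSERT INTO lake.events VALUES (1, now());
SELECT * FROM lake.events;
```

The catalog is just a database you `ATTACH` — which is exactly why that database becomes the piece to choose carefully.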
Quack... Quack... and code!
@mehdio.com and @medriscoll.com from @rilldata.com are diving into how GenAI is reshaping BI-as-code — from idea to implementation.
This one’s for data folks who want to see beyond the hype.
Register: lu.ma/w4ncmttn
And it's only fitting that we'll be hosting this event at a true lakehouse, the Lake Chalet, the best waterfront restaurant on Lake Merritt, steps from the Data Council main event.
RSVP here while tickets last:
www.rilldata.com/events/data-...
Similarly, Toby and his team at Tobiko Data have built a powerful yet elegant transformation platform -- combining SQLMesh and SQL dialect transpilation (SQLGlot) to allow portability of pipelines between databases, warehouses, and lakehouses.
techcrunch.com/2024/06/05/w...
The sub-second speed-at-scale of these real-time engines enables new kinds of applications: point-of-sale fraud detection, IoT monitoring, real-time context for AI agents -- use cases that just aren't supported by traditional data warehouses like Snowflake.
www.rilldata.com/blog/scaling...
Why am I so excited to bring this crew together on stage? It's because real-time analytical databases like ClickHouse, Apache Pinot, and MotherDuck / DuckDB are reshaping the data stacks of the fastest-moving engineering teams on earth -- OpenAI, DoorDash, and
@stackblitz.com.
This legendary panel of technical founders includes Yury Izrailevsky (co-founder of ClickHouse), Kishore Gopalakrishna (founder of StarTree, creator of Apache Pinot), @jrdntgn.bsky.social (co-founder of MotherDuck), and @captaintobs.bsky.social (founder of Tobiko, creators of SQLMesh and SQLGlot).
Yo SF Bay Area #databs crew, want to talk lakehouses at a real Lake House? :)
Next week after Data Council, join the founders of @clickhouse.com, @motherduck.com, @startreedata.bsky.social, and @tobikodata.com to talk real-time databases and next-generation ETL.
www.rilldata.com/events/data-...
If you're interested in learning more about the "Shift Left" trend, please check out data engineering author @ssp.sh's blog post released today.
www.rilldata.com/blog/what-sh...
At @rilldata.com, we've taken the step of shifting metrics layers left from BI tools and pushing them into real-time analytical databases like ClickHouse and DuckDB -- to power insanely fast exploratory dashboards. (I'll be discussing this at my Data Council talk in two weeks.)
docs.google.com/presentation...
Now that SQL-on-data-lake frameworks are maturing (DuckDB SQL on Iceberg, Spark SQL on DeltaLake), and transpiling between SQL dialects is possible (thanks to SQLMesh and @tobikodata.com), it's possible to shift these SQL transformations left, out of the warehouse and onto object storage.
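One hedged sketch of what a left-shifted transformation can look like -- DuckDB SQL running directly against object storage, with no warehouse in the loop (the bucket paths and column names here are hypothetical):

```sql
-- Illustrative sketch: read raw Parquet from object storage,
-- aggregate, and write the result back -- all outside the warehouse.
INSTALL httpfs;
LOAD httpfs;

COPY (
    SELECT customer_id, sum(amount) AS lifetime_value
    FROM read_parquet('s3://my-bucket/raw/orders/*.parquet')
    GROUP BY customer_id
) TO 's3://my-bucket/marts/customer_ltv.parquet' (FORMAT parquet);
```

The same SQL, transpiled with SQLGlot, could in principle run on Spark against Delta Lake instead -- that portability is what makes the shift practical.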
The advantage is that transformations can be written in SQL. The disadvantage is that you pay the Snowflake tax for every compute cycle in their warehouse.
Transformation logic is another use case.
"Shifting left" is in some ways a reaction to the "ELT" pattern (or anti-pattern, in my opinion) that big data warehouses like Snowflake were pushing -- whereby you extract, load, and only *then* transform data in the warehouse.
Data validation is a great example: an eCommerce platform might validate that order prices contain no negative numbers after they're loaded into the database. "Shifting left" means moving that validation to the ingestion or even the collection step in the pipeline, before it hits the database.
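A hedged sketch of that left-shifted check, run against the raw file itself before anything is loaded (`orders.csv` and the `price` column are hypothetical):

```sql
-- Illustrative sketch: validate at ingestion, not in the warehouse.
SELECT count(*) AS bad_rows
FROM read_csv_auto('orders.csv')
WHERE price < 0;
-- A non-zero bad_rows fails the pipeline right here,
-- instead of surfacing later as a broken dashboard.
```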
Data pipelines can be visualized as flowing data left to right, starting with raw sources, ingested and modeled into database tables, and eventually served out through user-facing applications and dashboards.
"Shifting left" means taking logic that lives on the right side and moving it leftward.
"Shifting left" is the new trend among in data stacks -- but what does it mean and what does it matter?
Apache Pinot is one of the world’s fastest and most scalable real-time analytical databases, relied on by LinkedIn, Uber, and Stripe. It was awesome diving into the secrets behind its unique architecture with creator and @startreedata.bsky.social founder Kishore Gopalakrishna.
I wish I could say "yes, almost certainly," but given the levels of competence we're witnessing in other areas, I'm not placing any bets on DOGE's data security practices.
So what are DOGE's true priorities?
As Maya Angelou wrote: "When someone shows you who they are, believe them the first time."
Cloud data centers have climate control, and more compute power than your MacBook Air!
This setup could be done by a competent data engineer in less time than it took to run her query.
The DOGE tech wiz acknowledged this and wrote "it hasn't been a priority to get that done."
They should have loaded this multi-terabyte contracts dataset into a cloud database, or even better -- a database built for real-time analytics like @clickhouse.com, Pinot, or StarRocks (sorry @duckdb.org, this is more than you can handle).
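For scale, here's a hedged sketch of what that filtering step becomes once the data sits in a real-time analytical database (the table and columns are hypothetical):

```sql
-- Illustrative sketch: the same "find matching contracts" task
-- as a query against a cloud analytical engine.
SELECT contract_id, vendor, value_usd
FROM federal_contracts
WHERE agency = 'GSA'
  AND value_usd > 1000000
ORDER BY value_usd DESC;
-- On an engine like ClickHouse or Pinot this scans terabytes
-- in seconds -- no hotel-room hard drives involved.
```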
But it doesn't absolve her, or her team, from ridicule.
The DOGE tech wiz kids shouldn't be toting federal databases around on USB-attached external hard drives in "hot, humid hotel rooms" (her literal words): that's what database servers were invented for.
What actually overheated was a USB external hard drive, with several terabytes of contract data, that she was reading into her MacBook Air and then filtering to find contracts matching her criteria. (High-speed reads on NVMe drives can heat up to 175 °F before thermal throttling kicks in.)
Like others, I jumped on the bandwagon to ridicule the DOGE analyst who "overheated her hard drive" by analyzing just 60k rows of data.
I was wrong.
The truth is even dumber.
🧵
Just published: Ever had to «Scale beyond Postgres»?
You may have started with a simple ETL pipeline and crunched critical business logic into useful dashboards -- but at some point speed stops keeping up with growing data, and with growing numbers of concurrent users.
✨ Below are some highlights from the article.
How mature is DuckDB WASM these days? I recall reading on HN about a similar app last year, called Pretzel, but I think they pivoted:
news.ycombinator.com/item?id=3971...
This activity is sometimes called semantic data modeling. Actually, the task of capturing the meaning of data is a never-ending one. So the label “semantic” must not be interpreted in any absolute sense. – E.F. Codd, 1979
The father of relational databases understood semantic data models.
Blogged: Exploring UK Environment Agency data with @duckdb.org and @rilldata.com
rmoff.net/2025/02/28/e...
#dataBS
Google couldn’t create a competitive product because they profit directly from that ad spam and indirectly from data selling.
(Same reason Gmail doesn’t really want to clean up your Inbox, even though they could.)