email workflow in neomd
Added a workflow overview - how the processing and screening on Neomd works.
Simple, yet very effective. The bottom-right is basically GTD; the upper-right is the HEY screener.
If I need to open a project, I can just fuzzy-search for `dbc`, and it opens in that directory with Neovim's fzf ready :)
What do you mean when you use tmux? What's your use case?
I use it for splits, tabs, and sessions. I always have a bunch of sessions open and switch between them with a single command - e.g. `alt-m` is my email, `alt-b` my second brain, `alt-d` my dotfiles, etc.
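For illustration, here's a sketch of how that single-command switching can be wired up in `~/.tmux.conf` (the session names below are example assumptions, not necessarily the exact setup):

```shell
# ~/.tmux.conf - jump straight to a named session with Alt+<key>
# (M- is tmux's notation for Alt; session names are examples)
bind-key -n M-m switch-client -t mail
bind-key -n M-b switch-client -t brain
bind-key -n M-d switch-client -t dotfiles
```

`-n` makes the binding work without the tmux prefix, so a single keystroke jumps between sessions.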
Yeah, there's no need to use 100% of it. If I miss something and tmux has it, why not. But each workflow is different. I only arrived at the workflow I have today after years of improving and changing, having started very simply, step by step.
8 key learnings:
1. Separation of concerns
2. Standardized deployment patterns
3. Versioned artifacts for rollbacks
4. Database migration automation
5. Test early, test often
6. Workspaces separated from infra code
7. CI visibility through lineage
8. GitOps as single source of truth
Selling GitOps to new data projects is hard if they haven't already been burned.
After the initial setup, DevOps stabilizes, and the investment pays off.
Is DevOps the new data engineering of data science?
As in the old days, when you spent 80% of your time on data engineering instead of data science.
www.ssp.sh/brain/the-s...
How can you actually use it on a day-to-day basis??? 😄
jk, what's your favorite feature? Or why use it rather than nothing? :)
https://github.com/joshmedeski/sesh
But the ability to resurrect all sessions when restarting the computer, or to fuzzy-find any directory and open it in its own session, is just key.
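One way that fuzzy-finding can look, sketched as a tmux keybinding using sesh's CLI (assumes sesh and fzf are installed; the `T` binding is illustrative):

```shell
# ~/.tmux.conf - pop up a fuzzy session picker (needs sesh + fzf)
# prefix+T lists sessions and configured directories; picking one
# attaches, creating the tmux session on the fly if needed
bind-key T display-popup -E 'sesh connect "$(sesh list | fzf)"'
```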
My favorite use is navigating my terminal's text with vim motions - something most terminals are starting to implement for search, but usually without navigation.
And it's true that the defaults are not nice, but they can very easily be changed. If you want to get started, just try zellij.dev, or start directly with Kitty sessions (sw.kovidgoyal.net/kitty/sessions), which might be enough for your use case.
So, because tty sits on top, you can't run tmux daily? Sure, it could be integrated into the terminal (which is what Kitty does with sessions, and what Ghostty is trying to do).
I feel like people who don't use tmux try to talk it down, while everyone I know who uses it just loves it and can't live without it.
So I'd say figure it out for yourself.
And because Obsidian is based on plain Markdown files, I can also just open it in nvim for an even less distracting zen mode.
Note that when using zen-mode in nvim, I remove the tmux tabs and center the text. Again, the 2nd image is without zen mode.
Zen mode and vim motions in Obsidian are pretty dope. I have zen mode in Obsidian mapped to `<leader>z` through the vimrc plugin, and it looks like this. As a comparison, the 2nd image is without zen mode.
Config in my dotfiles.ssp.sh at: github.com/sspaeti/dotf...
E.g., our Dagster skills repo has much terser instructions for our dbt integration vs. pointing to the human docs, because we found performance was worse otherwise.
We're discussing how to handle drift, etc.
GitHub: github.com/dagster-io/s...
And a blog on lightweight evals: dagster.io/blog/evaluat...
So, the future with AI reveals why BI still matters - and it's most likely not the dashboards themselves. Read more in the full article at www.rilldata.com/blog/ai-rev...
And Amdahl's Law still applies.
50x faster generation of BI charts, but if the tools it depends on were designed for human speed with slow query APIs, no CLI support, and unversioned metrics, the overall gain collapses to 2-3x.
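Plugging numbers into Amdahl's law shows where that collapse comes from. A quick sketch (the 50% share of the workflow is an assumed figure for illustration):

```python
# Amdahl's law: overall speedup when only a fraction p of the work
# gets faster by a factor s.
def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Even a 50x faster chart-generation step caps the end-to-end gain
# at ~2x if that step is only half of the total workflow.
print(round(overall_speedup(p=0.5, s=50.0), 2))  # 1.96
```

The slow, human-speed parts (query APIs, unversioned metrics) dominate the denominator, so speeding up only the generation step barely moves the total.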
This can be achieved with a version of BI-as-Code, and with that, we can also generate safer visualizations. After all, the hard part was never generating visualizations; it was having metrics & a strong backend.
Having a unified data interface, with an agent that has access to source, ETL, and dashboards.
There's a good approach that we know works well, involving the same old ingredients we use in the software domain: artifacts that are versioned, recoverable, and declarative. BI needs trustworthy metrics, semantics, AND ownership.
But most importantly, who is maintaining it all with the explosion of dashboard creation? What's the solution? In this new article, we look at how BI evolved, how dashboards are actually used today, and what remains relevant.
The hard work of aligning the business, verifying the numbers are correct, and joining & aggregating at the right granularity.
Data projects need governance & best practices. Otherwise, we are back to the same old way of using local Excel files, where everyone is doing their own thing.
However, BI was never just about dashboards.
Many need the extracts BI provides from your large SAP system, linked to the right customers in the CRM, enhancing decision-making even further. It's the primitives behind the dashboards that matter more: speed, metrics, and the semantics behind them.
It depends on what you mean by it. If you mean ad-hoc creation of a visualization, probably yes.
But if you mean a well-crafted operational dashboard - one where you can see your whole company's performance in a split second by looking at a single, highly dense, tailored dashboard - probably not.
Benn Stancil wrote about this back in 2021, drawing parallels to the Salesforce "End of Software" declaration in 2000. We compare those parallels with today - it's fascinating how similar the statement is now with AI: that no more software developers are needed.
But is BI dead today?
We've heard it all before: BI and dashboards are dead. Yet every time, we rediscover their power and see their resurrection whenever we need grounded data analysis, in enterprises and startups alike.
www.rilldata.com/blog/ai-rev...
What are external tables?
Anyone still using external tables? I'm currently writing about them, and it's fascinating that they're still not dead.
What's your use case? And what are the more modern versions of using them?
DuckDB, Polars, DataFusion, Spark - they all use Arrow under the hood.
If you're building anything that moves data between processes, Arrow is the standard to know.
Zero-copy data sharing between Python, Java, C++ without serialization overhead.
That's Apache Arrow.
Arrow is not a file format.
It's an in-memory columnar format.
The Arrow ecosystem:
Arrow Flight, Flight SQL, ADBC - a full stack for high-performance data transfer.
www.ssp.sh/brain/apach...
Ohh interesting, thanks for sharing!