Ah, totally, if interpreter startup time dominates then it's not a good match, unless you can scale the computation and subtract the constant term :-)
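The subtract-the-constant-term idea can be sketched like this — a toy estimate, assuming total time ≈ startup + k·n, with a made-up workload and sizes:

```python
import subprocess
import sys
import time

def time_run(n: int) -> float:
    """Time one fresh interpreter invocation doing n units of work."""
    start = time.perf_counter()
    # Hypothetical workload: each run pays the full interpreter startup cost.
    subprocess.run([sys.executable, "-c", f"sum(range({n}))"], check=True)
    return time.perf_counter() - start

# If total ~= startup + k * n, timing two sizes lets us solve for both terms:
n1, n2 = 10**6, 4 * 10**6
t1, t2 = time_run(n1), time_run(n2)
k = (t2 - t1) / (n2 - n1)   # per-unit compute cost
startup = t1 - k * n1       # estimated constant (startup) term
```

In practice you'd average many runs per size to tame the noise before solving for the constant term.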
Loved reading this, Per!
Re: "Expand the benchmark to be a true benchmark by e.g., averaging multiple calculations etc. As it stands, it just gives an idea of the performance."
I cannot recommend hyperfine highly enough: github.com/sharkdp/hype...
@andreterron.com Hi Andre! Is mainframe.so still alive? I've got a few project ideas where I'd like to use it. Alternatively, do you know of any similar projects? Hope you're doing well!
Extremely neat!
Ah, I see!
> But I almost never refer to them
In the sense that the notes are more like "scratch notes"/temporary artifacts that you use for thinking, rather than a permanent 'library'?
**Absolutely** agree on talking to friends / introspection / application. No good ideas without good feedback!
Reflecting on this, how do you best "reflect on your experiments"?
I use note-taking a lot to reflect on an experiment over time.
But again, I strongly agree that the litmus test should be "how will this help you achieve what you want".
Or, currently, as a SWE working on distributed systems: studying durable execution from e.g. Temporal and memorizing that "effectively once" execution requires 1) a persisted spec, 2) retries, and 3) idempotency has come in _super_ handy the few times I've needed it.
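A toy sketch of how those three ingredients combine (hypothetical names, a dict standing in for persisted state — not Temporal's actual API):

```python
from typing import Callable

completed: dict[str, str] = {}  # stands in for persisted execution state

def execute_once(task_id: str, action: Callable[[], str]) -> str:
    """Idempotent execution: a replayed task returns the recorded result."""
    if task_id in completed:        # 3) idempotency via a dedup key
        return completed[task_id]
    result = action()
    completed[task_id] = result     # 1) persist the outcome
    return result

def run_with_retries(task_id: str, action: Callable[[], str],
                     attempts: int = 3) -> str:
    """2) retries: keep trying until the action lands, effectively once."""
    for attempt in range(attempts):
        try:
            return execute_once(task_id, action)
        except Exception:
            if attempt == attempts - 1:
                raise
    raise RuntimeError("unreachable")
```

Retries alone would give at-least-once; the dedup key on top of persisted results is what turns that into effectively once.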
E.g. when I was in acute care, being able to rattle off treatments for a given disease was super helpful, and using note-taking + spaced repetition (Matuschak's mnemonic medium) made memorisation and categorisation much easier.
I strongly agree with everything you've written, but I think note-taking has utility in domains with a high need for declarative knowledge, especially under time pressure.
But, I mean, they could, right?
Design a good training program using NDM-style methods, randomise students to either the current training program or the NDM-style one, and compare results using hypothesis testing (or insert preferred statistical method here)?
Thanks again for the thread! I've been thinking some more, and found this quote of yours:
> The problem with NDM style training methods is that it’s ethnographic in nature. [...] Also they don’t do null hypothesis statistical testing, so are locked out of mainstream journals.
Have bookmarked this and will give it deeper thought! Love your writings.
Sidenote: So incredibly happy you're posting on BlueSky as well! Much less noisy in my algorithm.
This was one of my most formative experiences in my PhD!
Some examples:
* Polyrepo -> monorepo
* Static typing for configuration -> Runtime with Confection
Looking at some code, seeing it as a teachable moment for a junior, and seeing that _I_ wrote it, was very humbling.
In the same vein, GitHub notification filtering is such a dumpster fire that most devs I know regularly miss important notifications because they are drowning in "dependabot merged this PR".