By the way: I took a very similar approach as yours for the conversion from R to Python, and also did some differential testing to compare the output.
Posts by Aart Goossens
The FPCA model is also in the Silhouette library (github.com/SweatStack/s...)
If you share your code with me I can do a comparison. :)
For me this is about building experience with specific models and never blindly trusting the output. Testing also becomes more important, and for repeated tasks that need some level of determinism you can use eval(uation) tools to quantify that.
The former. Although I think some of the more advanced models (maybe most of them by now?) are actually multi-model and/or multi-agent, so changed output earlier in the flow could technically change model/agent selection later in the flow.
If you're interested, Anthropic has a lot of good resources on prompting, like this: platform.claude.com/docs/en/buil...
Yup. Rephrasing your question in a way that makes it easier to admit that there's no solution helps. Even adding something like "There might be no solution" can make a big difference.
In a lot of ways, it's like working with a very smart and ambitious but overly naive junior developer.
Starlette 1.0 is out! I used this as an opportunity to experiment with Claude Skills, since Claude isn't yet familiar with the (minor) breaking changes in the 1.0 release compared to 0.x simonwillison.net/2026/Mar/22/...
Tip: Read @simonwillison.net's guide on agentic engineering patterns: simonwillison.net/guides/agent...
Just add a LICENSE file to the drive and you're done.
MIT would probably be a good fit, but there are other options.
choosealicense.com/licenses/mit/
Get in touch if you want to get involved or have a use case.
What if your .fit files were queryable?
What if you could point a Python library at ~1000 .fit files, ask for the average power of your 10 longest rides in 2025, and get the answer in 1.7s?
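As a sketch of what such a query could look like: the function, field names, and ride data below are entirely made up for illustration; this is not an existing library's API, just the shape of the idea, assuming per-ride summaries have already been extracted from the .fit files.

```python
# Hypothetical query shape; API and field names are invented for illustration.
def avg_power_longest_rides(rides, year, n=10):
    """Mean of the average power of the n longest rides in a given year.

    `rides` is assumed to be a list of per-ride summary dicts already
    extracted from .fit files.
    """
    selected = [r for r in rides if r["year"] == year]
    longest = sorted(selected, key=lambda r: r["duration_s"], reverse=True)[:n]
    return sum(r["avg_power_w"] for r in longest) / len(longest)

# Toy data standing in for ~1000 parsed files:
rides = [
    {"year": 2025, "duration_s": 3600, "avg_power_w": 200},
    {"year": 2025, "duration_s": 7200, "avg_power_w": 180},
    {"year": 2024, "duration_s": 9000, "avg_power_w": 210},
]
print(avg_power_longest_rides(rides, 2025, n=2))  # 190.0
```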
I'm planning to use these insights to do some power duration modelling across fatigued states, so stay tuned.
Another day, another experiment: How fast can we compute mean-max curves in Python?
Thanks to NumPy/Numba, the speedup from Rust is not that big, but the choice of algorithm and tooling makes a huge difference.
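For anyone curious what a mean-max curve computation looks like in plain NumPy: the cumulative-sum trick below gives the best average power over every window size in O(n) per window. This is one common approach, not necessarily the one used in the benchmark.

```python
import numpy as np

def mean_max(power, window):
    """Best average power over any contiguous `window` samples.

    Uses a cumulative sum so each window size costs O(n) instead of
    recomputing every rolling mean from scratch.
    """
    c = np.concatenate(([0.0], np.cumsum(power, dtype=np.float64)))
    window_sums = c[window:] - c[:-window]  # sum of every length-`window` slice
    return window_sums.max() / window

def mean_max_curve(power, max_window=None):
    """Mean-max value for every window size 1..max_window."""
    max_window = max_window or len(power)
    return np.array([mean_max(power, w) for w in range(1, max_window + 1)])

power = np.array([100.0, 300.0, 200.0, 400.0, 150.0])
print(mean_max_curve(power))  # [400, 300, 300, 262.5, 230]
```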
I can of course be pragmatic and liberal about this but thought it wouldn't hurt to ask if there was a proper open source license.
The script that you wrote. When there's no license, it basically means all rights reserved.
What's the license on the R script in the Google Drive? I assume the data is CC0 1.0 via GC OpenData.
Context: I'm porting the FPCA stuff to an installable, MIT-licensed Python library and want to make sure I'm not violating the original code's license.
Good one, I didn't think about the license. Garmin's official Python SDK doesn't even specify one...
That almost looks like a random number generator!
Joking aside: I think there's no consensus because running "power" doesn't really exist (or can't be measured directly).
In my experience Garmin, Stryd, and Apple Watch data correlate reasonably well, at least on flat-ish terrain.
That said, there is of course a big benefit in using something that is maintained by Garmin. But imo the Python library leaves too much to the developer, and the Java SDK doesn't really serve the Python community.
Technically yes, but I don't think it's faster than Rust, which was the point of this benchmark.
And packaging it for Python distribution is impossible, and cloud setup is harder (often impossible when going serverless).
Also, purely subjective: I never liked calling the Java SDK from Python.
It's the beginning of the end: The AI just skipped the step of trying to explain to the human at the other end of the keyboard why Python is so much better than R...
Whoopsidaisy! Forgot to make the repo public. Fixed now.
Should I build this into a proper (open-source) library?
What use cases would benefit most from 15-20x faster parsing?
If this sounds useful to you, let me know!
Is the speed difference actually useful? For most cases, 100ms vs 2s doesn't matter. Parsing typically happens async anyway.
But it does matter for:
• Local analytical workloads
• Historical processing (e.g. onboarding a user in a training app)
• High-scale apps with lots of users
How fast can we parse FIT files in Python?
I've been thinking about building a Rust-backed parser for a long time and finally got around to a proof-of-concept benchmark:
• With a limited scope, Rust is 15-20x faster
• A 6h bike ride parsed in ~130ms
github.com/SweatStack/fit-parsing-experiment
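For context on what a FIT parser actually has to chew through: per the FIT protocol, every file starts with a 12- or 14-byte header before the record stream. The sketch below parses just that header in plain Python; it is an illustration of the format, not code from the benchmark repo (which is Rust-backed).

```python
import struct

def parse_fit_header(buf):
    """Parse the FIT file header (12 or 14 bytes per the FIT protocol):
    header size, protocol version, profile version (u16 LE),
    data size (u32 LE), and the literal b".FIT" signature.
    The optional trailing 2-byte header CRC is not validated here.
    """
    size = buf[0]
    if size not in (12, 14):
        raise ValueError(f"unexpected FIT header size: {size}")
    protocol, profile, data_size, signature = struct.unpack_from("<BHI4s", buf, 1)
    if signature != b".FIT":
        raise ValueError("not a FIT file")
    return {"header_size": size, "protocol": protocol,
            "profile": profile, "data_size": data_size}

# Synthetic 14-byte header: protocol 2.0, profile 21.00, 1000 data bytes, CRC 0
header = struct.pack("<BBHI4sH", 14, 0x20, 2100, 1000, b".FIT", 0)
print(parse_fit_header(header)["data_size"])  # 1000
```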
Completely agree. I think my misunderstanding came from my regular annoyance about exactly this, and sometimes feeling like I have to defend my AI usage.
Gotcha! Then I misunderstood your tweet. I thought you were implying that we should dismiss AI just as crypto/NFT based on some of the nitwits that advocate for it.
Separating the messenger from the message matters here. I'm bullish on crypto/NFT despite the bad evangelists, not because of them. Same with AI: some (but not all!) use cases are genuinely real, and the loudest voices in the room don't get to decide that.
The interface is a bit funky, but if you sort the files by "Last modified: newest to oldest", you can browse this INDEX directory and see the individual files.