It was incredibly relieving to read “motivation was problem-solving, machine learning as a means to an end” in @lawrennd.bsky.social's “The Atomic Human”.
I always felt like an impostor among colleagues who want to solve intelligence, while I just enjoy working on cool stuff.
Posts by Albert Thomas
Also, github.com/QwenLM/Qwen2... gives prompts you can try, such as "Read all the text in the image" (in case you were not aware of this notebook). Not sure this will lead to a drastic change.
What's the difference between your model and the MLX version?
And reviewing will be the bottleneck. Although we can use LLMs to help us write the reviews (not saying that the review should be written by the LLM alone).
Many companies won't use Chinese *open* models, due to concerns about long-tail information and the security of generated code - a major adjustment in how I see the open model ecosystem. Adoption is still on the table for new entrants, even as DeepSeek and Qwen release some of the best models on paper.
buff.ly/y6MMoBt
TIL about huggingface-cli delete-cache to clean the models or other resources (datasets, ...) you downloaded from huggingface albertcthomas.github.io/blog/removin...
« When software is open-source, it means it is open-source – that the source is open – nothing more. […]
It does not mean open to contributions;
It does not mean support is offered;
It does not mean you’re entitled to feature requests;
It does not mean the developer owes you their time;
[…] »
Really enjoying the series of posts by @beenwrekt.bsky.social on overfitting.
Just put online a talk I gave summarizing what I have learned over the years as an open-source maintainer.
It's _opinions_ (been there, done that), but I'm willing to defend them, having stewarded my share of successful open source projects.
speakerdeck.com/gaelvaroquau...
Yes I was going to ask for more details about LangChain given how popular it seems to be.
Thanks for the pointer!
Really? mamba still seems to be faster than conda, I might just need to update my conda :)
Never mind I saw someone asked the same question below :)
Why is it considered anti pattern to have global environments?
Good, published benchmarks of machine learning / data science methods are crucial.
But so hard.
Well-cited "SOTA" methods often crash. They also tend to be very computationally expensive. Both make a systematic study impossible.
Finally, reviewers always ask for more methods, and more "SOTA".
Yes! I often do the same when I am in the debugger
Game on! 👾 for @scikit-learn.bsky.social
experts only: the ✨boss level✨ has arrived 🚀
For seasoned pros ready to master ML:
🔹 Custom algorithms
🔹 MLOps & deployment
🔹 Align ML with business projects
Be among the first to get certified! 👉 https://eu1.hubs.ly/H0dZ18x0
#machinelearning #datascience
Ok release updates cannot be automatic :)
Wow this is nice, thanks a lot for sharing! This configures automatic updates as well? What about release updates? I find myself stuck with old Ubuntu releases...
Here’s a little script I made which I use to get a server up and running automatically (after answering a few questions, including “what’s your name”) in just a few minutes.
You can even fully automate it with a few environment variables.
github.com/AnswerDotAI/...
👋
Totally agree. This is a great paper that everyone doing RL should read.