chiming in to stan for netshaq
Posts by TSTACE
You can get the binary on Poob; it has the zip file for you.
This is a very good post to send to your friends and colleagues who use AI: it teaches them how to get more correct answers out of LLMs. AI haters should read it too, since it's research-based and you can see how these systems work when you encounter them.
I really appreciate the number of reasonable posts from super senior developers who talk about how they’re using LLMs, which tasks they’re good for in software dev, and which don’t make sense at all.
crawshaw.io/blog/program...
This is excellent - crammed with practical advice about how to build useful systems that use LLMs to run tools in a loop to achieve a goal. Wrote some short notes here: simonwillison.net/2025/Jan/11/...
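The "tools in a loop" pattern mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's actual implementation: the LLM call is a scripted stand-in (a real agent would call a provider API at that point), and the tool registry holds a single toy function.

```python
import json

def fake_llm(messages):
    """Stand-in for a real chat-completion call (assumption: a real
    agent would hit a provider API here and parse its tool-call reply).
    This scripted version asks for one tool call, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": "2 + 3 = 5"}

# Registry of tools the model is allowed to invoke (one toy example).
TOOLS = {"add": lambda a, b: a + b}

def run_agent(task, max_steps=5):
    """Run the LLM in a loop: each turn it either calls a tool
    (whose result is fed back) or returns a final answer."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = fake_llm(messages)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent("What is 2 + 3?"))  # → 2 + 3 = 5
```

The loop, the tool registry, and the stop condition are the whole skeleton; everything interesting in a real system lives in which tools you expose and how you prompt the model to use them.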
Great prose and amazing concept building. Also consistently updated!
⚠️ Warning: If you see agents as cloud resources, you are being sold an ecosystem, and it might not be to your benefit. Agents belong in code. AI is still moving fast, and you want to stay nimble with your Compound AI systems.
OpenAI Assistant API example: logical view (left), physical view (right)
Yes!
I think this also emphasizes why custom AI tools are good: they create guardrails that help the user with tricky tasks the LLM can do but that aren't immediately obvious
With this example (and the many that came before it) you’re fighting an uphill battle with tokenization 😀
banger, made even better by the Claude riposte
What I found in the criticism was a near-total unwillingness to acknowledge that generative AI can do anything good or useful, or to acknowledge that it has improved significantly and rapidly with successive generations. I found a genuine lack of curiosity in whether the scaling laws might get us all the way to superintelligence, and in the risks that clearly await us if it does. I don’t know if this is intellectual dishonesty or simply wishful thinking, but in any case I do think that the blind spots it has produced are real.

And it will be fascinating to see whether the fake-and-sucks crowd updates its views (or doesn’t) as LLMs continue to make steady incremental or perhaps even exponential progress in the years ahead. In the meantime, I’m taking detailed notes on all the bloggers writing “financial analyses” suggesting that OpenAI will go bankrupt soon because it’s not profitable yet.

The good thing about covering AI these days is that so much of it is publicly available and even free to use — and the broad contours of what is going to happen next are already hiding in plain sight. But a hallmark of the fake-and-sucks crowd has been an unwillingness to see what is already staring them in the face.
What I learned from this weekend's great "AI is fake and sucks" debate on Bluesky, with responses to Gary Marcus, Edward Ongweso Jr., and others www.platformer.news/ai-fake-and-...
Traditional search pages now look like Times Square with the amount of advertising goop. SEO can already promote unreliable results. LLM-based search can help cut through this noise, but we need AI literacy, just like we needed internet literacy 20 years ago.
It's pretty sad to see the negative sentiment towards Hugging Face on this platform over a dataset posted by one of its employees. I want to write a small piece. 🧵
Hugging Face empowers everyone to use AI to create value and stands against the monopolization of AI; it's a hosting platform above all.
had me in the first half
The authors of ColPali trained a retrieval model based on SmolVLM 🤠 TL;DR:
- ColSmolVLM performs better than ColPali and DSE-Qwen2 on all English tasks
- ColSmolVLM is more memory efficient than ColQwen2 💗
Find the model here huggingface.co/vidore/colsm...
this stands to be a really awesome bridge between folks that build code and folks that “just want to run the damn thing”
your scientists were so preoccupied with whether or not they could
when you try to convert text to smaller pieces but all it gives you is the subdued acoustic pop music of Simon and Garfunkel, that’s a folkenizer