
Posts by TSTACE

chiming in to stan for netshaq

6 months ago 1 0 0 0

You can get the binary on Poob. Poob has the zip file for you.

6 months ago 0 0 0 0

This is a very good post to send to friends and colleagues who use AI, since it teaches them how to get more correct answers out of LLMs. AI haters should read it too: it's research-based, and it shows how these systems actually work when you encounter them.

7 months ago 317 66 51 10
crawshaw - 2025-01-06

I really appreciate the number of reasonable posts from super senior developers who talk about how they’re using LLMs, which tasks they’re good for in software dev, and which don’t make sense at all.

crawshaw.io/blog/program...

1 year ago 188 15 12 1

This is excellent - crammed with practical advice about how to build useful systems that use LLMs to run tools in a loop to achieve a goal. Wrote some short notes here: simonwillison.net/2025/Jan/11/...

1 year ago 142 12 4 3

Great prose and amazing concept building. Also consistently updated!

1 year ago 0 0 0 0

⚠️ Warning: If you see agents as cloud resources, you're being sold an ecosystem, and it might not be to your benefit. Agents belong in code. AI is still moving fast, and you want to stay nimble with your compound AI systems.

OpenAI Assistant API example: logical (left), physical (right)

1 year ago 4 2 0 0

Yes!

1 year ago 143 15 7 1

I think this also emphasizes why custom AI tools are valuable: they create guardrails that help the user with tricky tasks the LLM can do but that aren't immediately obvious.

1 year ago 1 0 0 0

With this example (and the many that came before it) you’re fighting an uphill battle with tokenization 😀
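The tokenization problem is easy to see with a toy sketch. This is a hypothetical greedy longest-match subword tokenizer with a made-up vocabulary (not any real model's tokenizer): the model receives chunks like "straw" and "berry", never individual letters, which is why character-level questions are an uphill battle.

```python
# Toy greedy longest-match subword tokenizer with a hypothetical vocab.
# Real LLM tokenizers (BPE etc.) are more sophisticated, but the effect
# is the same: the model sees chunks, not letters.
VOCAB = {"straw", "berry", "ber", "ry",
         "s", "t", "r", "a", "w", "b", "e", "y"}

def tokenize(text, vocab):
    tokens = []
    i = 0
    while i < len(text):
        # Take the longest vocab entry that matches at position i;
        # single letters are in the vocab, so a match always exists.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

print(tokenize("strawberry", VOCAB))  # ['straw', 'berry']
```

From the model's point of view there are two tokens here, so "how many r's are in strawberry?" asks about letters it never directly saw.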

1 year ago 4 0 1 0

banger, made even better by the Claude riposte

1 year ago 5 0 0 0
What I found in the criticism was a near-total unwillingness to acknowledge that generative AI can do anything good or useful, or to acknowledge that it has improved significantly and rapidly with successive generations. I found a genuine lack of curiosity in whether the scaling laws might get us all the way to superintelligence, and in the risks that clearly await us if it does. I don’t know if this is intellectual dishonesty or simply wishful thinking, but in any case I do think that the blind spots it has produced are real. And it will be fascinating to see whether the fake-and-sucks crowd updates its views (or doesn’t) as LLMs continue to make steady incremental or perhaps even exponential progress in the years ahead.

In the meantime, I’m taking detailed notes on all the bloggers writing “financial analyses” suggesting that OpenAI will go bankrupt soon because it’s not profitable yet. The good thing about covering AI these days is that so much of it is publicly available and even free to use — and the broad contours of what is going to happen next are already hiding in plain sight. But a hallmark of the fake-and-sucks crowd has been an unwillingness to see what is already staring them in the face.


What I learned from this weekend's great "AI is fake and sucks" debate on Bluesky, with responses to Gary Marcus, Edward Ongweso Jr., and others www.platformer.news/ai-fake-and-...

1 year ago 224 14 52 13

Traditional search results pages now look like Times Square with all the advertising goop. SEO can already push unreliable results to the top. LLM-based search can help cut through this noise, but we need AI literacy, just as we needed internet literacy 20 years ago.

1 year ago 4 0 0 0

It's pretty sad to see the negative sentiment towards Hugging Face on this platform over a dataset posted by one of its employees. I want to write a small piece. 🧵

Hugging Face empowers everyone to use AI to create value and is against the monopolization of AI; it's a hosting platform above all.

1 year ago 455 70 29 8

had me in the first half

1 year ago 1 0 0 0

The authors of ColPali trained a retrieval model based on SmolVLM 🤠 TLDR;
- ColSmolVLM performs better than ColPali and DSE-Qwen2 on all English tasks
- ColSmolVLM is more memory efficient than ColQwen2 💗

Find the model here huggingface.co/vidore/colsm...

1 year ago 73 8 4 2

this stands to be a really awesome bridge between folks that build code and folks that “just want to run the damn thing”

1 year ago 1 0 0 0

your scientists were so preoccupied with whether or not they could

1 year ago 2 0 0 0

when you try to convert text to smaller pieces but all it gives you is the subdued acoustic pop music of Simon and Garfunkel, that’s a folkenizer

1 year ago 4 0 0 1