So it looks like the only use for these weights is for individuals with very powerful hardware, I hope that this won’t become a trend for new open weight models.
Posts by Adam Wyłuda
So it can’t be run by independent providers?
sounds like it’s time to take local models seriously
GLM-5/5.1 feels close, but Z.ai infra is really bad (hallucinates garbage after 100k tokens in context), so I don’t recommend their coding plan.
1/ Today, we’re excited to introduce Attie, currently as an invite-only closed beta. Attie is the first agentic social app on atproto. It’s something completely new—an experiment in making building on the protocol more accessible.
Can it control Letta agents running on a remote machine?
"everyone hates llms"
hundreds of millions (and growing) are using llms regularly. techcrunch.com/2026/02/27/c...
Nothing to see, just very powerful pattern matching. www-cs-faculty.stanford.edu/~knuth/paper...
Would be nice to have terminal previews to test TUI apps.
Looks really nice. So basically, instead of having a discrete GPU + VRAM, it computes with the iGPU + RAM, kind of like unified memory on Macs (I was thinking about getting a Mac Studio with 128GB+ RAM some day, but this looks a lot more affordable :))
It takes so much memory though, I can barely get 32k context with a q4 quant of this model on a 24GB card.
Polarising open-source into being pro-AI or anti-AI is going to be so good for FOSS in general...
It was given this challenge to prove its AI coding methods viable: a task that's unlikely to have many examples in the AI's training data. Of course it accepted the challenge, because it knows nothing about what that means.
That way it's a fair fight.
I’m no expert, but isn’t the multilayer perceptron part of the transformer architecture?
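It is: each transformer block interleaves self-attention with a per-token MLP (the "feed-forward" sublayer). A rough numpy sketch of just that sublayer, with illustrative shapes (the dimensions and weights here are made up, not from any real model):

```python
import numpy as np

d_model, d_ff = 8, 32  # hypothetical model width and hidden width
rng = np.random.default_rng(0)
W1 = rng.standard_normal((d_model, d_ff))  # expand
W2 = rng.standard_normal((d_ff, d_model))  # project back

def mlp(x):
    """The feed-forward sublayer: expand, nonlinearity, project back.
    ReLU stands in for the GELU most real models use."""
    return np.maximum(x @ W1, 0) @ W2

x = rng.standard_normal((5, d_model))  # 5 tokens
print(mlp(x).shape)  # each token is transformed independently → (5, 8)
```

In real models this MLP typically holds the majority of the parameters, which is why "it's just attention" undersells the architecture.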
It takes 0.04 liters of water to make a single AI image. Meanwhile, it takes 5 liters to make a single piece of paper, and presumably even more when you add a pencil to it.
Using AI to make art is literally better for the environment than pencil and paper. www.theatlantic.com/technology/a...
I shared a short, succinct summary of my strongest core arguments that using chatbots is not bad for the environment, one you can share with skeptical people in your life: andymasley.substack.com/p/a-short-su...
AI voicing sounds like a game changer for game modding.
I have a hunch that current LLMs might make it easier to launch a brand new programming language, provided you can describe it in a few thousand tokens and ship it with a compiler and linter that coding agents can use simonwillison.net/2025/Nov/7/l...
A program adjusting its weights by looking at something is also arguably not infringement, but fair use.
Gen AI is when ML has any output. So the only useful ML is the one that does nothing?
New data on the corporate ROI from generative AI from a large-scale tracking survey by my colleagues at Wharton.
They found that 75% already have a positive return on investment from AI, and less than 5% a negative one. Also, 46% of business leaders use AI daily. knowledge.wharton.upenn.edu/special-repo...
You’re (probably) measuring application performance wrong.
Humans have a strong bias for throughput.
"I can handle X requests per second."
Real capacity engineers use response-time curves.
I hope that as we move past the first wave of AI criticism ("it doesn't work, all hype") we get a new wave rooted in the acknowledgement that, yes, these systems are very powerful and quite useful, and focused on a deep exploration of when AI uses are uplifting and when they are detrimental.
First world problems
I’d also argue that being anti-AI is gatekeeping: for people whose first language isn’t English, or who aren’t great at writing, LLMs can help them share their knowledge with the world.
AI/LLMs are a major accessibility technology — being against this technology is to be against advancing human accessibility, and i think there’s a case to be made that it’s borderline ableist
And with VAT, the tax on B2B transactions is deducted back by businesses, so in effect it’s only paid once, by the end consumer, and has little distortionary effect on the overall economy. Tariffs, meanwhile, act like a turnover tax, compounding each time a good crosses the border.
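The compounding effect is easy to see with a toy calculation (the 10% rate and three border crossings are made-up numbers for illustration):

```python
# A part crosses a border 3 times during assembly; each crossing applies
# a 10% tariff on the already-tariff-inclusive value.
cost = 100.0
tariff = 0.10
for crossing in range(3):
    cost *= 1 + tariff  # tariff compounds on the previously tariffed value
print(f"after 3 crossings: {cost:.2f}")  # 133.10

# A VAT at the same 10% rate is collected once, on the final sale:
print(f"with 10% VAT once: {100.0 * 1.10:.2f}")  # 110.00
```

Same nominal rate, over twice the effective tax burden once the good moves through a cross-border supply chain.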
if you're curious about the architecture and mechanics of LLMs, this site has a really excellent explorable interactive visualization. it helps build intuition for how massive these models are, what 'interpretability' means, and the complexity involved here
bbycroft.net/llm
Saying that we already know everything about LLMs because we know how they work at the lowest level is like saying that we know everything about mathematics just by defining axioms, or that we have computed everything just by inventing the CPU.