
Posts by Adam Wyłuda

So it looks like the only use for these weights is for individuals with very powerful hardware; I hope this won't become a trend for new open-weight models.

5 days ago 1 0 1 0

So it can’t be run by independent providers?

5 days ago 0 0 1 0

sounds like it’s time to take local models seriously

2 weeks ago 161 7 8 1

GLM-5/5.1 feels close, but Z.ai's infra is really bad (it hallucinates garbage after 100k tokens in context), so I don't recommend their coding plan.

2 weeks ago 4 0 0 0

1/ Today, we’re excited to introduce Attie, currently as an invite-only closed beta. Attie is the first agentic social app on atproto. It’s something completely new—an experiment in making building on the protocol more accessible.

2 weeks ago 444 102 447 188

Can it control letta agents running on remote machine?

2 weeks ago 1 0 0 0
ChatGPT reaches 900M weekly active users | TechCrunch OpenAI shared the new numbers as part of its announcement that it has raised $110 billion in private funding.

"everyone hates llms"

hundreds of millions (and growing) are using llms regularly. techcrunch.com/2026/02/27/c...


1 month ago 53 8 1 2

Nothing to see, just very powerful pattern matching. www-cs-faculty.stanford.edu/~knuth/paper...

1 month ago 221 44 12 20

Would be nice to have terminal previews to test TUI apps.

1 month ago 0 0 0 0

Looks really nice. So basically, instead of having a discrete GPU+VRAM, it computes with iGPU+RAM, kind of like unified memory on Macs. (I was thinking about getting a Mac Studio with 128GB+ RAM some day, but this looks a lot more affordable :))

2 months ago 1 0 1 1

It takes so much memory though, I can barely get 32k context with q4 of this model on 24GB VRAM card.

2 months ago 1 0 1 0
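The "barely 32k context on 24GB" complaint comes down to simple arithmetic: quantized weights plus the KV cache have to fit in VRAM together. A minimal sketch, using an entirely hypothetical 32B-parameter model config (layer count, head sizes, and bits-per-weight are illustrative assumptions, not the specs of any particular model):

```python
# Back-of-the-envelope VRAM estimate: quantized weights + KV cache.
# All model dimensions below are illustrative assumptions.

def kv_cache_gib(layers, kv_heads, head_dim, context, bytes_per_elem=2):
    """KV cache = 2 (K and V) * layers * kv_heads * head_dim * context tokens."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1024**3

def weights_gib(params_billion, bits_per_weight):
    """Weight storage at a given quantization level (bits include overhead)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

# Hypothetical 32B model at ~4.5 bits/weight (q4-style quant incl. overhead):
w = weights_gib(32, 4.5)
kv = kv_cache_gib(layers=64, kv_heads=8, head_dim=128, context=32_768)
print(f"weights ~{w:.1f} GiB, 32k KV cache ~{kv:.1f} GiB, total ~{w + kv:.1f} GiB")
```

With these made-up numbers the total lands right around 24 GiB, which is why a 24GB card can feel maxed out at 32k context; KV-cache quantization or fewer KV heads (GQA) shift the balance.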

Polarising open-source into being pro-AI or anti-AI is going to be so good for FOSS in general...

3 months ago 2 0 0 0

It was given this challenge to prove its AI coding methods viable: a task that's unlikely to have many examples in the AI's training data. It of course accepted the challenge, because it knows nothing about what that means.

That way it's a fair fight.

3 months ago 68 12 3 2
The happy holidays release 2025 🎁 | Gleam programming language News post: Gleam v1.14.0 released

Gleam v1.14.0 is out now! Merry Christmas everyone! 🎁
gleam.run/news/the-hap...

3 months ago 75 22 0 3

I’m no expert, but isn’t multilayer perceptron network part of the transformer architecture?

3 months ago 1 0 0 0
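Yes: each transformer block alternates an attention sublayer with a position-wise MLP (feed-forward network). A minimal sketch of just the MLP sublayer with a residual connection, with illustrative shapes (real models use larger dimensions and typically GELU/SwiGLU instead of ReLU):

```python
# The MLP (feed-forward) sublayer inside a transformer block, sketched.
import numpy as np

def mlp(x, w1, w2):
    """Position-wise feed-forward: expand to d_ff, nonlinearity, project back."""
    h = np.maximum(x @ w1, 0)  # ReLU for simplicity; real models often use GELU
    return h @ w2

rng = np.random.default_rng(0)
d_model, d_ff, seq = 16, 64, 4
x = rng.standard_normal((seq, d_model))
w1 = rng.standard_normal((d_model, d_ff)) * 0.1
w2 = rng.standard_normal((d_ff, d_model)) * 0.1

# A full block would be roughly:
#   x = x + attention(norm(x)); x = x + mlp(norm(x), w1, w2)
out = x + mlp(x, w1, w2)  # residual connection around the MLP
print(out.shape)
```

The MLP is applied independently at every token position, and in most decoder models it holds the majority of the parameters.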
It Takes More Than 3 Gallons of Water to Make a Single Sheet of Paper ... and more mind-boggling stats that hint at a Waterworld future

It takes 0.04 liters of water to make a single AI image. Meanwhile, it takes 5 liters to make a single piece of paper, and presumably even more when you add a pencil to it.

Using AI to make art is literally better for the environment than pencil and paper. www.theatlantic.com/technology/a...

4 months ago 25 4 2 1
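The comparison in the post, written out as arithmetic using only the figures the post itself cites:

```python
# Water-use comparison from the post (figures taken from the post, not verified).
water_per_ai_image_l = 0.04  # liters per generated AI image, per the post
water_per_sheet_l = 5.0      # liters per sheet of paper, per the post

ratio = water_per_sheet_l / water_per_ai_image_l
print(f"one sheet of paper uses the water of ~{ratio:.0f} AI images")
```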
A short summary of my argument that using ChatGPT isn't bad for the environment To share with anyone still worried

I shared a short, succinct summary of my core, strongest arguments that using chatbots is definitely not bad for the environment, which you can share with skeptical people in your life: andymasley.substack.com/p/a-short-su...

5 months ago 15 4 1 1

AI voicing sounds like a game changer for game modding.

5 months ago 9 0 1 0
Could LLMs encourage new programming languages? My hunch is that existing LLMs make it easier to build a new programming language in a way that captures new developers. Most programming languages are similar enough to existing …

I have a hunch that current LLMs might make it easier to launch a brand new programming language, provided you can describe it in a few thousand tokens and ship it with a compiler and linter that coding agents can use simonwillison.net/2025/Nov/7/l...

5 months ago 81 3 18 1

A program adjusting its weights by looking at something is also arguably not infringement, but fair use.

5 months ago 8 1 1 0

Gen AI is when ML has any output. So the only useful ML is the one that does nothing?

5 months ago 7 1 0 0

New data on the corporate ROI from generative AI from a large-scale tracking survey by my colleagues at Wharton.

They found that 75% already see a positive return on investment from AI, and less than 5% see a negative one. Also, 46% of business leaders use AI daily. knowledge.wharton.upenn.edu/special-repo...

5 months ago 74 15 4 11

You’re (probably) measuring application performance wrong.

Humans have a strong bias for throughput.

"I can handle X requests per second."

Real capacity engineers use response-time curves.

6 months ago 44 7 3 1
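The throughput-bias point can be made concrete with textbook queueing math: as a server approaches its nominal "X requests per second," throughput still looks fine while response time blows up. A minimal sketch using the classic M/M/1 mean-latency formula (service time and utilizations are illustrative):

```python
# Why response-time curves matter: M/M/1 mean latency = service_time / (1 - utilization).
# Throughput creeps toward capacity; latency goes vertical.

def mean_latency(service_time_ms, utilization):
    """Mean response time for an M/M/1 queue at the given utilization (< 1)."""
    return service_time_ms / (1 - utilization)

for u in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {u:.0%}: mean latency {mean_latency(10, u):.0f} ms")
```

At 50% utilization a 10 ms service time yields 20 ms responses; at 99% the same server averages around a second. "I can handle X requests per second" says nothing about where on this curve you are.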

I hope as we move past the first wave of AI criticism ("it doesn't work, all hype") we get a new wave of AI criticism rooted in the acknowledgement that, yes, these systems are very powerful & quite useful, and focusing on a deep exploration of when AI uses are uplifting and when they are detrimental.

6 months ago 120 20 7 3

First world problems

6 months ago 0 0 0 0

I'd also argue that being anti-AI is gatekeeping: for example, for people whose first language isn't English or who aren't great at writing, LLMs can help them share their knowledge with the world.
(Edited 7:32)

6 months ago 0 0 0 0

AI/LLMs are a major accessibility technology; being against this technology is being against advancing human accessibility, and I think there's a case to be made that it's borderline ableist.

9 months ago 67 6 7 5

And with VAT, the tax charged on B2B transactions is reclaimed, so it's effectively paid only once, by the end consumer, and has little effect on the overall economy. Tariffs, meanwhile, act like a turnover tax, compounding each time a good crosses a border.

6 months ago 0 0 0 0
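The compounding difference is easy to see with a toy calculation. Assuming a purely illustrative 10% rate and a supply chain where the good crosses a border three times:

```python
# Toy comparison: VAT is collected once on the final sale; a tariff applies
# every time the good crosses a border. Rate and crossings are illustrative.

def with_vat(price, rate):
    """VAT hits the final consumer price exactly once."""
    return price * (1 + rate)

def with_tariffs(price, rate, crossings):
    """A tariff compounds at each border crossing along the supply chain."""
    for _ in range(crossings):
        price *= 1 + rate
    return price

print(round(with_vat(100, 0.10), 2))          # one-time 10% on a 100 base
print(round(with_tariffs(100, 0.10, 3), 2))   # 10% applied three times
```

The VAT case ends at 110, while three 10% tariff hits compound to about 133, even though the headline rate is identical.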

if you're curious about the architecture and mechanics of LLMs, this site has a really excellent explorable interactive visualization. it helps build intuition for how massive these models are, what 'interpretability' means, and the complexity involved here

bbycroft.net/llm

6 months ago 161 24 10 2

Saying that we already know everything about LLMs because we know how they work at the lowest level is like saying we know everything about mathematics just by defining its axioms, or that we have computed everything just by inventing the CPU.

6 months ago 7 0 0 0