
Posts by David Huang

Congrats—just invented sleep from first principles

1 month ago 1 0 0 0

Just imagining devs sitting there banging their heads against context compaction issues:
“compaction happens at inconvenient times,”
“compaction takes forever,”
“compaction makes it hard to do synchronous work.”


Maybe we should just pick specific times for everyone to do compaction …
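
A minimal sketch of the punchline, scheduled compaction as "sleep" (the Agent class, its compact() method, and all the numbers here are hypothetical, not any real API):

```python
from datetime import datetime

COMPACTION_HOUR = 3  # everyone agrees to compact at 3am, i.e. "sleep"

class Agent:
    """Hypothetical agent: only context_tokens and compact() are assumed."""
    def __init__(self):
        self.context_tokens = 0

    def step(self, task: str):
        self.context_tokens += 500  # pretend each step grows the context

    def compact(self):
        self.context_tokens //= 4   # pretend compaction shrinks it ~4x

def run(agent: Agent, tasks: list[str]):
    for task in tasks:
        agent.step(task)
        # Don't compact mid-task at inconvenient times; defer it to the
        # shared window so synchronous work is never interrupted.
        if datetime.now().hour == COMPACTION_HOUR:
            agent.compact()
```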

1 month ago 0 0 1 0

It's easy to underestimate what context really means:

Context windows overflow with just...

1. a single economist's published papers
2. a single drug discovery campaign
3. a single store's transaction data

Abstractions can help decrease context length

But these are hard-won, e.g. F = ma, E = mc^2.
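
Some rough, made-up arithmetic (my ballpark token counts, not the post's) to make the overflow claim concrete:

```python
CONTEXT_WINDOW = 1_000_000  # tokens; a generous frontier-scale window

# Hypothetical ballpark sizes for two of the examples above.
corpora = {
    "one economist's papers": 150 * 10_000,      # ~150 papers x ~10k tokens
    "one store's transactions": 5_000_000 * 20,  # ~5M rows x ~20 tokens/row
}

for name, tokens in corpora.items():
    print(f"{name}: {tokens:,} tokens "
          f"({tokens / CONTEXT_WINDOW:.1f}x the window)")
```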

1 month ago 1 1 0 0

Some people frame this as being 2 years into an AI revolution—or 20 years into the internet revolution. A better lens is that we’re about a century into a physics revolution, and the last one lasted three centuries.

2 months ago 0 0 0 0

Please welcome Google's Open Source efforts to Bluesky at @opensource.google!

3 months ago 245 39 7 4

This is where extra productivity goes… does she really need a team of psychiatrists and doctors… if we have more surplus do we need to add even more psychiatrists to her team?

3 months ago 2 0 0 0
2025 LLM Year in Review · 2025 Year in Review of LLM paradigm changes

karpathy.bearblog.dev/year-in-revi...

3 months ago 0 0 0 0

There are only two people I’ve heard defend not enacting the Epstein transparency bill: one hard-right Republican congressman, and the other a guy on the New Yorker podcast.

5 months ago 1 0 0 0
Emmett Shear on Building AI That Actually Cares: Beyond Control and Steering · Podcast Episode · a16z Podcast · 11/17/2025 · 1h 11m

Fascinating to hear Emmett on A16Z podcast:

In response to his Q: how would you know something is different other than its behavior (and generally what would make you grant AI personhood)?

I would say… I would need to believe that I could have been in the AI’s shoes, in a veil-of-ignorance way.

5 months ago 3 0 0 0
Composer: Building a fast frontier model with RL · Cursor released Cursor 2.0 today, with a refreshed UI focused on agentic coding (and running agents in parallel) and a new model that's unique to Cursor called Composer 1. As far …

Notes on Cursor 2.0 and a pelican drawn by their brand new Composer-1 coding model, which they describe as "4x faster than similarly intelligent models" simonwillison.net/2025/Oct/29/...

5 months ago 52 4 9 0

I’d love to see what % ownership stake public companies hold in OpenAI and Anthropic

5 months ago 1 0 0 0

I think that’s the point though: a true inner happiness is very compressible. It looks like contentedness, which naturally looks more same-same than constant envy and churn.

5 months ago 0 0 0 0

“Every happy family is alike; each unhappy family is unhappy in its own way”

5 months ago 0 0 1 0

Idk the foreign policy and the ballroom seem aight, maybe just focus on the stuff like the president suing his own department of justice for hundreds of millions, or the pardons, or the forced national guard deployments across state lines

5 months ago 0 0 0 0

it's painful to see people grasping onto "True AGI", "datacenter of geniuses", "Year of the Agent", "Massive job replacement"

when it's just clearly not the case...

6 months ago 0 0 0 0

Definitely resonate with LLMs
1. giving overly defensive code (toy sketch below)
2. being unable to break out of common frameworks and APIs
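
A toy illustration (mine, not from the thread) of pattern 1:

```python
# The over-defensive shape LLMs tend to emit: guards against callers that
# don't exist, plus a silent fallback that hides real bugs.
def mean_defensive(xs):
    if xs is None:
        raise ValueError("xs must not be None")
    if not isinstance(xs, list):
        raise TypeError("xs must be a list")
    if len(xs) == 0:
        return 0.0  # quietly papers over an error case
    return sum(xs) / len(xs)

# The idiomatic version: trust the caller; bad input already fails loudly.
def mean(xs):
    return sum(xs) / len(xs)
```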

6 months ago 0 0 0 0

Ok but isn’t this kinda expert systems all over again?

6 months ago 1 0 0 0

Causation module; phase-shift gradient flow; something like that for rapid learning and high sample efficiency

6 months ago 0 0 0 0

Sample inefficiency??

6 months ago 2 0 0 0

Suppose your model of intelligence requires (1) NOT imitation learning, (2) NOT virtual environments, and (3) NOT compressed timelines. Then what BENEFIT is there to training an “artificial” intelligence vs an “organic” one?

Isn’t the organic more good?

6 months ago 1 0 1 0

I think you learn a lot about a person when you learn what is the tiny garden they will protect from slop and where they will let slop go wild.

7 months ago 0 0 0 0
Claude Code: Behind-the-scenes of the master agent loop · When Tom's Guide reported that Anthropic had to add weekly limits after users ran Claude Code 24/7, it caused quite a stir. Claude Code did something right. Let's dive into the architecture behind the...

a good breakdown of the Claude Code agent loop

the key: a simple 2-layer architecture

“a … single-threaded master loop (codenamed nO)” plus “real-time steering (the h2A queue)”

(yes, they totally read a decompiled/obfuscated version, but hey, dedication!)

blog.promptlayer.com/claude-code-...
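
The two-layer shape, sketched (nO and h2A are the blog's codenames; everything else is my guess at the structure, not Anthropic's actual code):

```python
import asyncio

async def master_loop(steering: asyncio.Queue, tasks: list[str]):
    """Single-threaded master loop (the blog's 'nO'), checking a
    real-time steering queue (the blog's 'h2A') between steps."""
    while tasks:
        # Drain steering messages before picking the next action, so a
        # user interrupt lands between tool calls rather than mid-call.
        while not steering.empty():
            msg = steering.get_nowait()
            tasks.insert(0, msg)  # one guess: steering re-prioritizes work
        task = tasks.pop(0)
        print(f"working on: {task}")
        await asyncio.sleep(0)  # stand-in for the actual model/tool call

async def main():
    steering = asyncio.Queue()
    await steering.put("user interrupt: run the tests first")
    await master_loop(steering, ["write code", "update docs"])

asyncio.run(main())
```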

7 months ago 35 7 3 0

Too much spam to take notes this way… maybe more subdomains are the answer

7 months ago 0 0 0 0

He does know he is trying for the Nobel PEACE Prize right? Like put some effort into it. Just as easily could have been the Department of PEACE.

7 months ago 1 0 0 0

Ayy, thanks

7 months ago 1 0 0 0

Ok yeah, but if I’m understanding right (maybe not relevant to GM, more just about general trends), it’s not about “scaling” but rather about data mix/quality, algos, and architectures?

7 months ago 3 0 1 0

I’m dense … what is he wrong about?

My read is that if GPT-5 is scaled up, then pure scaling does not work (from evaluating the model); and if it is not scaled up, then OAI must have tried pure scaling and it did not work.

7 months ago 1 0 1 0

“This is an example of the evils of specialization: a man must not write on Plato unless he has spent so much of his youth on Greek as to have no time for the things that Plato thought important.”

7 months ago 0 0 0 0

P. 132: “It is noteworthy that modern Platonists, without exception, are ignorant of mathematics, in spite of the immense importance that Plato attached …

7 months ago 0 0 1 0

more indefensible? Is there any position that is, in practice, more required for the purposes of sane and productive development?

7 months ago 0 0 0 0