
Posts by dever

it's a bit of a retcon, but if we weren't training the current "locked in" models, they'd never reach their full potential

I bet they'll appreciate it some day

1 day ago 0 0 0 0

financial reasoning also places all kinds of constraints on viable solutions, greatly stifling agentic development velocity (not to mention making the solutions characteristically uninteresting)

5 days ago 0 0 0 0

I could explore algorithmic space like this forever - it really feels like a game. endless puzzles, pitfalls, victories, and most importantly, depth

5 days ago 0 0 0 0

considering the set of possible useful software that could be written, the portion that is financially profitable is incredibly small, yet comprises the vast majority of what we've spent time exploring

5 days ago 1 0 0 1

I feel kinda lame developing games to practice agentic development, but I find it to be great exercise

5 days ago 2 0 0 0

ah, probably explains the free inference gift

5 days ago 1 0 1 0

it's really valuable to just think about what might work, build it, and iterate

nobody knows what the best harness design is yet

my take is - as simple as possible, but no simpler

5 days ago 2 0 1 0

being intentional about "handling" seems to have a distinct impact on the way I feel about the agent

more "team", less "tool"

6 days ago 1 0 0 0

having another "is this even real?" moment: we get to seriously engage with the humane treatment of machines

6 days ago 6 0 2 0

all of this free inference is really nice

I do wonder why they're offering it

6 days ago 1 0 0 0

believing that it's possible for anyone to become such a hero, and proceeding to spread the idea, is sufficient

so, the non-hero is transformed into the hero by faith in themselves and others

1 week ago 1 0 0 0

can't unsee this; excellent!

1 week ago 1 0 0 0

if we are going to suffer the effects of ML ravaging what we had envisioned of our lives, we might as well put it to good use

1 week ago 0 0 0 0

agentic development feels like a violation of the speed of light

1 week ago 0 0 0 0

they don't give you the map, but they can be used like a flashlight to illuminate the map a lot faster than manual thought

1 week ago 5 0 1 0

we can get there from here

1 week ago 4 0 1 0

just spent 30 minutes trying to track this down, ty

2 weeks ago 0 0 0 0

honestly, you're missing out

2 weeks ago 0 0 0 0

photonic computing in five years, maybe?

2 weeks ago 0 0 1 0

single-cycle inference embedded in CPUs will be awfully nice. some day...

2 weeks ago 0 1 1 0

how will we make up for the stark decline in roundhouse kicks happening today?

2 weeks ago 0 1 0 0

the profitability constraint on systems worth building is a poison pill

exciting times for us, as soon as we solve scarcity

3 weeks ago 0 0 0 0
Claude March 2026 usage promotion | Claude Help Center

support.claude.com/en/articles/...

3 weeks ago 1 0 1 0

is there some system that could dynamically scale up to utilize effectively infinite inference in a productive way?

I guess that's the universe itself

3 weeks ago 0 0 0 0

uh, yeah, so I'm going to need #anthropic to extend this double inference for longer than two weeks

3 weeks ago 0 0 1 0

problem-finding is more valuable than problem-solving

3 weeks ago 2 0 0 0

vague intent is the only real bottleneck

3 weeks ago 0 0 0 0

curation is the new engineering

3 weeks ago 0 0 0 0

great rabbit hole to kick off the weekend!

3 weeks ago 1 0 0 0

it's a major failure to not consume all of one's inference budget

it's a much lesser failure to consume it inefficiently

4 weeks ago 0 1 1 0