it's a bit of a retcon, but if we weren't training the current "locked in" models, they'd never reach their full potential
I bet they'll appreciate it some day
Posts by dever
financial reasoning also places all kinds of constraints on viable solutions, greatly stifling agentic development velocity (not to mention making them characteristically uninteresting)
I could explore algorithmic space like this forever - it really feels like a game. endless puzzles, pitfalls, victories, and most importantly, depth
considering the set of possible useful software that could be written, the portion that is financially profitable is incredibly small, yet comprises the vast majority of what we've spent time exploring
I feel kinda lame developing games to practice agentic development, but I find it to be great exercise
ah, probably explains the free inference gift
it's really valuable to just think about what might work, build it, and iterate
nobody knows what the best harness design is yet
my take is - as simple as possible, but no simpler
being intentional about "handling" seems to have a distinct impact on the way I feel about the agent
more "team", less "tool"
having another "is this even real?" moment, that we actually get to seriously engage with the humane treatment of machines
all of this free inference is really nice
I do wonder why they're offering it
believing that it's possible for anyone to become such a hero, and proceeding to spread the idea, is sufficient
so, the non-hero is transformed into the hero by faith in themselves and others
can't unsee this; excellent!
if we are going to suffer the effects of ML ravaging what we had envisioned of our lives, we might as well put it to good use
agentic development feels like a violation of the speed of light
they don't give you the map, but they can be used like a flashlight to illuminate the map a lot faster than manual thought
we can get there from here
just spent 30 minutes trying to track this down, ty
honestly, you're missing out
photonic computing in five years, maybe?
single cycle inference embedded in CPUs will be awfully nice. some day...
how will we make up for the stark decline in roundhouse kicks happening today?
the profitability constraint on systems worth building is a poison pill
exciting times for us, as soon as we solve scarcity
is there some system that could dynamically scale up to utilize effectively infinite inference in a productive way?
I guess that's the universe itself
uh, yeah, so I'm going to need #anthropic to extend this double inference for longer than two weeks
problem-finding is more valuable than problem-solving
vague intent is the only real bottleneck
curation is the new engineering
great rabbit hole to kick off the weekend!
it's a major failure to not consume all of one's inference budget
it's a much lesser failure to consume it inefficiently