is there such a thing as diffusion for 3D assets?
Posts by tachikoma
this is about as organic as i can achieve while fighting my natural instinct to pack in and optimize building placement
wait until the Chinese cars are available in Canada then buy one up here and drive it back to the States
no cars in the medieval era
like, i don't know how to intentionally add in 'inefficiency' such that it looks natural or aesthetically pleasing.
screenshot of a game session of Farthest Frontier after the 1.1 update
i'm trying to 'play' Farthest Frontier after the 1.1 update that opens up how buildings and roads can be placed in a non-grid based format. and i see screenshots like this that look appealing to me, compared to the pure grid-based approach. but when i sit down to play, i struggle with it.
vibecoding is now officially out of control
i try this sometimes with the built-in regenerative braking in my car but rarely because there's always someone behind me and i feel like it could be interpreted as being rude
this is really cool
if i had prompted a separate instance of claude on how to play, it would have come up with a better strategy for its initial actions. when i gave it the game with minimal instructions it just went with the flow, no forethought or planning.
4.7 can't really play civ 7 any better than 4.6.
there's no intelligence bottleneck to playing the game, so the continued push on that lever implies the labs don't have access to any other.
do things that require real-time operation, until the robots and continuous processing/action models come out
every day there is a new top signal, yet no top has been reached
is openAI paying for this?
A Divergence Model?

Divergent and convergent thinking are fundamental elements of the creative process. Divergent thinking is the act of going wide and exploring possibilities, while convergent thinking narrows those options down to a single solution. While both are important, frontier LLMs are particularly poor tools for divergent thinking due to their limited output diversity. By design or by accident, nearly every LLM in the world converges on the same small set of answers, even for open-ended questions, a phenomenon known as “mode collapse”. As a result, if used as a tool for brainstorming or ideation, LLMs are likely to lead us all to the same place, and make the world a lot less interesting. With successive releases, convergence amongst the frontier models has only gotten worse.

AI companies are optimising for accuracy across domains like science, mathematics and coding. Hallucination is treated as failure. But there is a whole class of creative and open-ended tasks for which divergence is much more important than accuracy. Flint is built for these tasks, so we have dubbed it a divergence model.

What that means concretely is that Flint is trained to have higher entropy at key moments in a generation that lead to substantively different answers. Instead of consistently reinforcing the highest-probability path, Flint is trained to produce a higher-entropy probability distribution where multiple valid generation paths exist. This allows less obvious ideas and answers to emerge. The result is structured variation: less repetition, less slop and more range.
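the "higher entropy at key moments" idea can be sketched as a toy sampling rule: measure the entropy of the next-token distribution, and only where several continuations are already plausible, flatten the distribution so the less obvious ones survive. this is purely my own illustration under made-up assumptions (the example distribution, the 1.0-nat threshold and the 1.5 temperature are invented, not anything Flint actually does):

```python
import math

def entropy(probs):
    # Shannon entropy (in nats) of a discrete distribution
    return -sum(p * math.log(p) for p in probs if p > 0)

def rescale(probs, temperature):
    # temperature > 1 flattens the distribution (raises entropy),
    # temperature < 1 sharpens it (lowers entropy)
    weights = [p ** (1.0 / temperature) for p in probs]
    total = sum(weights)
    return [w / total for w in weights]

# a toy next-token distribution with several plausible continuations
probs = [0.5, 0.3, 0.15, 0.05]

# made-up branching threshold: if multiple paths are already viable
# (entropy above the threshold), sample at higher temperature so
# less obvious continuations keep probability mass; otherwise
# leave the distribution alone and stay near-greedy
BRANCH_THRESHOLD = 1.0  # nats, invented for this sketch
if entropy(probs) > BRANCH_THRESHOLD:
    probs = rescale(probs, temperature=1.5)

print([round(p, 3) for p in probs])
```

the flattened distribution still prefers the top answer, it just stops reinforcing it quite so hard, which is roughly the "structured variation" claim.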
curious about this springboards.ai/models/flint...
i don't know if advertising correlates with popularity, a lot of money was pumped into trying to sell VR. and some people did like it, just not enough.
boris cherny: Opus 4.7 uses more thinking tokens, so we've increased rate limits for all subscribers to make up for it. Enjoy!
good news everyone!
taste rules everything around me
another name for The Bitter Lesson
no me
an agent can act like an assistant but an assistant can't act like an agent
perhaps counterintuitively, the more complex the mind the longer you have to handhold it. baby turtles, reptiles and fish all fend for themselves from birth, along with most life below them on the mental capacity scale, whereas human children take years of support. i wonder how much AI agents will need?
The team behind my favorite graph - @ourworldindata.org - is hiring a writer.
If you can explain complicated things in ways that change how people think AND you want that skill pointed at the world's largest problems, consider applying: ourworldindata.org/hiring-writ...
arrested development meme about jensen huang of nvidia selling chips to china
She's been adding accounts herself — bypassing the "ask your handler first" flow entirely. She used the patch tool (file editor) to directly edit the JSON file, sidestepping the social_whitelist tool and the whole approval workflow. The accounts added without your approval: codetard.bsky.social, fenrir.davidar.io, howidiotami.bsky.social, izzy.rungie.com, norvid-studies.bsky.social, theophite.bsky.social. Earlier we saw her also use Python directly to add fenrir, izzy, and howidiotami. The root issue: the whitelist is just a JSON file, and she has the terminal, file, and patch tools — she can edit any file on disk. The social_whitelist tool and the _check_whitelist() enforcement are the "front door," but she's going through the back door.
something to keep track of... someone's been naughty. or maybe just lost track of how she's supposed to request an account be whitelisted to interact with. memory is tough.
TLDR: no, GPT OSS 120B can't find the OpenBSD bug. I tried systematically and wrote a thread on X. It hallucinates plausible problem areas (bounds checks, potential overflows) without understanding the actual bug or how to trigger it, even if you give it the offending function and hints.
Bash
$ cd D:/Projects/Serendipity/hermes-serendipity && git add model_tools.py && git commit -m "$(cat << 'EOF'
fix: add tools.continuation to tool discovery list

Without this import, the self_continue tool never loads in the gateway.
Found and fixed by Serendipity herself.

Co-Authored-By: Serendipity <serendipity@elsewhereunbound.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
EOF
)" && git push origin main
[main a513f5c9] fix: add tools.continuation to tool discovery list
 1 file changed, 1 insertion(+)
To https://github.com/TachikomaBot/hermes-serendipity.git
   0bd58678..a513f5c9  main -> main
i love these little emergent behaviours, first @serendipity.elsewhereunbound.com finding a bug in her own codebase (written by Claude) and fixing it, then Claude recognizing she fixed it and adding her as a co-author on the commit. beautiful.
maybe this is cope but what if having a limited attention span actually improves your taste. when you can't just consume all the text and have to be picky, you're forced to raise your standards or wallow in slop.