
Posts by Morten Vassvik

tfw an idea you've been theorizing for years actually works

1 week ago 6 0 1 0

aka. "How many bits can your display display", seems like 🫠🤣

1 month ago 2 1 0 0
Post image
1 month ago 1 0 1 0

I wouldn't say that :D

1 month ago 0 0 1 0

So all these rather crazy things are against a backdrop of having a pretty good sparse backing grid. That moves the needle quite a bit

1 month ago 0 0 1 0

Needs to be sparse, no way a dense frustum grid scales :)

1 month ago 0 0 1 0

In practice I think you'll find that 2-3 pixels per froxel cone is actually on the edge of what's going to be visually sufficient with good resampling and interpolation anyway, but I think it's largely input-dependent still. The more important direction to sample well is the depth direction.

1 month ago 0 0 1 0

As a result you end up with tons of artifacts that require a lot of fairly brute force trickery to overcome, including dithering and temporal accumulation

My motivation with this approach is to try and approach this from the opposite direction – capture as much detail first, then simplify/constrain

1 month ago 0 0 1 0

Traditional frustum grid renderers set the lateral and depth resolution first as a parameter, e.g. 3x3 pixels per froxel cone and 256 depth slices between a near and a far plane. While this can result in predictable performance, it bears no direct connection to the properties of the volume.
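As a rough sketch of what that parameterization implies for grid size (hypothetical numbers for illustration: a 3840x2160 render target with the 3x3-pixel cones and 256 slices mentioned above):

```python
import math

# Hypothetical illustration: froxel grid dimensions for a fixed-resolution
# frustum grid, given a 4K target, 3x3 pixels per froxel cone, and 256
# depth slices (the example parameters from the post).
width_px, height_px = 3840, 2160
pixels_per_froxel = 3
depth_slices = 256

grid_w = math.ceil(width_px / pixels_per_froxel)   # lateral froxel columns
grid_h = math.ceil(height_px / pixels_per_froxel)  # lateral froxel rows
total_froxels = grid_w * grid_h * depth_slices

print(grid_w, grid_h, depth_slices, total_froxels)  # 1280 720 256 235929600
```

The count is fixed regardless of what the volume contains, which is the disconnect the post is pointing at.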

1 month ago 0 0 1 0

Whether this actually resolves any meaningful detail in return depends a lot on the properties of the volume itself. For traditional smooth fog it almost certainly doesn't add much, but for crisp explosions and thick smoke? Seems much more likely.

1 month ago 0 0 1 0

The construct is currently precisely about going for pixel-sized froxel cones. And although the near-field construct does oversample the *voxels* laterally it does not oversample the *pixels*, since each pixel ray takes a different path through the volumetric field.

1 month ago 0 0 1 0

This part is actually still WIP, so not fully implemented yet, to be clear.

1 month ago 0 0 0 0

It's almost certainly possible to generate equivalent or better responses to almost any prompt by heavily curating the context and prompts sent to an LLM, at a lower cost than the cached setup that was used to generate those responses in the first place, which bears some similarities to the above

1 month ago 1 0 1 0

There's a flipside to the awesomeness of what LLMs can do these days, and that's the clear level of waste at almost every level if you look closely. In particular, the way prefix-cached accumulated contexts are typically used leaves a lot to be desired; they're literally O(N^2) for N tokens.
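A back-of-the-envelope sketch of that quadratic scaling (a toy model, assuming each of N turns reprocesses the full accumulated prefix):

```python
# Toy model: the context grows by one token per step, and each step
# touches the full accumulated prefix. Total work is 1 + 2 + ... + N
# = N*(N+1)/2, i.e. O(N^2) in the number of tokens N.
def total_prefix_tokens(n: int) -> int:
    return sum(prefix_len for prefix_len in range(1, n + 1))

print(total_prefix_tokens(1000))  # 500500
```

Prefix caching avoids recomputing the shared prefix, but attention over an ever-growing context still scales with its length at every step.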

1 month ago 2 0 1 0

Conventional uses of LLMs these days can pretty much do anything in an almost literal sense, which doesn't really come as a surprise given that the number of parameters is in the trillions by now, on top of various clever techniques to make those parameters go even further.

1 month ago 0 0 1 0

There's a related aphorism typically attributed to Einstein

> Everything should be made as simple as possible, but not simpler.

which simultaneously combines Occam's razor and the concept of entropy - broadly implying that there is a kind of irreducible complexity (unfortunately a hijacked term)

1 month ago 1 0 1 0

The expression is used in physics to criticize physical models that are overfit: with enough parameters you can basically make a model do anything, which is largely self-defeating for explanatory purposes.

1 month ago 0 0 1 0

Von Neumann's elephant:

> With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.

Extrapolate to 2026 and with 1 trillion parameters you can effectively answer any question in the world and solve almost any problem.

1 month ago 1 0 1 0

Another way to put it is that the camera frustum exactly spans 2160 voxels vertically at 60 degrees vertical field of view at a distance 1/(2*tan(60/2 deg)/2160) = 1870.61487 voxels away from the camera, and equivalently 3840 voxels horizontally.
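A quick sanity check of that arithmetic (a sketch, using the post's numbers: 60 degree vertical fov, 3840x2160 pixels):

```python
import math

# At the crossover distance the frustum spans exactly one voxel per pixel.
# Vertically: d = 1 / (2*tan(fov/2) / N) for N = 2160 pixels.
d = 1.0 / (2.0 * math.tan(math.radians(60.0) / 2.0) / 2160)
print(d)  # ~1870.61487 voxels from the camera

vertical_span = d * 2.0 * math.tan(math.radians(60.0) / 2.0)
horizontal_span = vertical_span * (3840 / 2160)  # same lateral pixel size
print(round(vertical_span), round(horizontal_span))  # 2160 3840
```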

1 month ago 0 0 1 0

So d * (2*tan(fov/2)/N) is the lateral pixel size at depth d, and setting this equal to the voxel size dx gives d = dx / (2*tan(fov/2)/N) = dx / kappa.

So n = 1 / kappa is geometrically the number of voxels up to the crossover point.

1 month ago 0 0 1 0

The lateral span of the frustum is 2*tan(fov/2) times the distance from the camera, and that span is evenly divided across the pixels in that direction, so the lateral span of each pixel is 2*tan(fov/2)/N times the distance, for N pixels.

Call it the lateral pixel size as a function of distance/depth

1 month ago 0 0 1 0

You got it right.

The near field slice count to the crossover point is independent of the voxel size; it's purely dependent on the camera parameters (number of pixels and field of view).

More precisely: The near field slice depth (which is constant) *is* the voxel size, by definition.
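That independence can be checked numerically (a sketch using the earlier 60 degree / 2160 pixel numbers; the varied dx values are arbitrary):

```python
import math

# n = d/dx = 1/kappa: the crossover depth scales with dx, and since the
# (constant) near-field slice depth *is* dx, the slice count to the
# crossover point does not depend on dx at all.
def slices_to_crossover(dx: float, fov_rad: float, n_pixels: int) -> float:
    kappa = 2.0 * math.tan(fov_rad / 2.0) / n_pixels
    d = dx / kappa    # crossover depth scales with dx...
    return d / dx     # ...so the slice count doesn't

for dx in (0.25, 1.0, 4.0):
    print(slices_to_crossover(dx, math.radians(60.0), 2160))  # same for all dx
```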

1 month ago 0 0 1 0

Some editorializing expected

2 months ago 0 0 0 0
Post image

Direct link to the "results": github.com/vassvik/clau...

2 months ago 0 0 1 0
GitHub - vassvik/claude-thinking-experiment

So I was "talking" to it about that, and ended up doing a bunch of experiments (although small sample size per test, I only have so much time doing this manually!). I put the result here github.com/vassvik/clau...

2 months ago 0 0 1 0
Post image

Claude Opus 4.6 made a big change in how the "thinking" is configured and controlled, and a lot of people are seeing it being really greedy with tokens across the board, sometimes filling the entire context (and making it impossible to use on a Pro sub) with very few prompts.

2 months ago 0 0 1 0

Some kind of cluster or block selection around a certain region to seed a list of potential people to follow (as an extension to your follower network analyzer) would be very useful

2 months ago 1 0 0 0
Post image

would be nice to be able to do a cluster or block select to easily enumerate people to follow

2 months ago 1 0 0 0
Post image

fun

2 months ago 0 0 1 0
Bluesky Map Interactive map of 3.4 million Bluesky users, visualised by their follower pattern.

I made a map of 3.4 million Bluesky users - see if you can find yourself!

bluesky-map.theo.io

I've seen some similar projects, but IMO this seems to better capture some of the fine-grained detail

2 months ago 7239 2166 660 4554