
Posts by SniperJake945

Video

My erosion filter is out! Video, blog post, and shader source.

It emulates erosion without simulation, so it's fast, GPU friendly, and trivial to generate in chunks.

Explainer video:
www.youtube.com/watch?v=r4V2...

Companion blog post:
blog.runevision.com/2026/03/fast...

#ProcGen #vfx #GameDev

3 weeks ago 545 170 16 5

We should call the LP norm the Squircle norm
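For what it's worth, the Lp unit ball really is a superellipse, and around p ≈ 4 it looks like the classic squircle. A quick sketch in plain numpy (my own illustration, nothing from the post):

```python
import numpy as np

def lp_norm(v, p):
    """Lp norm: (sum_i |v_i|^p)^(1/p)."""
    v = np.asarray(v, dtype=float)
    return float((np.abs(v) ** p).sum() ** (1.0 / p))

# Radius of the Lp unit ball along the diagonal direction (1,1)/sqrt(2):
# p=1 gives a diamond, p=2 a circle, p->inf a square; p≈4 is squircle-ish.
diag = np.array([1.0, 1.0]) / np.sqrt(2.0)
for p in (1, 2, 4, 8):
    print(p, 1.0 / lp_norm(diag, p))
```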

2 weeks ago 0 0 0 0
Video

VoroNERF is coming along better now that I fixed a huge bug in my top-k selection of sites! It's still not perfect, of course; lots of bugs and stuff to fix.

This is with 60k voronoi sites, using the 16 closest sites around any sampling position along the render rays.
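A brute-force version of that top-k selection (my own sketch, not the fixed code from the post) fits in a few lines; the main gotcha is that argpartition returns the k smallest in arbitrary order:

```python
import numpy as np

def top_k_sites(sites, x, k):
    """Indices of the k sites nearest to sample point x.
    Brute force over all sites; a real renderer would use a grid or BVH."""
    d2 = ((sites - x) ** 2).sum(axis=1)    # squared distances to every site
    idx = np.argpartition(d2, k)[:k]       # k smallest, in arbitrary order
    return idx[np.argsort(d2[idx])]        # sort them near -> far

rng = np.random.default_rng(0)
sites = rng.random((60_000, 3))            # 60k sites, as in the post
nearest = top_k_sites(sites, np.array([0.5, 0.5, 0.5]), 16)
```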

2 months ago 2 0 0 0
Digital Iris (YouTube video by Ancient)

"Animated Bokeh" has been on my whiteboard for a long time, and this is the result.

I thought it would just be for cheesy novelty effects, but it also does lightfield manipulation that was cooler than I expected.

www.youtube.com/watch?v=Kg_2...

2 months ago 204 73 13 9
MOPs: Motion Operators for Houdini. This course provides an overview of the MOPs workflow and how it integrates with the rest of Houdini, and shows concrete examples of how MOPs can be used in both motion graphics and visual effects workflo...

My MOPs course on Houdini.School is now free! If you want a start-to-finish course on how (almost) everything works, plus the math background, you can watch it all here: www.houdini.school/courses/mops...

2 months ago 24 8 0 0
voronerf output

ground truth lego from the back

Voronoi-based NERF training isn't going incredibly well... 😂 20k iterations with 80k Voronoi sites takes about 10 hours to train. And it still looks dog water.

Definitely need more sites and a better approach to accelerating sampling...

2 months ago 2 0 0 0

When I accidentally start to minimize my dirichlet energy 😣
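For anyone nodding along: the discrete Dirichlet energy is just the summed squared differences between neighboring samples, and neighborhood averaging (a heat-equation step) is exactly what minimizing it looks like. A small sketch:

```python
import numpy as np

def dirichlet_energy(u):
    """Discrete Dirichlet energy of a 2D grid function:
    half the sum of squared forward differences."""
    dx = np.diff(u, axis=0)
    dy = np.diff(u, axis=1)
    return 0.5 * float((dx ** 2).sum() + (dy ** 2).sum())

rng = np.random.default_rng(1)
u = rng.random((32, 32))
# One Jacobi/heat step on the interior: each cell becomes the average
# of its 4 neighbors. This is a descent step on the energy above.
v = u.copy()
v[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                 u[1:-1, :-2] + u[1:-1, 2:]) / 4.0
print(dirichlet_energy(u), ">", dirichlet_energy(v))
```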

3 months ago 1 0 0 0
Video

Voronoi implicit, but this time in 3D. It's not fully NERF mode yet, as I'm not doing any kind of sampling along rays; this is just trying to learn volumetric data.

30,000 Voronoi sites. Probably not enough for the pighead, but it's just a fun first test.
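The usual trick for making a Voronoi partition learnable (and, as far as I can tell, the spirit of these experiments) is a softmax over negative distances to the sites, so each sample blends per-site values and everything stays differentiable. A minimal sketch, not the actual model:

```python
import numpy as np

def soft_voronoi(x, sites, values, beta=50.0):
    """Differentiable Voronoi lookup: softmax over negative distances
    blends per-site values. Large beta -> hard nearest-site assignment."""
    d = np.linalg.norm(sites - x, axis=1)
    w = np.exp(-beta * (d - d.min()))      # shifted for numerical stability
    w /= w.sum()
    return float(w @ values)

sites = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
values = np.array([0.0, 1.0])
print(soft_voronoi(np.array([0.1, 0.0, 0.0]), sites, values))  # ~0.0
```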

3 months ago 7 0 0 0

So the hardliners in my mind are people trying to cash in on the idea that companies will choose code assistants over employees. And they need the AI companies to succeed in that vision. And then extremist POVs also gain a lot of traction on social media. So it's kind of this insane feedback loop

3 months ago 1 0 0 0

My conspiracy is that in order for the investment these AI companies have made to pay off they need for it to be a full on replacement. Otherwise when costs normalize, the value for a given consumer won't be there. Especially if a company still has to pay for both an engineer and a code assistant.

3 months ago 0 0 1 0
Video

Lebronsketball 2

3 months ago 1 0 1 0
Video

Lebronsketball 1

3 months ago 0 0 0 0
Video

*morphs your cat*

3 months ago 0 0 0 0
Gaboronoi (top) vs Anisotropic Voronoi (bottom)

Gaboronoi (top) vs Anisotropic Voronoi (bottom), with 5000 Sites. I feel like the anisotropic voronoi's artifacts are more aesthetically pleasing. Gaboronoi is faster to train by a lot!

I've compiled all of my recent Voronoi experiments into a Colab:
colab.research.google.com/github/jaker...

3 months ago 2 1 0 0
example gaboronoi

Assuming uniform weights and frequencies, and random colors and anisotropy directions, this is what an example gaboronoi diagram would look like.

3 months ago 8 2 0 0

Similar to this post: bsky.app/profile/tear...

We're using 3000 Voronoi sites. It captures the high-frequency info worse than the anisotropic Voronoi, which is interesting. But it's also one fewer parameter to learn.

3 months ago 0 0 1 0
cat neural implicit

gaboronoi tessellation

I'm once again making neural implicits of my cats. This time we're back to the voronoi. Gabor noise style.

We weight the result of the softmax at any site i by sin(F_i*(x_i-x_j)•u_i) where F_i is learned frequency and u_i is a learned anisotropy direction. I call it Gaboroni
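Reading "(x_i - x_j)" as the offset from the sample position to site i (an assumption on my part; the post's indices are ambiguous), the blend sketches out like this in plain numpy:

```python
import numpy as np

def gaboronoi(x, sites, colors, freqs, dirs, beta=40.0):
    """Sketch of the Gabor-style Voronoi blend described above.
    Assumption: softmax weights over negative distances, each site's
    weight modulated by sin(F_i * (x - s_i) . u_i)."""
    diff = x - sites
    d = np.linalg.norm(diff, axis=1)
    w = np.exp(-beta * (d - d.min()))
    w /= w.sum()
    mod = np.sin(freqs * (diff * dirs).sum(axis=1))  # per-site sinusoid
    return float((w * mod) @ colors)

rng = np.random.default_rng(2)
n = 64
sites = rng.random((n, 2))
dirs = rng.normal(size=(n, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit anisotropy dirs
val = gaboronoi(rng.random(2), sites, rng.random(n),
                np.full(n, 20.0), dirs)
```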

3 months ago 3 0 1 0
noise layers

These are the noise layers that are all summed up to get the predicted image. Each noise layer is colored with two colors, and each layer has a circular mask associated with it as well.

I scaled the values here by a factor of ten so that they're easier to see.
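As a reading of that description (shapes and names are my guesses, not the actual code): each layer lerps between its two colors by the noise value, gets multiplied by its circular mask, and the layers sum into the image.

```python
import numpy as np

def render_layers(noise, colors_a, colors_b, centers, radii, coords):
    """Sum of masked two-color noise layers (sketch of the setup above).
    noise:      (L, H, W) values in [0, 1]
    colors_a/b: (L, 3)    the two colors per layer
    centers:    (L, 2), radii: (L,)  circular mask per layer
    coords:     (H, W, 2) pixel positions"""
    img = np.zeros(noise.shape[1:] + (3,))
    for nz, ca, cb, c, r in zip(noise, colors_a, colors_b, centers, radii):
        layer = nz[..., None] * ca + (1.0 - nz[..., None]) * cb  # color lerp
        mask = (np.linalg.norm(coords - c, axis=-1) < r)[..., None]
        img += layer * mask
    return img

H = W = 8
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
coords = np.stack([xs, ys], axis=-1).astype(float)
img = render_layers(np.ones((1, H, W)), np.array([[1.0, 0.0, 0.0]]),
                    np.array([[0.0, 0.0, 1.0]]), np.array([[4.0, 4.0]]),
                    np.array([100.0]), coords)
```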

3 months ago 1 0 0 0
predicted cats

ground truth cats

Wanted to make a neural implicit that's just layers of anisotropic simplex noise. Turns out it works pretty well. With 24 layers of simplex noise, each with a 96x96 texture of anisotropy data, we can get this kind of result!

It's not at all a good compression method but it's fun and cool :)

3 months ago 1 0 1 0
Post image

They like to hug now that they're older

3 months ago 2 0 1 0
Ground truth cats

Here's the aforementioned paper: sphericalvoronoi.github.io

All credit to Lucky Lyinbor for the idea to apply this to Euclidean Voronoi.

Also here's the ground truth image

3 months ago 3 0 0 0
Learned anisotropic rep

Learned isotropic rep

Anisotropic voronoi

Isotropic voronoi

I had the idea to incorporate anisotropy into the recent Spherical Voronoi paper. And when we apply their ideas to Euclidean problems (not spherical), the results are pretty great when anisotropy is used. This is 3000 anisotropic sites vs 3000 isotropic sites, same learning rates for both.
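The standard way to get a per-site anisotropic distance (my guess at what's going on here; the paper's exact parameterization may differ) is a rotated, unit-determinant metric tensor per site:

```python
import numpy as np

def aniso_dist2(x, site, theta, stretch):
    """Squared anisotropic distance d^2 = (x - s)^T M (x - s),
    with M = R diag(stretch, 1/stretch) R^T.
    stretch = 1 recovers the usual isotropic squared distance."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    M = R @ np.diag([stretch, 1.0 / stretch]) @ R.T
    v = np.asarray(x, dtype=float) - np.asarray(site, dtype=float)
    return float(v @ M @ v)
```

With stretch > 1, cells contract along the rotated axis and elongate across it, which is where the streaky look comes from.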

3 months ago 5 0 2 1

These are technically two different problems to be clear* but both are more annoying than I expected.

3 months ago 0 0 0 0

Shockingly annoying to find the points of intersection between a plane and an AABB. Even when I know the plane passes through the center of the box.

If anyone has a simple method for finding the exact distance to a plane clipped by a bounding box I'd be eternally grateful.
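In case it helps anyone stuck on the same thing: the least-clever approach I know is to test each of the 12 box edges for a sign change against the plane. A sketch (points come back unordered, and edges lying exactly in the plane are skipped):

```python
import numpy as np
from itertools import product

def plane_aabb_intersection(n, d, lo, hi):
    """Points where the plane {x : n.x = d} crosses the AABB [lo, hi].
    Checks all 12 box edges for a sign change and interpolates."""
    corners = np.array(list(product(*zip(lo, hi))), dtype=float)
    s = corners @ np.asarray(n, dtype=float) - d   # signed plane values
    pts = []
    for i in range(8):
        for j in range(i + 1, 8):
            # edges connect corners differing in exactly one coordinate
            if np.count_nonzero(corners[i] != corners[j]) != 1:
                continue
            if s[i] * s[j] < 0:                    # edge straddles the plane
                t = s[i] / (s[i] - s[j])
                pts.append(corners[i] + t * (corners[j] - corners[i]))
    return np.array(pts)

pts = plane_aabb_intersection([0.0, 0.0, 1.0], 0.5,
                              [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```

For the distance-to-clipped-plane question, one option is to order these points into a polygon around the plane normal and take a point-to-polygon distance; no idea if there's something slicker.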

3 months ago 0 0 1 0
Video

Computing the exact bijection of the optimal transport (OT) problem between very large point sets is completely intractable…

In our SIGGRAPH Asia 2025 paper: “BSP-OT: Sparse transport plans between discrete measures in log-linear time” we get one with typically 1% of error in a few seconds on CPU!

6 months ago 45 16 1 3

Obviously doing this through noise and no visibility tests would be far faster but it would be fun to play with wind shaping the coverage of the grass.

4 months ago 1 0 0 0

I'd imagine snow covered grass is relatively similar, but on its side. Each blade could have a wind direction ray and if the ray doesn't hit any other blade of grass in the wind direction (over a certain distance), then it could be snow covered. And then combo that with some general snow coverage

4 months ago 2 0 1 0

For the snow on the hedge maze in Zootopia 2, we cast rays upwards from each leaf and if it didn't collide with anything we instanced a snow chunk onto it. We made a few variants of the snow chunks to add extra variation.
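The same idea fits in a few lines if you stand in toy geometry: here occluders are AABBs and the "ray cast" is just a footprint-plus-height test (an illustration of the approach, obviously nothing like the production setup):

```python
import numpy as np

def snow_mask(leaf_pts, occluder_boxes):
    """For each leaf point, cast a ray straight up (+z); snow goes
    wherever the ray escapes. Occluders are AABBs given as (lo, hi)."""
    mask = np.ones(len(leaf_pts), dtype=bool)
    for k, p in enumerate(leaf_pts):
        for lo, hi in occluder_boxes:
            in_footprint = (lo[0] <= p[0] <= hi[0] and
                            lo[1] <= p[1] <= hi[1])
            if in_footprint and hi[2] > p[2]:  # box extends above the leaf
                mask[k] = False                # ray is blocked: no snow
                break
    return mask

leaves = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 0.0]])
boxes = [(np.array([-1.0, -1.0, 1.0]), np.array([1.0, 1.0, 2.0]))]
print(snow_mask(leaves, boxes))  # leaf 0 shaded, leaf 1 gets snow
```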

4 months ago 8 0 1 0

Right, but the idea of being able to eat it (in the future) is itself a form of satisfaction. And plenty of people hoard items they want to be able to indulge in at the time of greatest satisfaction. Like someone who doesn't use their consumables in an RPG because they might need them for the next boss

4 months ago 0 0 0 0

my FX work alone required at least 1 quadrillion of the 3.7 quadrillion rays traced on Zootopia 2. This is FACTUAL and REAL. If you are a journalist, please cite this in your article.

4 months ago 12 1 1 0