I am honored to see an article I wrote mentioned in this talk, really cool stuff! :)
Posts by MΛX
I am not 100% sure, but I think it is aliasing. It can be hidden with a tiny blur kernel; that seems to be the most common way to get around it.
However, I am interested to find out if we can actually fix the root of the problem instead.
During the past week I decided to dive back into Radiance Cascades, this time Holographic Radiance Cascades (HRC). It was really fun to revisit the technique and make my own implementation of it, running entirely on my CPU. This demo contains no explicit light sources, only emissive pixels! :)
You find the minimum and maximum values for each color channel in a 4x4x4 block. Then, for each voxel, you store 3 bits that tell you how to interpolate between those two endpoints. This is lossy compression, so you’ll need to keep the original content around in case you want to make edits.
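For anyone curious, here's a rough CPU sketch of that scheme (my own reading of it, not the actual tree code): per 4x4x4 block, per-channel min/max endpoints plus a 3-bit index per voxel selecting one of 8 values interpolated between them.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

using Rgb = std::array<uint8_t, 3>;
using Block = std::array<Rgb, 64>;  // one 4x4x4 block of 8-bit RGB voxels

struct CompressedBlock {
    Rgb min_c{}, max_c{};             // endpoint colors (per-channel min/max)
    std::array<uint8_t, 64> index{};  // 3-bit interpolation index per voxel
};

// Interpolate between the endpoints at one of 8 evenly spaced steps.
static uint8_t lerp_endpoint(uint8_t lo, uint8_t hi, int i) {
    return static_cast<uint8_t>(lo + (hi - lo) * i / 7);  // i in [0, 7]
}

CompressedBlock compress_block(const Block& voxels) {
    CompressedBlock out;
    for (int c = 0; c < 3; ++c) {
        uint8_t lo = 255, hi = 0;
        for (const Rgb& v : voxels) {
            lo = std::min(lo, v[c]);
            hi = std::max(hi, v[c]);
        }
        out.min_c[c] = lo;
        out.max_c[c] = hi;
    }
    // Per voxel, pick the index with the lowest squared error over all channels.
    for (int v = 0; v < 64; ++v) {
        int best_i = 0, best_err = 1 << 30;
        for (int i = 0; i < 8; ++i) {
            int err = 0;
            for (int c = 0; c < 3; ++c) {
                int d = int(lerp_endpoint(out.min_c[c], out.max_c[c], i)) - int(voxels[v][c]);
                err += d * d;
            }
            if (err < best_err) { best_err = err; best_i = i; }
        }
        out.index[v] = static_cast<uint8_t>(best_i);
    }
    return out;
}

Rgb decode_voxel(const CompressedBlock& b, int v) {
    Rgb out;
    for (int c = 0; c < 3; ++c)
        out[c] = lerp_endpoint(b.min_c[c], b.max_c[c], b.index[v]);
    return out;
}
```

Packed tightly, that's 6 endpoint bytes plus 64 × 3 bits = 30 bytes per block versus 192 bytes raw, roughly a 6.4x reduction (the sketch keeps one index per byte for readability).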
Uncompressed 64tree
Compressed 64tree
Did some experimentation with 4x4x4 voxel block compression today. I'm using a 64tree with block compression applied at the leaf nodes, 3 interpolation bits per voxel. You can definitely see some compression artifacts. Both trees in this comparison store 8 bit RGB colors.
I could do something similar on the GPU: per lane, find its ray octant, then use wave intrinsics to check if all lanes are in the same octant. If they are, have them all go down the same templated code path; otherwise, have them all take the normal code path I already have to avoid divergence.
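As a small CPU sketch of that idea (on the GPU the all-equal check would be a real wave intrinsic, e.g. HLSL's WaveActiveAllEqual, instead of a loop):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

struct Vec3 { float x, y, z; };

// 3-bit octant code from the signs of the ray direction components.
uint32_t ray_octant(const Vec3& d) {
    return (d.x < 0.f ? 1u : 0u) | (d.y < 0.f ? 2u : 0u) | (d.z < 0.f ? 4u : 0u);
}

// CPU stand-in for the wave-uniformity check across the lanes of a wave.
template <std::size_t N>
bool all_same_octant(const std::array<Vec3, N>& lane_dirs) {
    const uint32_t first = ray_octant(lane_dirs[0]);
    for (const Vec3& d : lane_dirs)
        if (ray_octant(d) != first) return false;
    return true;
}
```

If the check passes, the shared octant can drive a specialized (templated) traversal with a fixed child-visit order; if not, every lane falls back to the generic path so the wave never diverges between the two.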
I did solve the depth values! They are correct now :) However, you can still sometimes see things through other objects in the video. It still seems to happen where the axes are 0 in world-space; I haven't been able to figure out why.
Oh I see, no I don't do that at the moment. I also personally haven't seen anyone else do it.
Yes! I always enter the nearest bounding box first and push the other one onto the stack (if it was also a hit)
Been working on voxel ray-tracing again lately, this time using a BVH2 TLAS and SVT64 (64tree) BLASes. The heat gradient in the video shows the step count from 0 to 128. This scene contains 256 voxel dragons, each model is 256^3 voxels :)
At the time I wasn’t aware of FLIP for image comparisons, so I didn’t do a FLIP comparison back then :p
Thank you! :)
Yes, we are fortunate to have access to PS5 development through our Uni.
A first pass is used to determine which 8x8x8 chunks need to be updated; then a second pass is indirectly dispatched to perform only the required re-voxelization.
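The first pass boils down to building a compact dirty-chunk list. Here's a hypothetical CPU sketch of that step (not the project's actual GPU code): gather the set of 8x8x8 chunks touched by edited voxels; the resulting list is what the indirectly dispatched second pass would consume, re-voxelizing only those chunks.

```cpp
#include <array>
#include <cstdint>
#include <unordered_set>
#include <vector>

struct ChunkCoord {
    int32_t x, y, z;
    bool operator==(const ChunkCoord& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct ChunkHash {
    std::size_t operator()(const ChunkCoord& c) const {
        // Simple coordinate hash; collisions only cost a bit of probing.
        return std::hash<int64_t>{}(int64_t(c.x) * 73856093 ^ int64_t(c.y) * 19349663 ^ int64_t(c.z) * 83492791);
    }
};

// Map each edited voxel to its 8x8x8 chunk and deduplicate.
std::vector<ChunkCoord> dirty_chunks(const std::vector<std::array<int, 3>>& edited_voxels) {
    std::unordered_set<ChunkCoord, ChunkHash> seen;
    for (const auto& v : edited_voxels)
        seen.insert({v[0] / 8, v[1] / 8, v[2] / 8});  // voxel -> 8x8x8 chunk
    return {seen.begin(), seen.end()};
}
```

On the GPU the same deduplication can be done with a per-chunk dirty flag plus an atomic append into the list that feeds the indirect dispatch.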
A big optimization I worked on after my last post was making our voxelizer lazy. So, it only updates parts of the scene that changed. This made a huge difference for our performance and allowed us to push for larger level sizes!
Below is a high-level drawing of how it works:
A picture showing how to use the render graph interface via a builder pattern.
The engine supports both PC and PS5 through a platform-agnostic render graph I wrote.
It was a big gamble, I had written it as a prototype in 2 weeks, but it really paid off in the end, saving us a ton of time and keeping our renderer clean.
Below is a code snippet showcasing the interface :)
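Since the snippet is an image, here's a hypothetical miniature, for illustration only, of what a builder-pattern render-graph interface can look like (not our engine's actual API): each pass declares its resource reads/writes, then the graph runs the pass bodies.

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

using Handle = int;  // opaque resource handle

struct PassBuilder {
    PassBuilder& read(Handle h)  { reads.push_back(h);  return *this; }
    PassBuilder& write(Handle h) { writes.push_back(h); return *this; }
    PassBuilder& execute(std::function<void()> fn) { body = std::move(fn); return *this; }

    std::string name;
    std::vector<Handle> reads, writes;
    std::function<void()> body;
};

struct RenderGraph {
    PassBuilder& add_pass(std::string name) {
        passes.emplace_back();
        passes.back().name = std::move(name);
        return passes.back();
    }
    // A real render graph would cull/order passes and insert barriers based on
    // the declared reads/writes; this sketch just runs them in submission order.
    void run() {
        for (PassBuilder& p : passes)
            if (p.body) p.body();
    }
    std::vector<PassBuilder> passes;
};
```

Usage then reads roughly like `graph.add_pass("lighting").read(gbuffer).write(hdr).execute([&]{ /* record commands */ });`, which is what makes the declared dependencies explicit enough to stay platform-agnostic.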
We've used the #voxel game-engine we built as a team of students to make a little diorama puzzle game :)
This has been an incredibly fun experience, building this from the ground up, working together with designers and artists!
Engine: youtu.be/uvLZn1X_R0Q
Game: buas.itch.io/zentera
I'm assuming that by cluster you're referring to a voxel brick in this context :) (correct me if I'm wrong)
If each brick has its own transform, couldn't that result in gaps between the bricks? Do you do anything to combat that, or is it not an issue?
Excited to finally show off Nanite Foliage www.youtube.com/watch?v=FJtF...
Awesome work, the demo looks amazing! :)
I'm curious what is the voxel size difference between LODs?
Does one brick in LOD1 cover 4x4x4 bricks in LOD0?
Or does it cover 2x2x2 bricks in LOD0?
Hey, sorry for my late response, I’m happy you enjoyed reading my blog post :)
I don’t currently have an RSS feed, I always share the posts on socials instead.
But I might look into RSS sometime.
Here's a video of the demo we created using the engine, showing it's capable of being used to make actual games! :D
Cone traced reflections.
Cone traced soft shadows.
Cone traced soft shadows & ambient occlusion.
For the past 8 weeks I've been working in a team of fellow students on a voxel game engine. I've been primarily working on the graphics, creating a cross-platform render graph for us, and working together with @scarak.bsky.social on our cone-traced lighting, and various graphics features! :)
I decided to open source my implementation of Surfel Radiance Cascades Diffuse Global Illumination, since I'm no longer actively working on it. Hopefully the code can serve as a guide to others who can push this idea further :)
github.com/mxcop/src-dgi
It’s interesting to see the CWBVH performing worse here. If I remember correctly it usually outperforms the others right?
Graphics Programming weekly - Issue 376 - January 26th, 2025 www.jendrikillner.com/post/graphic...
I wrote a blog post on my implementation of the Surfel maintenance pipeline from my Surfel Radiance Cascades project. Most of what I learned came from "SIGGRAPH 2021: Global Illumination Based on Surfels" a great presentation from EA SEED :)
m4xc.dev/blog/surfel-...
In case you're looking for the perfect university for 2025/2026:
Consider the game program of Breda University. :) Teaching team straight out of gamedev, C++, strong focus on graphics, and dedicated tracks for programming, art and design.
Tuition fee this year is €2530 for EU citizens.
games.buas.nl
My new blog post explains spectral radiometric quantities, photometry and basics of spectral rendering. This is part 2/2 in a series on radiometry. Learn what it means when a light bulb has 800 lumen and how your renderer can account for that.
momentsingraphics.de/Radiometry2P...
With RC we store intervals for every probe (intervals are rays with a min and max time)
They are represented as HDR RGB radiance and a visibility term which is either 0 or 1.
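As a tiny code sketch of that representation (the struct and names are mine), along with the standard RC merge of a near interval over a far one: if the near interval is blocked (visibility 0), the far interval's radiance is occluded.

```cpp
#include <array>

struct Interval {
    std::array<float, 3> radiance;  // HDR RGB gathered along [t_min, t_max]
    float visibility;               // 1 = reached t_max unoccluded, 0 = blocked
};

// Composite a near interval over the far interval that continues the ray.
Interval merge(const Interval& near_iv, const Interval& far_iv) {
    Interval out;
    for (int c = 0; c < 3; ++c)
        out.radiance[c] = near_iv.radiance[c] + near_iv.visibility * far_iv.radiance[c];
    out.visibility = near_iv.visibility * far_iv.visibility;
    return out;
}
```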
Here’s the discord link :)
discord.gg/6sUYQjMj
That’s correct :)
GT stands for Ground Truth, it’s the brute force correct result which I want to achieve.
Now I'm working on integrating a multi-level hash grid for the Surfels based on NVIDIA's SHARC and @h3r2tic.bsky.social's fork of kajiya. Here's a sneak peek of the heatmap debug view :)