The original PBGI, instead of raymarching surfels, rasterized small cubemaps (microbuffers) to compute GI. It used a much higher density of surfels and did a per-pixel final gather instead of caching into ambient cube probes. cgg.mff.cuni.cz/~jaroslav/gi...
Posts by Krzysztof Narkowicz
An interesting idea to use surfels not only as a surface cache (PBGI), but also as a geo approx to trace against. The main use case is mobile, where triangle ray tracing isn’t widely supported in HW, and raymarching SDFs isn’t a great fit for bandwidth-limited devices. gdcvault.com/play/1035619...
Planescape: Torment, Kentucky Route Zero, Pentiment, Heavy Rain.
Float to R8_UNORM conversion has a permitted tolerance of 0.6 ULP in the DX spec. 0.55/255 rounds down to 0 on some GPUs and up to 1/255 on others. Dangerous behavior when R8_UNORM NumFramesAccumulated=0 is used by a denoiser to mark invalid history... microsoft.github.io/DirectX-Spec...
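To make the ambiguity concrete, here's a minimal CPU sketch (function names are mine; real GPUs do this conversion in fixed-function hardware, and both results below fall inside the 0.6 ULP tolerance):

```python
def quantize_round(x: float) -> int:
    # Round-to-nearest conversion: 0.55 goes up to 1.
    return round(x * 255.0)

def quantize_truncate(x: float) -> int:
    # Truncating conversion: 0.55 goes down to 0.
    # Both results are within the spec's 0.6 ULP tolerance (1 ULP = 1/255 here).
    return int(x * 255.0)

v = 0.55 / 255.0
print(quantize_round(v), quantize_truncate(v))  # 1 0
```

So the same "invalid history" marker can survive the round trip as 0 on one GPU and 1/255 on another.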
Inspiring examples of stylized rendering in UE without any modifications to the engine. Just clever meshes and textures. Though I wonder if it holds up with changing lighting conditions?
www.artstation.com/artwork/JrJKVv
www.artstation.com/artwork/kNXBx2
Interesting paper showing great results from replacing standard low-res rendering + temporal upsampling (DLSS) with a high-res GBuffer + adaptive sampling. Makes sense given the rising cost of lighting computation vs GBuffer generation. arxiv.org/abs/2602.08642
I'm finally writing up how Nanite Tessellation works. The first few blog posts are up. More will be coming.
graphicrants.blogspot.com/2026/02/nani...
Yeah, it's tricky, as spatial hashing basically merges geometry into one value and cells have to be large for any caching to happen. You could try adding a bounce index to your key (per Stachowiak), forcing rayT < cellSize to 0 (we do this), or adding a rayT < cellSize flag to your key (gpuopen.com/download/GPU...).
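For illustration, a minimal sketch of what such a cache key could look like (layout and names are my own assumptions, not the linked sample, which builds the key on the GPU):

```python
def cache_key(pos, cell_size, bounce, ray_t):
    # Quantize world-space position onto the hash grid.
    cell = tuple(int(c // cell_size) for c in pos)
    # Include the bounce index so different bounces aren't merged
    # into one cached value.
    # Flag short rays (rayT < cellSize) so near-field hits don't
    # pollute the cell they start in.
    return cell + (bounce, ray_t < cell_size)

# Same position, different bounce -> different cache entries.
k0 = cache_key((1.3, 0.2, -4.7), cell_size=1.0, bounce=0, ray_t=5.0)
k1 = cache_key((1.3, 0.2, -4.7), cell_size=1.0, bounce=1, ray_t=5.0)
```

Either extra key term trades cache hit rate for less light leaking between rays that only happen to land in the same cell.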
Yes, it should work as long as those shapes are large enough (don't require explicit sampling) - many titles shipped using hidden Lumen emissives as area fill lights, though that's not as powerful for artists as an additive mesh light. Cool, I really enjoy stylized gfx and tech solutions for it.
Amazing results! So you were picking N most important analytical mesh light shapes for a given scene and then intersecting/accumulating for each Lumen ray (including screen space one)?
I recommend just reading the slides. After the talk, we filled the presenter notes with more information than we could fit in our live talk.
It was recorded, so at some point it should be available on the ACM website.
MegaLights slides were just posted online. There's a bunch of details in the slide notes if you're curious how our new Stochastic Direct Lighting solution works, why we made it, and want to learn a bit about the problem space.
#SIGGRAPH2025 Advances in Real-Time Rendering in Games course slides for "MegaLights: Stochastic Direct Lighting in Unreal Engine 5” talk by @knarkowicz.bsky.social and @tiagocostav.bsky.social from Epic Games are now online
advances.realtimerendering.com/s2025/index....
Thank you @ceruleite.bsky.social for 20 years of organizing the Advances course. I did learn a lot from it over the years. And also for the really well-chosen speaker swag :)
It's time to share the program for the 2025 Advances in Real-time Rendering in Games.
Check out all the details here: advances.realtimerendering.com/s2025/index....
and, of course, come attend the course live on Tuesday August 12 at SIGGRAPH in Vancouver!
We use a similar technique in UE, but using cheap AO derived from the distance fields, which then artists use to modify materials in material shaders. Though it's much less powerful - more like a procedural scene-aware texturing than a mesh blend system.
Amazing talk about Neural Materials. Lots of insight - from why layered materials are important to an overview of current neural solutions and their tradeoffs: youtu.be/xUnXPNFWJUY?...
My feeling is that 20y ago there was a huge barrier to start, but if you could land a publishing deal (even on Steam) your game would at least recoup a large part of the budget. Nowadays the risk is much higher.
It's very simple. One is checkerboarding and the other one is half/quarter res.
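For concreteness (my own numbers, assuming "half res" means half in each dimension):

```python
w, h = 1920, 1080
full = w * h                     # 2,073,600 shaded pixels
checkerboard = full // 2         # every other pixel in a 2x2 checker: 50%
half_res = (w // 2) * (h // 2)   # half resolution per axis: 25%
print(full, checkerboard, half_res)  # 2073600 1036800 518400
```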
Well, it's more like "Go, go, go, fuck (KURWAAA), police" :)
Great to see you getting this award and certainly well deserved!
It's designed for runtime compression (e.g. real-time captured env maps), so quality isn't the best. If you can compress offline, then Compressonator would likely be a better choice.
I imagine dynamic register allocation is also quite useful on DXR 1.0, as on hit you need to evaluate materials and they can have highly variable complexity. At least in the future, as for now it seems like everyone is either simplifying or caching materials for perf reasons.
It reminds me of FMV games, which were a cool idea, but it also was a dead end due to various limitations of the tech.
Another great article from Chips and Cheese. This time about dynamic RDNA4 register allocation. Having full dynamic allocation would be huge for all those complex shaders we write nowadays, but as the article shows it's not easy to get there.
New blog post! "Measuring acceleration structures", in which we will compare BVH costs on various GPU architectures and drivers and attempt to understand the details enough on AMD hardware to make sense of the numbers!
Reposts appreciated :)
zeux.io/2025/03/31/m...
I see this more as a big corp opening a service that lets you order Simpsons merch without owning the IP. Yeah, a skilled artist could make such a t-shirt themselves, but that's not the same as a company earning money by selling them.
Well, obviously I would prefer a tech paper actually digging into the tech, but it's still a nice post with some hints. Hopefully one day the embargo will be lifted.