Not only a very cool technique, but first authored by a high school student!
Posts by Andrew Helmer
INTRODUCTION TO SPHERICAL HARMONICS FOR GRAPHICS PROGRAMMERS
finally done.
gpfault.net/posts/sph.html
I felt like The Martian was Ridley Scott doing a Spielberg!
Given the full eclipse, is that slight crescent of lighting reflection off the Earth?
Personally I found the vid behavior confusing: it looked like many items were selected, but only one item got reparented. Maybe only drag & drop to move/reparent when a single item is selected, and also support moving/reparenting many selected items at once.
Ahhh, Debug Break can be disabled at the PSO level, so the driver can effectively optimize them out if they're disabled. That's cool! And I can turn them on "at runtime" without re-compiling the shader from source, at least!
Everything I've seen so far suggests that technically it's incredible (eg youtu.be/3uUIBUoJhk8?...), but I've heard the story/writing is only so-so, which would probably prevent it from getting GOTY.
Not that I'm defending the choice at all, but didn't they fall behind way before Yann Lecun was replaced? And Lecun was overall bearish on the advancement of LLMs? I thought the causality was reversed (he was replaced *because* they fell behind).
Screenshot showing skin with micro-occlusion at the top, and micro-shadowing below.
I finally wrapped up the second post in the series about micro-shadowing. In this one I go over a basic approach based on a microsurface, and I show results for different materials and lighting conditions.
irradiance.ca/posts/micros...
www.ubisoft.com/en-us/news/i...
Quite cool that the latent space is interpolable, so block texture compression, filtering, and mipmapping all work.
JFC, I cannot believe this. Such a skilled group of devs that batted 1000 on successful remakes and ports. You'd think with all the remakes being made now, those people would be so incredibly valuable.
Awesome. Really nice approximation for how accurate that is!
This looks awesome, but sorry if I'm a bit confused - is the top-left supposed to be "radiance"? It represents the incoming light distribution, while the numerical one is the ground-truth for reflected light from a Lambertian, ie clamped cosine convolution (irradiance)?
The answer might be both resolution and GPU dependent, but I'm wondering what most people would do now for a modern game, maybe on current generation consoles.
I want to do a chain of downsampling using a 6x6 kernel (say, for bloom), representable with 13 bilinear taps. Is it faster to do A) a single-pass compute shader (with UAV and globallycoherent) but fully manual blending, or B) multiple passes, using a sampler with bilinear interpolation?
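The tap count in option B comes from folding adjacent kernel weights into hardware bilinear fetches. A toy 1D sketch of that folding (illustrative Python, not shader code; the 6-tap binomial kernel and texel values are made up), showing a 6-tap kernel collapsing to 3 bilinear taps:

```python
# Merging adjacent texel weights into single bilinear taps, as in
# the "linear sampling" blur trick: one fetch at a fractional offset
# reproduces the weighted sum of two neighboring texels.

def merge_taps(w1, w2):
    # A bilinear fetch at fractional offset w2/(w1+w2), scaled by
    # (w1+w2), returns exactly w1*a + w2*b for adjacent texels a, b.
    w = w1 + w2
    return w, w2 / w

def bilinear(texels, pos):
    # Emulate hardware bilinear filtering along one dimension.
    i = int(pos)
    t = pos - i
    return (1 - t) * texels[i] + t * texels[i + 1]

kernel = [1, 5, 10, 10, 5, 1]          # hypothetical 6-tap binomial kernel
weights = [k / sum(kernel) for k in kernel]
taps = [merge_taps(weights[i], weights[i + 1]) for i in range(0, 6, 2)]

texels = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]
direct = sum(w * x for w, x in zip(weights, texels))
via_taps = sum(w * bilinear(texels, 2 * j + o)
               for j, (w, o) in enumerate(taps))
```

The same idea in 2D is how a 6x6 footprint falls to 13 taps: each bilinear fetch covers a 2x2 quad of texels.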
I'm holding out a hope that LLMs actually make the web itself faster because the benefit of bloated frameworks/libraries goes down. Vibe coding can just generate native JS, HTML, CSS, WebGL. No reason to import things you don't need, when you can easily generate bespoke code for what you do need.
All the presentation slides of the Graphics Programming Conference 2025 are up #GPC2025
graphicsprogrammingconference.com/archive/2025/
That is a very big difference, but other big ones IMO are 1) manual testing being (necessarily) part of the shipping pipelines for games, and 2) needing to update client software (which also ties into the art content, in terms of update sizes), and via intermediate platforms.
Also the update install process can suck, so you don't want to make players update frequently. Even once a week is a pretty significant detriment.
It varies by game company, but many are still in the dark ages when it comes to this stuff. But depending on what you mean by "release to production", deploying client updates for *console* games goes through a certification process that takes time.
Ah yes okay, you want multiple random eigenvectors for the construction of multiple BSPs for merging. Thank you!
Yup I understood that bit, you were saying it didn't work for two point sets. Translation invariance is interesting! Maybe the point sets could be centered independently before concatenating?
Wonderful paper btw. I was just thinking about problems with higher dimensionalities.
I was wondering about the Gaussian slicing. The paper mentions the first principal component ("slicing along the largest eigenvector of its covariance matrix..."). Did you compare the Gaussian slicing to the first principal component of simply concatenating the two point sets?
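To make the comparison concrete, a minimal sketch of the "concatenate and take the first principal component" alternative (pure Python; the two 3D point sets and their spreads are made up, and the independent centering is the translation-invariance idea from the thread):

```python
import random

random.seed(0)

def centered(points):
    # Center a point set at its own mean (translation invariance).
    n = len(points)
    mean = [sum(p[d] for p in points) / n for d in range(3)]
    return [[p[d] - mean[d] for d in range(3)] for p in points]

# Two hypothetical 3D point sets; set A is elongated along x.
a = [[random.gauss(0, 3), random.gauss(0, 1), random.gauss(0, 1)]
     for _ in range(200)]
b = [[random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)]
     for _ in range(200)]

# Center each set independently, then concatenate.
pooled = centered(a) + centered(b)

# Covariance matrix of the pooled points.
n = len(pooled)
cov = [[sum(p[i] * p[j] for p in pooled) / n for j in range(3)]
       for i in range(3)]

# Leading eigenvector via power iteration: the candidate slicing axis.
v = [1.0, 1.0, 1.0]
for _ in range(100):
    w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
```

Here the recovered axis points mostly along x, where the pooled variance is largest.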
If you haven't seen the HDR debate between Timothy Lottes and Filippo Tarpini (mostly on the other site), it's very interesting!
Article: share.google/SlH7YI6l8Cc1...
Response: youtu.be/OXpLF69jPhI?...
Rebuttal: (attached image from the other site)
Second Response: youtu.be/hzOkBfwEruI?...
This is one of the better general write-ups of CPU perf optimization that I've seen: abseil.io/fast/hints.h...
A way to generate a random integer in the range [0,N) without *usually* using an integer division (which rejection sampling does): lemire.me/blog/2019/06...
And a nice explanation here:
jacquesheunis.com/post/bounded...
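A toy sketch of the trick (Python for illustration; the linked write-ups use C, and the 32-bit generator width here is an assumption): multiply a random 32-bit value by N and keep the high bits, only falling back to a division in the rare biased cases.

```python
import random

def bounded_rand(n, rand32=lambda: random.getrandbits(32)):
    # Nearly-divisionless bounded random (Lemire's method):
    # the high 32 bits of x*n are almost uniform in [0, n).
    x = rand32()
    m = x * n                 # 64-bit product
    low = m & 0xFFFFFFFF      # low 32 bits decide if we're in a biased case
    if low < n:
        t = (2**32) % n       # the division only runs on this rare path
        while low < t:
            x = rand32()
            m = x * n
            low = m & 0xFFFFFFFF
    return m >> 32
```

On the common path this costs one multiply and a shift; rejection sampling by modulo pays the division every call.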
My "No Graphics API" blog post is live! Please repost :)
www.sebastianaaltonen.com/blog/no-grap...
I spent 1.5 years doing this. Full rewrite last summer and another partial rewrite last month. As Hemingway said: "The first draft of anything is shit."
What an incredible game
Hades II 1.0 as well! Though for me personally, 2023 is still a slightly stronger year.
Lots and lots of good observations in both of these.