Try pinging on the issue (couldn't find it) or on the wgpu Matrix channels? The maintainers are very busy but are pretty responsive to the community.
(Dawn would also be a fine option, but as you noted wgpu hasn't aligned with the common C header yet, so the transition would take some effort)
Posts by Corentin Wallez
Opened github.com/webgpu-nativ... and github.com/webgpu-nativ...; we've just received this feedback from onnx-runtime as well.
Yeah I remember being @-ed on a few PRs and it looked messy and painful. We put a bunch of effort into making builds easier to integrate with CMake as well: github.com/beaufortfran... shows the minimal way to get something working with CMake, with source-built Dawn as a submodule, and with Emscripten.
I'm obviously biased, but the webgpu.h API (all that the WebGPU backend needs) is supposed to be ABI stable now github.com/webgpu-nativ.... Dawn caught up with it and won't break it. Dawn prebuilts are done on GitHub CI (though old macOS fails?) and uploaded as artifacts that should be usable.
We'd still want to do that automatically if possible by throttling the RAF callback so developers don't have to do that in-flight frame counting by default.
Chromium should have some frame throttling in place to avoid huge latency like that. Did you resort to manual throttling by adding stalls waiting for the GPU to finish (or by keeping only a few frames in flight)? If you have a consistent repro please file an issue on issues.chromium.org.
I remember how you had to use a Minitel emulator of sorts to go check the results ^^ That site is a throwback to the 90s but it didn't exist or didn't show results back then.
Storage textures are useful to get random-access writes to a texture, where one shader invocation (fragment, but also compute) can write any texel (not just the one for its FS invocation) and multiple texels if it needs to. AMD SPD is one example, Nanite-like workloads another, etc.
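As a sketch of what that looks like in WGSL (an illustrative shader with assumed names, not from the post), a compute invocation can store to its own texel and to any other texel of a write-only storage texture:

```typescript
// Illustrative WGSL: scattered writes that sampled textures or plain
// fragment outputs cannot express.
const scatterWGSL = /* wgsl */ `
@group(0) @binding(0) var outTex : texture_storage_2d<rgba8unorm, write>;

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) gid : vec3<u32>) {
  let color = vec4<f32>(1.0, 0.0, 0.0, 1.0);
  // Write the invocation's "own" texel...
  textureStore(outTex, vec2<i32>(gid.xy), color);
  // ...and an arbitrary second texel, e.g. mirrored across x.
  textureStore(outTex, vec2<i32>(255 - i32(gid.x), i32(gid.y)), color);
}`;
```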
In other WebGPU news, it is now possible to test WebGPU with WebXR in Chrome Canary.
Subgroup operations are the only new feature in Chrome 134 but it's a big one! It lets shaders share data efficiently between invocations even faster than using workgroup memory in compute, and can be used inside vertex/fragment shaders as well. Read more here: developer.chrome.com/blog/new-in-...
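To sketch what that looks like (an illustrative WGSL shader with assumed buffer names, not from the post): `subgroupAdd` sums a value across all active invocations of a subgroup without touching workgroup memory or barriers.

```typescript
// Illustrative WGSL: every lane in the subgroup receives the same sum.
const subgroupSumWGSL = /* wgsl */ `
enable subgroups;

@group(0) @binding(0) var<storage, read> input : array<f32>;
@group(0) @binding(1) var<storage, read_write> output : array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) gid : vec3<u32>) {
  // Reduction across the subgroup in one built-in call, no
  // workgroupBarrier() and no shared-memory staging needed.
  let total = subgroupAdd(input[gid.x]);
  output[gid.x] = total;
}`;
```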
At the moment WebGPU supports only what's available on all devices of all the target APIs, which is limiting. There is a proposal to add additional "format tiers" github.com/gpuweb/gpuwe... which should come in the medium term since it is an agreed priority, see developer.chrome.com/blog/next-fo...
Seems like a bug in a part of Chromium, could you file an issue on issues.chromium.org?
The reason we are looking to add texel buffers to WebGPU github.com/gpuweb/gpuwe... is to allow 8- and 16-bit loads/stores portably. Not all GPUs support those directly, so texel buffers are the more portable option for WebGPU. A direct 8-bit and 16-bit (int) load/store extension would surely come as well in the future.
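To illustrate the gap texel buffers would fill (my sketch, not from the proposal): today a WGSL storage buffer is addressed in 32-bit words, so touching the i-th byte means loading a `u32` and shifting. The same arithmetic, mirrored in TypeScript:

```typescript
// Today's workaround for 8-bit data in a 32-bit storage buffer: pack
// four bytes per u32 word and extract with shifts. A texel buffer with
// an 8-bit format would make this a single load instead.
function loadU8(words: Uint32Array, byteIndex: number): number {
  const word = words[byteIndex >>> 2];  // which u32 holds the byte
  const shift = (byteIndex & 3) * 8;    // byte offset within that word
  return (word >>> shift) & 0xff;
}

function storeU8(words: Uint32Array, byteIndex: number, value: number): void {
  const i = byteIndex >>> 2;
  const shift = (byteIndex & 3) * 8;
  // Clear the target byte, then OR in the new value. (No atomics here;
  // a real shader needs care when lanes share a word.)
  words[i] = (words[i] & ~(0xff << shift)) | ((value & 0xff) << shift);
}
```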
Considering that just a month ago WebGPU support in Slang was still WIP, this is great news! We need better shading languages and Slang looks like it could finally be it!
I'm also rooting for WGSL to gain a lot of modern niceties, but being a standard, it will obviously take longer.
Hey everyone, I'll start posting WebGPU related things here!
The last WebGPU F2F meeting was a great time to review priorities for future WebGPU/WGSL features and agree to make WebGPU more than just an "editor's draft". Read more at developer.chrome.com/blog/next-fo...! (spoilers: AI and bindless)
Thanks to you and others for making the table; bookmarked! It's going to be soooo useful to understand how much reach additional WebGPU features will have before we start investigating them!
Is that using the debug utils you posted on Matrix? I didn't take the time to read them yet but they look really cool! github.com/magcius/WebG...