Table showing the tree-shaken bundle sizes for spark.js with WebGPU and/or WebGL.

I fixed the shader imports to work with static analysis, so you can estimate bundle sizes for WebGL and WebGPU.

In practice, though, shaders are imported dynamically. You only pay for what you use, so the overall size is tiny.

#webgl #webgpu #sparkjs

Announcing spark.js 0.1

I'm excited to announce the release of spark.js 0.1, now with support for WebGL!

www.ludicon.com/castano/blog...

#webgl #webgpu #sparkjs

# Announcing spark.js 0.1

I'm excited to announce spark.js 0.1, now with WebGL support!

spark.js has been evolving since I released it last summer. Since then, the WebGPU ecosystem has matured considerably: WebGPU is now more stable and widely supported across browsers and platforms. However, users kept telling me the same thing: even though targeting WebGPU is practical today, most teams have codebases that still rely on WebGL, and that made adoption difficult. For that reason I committed to adding WebGL support.

This felt like the right moment to bump the version number to 0.1 and signal that spark.js is production ready, not just experimental. That said, I expect the API to continue evolving based on the features developers need and the friction points they encounter.

## WebGL Support

Support for WebGL is the main feature of this update. For a long time I believed WebGL could not update the contents of a block-compressed texture from the GPU. I thought it lacked support for Pixel Buffer Objects and didn't support `EXT_copy_image` either, making it impossible to implement Spark without a CPU read-back. It turns out I was wrong: PBO support is there!

I'm not entirely sure where that misconception came from. I was possibly confused because PBO support in WebGL is somewhat limited compared to OpenGL. That may have been reinforced by Unity's documentation, which reports that WebGL does not have texture copy support, making me think that these limitations imposed a more severe constraint. In practice, however, WebGL provides everything needed to implement copies from UINT textures to block-compressed textures.

That said, these copies are more expensive than in WebGPU and native APIs like Vulkan and D3D12. In WebGPU the shader can output to a buffer and then copy its contents to the compressed texture, and in some native APIs the shader can write to the compressed texture directly. The process in WebGL is far more convoluted.
Compute shaders with buffer stores and raw image copies aren't supported, so the codec has to run as a fragment program and output compressed blocks to a render target, then copy the render target to a pixel buffer object, and the PBO to the final compressed texture. Even with this overhead, real-time compression remains practical and fast enough for most applications.

## Cached Temporary Resources

Another issue I wanted to address is the driver overhead incurred when compressing many textures. In my initial implementation I created temporary resources for each texture and destroyed them afterward. To reduce this overhead I added support for caching the temporary resources. This is particularly important in WebGL, where you need both temporary buffers and render targets.

To use the feature you have to opt in when creating the spark object, and you can free the resources explicitly when done:

```js
// Create spark object with temp resource caching enabled.
const spark = await Spark.create(device, { cacheTempResources: true })

// Load and transcode a bunch of textures at once.
const textures = await Promise.all(
  imageUrls.map(url => spark.encodeTexture(url))
)

// Free cached resources.
spark.freeTempResources()
```

## Other Features

Another way to reduce overhead is to allocate the output texture once and reuse it across updates. This is useful for textures whose contents change frequently, and can be achieved by passing the output texture as an option:

```js
persistentTexture = spark.encodeTexture(renderTarget, {
  outputTexture: persistentTexture
})
```

In the future I'd like to extend this option to support other use cases, for example encoding regions of a larger texture, which would help support virtual texturing applications.

The mipmapping improvements I discussed in my previous post have now been merged. One unexpected issue I encountered is that alpha-weighting and the magic kernel did not play well together.
The negative lobes of the kernel would sometimes produce zero or near-zero alpha values, which would then cause fireflies when un-pre-multiplying. For now I'm using the alpha-weighted box kernel for textures with alpha. In the future, the right solution is probably to apply the sharpening filter after undoing the alpha pre-multiplication. If you've tackled this problem before, I'd love to hear how you approached it.

Finally, I've also started publishing the examples automatically with a GitHub workflow, so you can explore them without having to check out the repository or install the required development tools:

https://ludicon.github.io/spark.js

## WebGL Demo

With WebGL support in place, I've updated the gltf-demo to support it. WebGL is used automatically when WebGPU is not supported, but you can also choose it explicitly using the `?renderer=webgl` URL argument:

https://ludicon.com/sparkjs/gltf-demo/?renderer=webgl

## Integration with 3D Tiles Renderer

To really showcase this release, I wanted to take an existing WebGL application and add real-time texture compression, and I thought there was no better stress test than the 3D Tiles Renderer. Integrating spark.js turned out to be extremely straightforward. The `TilesRenderer` uses three.js's `GLTFLoader`, and spark already provides a plugin that handles image transcoding automatically, so the initial integration required just a couple of lines of code.

There was one gotcha: `TilesRenderer` tracks the memory used by loaded tiles to decide when to stream in new tiles or unload existing ones, and it does this by assuming textures have an associated image bitmap. That assumption breaks when transcoding textures with Spark, since the resulting textures are `ExternalTexture` objects.
To handle this, the Spark GLTF Plugin now stores the byte length in the texture's `userData` field:

```js
const texture = new THREE.ExternalTexture(textureObject.texture)
texture.format = textureObject.format
texture.userData.byteLength = textureObject.byteLength
```

And the memory footprint calculation handles this special case:

```js
if (tex instanceof ExternalTexture && tex.userData?.byteLength) {
  return tex.userData.byteLength;
}
```

The results speak for themselves. Texture compression doesn't just reduce bandwidth and power consumption, it frees up memory for higher-resolution textures with mipmaps (improving aliasing) and increased geometric detail. As they say, a picture is worth a thousand words:

[Screenshot comparisons: Spark OFF vs. Spark ON]

You can check out the full code changes in our fork of the 3DTilesRendererJS repository:

https://github.com/NASA-AMMOS/3DTilesRendererJS/pull/1497

## See You at GDC

Finally, if you would like to see spark.js in person or chat about texture compression, I'll be at GDC next week, where I will be presenting at the 3D on the Web Khronos event:

https://www.khronos.org/events/3d-on-the-web-2026

Hope to see you there!

I'm excited to announce the release of spark.js 0.1, now with support for WebGL!

www.ludicon.com/castano/blog/2026/03/ann...

An Updated Sponza glTF – Ignacio Castaño

I've put together an updated version of the Sponza scene with uncompressed PNG and compressed AVIF textures. I wrote about the process and compared the results against KTX.

www.ludicon.com/castano/blog...

#webgpu #web3d #sparkjs

# An Updated Sponza glTF

The other day I was looking at this clustered shading WebGPU demo and was surprised by the jarring texture compression artifacts. The glTF model using KTX ETC1S textures is very compact, but is it worth the dramatic quality reduction?

I wanted to see how this model would look with AVIF and _Spark_, so I went looking for a version of the Sponza model with uncompressed textures that I could use as a baseline for a fair comparison. The glTF used in the clustered shading demo appears to be based on the Sponza model in Khronos' sample model repository, but neither of these includes uncompressed textures, only JPG and KTX files. Morgan McGuire's asset repository contains a version of the Sponza model in OBJ format with PNG textures, but it does not include normal maps or other PBR textures. The version published by Alexandre Pestana is no longer available online, and downloading the original Crytek assets (created by Frank Meinl) requires registration and the use of a Windows-only downloader.

After some digging, I found that Hans-Kristian Arntzen had published a version with uncompressed PNG textures. However, I ran into issues loading the geometry in some glTF viewers, and the PNG files contained unnecessary alpha channels that inflated the total size and did not follow the glTF PBR texture guidelines.

To make the uncompressed version of the model more accessible, I cleaned up the PNGs, replaced the textures in Khronos' glTF Sponza model with the updated assets, and uploaded the result to the following repository:

https://github.com/ludicon/sponza-gltf

The next step was to compress this glTF model using AVIF and compare the resulting size and visual quality. For this task I had previously experimented with a few ad-hoc scripts built on top of Don McCurdy's glTF Transform library.
To make this workflow easier to reuse and share, I consolidated them into a small command-line tool called `gltf-tex`:

https://github.com/ludicon/gltf-tex

You can install it on your system with:

```sh
git clone https://github.com/ludicon/gltf-tex.git
cd gltf-tex
npm install
npm link
```

And run it as follows:

```sh
gltf-tex avif sponza-png.glb sponza-avif.glb --quality 80 --speed 4
```

While the glTF Transform command-line tool already provides a command to convert textures to AVIF, it treats all images the same way regardless of how they are used. I wanted more control, in particular the ability to specify custom color spaces and linear transfer functions for texture assets that do not represent color images. This was difficult to achieve with glTF Transform because it relies on the sharp library for image processing, which limits the compression options that can be configured and does not necessarily use the latest versions of the underlying encoders with all the necessary features. In contrast, `gltf-tex` invokes the system-provided `avifenc` tool directly. Alternatively, you may use the `sharp` library as well, but that may result in larger or lower-quality textures:

```sh
gltf-tex avif sponza-png.glb sponza-avif.glb --sharp
```

To speed up asset processing, I run multiple instances of the AVIF encoder in parallel. While the encoder itself is multi-threaded, it rarely saturates all available cores, and some stages are I/O-bound. Running up to four instances in parallel improves overall throughput on my system. This can be configured with the `--concurrency N` command-line option.

While processing the Sponza model, I also noticed that some of its textures were identical, so I added another tool that eliminates duplicate images and adjusts the corresponding texture references.
You can simply run it as follows:

```sh
gltf-tex dedup sponza-png.glb sponza-png-dedup.glb
```

With these tools in place I produced two additional versions of the Sponza model with AVIF textures at two different quality levels (one using `--quality 80` and the other using `--quality 50`).

`gltf-tex` also provides the `size` command to inspect and summarize the texture sizes. It displays not only the size on disk, but also the size in video memory with and without run-time compression:

```sh
gltf-tex size sponza-avif.glb
```

The resulting file sizes are as follows:

| Model | Texture Size on Disk | Size in Video Memory |
|---|---|---|
| Sponza PNG | 103.3 MB | 256 MB (uncompressed) |
| Sponza AVIF (high quality) | 17.6 MB | 85.3 MB / 58.3 MB |
| Sponza AVIF (low quality) | 6.5 MB | 85.3 MB / 58.3 MB |
| Sponza KTX (ETC1S) | 8.2 MB | 44.7 MB |
| Sponza KTX (UASTC) | 56.7 MB | 85.3 MB |

Note that the video memory size when using AVIF depends on whether you target 16-byte or 8-byte per-block formats. _Spark_ allows you to target both, but always chooses 8-byte formats for occlusion maps and 16-byte formats for normal maps, which is why the video memory size is slightly larger than when using ETC1S.

To see how the quality actually holds up, I modified Tojiro's demo to load and display the new assets. This only required a few minor changes:

* Adding support for `EXT_texture_avif` in the glTF loader.
* Unpacking tangent-space normals in the shader (see my previous post for more details on this).
* Loading textures using spark.js instead of the WebGPUTextureLoader.

The resulting code is available in Ludicon's fork of the demo:

https://github.com/Ludicon/webgpu-clustered-shading

And here are some screenshots of the results:

[Screenshot comparison: PNG · AVIF HQ + Spark · AVIF LQ + Spark · KTX ETC1S]

Even at the low quality level the results are extremely close to the original. The takeaway for me is that AVIF plus runtime GPU compression offers a much better quality-to-size tradeoff than precompressed KTX, while keeping download sizes much smaller.
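The video memory figures above follow directly from how block-compressed formats are laid out: both families encode 4×4 texel blocks, stored in either 8 or 16 bytes per block, and a full mip chain adds roughly a third on top. A small sketch of the arithmetic (my own illustration, not part of `gltf-tex`):

```javascript
// Size in bytes of a block-compressed texture. Textures are stored as
// 4x4 texel blocks, each occupying `bytesPerBlock` bytes
// (8 for BC1/ETC-class formats, 16 for BC7/UASTC-class formats).
function compressedSize(width, height, bytesPerBlock, withMips = false) {
  let w = width, h = height;
  let total = Math.ceil(w / 4) * Math.ceil(h / 4) * bytesPerBlock;
  // A full mip chain halves each dimension down to 1x1,
  // adding roughly 1/3 over the base level.
  while (withMips && (w > 1 || h > 1)) {
    w = Math.max(1, w >> 1);
    h = Math.max(1, h >> 1);
    total += Math.ceil(w / 4) * Math.ceil(h / 4) * bytesPerBlock;
  }
  return total;
}
```

For instance, a 4096×4096 texture in an 8-byte-per-block format occupies 8 MiB for the base level, versus 64 MiB uncompressed at 4 bytes per pixel.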
## Addendum

I was asked about load time performance, so I ran some quick tests locally to get a rough idea of what to expect. All measurements were taken on a MacBook Pro M4. My Wi-Fi connection measures around 200 Mbps, and the browser cache was disabled to approximate a first-time load. All timings are reported in milliseconds.

| Browser | KTX ETC1S | KTX UASTC | AVIF LO | AVIF HI |
|---|---|---|---|---|
| Chrome | 185+102 = 287 | 418+230 = 648 | 182+130 = 314 | 238+132 = 433 |
| Firefox | 180+939 = 1,119 | 423+3,246 = 3,669 | 204+430 = 664 | 254+460 = 714 |
| Safari | 184+208 = 392 | 424+235 = 659 | 174+722 = 896 | 235+755 = 990 |

Each entry shows two timings and their sum. The first number corresponds to downloading and parsing the glTF file; the second measures texture decoding and upload or transcoding. In an ideal implementation these phases would overlap; you want to start processing textures as soon as the necessary data becomes available. The simple mini-gltf loader used here does not currently do that.

Measuring performance in the browser is inherently tricky. Execution is highly asynchronous, and it is possible that some of these timings include unrelated work.

KTX loading performance in Chrome and Safari is fairly similar, while Firefox performs significantly worse. AVIF loading performance varies substantially across browsers. Chrome uses all available CPU threads for image decoding, whereas Firefox decodes images in a single thread. Safari seems to use multiple threads as well, but it's even slower despite that.

Note that these are all CPU timings. Spark runs on the GPU and takes fractions of a millisecond, so it doesn't affect performance in a significant way. The CPU timings above are practically the same regardless of whether Spark is enabled or disabled.
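The phase overlap mentioned in the addendum can be sketched with plain promises: instead of waiting for every download to finish before any decoding starts, each texture's processing begins as soon as its own data arrives. The `fetchData` and `decodeTexture` names below are hypothetical stand-ins for illustration, not the mini-gltf loader's API:

```javascript
// Sequential phases: all downloads complete before any decoding starts,
// so total time is roughly download time plus decode time.
async function loadSequentialPhases(urls, fetchData, decodeTexture) {
  const blobs = await Promise.all(urls.map(fetchData));
  return Promise.all(blobs.map(decodeTexture));
}

// Overlapped: each texture is decoded as soon as its own download
// completes, hiding decode work behind the remaining downloads.
async function loadOverlapped(urls, fetchData, decodeTexture) {
  return Promise.all(urls.map(url => fetchData(url).then(decodeTexture)));
}
```

Both return the same results; the overlapped version simply lets decode work for early textures run while later downloads are still in flight.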

I've put together an updated version of the Sponza scene with uncompressed PNG textures and compressed AVIF textures. I wrote about the process and compared the results against KTX.

www.ludicon.com/castano/blog/2026/02/an-...

#webgpu #sparkjs

Normal Map Compression Revisited – Ignacio Castaño

While working on spark.js, I realized that common normal map compression formats weren’t supported in popular frameworks like three.js. I added the necessary support to three.js and wrote an article to shed some light on the topic:

ludicon.com/castano/blog...

#webgpu #webgl #threejs #sparkjs


I've also submitted a small spark.js update that enables the use of these formats when using the three.js GLTF loader:

github.com/Ludicon/spar...

Shaved a few more bytes too! The package is now down to 256KB!

#webgpu #threejs #sparkjs

Game Off 2025, Spark, and Box2D3-WASM Check out issue #618 of Gamedev.js Weekly — the free, weekly newsletter about web game development.

Issue #618 of Gamedev.js Weekly newsletter about Game Off 2025, Spark, and Box2D3-WASM is out - go check it!

gamedevjsweekly.com/618

#HTML5 #JavaScript #gamedevjs #gamedev #weekly #newsletter #GitHubGameOff #SparkJS #WASM
