My Worms Armageddon re-implementation project OpenWA is finally public on GitHub! github.com/paavohuhtala...
I've been working on it for about a month, but an actual release for end users is still a long way off. In the meantime, feel free to follow the development, or even contribute.
Posts by Paavo Huhtala
I'm hoping to write some shorter blog posts about individual areas of this project, covering methodology, engine architecture, challenges etc. Let's see if I ever get around to it.
Fair warning: this is easily the unsafest Rust codebase I've ever seen. Rust<->C++ FFI isn't nice at the best of times, and it's much worse when you don't have access to the other side's source code, that side being 90s C++ with no memory safety features but multiple inheritance aplenty.
To prevent it from being just slop, testing and debug tools have been my top priority. I've adapted the game's replay system into a fairly robust integration testing system. Every replay file works as a test, and I can simulate an hour's worth of gameplay in a few seconds.
A large part of this has been accomplished using Claude Code & GhidraMCP. I like reverse engineering, but I don't particularly enjoy reading decompiler output or figuring out calling conventions from disassembly. Turns out LLMs are really, really good at that kind of pattern recognition.
I've been reverse engineering & decompiling Worms: Armageddon (into Rust!) for the past 3 weeks. So far I've replaced about 120 functions and made a lot of tools, but I'd like to turn this into an OpenTTD/RCT-style project soon-ish, with multi-platform support as a very long-term goal.
Colossal Order's CEO finally confirmed the theory I presented in my article (blog.paavo.me/cities-skyli...) about 2.5 years ago: the game relied heavily on Unity's ECS/DOTS and HDRP, which were not actually ready for production use.
www.pcgamer.com/games/sim/ci...
But when everything else fails, bandwidth / memory optimization is the only thing that matters. I've managed to pack each voxel face into 6 bytes, while still having per-corner AO and supporting up to 65K unique textures. Before this optimization, every _vertex_ took 8 bytes.
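For the curious, 48 bits is enough for all of that. Here's a sketch of one possible layout; the field sizes and ordering below are illustrative guesses, not the engine's actual format:

```rust
// One possible 48-bit face layout (illustrative, not the real format):
//   18 bits — x, y, z within a 64^3 chunk (6 bits each)
//    3 bits — face direction (0..=5)
//    8 bits — per-corner ambient occlusion (4 corners x 2 bits)
//   16 bits — texture index (up to 65 536 unique textures)
// Total: 45 bits used, 3 spare.

fn pack_face(x: u8, y: u8, z: u8, dir: u8, ao: [u8; 4], tex: u16) -> [u8; 6] {
    let mut bits: u64 = (x as u64 & 0x3F)
        | (y as u64 & 0x3F) << 6
        | (z as u64 & 0x3F) << 12
        | (dir as u64 & 0x7) << 18
        | (tex as u64) << 29;
    for (i, &a) in ao.iter().enumerate() {
        bits |= (a as u64 & 0x3) << (21 + 2 * i);
    }
    let b = bits.to_le_bytes();
    [b[0], b[1], b[2], b[3], b[4], b[5]]
}

fn unpack_face(p: [u8; 6]) -> (u8, u8, u8, u8, [u8; 4], u16) {
    let mut b = [0u8; 8];
    b[..6].copy_from_slice(&p);
    let bits = u64::from_le_bytes(b);
    let ao = [0u32, 1, 2, 3].map(|i| ((bits >> (21 + 2 * i)) & 0x3) as u8);
    (
        (bits & 0x3F) as u8,
        ((bits >> 6) & 0x3F) as u8,
        ((bits >> 12) & 0x3F) as u8,
        ((bits >> 18) & 0x7) as u8,
        ao,
        ((bits >> 29) & 0xFFFF) as u16,
    )
}

fn main() {
    let packed = pack_face(10, 20, 30, 3, [0, 1, 2, 3], 1234);
    assert_eq!(unpack_face(packed), (10, 20, 30, 3, [0, 1, 2, 3], 1234));
    println!("round trip ok: {:?}", packed);
}
```

For scale: at 8 bytes per vertex, a quad cost 32 bytes; 6 bytes per face is more than 5x smaller.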
I'm using the checkerboard pattern as a stress test, and it's quite good at that. It breaks most optimizations in my engine, namely empty / occluded chunk optimization and greedy meshing / face merging. Every chunk has the max amount of faces, giving the mesh generator a good workout.
One thing I've learned while writing a voxel engine is how huge the difference between "realistic" and "worst case" can be. With the same render distance, a typical hilly landscape uses about 2MB of VRAM, while a 3D checkerboard pattern uses over a gigabyte. Yup, that's three orders of magnitude!
And here's the current state, about 1 week after the initial commit. Ambient occlusion had a huge impact on the visuals.
Progress about 24 hours later
A screenshot of a window displaying a colorful voxel landscape with a black sky
There comes a time in every developer's life when they just have to write a voxel engine. This one is using #rust, wgpu and GPU-driven rendering. #graphicsprogramming
Finally finished my latest article! It's about my experience of using a super ultrawide monitor for work and play for a bit under three years. It contains my thoughts about Samsung's (lack of) product design skills, and why Half-Life 2's FOV slider is a liar.
blog.paavo.me/1000-days-in...
But obviously there are many diversions into related and unrelated topics, like how FOV sliders are lying to you(!!!), or how DisplayPort MST could make the world a better place if manufacturers actually used it for once.
This is actually my third attempt at writing this article; in the first two iterations I had written several thousand words before even getting to the part where I buy this monitor. This time I handled that in a few paragraphs, and could mostly focus on the actual topic of the article.
I'm working on a new article, and it's once again grown to be an absolute monster in terms of length. I've had a 49-inch 32:9 monitor for about 3 years, and I want to tell the world how it's been. At the moment I've written almost 10 thousand words about hardware, productivity use and gaming.
But it's also ultimately much simpler than the traditional approach to rendering, at least so far: my entire scene is rendered by a single CPU-side draw call, bindless textures are so much nicer to use, LOD could be handled entirely by the compute shader etc.
The main benefit was performance. While release mode was fine, in debug mode (which I want to use to get wgpu validation) my little test scene was struggling to reach 60 FPS on my very powerful PC. Now it sits comfortably at a locked 240 FPS, and release builds run at about 1000 FPS.
I've continued working on my little 3D engine based on wgpu and #rustlang. This week I managed to implement fully GPU-driven rendering. The engine gives the GPU lists of drawables and meshes, and compute shaders perform AABB culling and generate indexed draw commands. 17K objects in about 0.03 ms.
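The per-object culling test itself is simple. Here's a CPU-side Rust sketch of what each compute shader thread does per drawable; the names and the plane convention are mine, the real thing is WGSL:

```rust
#[derive(Clone, Copy)]
struct Aabb {
    min: [f32; 3],
    max: [f32; 3],
}

// A frustum plane as (normal, d); a point p is inside when dot(n, p) + d >= 0.
type Plane = ([f32; 3], f32);

// Test the AABB against all six frustum planes. If even the corner farthest
// along a plane's normal (the "p-vertex") is behind that plane, the whole
// box is outside and the drawable gets no draw command.
fn aabb_visible(aabb: Aabb, frustum: &[Plane; 6]) -> bool {
    frustum.iter().all(|&(n, d)| {
        let p: [f32; 3] =
            [0usize, 1, 2].map(|i| if n[i] >= 0.0 { aabb.max[i] } else { aabb.min[i] });
        n[0] * p[0] + n[1] * p[1] + n[2] * p[2] + d >= 0.0
    })
}

fn main() {
    // Toy frustum: the axis-aligned box [-1, 1]^3 expressed as six planes.
    let frustum: [Plane; 6] = [
        ([1.0, 0.0, 0.0], 1.0),
        ([-1.0, 0.0, 0.0], 1.0),
        ([0.0, 1.0, 0.0], 1.0),
        ([0.0, -1.0, 0.0], 1.0),
        ([0.0, 0.0, 1.0], 1.0),
        ([0.0, 0.0, -1.0], 1.0),
    ];
    let near = Aabb { min: [-0.5; 3], max: [0.5; 3] };
    let far = Aabb { min: [5.0; 3], max: [6.0; 3] };
    assert!(aabb_visible(near, &frustum));
    assert!(!aabb_visible(far, &frustum));
    println!("culling ok");
}
```

In the GPU-driven version, visible objects append an indexed indirect draw command to a storage buffer, which the CPU then submits with a single indirect draw call.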
WESL advertises wgsl-analyzer as the official language server (or at least as a potential solution), but it doesn't actually support WESL from what I can tell? github.com/wgsl-analyze...
wgsl-analyzer provides autocompletion & syntax highlighting for VS Code, but doesn't support the same import syntax as naga_oil (even though it has some Bevy-specific features [which will be removed because of WESL migration?])
So I have to live with non-actionable errors in my IDE.
I integrated #bevy 's naga_oil to support shader imports. I had to use the version from git because the latest published release uses wgpu 24 (latest is 25). A new version will be released in sync with Bevy, but apparently naga_oil will also be deprecated and replaced with wesl-rs at some point?
I started this in December and resurrected it a few days ago by updating my dependencies, which took about an hour because of breaking changes in winit and wgpu. Good that these libraries are actively developed, but I imagine building anything bigger on top of these can be somewhat painful.
A colorful abstract tunnel rendered in a Windows 11 window
I'm working on a shader playground / 3D engine skeleton in #rustlang with winit, wgpu and naga_oil. About 1.5k LOC, has shader hot reload and very basic GLTF support. The DX is better than in my previous WebGL-based engine but the ecosystem is not yet very mature / stable.
Thanks for the share! YouTube decided to recommend your dithering video yesterday, and I watched it. Excellent work on that!
Ah yes, the joy of cover dates. Didn't realize they could be off by so much.
If I had more time, I would have written a shorter article.
I had assumed that triggering a script after an AI package finishes was only introduced in Fallout 3+, but apparently that's when they just added it to the editor UI. The scripting language already supported it in Oblivion.
Great catch! I had a look at this, and it is indeed powered by quest-specific scripting. When Glarthir's quest-related travel packages finish (= he reaches the victim), the script checks if the player can see Glarthir. If not, the victim NPCs are scripted to die.