In this specific example the 256-color palette is large relative to the image size, so the per-pixel color differences are small and any color space works about the same.
Posts by Pekka Väänänen
Here pixel mapping uses Euclidean distance in sRGB space. Trying to keep it simple :) OKLab works a bit better but is not a silver bullet. The most important thing seems to be matching gamma to your viewing environment; in daylight (or on bright websites) you can allow more error in the darker shades, for example.
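For reference, the nearest-color mapping described above can be sketched like this (my own minimal version, assuming a palette given as a list of 8-bit RGB tuples):

```python
import math

def nearest_palette_index(pixel, palette):
    """Return the index of the palette entry closest to `pixel`,
    using plain Euclidean distance on 8-bit sRGB values.
    (No gamma handling; squared distance avoids the sqrt.)"""
    best_i, best_d = 0, math.inf
    for i, (r, g, b) in enumerate(palette):
        d = (pixel[0] - r) ** 2 + (pixel[1] - g) ** 2 + (pixel[2] - b) ** 2
        if d < best_d:
            best_i, best_d = i, d
    return best_i
```

For example, `nearest_palette_index((200, 10, 10), [(0, 0, 0), (255, 0, 0), (255, 255, 255)])` picks the red entry, index 1.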
Two images of Swedish red houses with transparent sky shown as a grey checkerboard pattern. The top one is the original, below it is the color reduced version that looks almost identical. Here the 4-bit quantized 256-color result does look decent. The only visible issues are the white fringes on the treeline on the left half of the image, and small overall color distortion caused by the 4-bit colors. For example the red paint of the facades is reproduced with a subtle greyish noise not present in the original.
The "popularity" color quantizer is one of the simplest: find the K most used unique colors in the image. That's the palette. I was surprised how well it works when K=256 and the image is reduced to 4-bits per channel before. Even transparencies look kinda OK.
I will not continue to work on this so I might just share the URL. I just wanted to pack everything about a palette onto one view: palettarium.color.pizza
Neat! Is the idea to simplify the resulting color segments to triangle geometry?
Came across this handy matrix of accurate but integer-only bit depth conversion formulas: threadlocalmutex.com?p=48
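To give a flavor of what such formulas look like (this is my own sketch in that style, not copied from the linked post): expanding 4 bits to 8 is exact with a multiply by 17, and the reverse rounding can be done with a multiply-add-shift, no division.

```python
def expand_4_to_8(x):
    """Exact 4-bit -> 8-bit expansion: x * 17 equals (x << 4) | x
    (bit replication) and matches round(x * 255 / 15) exactly."""
    return x * 17

def reduce_8_to_4(x):
    """Rounded 8-bit -> 4-bit reduction without division:
    matches round(x * 15 / 255) for all x in 0..255."""
    return (x * 15 + 135) >> 8
```

Round-tripping 4 -> 8 -> 4 is lossless with these two, which is exactly the property you want from such conversions.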
A cool plot from "A Simple and Efficient Error-Diffusion Algorithm" by Victor Ostromoukhov, 2001. It shows how the classic Floyd-Steinberg error diffusion can't always produce a pattern with the blue noise (no clumping) property.
Paper: perso.liris.cnrs.fr/victor.ostro...
"Our experience of ICM on such scenes is limited to a couple of artificial examples and several lung and liver scans produced by a gamma camera, with Poisson distributed records. The processing was carried out on the BBC Model B, with 8 or 16 levels and second-order neighbourhood. Although the results were satisfactory, it would be desirable generally to adopt a larger neighbourhood to allow for curvature effects, through coefficients which depend"
Always a delight to see science being done on a 2 MHz 6502. From "On the Statistical Analysis of Dirty Pictures" by Julian Besag, 1986.
Thanks!
If this is not too much to ask, of the material you've collected so far, what would be the top-5 sources to read?
Why not blend gradually between ordered dither (top) and error diffusion (bottom)? Such a cool idea.
Example image by Viktor Massalogin github.com/bntre/dither...
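One plausible way to implement such a blend (purely my own guess at the idea, not necessarily how that image was made): fade the ordered-dither threshold offset out while fading the diffused error in.

```python
# 4x4 Bayer matrix; entries 0..15 are turned into threshold offsets.
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def blended_dither(img, w, h, t):
    """Dither a grayscale image (floats in 0..1) to 1 bit.
    t = 0 gives pure ordered dithering, t = 1 pure Floyd-Steinberg;
    in between, the ordered threshold offset fades out while the
    diffused error fades in."""
    err = [0.0] * (w * h)
    out = [0] * (w * h)
    for y in range(h):
        for x in range(w):
            bias = (BAYER4[y % 4][x % 4] + 0.5) / 16.0 - 0.5
            adjusted = img[y * w + x] + t * err[y * w + x]
            v = adjusted + (1.0 - t) * bias
            q = 1 if v >= 0.5 else 0
            out[y * w + x] = q
            e = adjusted - q  # error excludes the ordered-dither bias
            # Floyd-Steinberg weights; incoming error is scaled by t above,
            # so diffusion has no effect at t = 0.
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16), (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    err[ny * w + nx] += e * wgt
    return out
```

Sweeping t from 0 at the top row to 1 at the bottom row would then give a vertical blend like in the example image.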
I see, thank you. I've understood that in 1D, like in the PCA projections done here, the split that maximizes the reduction in error (variance) is equivalent to the split Otsu's algorithm (image thresholding) would give. It's cool to find connections between seemingly unrelated problems :)
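The equivalence is easy to check numerically: minimizing the total within-class squared error is the same as maximizing Otsu's between-class variance, since the two sum to the (fixed) total variance. A brute-force sketch of both criteria (my own illustration):

```python
def best_split_sse(values):
    """Split index into the sorted values that minimizes total
    within-class squared error (i.e. maximizes variance reduction)."""
    vs = sorted(values)
    best_i, best_sse = None, float("inf")
    for i in range(1, len(vs)):
        left, right = vs[:i], vs[i:]
        sse = sum((x - sum(left) / len(left)) ** 2 for x in left) + \
              sum((x - sum(right) / len(right)) ** 2 for x in right)
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i

def best_split_otsu(values):
    """Otsu's criterion: maximize between-class variance w1*w2*(m1-m2)^2."""
    vs = sorted(values)
    n = len(vs)
    best_i, best_var = None, -1.0
    for i in range(1, n):
        left, right = vs[:i], vs[i:]
        m1, m2 = sum(left) / len(left), sum(right) / len(right)
        var = len(left) * len(right) * (m1 - m2) ** 2 / (n * n)
        if var > best_var:
            best_i, best_var = i, var
    return best_i
```

Both pick the same split on any 1D data, e.g. between the two clusters of `[1, 2, 3, 10, 11, 12, 13]`.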
A screenshot of code that says:

/**
 * The ZX has one bit per pixel, but can assign two colours to an 8x8 block. The
 * two colours must both be 'regular' or 'bright'. Black exists as both regular
 * and bright.
 */
val zx_quantize(std::string rawimage, int image_width, int image_height, float dithering)
{
    auto image_buffer = (const liq_color*)rawimage.c_str();
    int size = image_width * image_height;
    liq_color block[8 * 8];
    uint8_t image8bit[8 * 8];
    std::vector<liq_color> result(size);

    // For each 8x8 grid
    for (int block_start_y = 0; block_start_y < image_height; block_start_y += 8) {
        for (int block_start_x = 0; block_start_x < image_width; block_start_x += 8) {
            int color_popularity[15] = {0};
I researched Google's squoosh.app image compression tool's palette code and it's a vendored copy of libimagequant, but what's more interesting is that the code also supports ZX Spectrum color limits!
github.com/GoogleChrome...
Thank you for taking the time to answer. Really interesting, especially how PCA found a vector of both colors and pixel blocks at the same time. One question: no dynamic programming was involved, right? Because in Wu's 1992 paper they find the best splits via DP, but I dunno how much it really matters.
New blog post: A Decade of Slug
This talks about the evolution of the Slug font rendering algorithm, and it includes an exciting announcement: The patent has been dedicated to the public domain.
terathon.com/blog/decade-...
A median cut that splits the box with the most error, and chooses the split plane by variance rather than just range, works much better than the octree.
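A compact sketch of that variant (my own illustration of the idea, not production code): keep splitting the box with the largest squared error, along the channel with the largest variance.

```python
def box_error(pixels):
    """Sum of squared distances to the box's mean color."""
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]
    return sum(sum((p[c] - mean[c]) ** 2 for c in range(3)) for p in pixels)

def median_cut_variance(pixels, k):
    """Median-cut variant: always split the box with the most error,
    along the channel with the highest variance (not the widest range)."""
    boxes = [list(pixels)]
    while len(boxes) < k:
        boxes.sort(key=box_error)
        box = boxes.pop()  # box with the most error
        if len(box) < 2:
            boxes.append(box)
            break
        n = len(box)
        means = [sum(p[c] for p in box) / n for c in range(3)]
        variances = [sum((p[c] - means[c]) ** 2 for p in box) for c in range(3)]
        axis = variances.index(max(variances))
        box.sort(key=lambda p: p[axis])
        mid = n // 2
        boxes += [box[:mid], box[mid:]]
    # Palette entry = mean color of each box.
    return [tuple(round(sum(p[c] for p in b) / len(b)) for c in range(3))
            for b in boxes]
```

Splitting at the median along the max-variance axis is one choice; splitting at the exact error-minimizing plane along that axis works even better but costs more.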
I really should try that Wu92 method. It combines many different concepts so I'd like to include it in the book. Did it work well in practice for you?
I tried using an octree for color quantization with the help of a 1996 Dr Dobbs article:
jacobfilipp.com/DrDobbs/arti...
It was quite a hassle, doesn't work that well (at least my version), and isn't as fast as I expected. On the right: sklearn's bottom-up clustering for comparison.
more early counter strike vibes
Thanks! There's surprisingly little material in textbooks too, usually an offhand mention of median cut and that's it.
Thanks for the quick reply. Will be in my next game too, most likely😀
Where is that sprite from? Looks a lot like an explosion animation I've seen in some 90's games, like C&C Red Alert.
Morally under 64k then even if not technically!
The video didn't mention any exe packer though :) I wonder if it's really uncompressed.
The N64brew Game Jam #6 has come to an end, and we've had a total of 28 submissions this year! Our judge team is currently deliberating over each entry to decide which team will pick the charity to send the $3300+ donations we've received. While we wait, let's look through every entry! 🧵
What a cool project. I was surprised that it's all lit by light probes!
The topic of raycasting on MSX has come up. A while ago I made a renderer for MSX 1 which looks like a raycaster (so-called backwards projection) but in fact it's a rasterizer (forward projection). This reduces the per-column cost significantly. Doom worked this way. www.youtube.com/watch?v=8l52...
I got "The Sprite Decade". It's such a relaxing read.
All the work put in by the N64 homebrew and ROM hacking communities is starting to bear fruit :) In the past few years there has been a steady improvement in hardware documentation, SDKs, emulators, Blender addons, and 3D graphics libraries (OpenGL and Tiny3D for libdragon, F3DEX3 for libultra).
Left: Without error dampening. Right: With error dampening by a factor of 0.8 whenever the accumulated per-pixel error (squared magnitude) exceeds 0.02.
The above had other changes too (serpentine scan order and a different gamma curve) so here's a comparison of only the error dampening's effect.
Left: Original Floyd-Steinberg dithering. Sky gradients are reproduced with very distracting pixel patterns. Right: With large error dampening added, only moderate noise appears.
A cool trick to clean up error diffusion dithering: if the per-pixel accumulated error goes over a certain threshold, dampen it by a factor of 0.8. This suppresses bright, sparse pixels. Found this in libimagequant's code.
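In a grayscale Floyd-Steinberg loop the trick is a two-line addition. A sketch of the idea as described above (my own code, not libimagequant's actual implementation):

```python
def fs_dither_dampened(img, w, h, threshold=0.02, damp=0.8):
    """Floyd-Steinberg dithering of a grayscale image (floats in 0..1)
    to 1 bit, with error dampening: when the accumulated per-pixel
    error squared exceeds `threshold`, scale it down by `damp`."""
    err = [0.0] * (w * h)
    out = [0] * (w * h)
    for y in range(h):
        for x in range(w):
            e_in = err[y * w + x]
            if e_in * e_in > threshold:  # the dampening trick
                e_in *= damp
            v = img[y * w + x] + e_in
            q = 1 if v >= 0.5 else 0
            out[y * w + x] = q
            e = v - q
            # Standard Floyd-Steinberg error distribution weights.
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16), (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    err[ny * w + nx] += e * wgt
    return out
```

Deliberately throwing away some error breaks strict error conservation, which is exactly why it suppresses those bright stray pixels: they are the diffusion "paying back" a large accumulated debt all at once.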