For the record. The original picture (resized and sharpened):
Conversions with Nitsuja's version of Scolorq (filter=3, presumably gamma=1.0).
From left to right: Dithering level = 1.25, 1.0, 0.75, 0.6 and 0.5 respectively.
Conversions using Yliluoma-2 dithering @ 32-color mix (no precombines), with gamma=1.0 (NOTE: gamma 1.0 should not be used; it is wrong):
From left to right: original; RGB; CIE 76; CIE 94; CIEDE 2000.
The same, with gamma=2.0:
The same, with 4 colors (pointless, except to illustrate the different ΔE formulae):
The same, with 2 colors:
A selection of error diffusion filters, gamma 1.0:
From left to right: Original, Floyd-Steinberg, Jarvis-Judice-Ninke, Sierra-3 and Sierra-2-4A
The same, with gamma 2.0:
Finally, Yliluoma-2 dithering with gamma=2, with 16 colors, from 16 precombined maximum-16-color unique color mixes, at 16x16 matrix, with different delta-E formulae:
From left to right, RGB; CIE 76; CIE 94; CIEDE 2000; CMC; BFD. Each of these small images took somewhere from 10 to 30 minutes to render (on a 4-core machine), but the result is definitely worth it. Lifting the unique-color restriction would possibly yield even better results. (Note that as of this posting they are still rendering, but will come up.)
(Do you now see why I prefer ordered dithering over Floyd-Steinberg?)
(Disclaimer: It is possible, though, that my implementation of the error-diffusion dithers is buggy.)
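For reference, the two cheapest of the distance metrics compared above can be written down in a few lines. This is an illustrative sketch of my own, not the code used for these renders; it assumes the standard sRGB transfer curve and D65 white point for the Lab conversion.

```python
import math

def srgb_to_lab(rgb):
    """Convert an sRGB triplet (0..255) to CIE L*a*b* (D65 white point)."""
    def lin(c):  # undo the sRGB transfer curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    # Linear RGB -> XYZ (standard sRGB matrix)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # XYZ -> Lab
    def f(t):
        return t ** (1 / 3) if t > 216 / 24389 else (24389 / 27 * t + 16) / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e_rgb(a, b):
    """Plain Euclidean distance in (non-linear) RGB."""
    return math.dist(a, b)

def delta_e_cie76(a, b):
    """CIE 76: Euclidean distance in L*a*b*."""
    return math.dist(srgb_to_lab(a), srgb_to_lab(b))
```

The more expensive formulae (CIE 94, CIEDE 2000, CMC, BFD) all start from the same Lab coordinates and add weighting terms on top.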
Appendix:
16 colors; RGB and CIE76 side-by-side. Gamma=2.0
Floyd-Steinberg:
Scolorq (gamma=1.0, RGB, default options; the original is shown alongside to show how the dithered version comes out lighter due to the wrong gamma):
4 colors (gamma=2.0, RGB and CIE76):
2 colors
4 colors, with fewer choices and more duplicates
2 colors, with fewer choices and more duplicates
Mega Man at 64 colors (minimal precombines to make it fast) at RGB, CIE76, CIEDE2000, CMC, BFD, gamma 2.2:
Same, but 4 colors:
Same, but 2 colors:
Fewer colors are preferred if you want to increase the chances of Mario Paint's predefined dithering brushes being used.
Note: When I say "64 colors" in this context, it means that for each pixel, a selection of 64 colors is formed from the 15-color palette. From that selection, the color is selected according to the dithering matrix. It does not mean that the image has 64 colors.
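The per-pixel candidate idea described in the note can be sketched roughly as follows. This is a much-simplified illustration of the principle, not the actual implementation (the real algorithm penalizes mixes of dissimilar colors, among other refinements); the function names and the squared-RGB error metric are my own choices for the sketch.

```python
BAYER_8x8 = [  # standard 8x8 Bayer matrix, values 0..63
    [ 0, 48, 12, 60,  3, 51, 15, 63],
    [32, 16, 44, 28, 35, 19, 47, 31],
    [ 8, 56,  4, 52, 11, 59,  7, 55],
    [40, 24, 36, 20, 43, 27, 39, 23],
    [ 2, 50, 14, 62,  1, 49, 13, 61],
    [34, 18, 46, 30, 33, 17, 45, 29],
    [10, 58,  6, 54,  9, 57,  5, 53],
    [42, 26, 38, 22, 41, 25, 37, 21],
]

def luma(c):
    r, g, b = c
    return 0.299 * r + 0.587 * g + 0.114 * b

def build_candidates(pixel, palette, n=64):
    """Greedily pick n palette colors (repeats allowed, i.e. mixes)
    so that their running average approaches the target pixel."""
    cands, acc = [], [0.0, 0.0, 0.0]
    for i in range(n):
        best, best_err = None, None
        for p in palette:
            # Error of the mix if p were added as the (i+1)-th candidate.
            mix = [(a + pc) / (i + 1) for a, pc in zip(acc, p)]
            err = sum((m - px) ** 2 for m, px in zip(mix, pixel))
            if best_err is None or err < best_err:
                best, best_err = p, err
        cands.append(best)
        acc = [a + bc for a, bc in zip(acc, best)]
    return sorted(cands, key=luma)  # the matrix value indexes into this

def dither_pixel(pixel, x, y, palette, n=64):
    cands = build_candidates(pixel, palette, n)
    index = BAYER_8x8[y % 8][x % 8] * n // 64
    return cands[index]
```

So "64 colors" means the candidate list has 64 entries; the output pixel is always one of the 15 palette colors, chosen from that list by the matrix threshold.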
It would also be possible to create custom dithering brushes according to whichever dithering patterns are the most common ones in the source picture (assuming you still have enough one-pixel brushes to complete the image). I considered this option, but could not figure out how to achieve it optimally. This possibility, however, exists only when using ordered dithering.
EDIT: Oh, one more thing I forgot. It is possible to mix ordered dithering and error diffusion! It works by diffusing the error that remains after ordered dithering, as opposed to diffusing the error that remains after nearest-color quantization.
From left to right: Yliluoma-2 dithering, Yliluoma-2 + Floyd-Steinberg, Floyd-Steinberg. Gamma = 2.2, colors=15, precombine=minimal.
The fourth image is from Nitsuja's version of Scolorq (gamma presumably 1.0) and the fifth is the original (resized + sharpened).
Same, with 4 colors:
Same, with 2 colors & minimum premixes:
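The hybrid scheme described in the EDIT above can be sketched like this: the ordered-dither threshold makes the color decision, and whatever error remains after that decision is diffused with Floyd-Steinberg weights. To keep the sketch short it handles grayscale only, with a fixed {0, 255} palette; this is an illustration of the principle, not my actual implementation.

```python
BAYER_4x4 = [[ 0,  8,  2, 10],
             [12,  4, 14,  6],
             [ 3, 11,  1,  9],
             [15,  7, 13,  5]]

def hybrid_dither(img):
    """img: list of rows of grayscale values 0..255; returns rows of 0/255."""
    h, w = len(img), len(img[0])
    buf = [[float(v) for v in row] for row in img]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Ordered-dither decision: compare against the Bayer threshold.
            threshold = (BAYER_4x4[y % 4][x % 4] + 0.5) / 16 * 255
            out[y][x] = 255 if buf[y][x] > threshold else 0
            # Diffuse the error that remains AFTER the ordered decision
            # (Floyd-Steinberg weights), not the nearest-color error.
            err = buf[y][x] - out[y][x]
            for dx, dy, wgt in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                                (0, 1, 5 / 16), (1, 1, 1 / 16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    buf[ny][nx] += err * wgt
    return out
```

Because the threshold already varies with the matrix, the diffused residual is smaller than in plain Floyd-Steinberg, which is why the result retains much of the ordered pattern.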
-----------
Conclusions:
― Any proper dithering algorithm should operate on gamma-corrected RGB values rather than on linear RGB values. Failure to do this will produce an image that is obviously lighter in tone than the original. The error is most pronounced when the dithered colors differ significantly from each other.
― With static images, ordered dithering can for certain images produce pictures at least as good as error-diffusion dithering, but for many images it is vice versa.
― All error-diffusion dithers, except for Scolorq, seem to suffer from a bias towards gray values. Gray is apparently a good approximation for most colors. Therefore, where error diffusion is deemed appropriate over ordered dithering, I recommend using Scolorq, except where its lack of gamma correction is too obvious.
― Especially at low color-candidate counts, ordered dithering produces distinctive, repetitive patterns, which may be very beneficial in optimizing the production of the picture when patterned brushes are available (such as in Mario Paint).
― Of the six color-difference formulae (ΔE), a Euclidean distance in the RGB space appeared to be sufficient for most purposes. In a few cases, a Euclidean distance in the CIE L*a*b* space (a.k.a. CIE 76) produced better results. The more expensive formulae (CIE 94, CIEDE 2000, CMC, BFD) produced no advantage that would justify their significantly more expensive calculation. (Indeed, BFD often seemed to produce even inferior results.)
― For customized needs, it is often warranted to do extensive testing with different values for gamma and for color collection/mixing, in order to produce the image that is best for the particular need. Remember that the eye is more sensitive to local differences in color/tone than to global ones. Therefore, it is forgivable to tweak the colorscape of the entire image at once (such as by darkening, brightening, or colorizing it), if it helps bring out a more accurate local contrast. Such testing was not done here.
― For animation, ordered dithering should always be used rather than error-diffusion dithering. To prove the point, study these three examples (from DemonStrate's Portal Done Pro speedrun, rendered in the Mario Paint palette):
――
http://bisqwit.iki.fi/kala/snap/mp/pdp_fs.gif (1.3 MB, Floyd-Steinberg @gamma=2)
――
http://bisqwit.iki.fi/kala/snap/mp/pdp_sc.gif (1.5 MB, Scolorq @gamma=1)
――
http://bisqwit.iki.fi/kala/snap/mp/pdp_y2.gif (913 kB, Yliluoma-2 @gamma=2).
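To make the first conclusion concrete, here is a small sketch (my own illustration, assuming the standard sRGB transfer curve) of why gamma-ignorant dithering comes out lighter: the eye sees a 50/50 mix of black and white pixels as the average of their linear intensities, which corresponds to a considerably higher sRGB value than the naive average of 128.

```python
def srgb_decode(v):
    """Encoded sRGB 0..255 -> linear intensity 0..1."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_encode(lin):
    """Linear intensity 0..1 -> encoded sRGB 0..255."""
    c = lin * 12.92 if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
    return round(c * 255)

# What the eye sees in a 50/50 black/white checkerboard: the average of
# the LINEAR intensities, re-encoded for comparison with sRGB values.
seen = srgb_encode((srgb_decode(0) + srgb_decode(255)) / 2)

# A gamma-ignorant ditherer uses that checkerboard to represent encoded
# value 128, whose true linear intensity is only about 0.216, so the
# dithered area looks clearly lighter than intended.
target_linear = srgb_decode(128)
```

Here `seen` comes out around 188, far above 128, which is exactly the lightening visible in the gamma=1.0 renders above.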