Understanding Sub-Pixel (LCD Screen) Anti-Aliased Font Rendering
©2007 Darel Rex Finley. This complete article, unmodified, may be freely distributed for educational purposes.



To understand what sub-pixel rendering is all about, look closely at these three white circles:

       

The circle on the left looks very chunky — it was rendered with no anti-aliasing at all. The circle in the middle looks much smoother and more circular, but it has a softer, blurrier look without the crisp edges of the first circle. The third circle (the one on the right) will, depending on the type of display you have, look sharper than the second circle, but also nicely circular like the second circle — it seems nearly to have the best properties of both the first and second circles. This effect is visible only on LCD displays in which each pixel is composed of separate red, green, and blue elements in a horizontal row.

If you blow the three-circle image up with a paint program, you can see how each pixel is colored:

Now it becomes apparent why the first two circles look the way they do. The first circle has crisp edges because every pixel is black or white — but that’s also its downfall, since the shape is not very circular. The second circle better approximates the shape of a circle by using intermediate greys — but those greys also give it a fuzzy, blurry appearance.

What’s going on with the third circle? It looks very much like the second circle, but it uses colors on its edges, which don’t obviously seem like they should make it look sharper. Look at this next picture, though, and it all becomes clear:

Each pixel is represented on the physical screen by three little vertical bars: one red, one green, and one blue. The first circle just turns the whole RGB triplet completely on or completely off. The second circle varies the strength of each RGB triplet to generate intermediate shades of grey. But the third circle varies the strength of each display element separately to achieve the best possible rendition of a sharp-edged circle. (Due to the arrangement of the RGB elements, this is most effective on the left and right sides of the circle, but has almost no effect on the circle’s top and bottom.)
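
To make the mechanism concrete, here is a minimal sketch of the idea in Python (using NumPy and Pillow; none of this code comes from the article, and the names are just illustrative). It takes a shape drawn at 3x horizontal resolution and maps each group of three adjacent source columns onto the red, green, and blue elements of one output pixel:

    # A rough illustration (my own sketch, not the article's code) of mapping a
    # 3x-horizontal-resolution greyscale image onto the R, G, B elements of an
    # LCD's pixels. Requires NumPy and Pillow.
    import numpy as np
    from PIL import Image

    def subpixel_pack(coverage3x):
        """coverage3x: 2-D uint8 array of shape (height, 3*width); 255 = fully lit."""
        h, w3 = coverage3x.shape
        w = w3 // 3
        rgb = np.zeros((h, w, 3), dtype=np.uint8)
        rgb[:, :, 0] = coverage3x[:, 0:3*w:3]   # leftmost column of each triple -> red element
        rgb[:, :, 1] = coverage3x[:, 1:3*w:3]   # middle column -> green element
        rgb[:, :, 2] = coverage3x[:, 2:3*w:3]   # rightmost column -> blue element
        return Image.fromarray(rgb, "RGB")

    # Example: a circle drawn at 3x horizontal resolution becomes a 60x60 image
    # whose edges light individual red, green, and blue elements, like the third circle.
    yy, xx = np.mgrid[0:60, 0:180]
    mask = ((xx / 3.0 - 30) ** 2 + (yy - 30) ** 2 <= 25 ** 2)
    subpixel_pack(mask.astype(np.uint8) * 255).save("subpixel_circle.png")

On a screen whose elements are not ordered red, green, blue from left to right, the channel assignment in the three indexed lines would have to change to match.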

Here’s a sample of how Mac OS X uses this effect in some text. (Windows does the same thing, under the moniker “ClearType.” I don’t think Apple has a special name for it; they just do it.)

And here’s what that actually does with the RGB display elements — notice how crisp the shapes of the “e”, “w”, and “y” are, as compared to the whole-pixel rendition above.

 

Photos

I shot these with a cheapo Radio Shack 30X pocket microscope and a 1-megapixel camera:

 

RGB and Brightness Perception

How beneficial is this effect, really? Study this sample, which has black-and-white on top, greyscale in the middle, and sub-pixel rendering on the bottom, and judge for yourself:

Although R, G, and B each play equally important roles in the perception of hue (color) in human vision, they do not play equal roles in the perception of overall brightness. (See the HSP page for more detail about exactly how that works.) The green elements play the biggest role in determining how bright a pixel looks, with the red element playing a significant role too, and the blue element playing a very small role. The sub-pixel font rendering technique described above would work best if all three colors played equal roles in the perception of pixel brightness. Since they don’t, claims that sub-pixel rendering “triples the horizontal resolution” are exaggerated.
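
As a rough illustration of how unequal those contributions are, here is my own quick snippet applying the common BT.601-style luma weights (about 0.30 red, 0.59 green, 0.11 blue; other brightness models use slightly different numbers, but green always dominates) to a fully lit red, green, and blue element:

    # My own quick illustration, not from the article. Weights are the common
    # BT.601 luma coefficients; the exact values depend on the brightness model.
    R_W, G_W, B_W = 0.299, 0.587, 0.114

    for name, (r, g, b) in [("red", (255, 0, 0)), ("green", (0, 255, 0)), ("blue", (0, 0, 255))]:
        print(f"full-intensity {name:5s} element: perceived brightness ~{R_W*r + G_W*g + B_W*b:.0f} / 255")

    # red ~76, green ~150, blue ~29 -- a lone blue element adds shape detail
    # while contributing almost nothing to how bright the pixel looks.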

For example, suppose hypothetically that only the green element affected brightness perception, and the other two elements affected perceived hue only. (So with the green elements off, the screen would look black irrespective of the intensity of the red and blue elements.) In that case, sub-pixel antialiasing would be useless; horizontal resolution would not increase. If, on the other hand, R, G, and B contributed equally to brightness perception, then sub-pixel antialiasing would indeed realize a 3x increase in horizontal resolution.

Since the facts lie somewhere in between those two extremes, we can infer that the improvement is somewhere in between 1x (no improvement) and 3x (maximum theoretical improvement). If two of the three elements contributed equally to brightness, and the third not at all, the effect would perhaps be a 2x improvement. Since in fact G contributes heavily, R much less, and B almost none, 1.5x seems like a reasonable guess as to the true benefit. And even that benefit is horizontal only — vertical resolution is not improved at all. A 1.5x improvement on one axis only is approximately equivalent to a 1.22x bump in screen resolution (both axes). And, since a grid of pixels displayed as elongated rectangles does not yield as clear an image as the same number of pixels arranged in a square grid (thanks to the law of diminishing returns on each axis), we’ll (somewhat arbitrarily) knock that benefit down by half to 1.11x. (And let’s not even talk about annoying color fringes, or the uneven positioning of the red pixels between the green ones...) Notice how close we’re getting to 1x (no benefit)? This is probably why many people say they don’t see a benefit, or only a slight one.
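
For what it’s worth, here is the arithmetic behind those last two numbers, spelled out (a 1.5x gain on one axis matches the pixel count of a √1.5 ≈ 1.22x gain on both axes, and halving that gain leaves about 1.11x):

    # The arithmetic from the paragraph above, spelled out.
    import math

    horizontal_gain = 1.5                      # guessed benefit, horizontal axis only
    both_axes = math.sqrt(horizontal_gain)     # same pixel count as sqrt(1.5)x on each axis
    print(round(both_axes, 2))                 # 1.22
    discounted = 1 + (both_axes - 1) / 2       # knock the remaining gain down by half
    print(round(discounted, 2))                # 1.11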

 

BS From MS

Microsoft’s ClearType page goes a step beyond merely exaggerating the benefit as “triple” — they do a decent job of describing sub-pixel anti-aliasing, but then they visually demonstrate its benefits by comparing it not to a greyscale sample, but to a black-and-white one:

 

If they reworked their examples to compare ClearType against greyscale, probably a lot of people viewing the webpage wouldn’t see any obvious difference (even if they had the right type of display), and would think, “What kind of BS is Microsoft trying to push here?” So MS just compared it to black-and-white for a really pulse-pounding improvement. That’s real BS, but most readers likely won’t catch it.

 

Sub-Pixels and the iPhone

Does the iPhone use sub-pixel anti-aliasing? Does it even have a screen that could support it? I got hold of one just long enough to examine it with that 30X microscope, and found that although the screen definitely has the RGB triplets that could support sub-pixel rendering, it appears that the iPhone’s software does not use it. Note that if it did, it would have to perform a very different form of sub-pixel rendering when the iPhone was being used in landscape orientation. I studied it in both portrait and landscape, and didn’t see any evidence of sub-pixel font rendering. Perhaps the iPhone’s superb resolution (163 pixels per inch) makes it unnecessary — everything looked great when viewed normally (with the unaided eye).

 

Applying Sub-Pixel Detail to Your Own Image in Photoshop

1. Start with any image rendered at 3x the scale you want to achieve when you’re done. This example is black-and-white, but you can use anything, including color photographs. Important: Your image must have real detail at this scale — you can’t just blow up your material to 3x its original size and expect this technique to work; it won’t.

2. Make sure your image has horizontal and vertical dimensions that are divisible by 3 — pad the image if necessary to achieve this.

3. Create a new layer and fill it with an RGB-striped pattern exactly like this one:

4. Move the RGB-striped layer behind the image layer (you may need to rename the image layer first if it’s currently titled “Background”), then change the mode of the image layer to “Multiply.”

5. Flatten the image so it has only one layer.

6. Resize the image to exactly 33.33% of its current size. You must use the “Bilinear” setting for your resize. Now the image looks like this:

The vertical color stripes through the middle of the image are due to a bug in Photoshop. Not sure what you can do about that; maybe Adobe will fix it someday.

7. The image has 1/3 the brightness it’s supposed to have, so use the “Levels” control and change the upper input level from 255 to 85, like so:

Done!
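
If you’d rather script those steps than click through Photoshop, here is a rough Python equivalent using NumPy and Pillow (my own sketch, not something from this article). The filenames are placeholders, the stripe pattern is assumed to repeat red, green, blue from the left edge, and Pillow’s bilinear resample won’t match Photoshop’s exactly, but the idea is the same:

    # Rough scripted equivalent of steps 2-7 above. "input_3x.png" and
    # "output_subpixel.png" are placeholder filenames.
    import numpy as np
    from PIL import Image

    src = Image.open("input_3x.png").convert("RGB")
    w, h = src.size
    assert w % 3 == 0 and h % 3 == 0, "pad the image so both dimensions divide by 3 (step 2)"

    img = np.asarray(src, dtype=np.float32) / 255.0

    # Steps 3-4: multiply the image by a repeating vertical R/G/B stripe pattern.
    stripes = np.zeros_like(img)
    stripes[:, 0::3, 0] = 1.0   # red stripes
    stripes[:, 1::3, 1] = 1.0   # green stripes
    stripes[:, 2::3, 2] = 1.0   # blue stripes
    masked = (img * stripes * 255).astype(np.uint8)

    # Step 6: resize to exactly one third of the original size, bilinear.
    small = Image.fromarray(masked).resize((w // 3, h // 3), Image.BILINEAR)

    # Step 7: the result is 1/3 as bright as it should be, so multiply by 3
    # (the same thing the Levels change from 255 to 85 accomplishes).
    out = np.clip(np.asarray(small, dtype=np.float32) * 3.0, 0, 255).astype(np.uint8)
    Image.fromarray(out).save("output_subpixel.png")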

Here’s a comparison of an image that was shrunk to 1/3 its original size with simple bicubic resizing versus the sub-pixel method above:

bicubic vs. sub-pixel

 

Update 2017.04.10: Here’s a good example image provided to me by Amaroq Dricaldari.

 
