From Wikimedia Commons, the free media repository

Viewing an image at 1:1 magnification (100%, i.e. pixel per pixel) on a computer screen brings out every tiny flaw. Even the smallest amount of noise, grain or blurriness becomes apparent at this size. You may have noticed, however, that many of those flaws go away when you view the image at a somewhat smaller magnification. So it may be tempting to "improve" an image by downsampling it (scaling it to a smaller size). But is this really a good idea?


Downsampling is effectively a form of blurring. Blur algorithms for digital images are all based on the computation of some type of average over the values of neighbouring pixels.
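To make the "average over neighbouring pixels" idea concrete, here is a minimal sketch of a box blur, the simplest such algorithm. It is an illustration of the general principle, not the specific filter any particular image editor uses; the function name and the greyscale NumPy representation are assumptions for the example.

```python
import numpy as np

def box_blur(img, radius=1):
    """Blur a greyscale image by replacing each pixel with the mean of its
    (2*radius + 1) x (2*radius + 1) neighbourhood.

    At the image borders the window is clamped to the valid region,
    so edge pixels average over fewer neighbours.
    """
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out
```

A flat region is unchanged by the averaging, while any isolated detail (noise, but also genuine fine structure) gets smeared over its neighbours, which is exactly why blurring both hides flaws and destroys detail.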

Consider for example scaling a digital photograph to 50% (for ease of computation) of its original size. The resulting image will have only half the number of pixels horizontally and vertically. Essentially, each pixel of the resized image is the result of taking a 2x2 pixel block from the original image and averaging the four colours. Obviously, blurring an image in this manner will hide many of the flaws of the original image, but it also degrades the overall quality.
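The 2x2 block averaging described above can be sketched in a few lines of NumPy. This is a simplified model of 50% downsampling (real resizing filters are usually more elaborate); the function name and the even-dimensions assumption are for illustration only.

```python
import numpy as np

def downsample_2x(img):
    """Scale a greyscale image to 50% by averaging each 2x2 block of pixels.

    Assumes height and width are both even. Reshaping groups the image
    into 2x2 blocks; the mean over axes 1 and 3 averages each block
    down to a single pixel.
    """
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

Each output pixel is the mean of four input pixels, which is precisely the kind of neighbourhood average that a blur filter computes.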

Why does it seem to look better?

If downsampling means blurring an image, then why doesn't it look blurred? Basically, that's all due to "unfair" viewing conditions. At 100% magnification, the downsampled image appears smaller than the original (because it has fewer pixels), so you can't see the damage that was done!

To make a meaningful comparison, both images need to be viewed at the same size. So if you are viewing the original image at 100% magnification, this means viewing the 50% downsampled image at 200% magnification.

An example

1. Original
2. Resampled to 50%, shown at same size
3. Denoised and sharpened from original

A picture is worth a thousand words, so let's look at an example. Image number 1 is cropped from the original photograph and displayed at 100% magnification. Number 2 is "improved" by downsampling the image. For a meaningful comparison, both images are shown here at the same size. In this manner it is evident that the downsampled image is indeed blurrier than the original. Not really an improvement, is it?

For comparison, image 3 shows a denoised and slightly sharpened version of the original image. Even basic noise filtering algorithms attempt to preserve as much of the image as possible, rather than indiscriminately blurring everything. In this example, a more sophisticated algorithm was applied (GREYCstoration, an open-source denoiser). Likewise, sharpening is certainly not achieved by blurring an image, and thus in particular not by downsampling. (Some sharpening methods like "unsharp mask" do involve a blurred copy of the image, but it is used to increase local contrast, not to blur the final output.)
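The unsharp mask idea mentioned above can be sketched briefly: subtract a blurred copy from the image to isolate local detail, then add a fraction of that detail back. This is a simplified illustration (a plain 3x3 box blur stands in for the Gaussian blur that real implementations typically use), and the function name and parameters are assumptions for the example.

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Sharpen a greyscale image: out = img + amount * (img - blurred).

    The blurred copy is computed with a simple 3x3 box blur over an
    edge-padded image; the result is clipped to the 0-255 range.
    """
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode="edge")
    # Mean of the 3x3 neighbourhood: sum of the nine shifted views / 9.
    blurred = sum(p[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + amount * (img - blurred), 0, 255)
```

Note that the blur only serves to estimate the local average: where the image differs from that average (i.e. at edges and detail), the difference is amplified, so the output is sharper than the input, not blurrier.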