A variety of closely related techniques are often applied to multiple images, for example video frames, in order to improve the quality of the final image.
Unfortunately, the name given to the method does not always make it clear precisely what technique is being used. This is important because similar methods that differ in detail may lead to significantly different results.
Most astronomical imaging, indeed most imaging in general, produces an array of point brightness values (pixel values). Each pixel value combines three components: a representation of the brightness of a point in the sky, a random noise value, and a systematic error value related to the position on the imaging device.
The systematic values are present in all comparable images whether or not any light falls on the camera target. It is therefore usual to make both a light (shutter open) and a dark (shutter closed) version of the scene and to subtract the dark image from the light image to remove the systematic errors. Unfortunately, because there is a random noise component in both the light image and the dark image, subtracting the dark frame actually increases the random noise by a factor of the square root of two (about 41% worse). The techniques for reducing the random noise contributed by the dark frame are identical to those for reducing the random noise in the light image that we are about to consider.
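As a rough illustration, the effect of dark-frame subtraction on the noise can be simulated; the frame size, signal level and noise figures below are invented for the purpose:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented figures: a flat 40-unit signal, a fixed systematic pattern
# present in every frame, and random noise of standard deviation 3
# regenerated for each exposure.
signal = 40.0
fixed_pattern = rng.uniform(0, 20, (64, 64))

def capture(light):
    base = signal if light else 0.0
    return base + fixed_pattern + rng.normal(0, 3, (64, 64))

light_frame = capture(light=True)
dark_frame = capture(light=False)

# The subtraction removes the fixed pattern exactly, but the two
# independent noise terms combine, raising the noise by sqrt(2).
corrected = light_frame - dark_frame
print(corrected.mean())  # close to the true signal, 40
print(corrected.std())   # about 3 * sqrt(2), i.e. ~4.24, not 3
```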
Once the systematic errors in the image or frame are removed, we can address the problem of the random noise. If we add a large number of otherwise identical image frames together, the contribution to each pixel value from the sky brightness at the associated point will add in proportion to the number of images added. The random noise, however, being uncorrelated, will add in proportion to the square root of the number of images. Accordingly, the ratio of the wanted brightness signal to the unwanted noise will improve in proportion to the square root of the number of images added. If we add 100 images together, the signal-to-random-noise ratio will improve by a factor of 10 compared with any one image.
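The square-root improvement can be checked with a small simulation; the pixel value, noise level and trial count here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n_frames, n_trials = 5.0, 100, 10000

# Many repeated experiments: each row is one stack of 100 noisy
# readings of the same pixel (true value 50, noise sigma 5).
trials = 50.0 + rng.normal(0, sigma, (n_trials, n_frames))

single_error = trials[:, 0].std()        # noise of one frame, ~5
stack_error = trials.mean(axis=1).std()  # noise after averaging 100
print(single_error / stack_error)        # close to sqrt(100) = 10
```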
Of course, this assumes that the contributing image frames are correctly aligned. If the capturing telescope has perfect tracking, then the images will all be coincident. If not, then part of the combinatorial process must include bringing the images into alignment. Software exists to do this manually or automatically.
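One common automatic approach is phase correlation, which estimates the whole-pixel shift between two frames from their Fourier transforms. The sketch below is a generic illustration of that idea, not the method of any particular stacking package:

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_shift(ref, img):
    # Cross-power spectrum of the two frames; its inverse transform
    # peaks at the (whole-pixel, circular) shift of img relative to ref.
    F = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.abs(np.fft.ifft2(F / np.abs(F)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold large positive offsets back to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

ref = rng.random((128, 128))
shifted = np.roll(ref, (5, -3), axis=(0, 1))
print(estimate_shift(ref, shifted))  # (5, -3)
```

Once the shift is known, each frame can be translated back into register before the pixel values are combined.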
So far, the discussion has not depended on the detailed method of adding the pixel values together. Part of the confusion lies in the terminology used and partly in the distinction between the mathematical model and the practical implementation.
In general, the pixel values emerging from a capture device are encoded into integer values that have to lie within a certain range dictated by the number of bits used. CCD imagers manufactured specifically for astronomical imaging usually deliver images in 12 or 16 bit pixel values. The web cams and video cameras which feature so prominently in QCUIAG discussions at best deliver 8-bit pixel values. In the case of colour cameras, there are usually 3 x 8-bit values for the three colour planes.
When we seek to add these image frames together we must address the issue of precisely what method to employ. A number of solutions have been offered.
Method 1 - Add, divide, then save as 8-bit image
If the pixel values in the image frames are added, each sum is divided by the number of images, and the results are saved as integer values in an 8-bit image, then the overall average magnitude of the pixel values remains the same and there is no danger of saturating the pixel values (i.e. exceeding the permitted value in the bitfield) in the resulting array. Note, however, that the division back to integers discards any fractional part of the average.
Method 2 - Add, then save as 16 or 32-bit image
If the pixel values in the images are simply added together, without division, then the absolute magnitude of the pixel values will quickly exceed the capacity of an 8-bit field. The solution is to store the pixel values in a larger bit field: a 16-bit integer can accommodate the sum of 256 images which begin as 8-bit values, and a 32-bit integer the sum of about 16 million.
Method 3 - Add, divide then save as a floating point image
This is identical in information content to Method 2, differing only by a scale factor: because floating-point values retain the fractional part, dividing by the number of images discards nothing.
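The three methods can be summarised in code; the frame values below are invented, and the dtype choices follow the bit-field discussion above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ten hypothetical 8-bit frames of the same scene.
frames = rng.integers(0, 40, size=(10, 64, 64), dtype=np.uint8)

# Method 1: add, divide by N, store back as 8-bit (fractions discarded).
method1 = (frames.sum(axis=0) // len(frames)).astype(np.uint8)

# Method 2: plain sum in a wider integer field (16 bits hold up to 256
# 8-bit frames; 32 bits hold about 16 million).
method2 = frames.sum(axis=0, dtype=np.uint32)

# Method 3: divide as in Method 1, but keep floating point, so the
# result differs from Method 2 only by a constant scale factor.
method3 = frames.sum(axis=0, dtype=np.float64) / len(frames)

print(np.allclose(method3 * len(frames), method2))  # True
```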
Comparison of the three methods
Proponents of Method 1 argue that since display screens only support 8 bits of intensity in each colour plane, there is no point in saving the data with more precision. This is valid only if the sole ambition for the image data is to display it with linear contrast. They also point to the benefit that 8-bit image files are readily manipulated by popular graphical editing applications.
Proponents of Method 2 argue that much more information is preserved than in Method 1. In the simple case of two images in which a certain pixel had the value 4 in one image and 5 in the other, the Method 2 result (9) contains more information than the Method 1 result (4), which is indistinguishable from two pixels of value 4. Since astronomical image analysis and processing is substantially different from routine graphical editing, the need for a specialised processing application is not regarded as a disadvantage.
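The two-pixel example works out as follows:

```python
# Pixel reads 4 in one frame and 5 in the other.
a, b = 4, 5

method1 = (a + b) // 2   # 8-bit integer average -> 4
method2 = a + b          # sum -> 9

# Two pixels of value 4 give the same Method 1 result, so the
# distinction is lost; Method 2 keeps it (8 vs 9).
same = (4 + 4) // 2      # also 4
print(method1, method2, same)
```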
Method 2 allows for greater precision in photometric or astrometric analysis and also permits non-linear treatment of the brightness range to bring within the range of display technology a much wider range of real sky brightness. What this means is that suitable processing can show bright and dark parts of a nebula or galaxy on a single picture, even if the ratio of true brightness is greater than 255:1. This information has already been discarded in Method 1.
It might be argued that because the contributing images contain pixels which are only 8-bit integers, the summed image cannot incorporate more than 256 distinct levels. This would be true in the perfectly noiseless case. In practice, the addition of random noise to the signal means that the sum of many pixels whose true value lies part way between two quantization levels will yield a result appropriately intermediate between the sums of the two source values. In effect, the noise provides a means of interpolating between the quantization steps when the pixel values are added over many images. In all practical cases the noise is sufficiently large to perform this task in a linear manner; if it were not so, then we would not be considering methods of improving the signal-to-noise ratio.
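This dithering effect is easy to demonstrate; the true value and noise level below are chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(4)

true_value = 100.4  # falls between two 8-bit quantization levels
sigma = 2.0         # random noise, comfortably above one level

# Each frame quantizes (true value + noise) to an integer, as an
# 8-bit camera would.
frames = np.rint(true_value + rng.normal(0, sigma, 10000))

# Averaged over many frames, the noise dithers across the quantization
# steps and the fractional part is recovered.
print(frames.mean())  # close to 100.4

# Without noise every frame would read the same integer, 100, and the
# fractional part would be unrecoverable.
noiseless = np.rint(np.full(10000, true_value))
print(noiseless.mean())  # exactly 100.0
```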
The AstroVideo program uses Method 2 by default but has the option of saving at reduced brightness resolution (Method 1) if required.
Method 3 retains the advantages of Method 2 but is not in widespread use for amateur astronomical applications.
Finally, it should be noted that these differences really arise from the format used to save the data. An image processing application that performs the whole process internally, including the addition or averaging, equivalent contrast adjustment and final display, while preserving the full dynamic range of the data, will approach the ideal mathematical model more closely and will therefore produce identical results from either addition or averaging.