Image Noise
Perry Sprawls, Ph.D.

Online Textbook

Table of Contents

 

INTRODUCTION AND OVERVIEW
EFFECT ON VISIBILITY
QUANTUM NOISE
RECEPTOR SENSITIVITY
    Screen-Film Radiography
    Intensified Radiography
    Digital Radiography
    Fluoroscopy
GRAIN AND STRUCTURE NOISE
ELECTRONIC NOISE
EFFECT OF CONTRAST ON NOISE
EFFECT OF BLUR ON NOISE
IMAGE INTEGRATION
    Human Vision
    Video Camera Tubes
    Digital Processing
IMAGE SUBTRACTION

 

INTRODUCTION AND OVERVIEW


    It is generally desirable for image brightness (or film density) to be uniform except where it changes to form an image. There are factors, however, that tend to produce variation in the brightness of a displayed image even when no image detail is present. This variation is usually random and has no particular pattern. In many cases, it reduces image quality and is especially significant when the objects being imaged are small and have relatively low contrast. This random variation in image brightness is designated noise.

   All medical images contain some visual noise. The presence of noise gives an image a mottled, grainy, textured, or snowy appearance. The figure below compares two images with different levels of noise. Image noise comes from a variety of sources, as we will soon discover. No imaging method is free of noise, but noise is much more prevalent in certain types of imaging procedures than in others.


The Image on the Right (B) Has More Noise Than the Image on the Left (A)



   Nuclear images are generally the noisiest. Noise is also significant in MRI, CT, and ultrasound imaging. In comparison to these, radiography produces images with the least noise. Fluoroscopic images are slightly noisier than radiographic images, for reasons explained later. Conventional photography produces relatively noise-free images except where the grain of the film becomes visible.

   In this chapter we consider some of the general characteristics of image noise along with the specific factors in radiography and fluoroscopy that affect the amount of noise.

 

EFFECT ON VISIBILITY


   Although noise gives an image a generally undesirable appearance, the most significant factor is that noise can cover and reduce the visibility of certain features within the image. The loss of visibility is especially significant for low-contrast objects. The general effect of noise on object visibility was described in the first chapter (Image Characteristics and Quality) and illustrated in the figure in that chapter titled, "Effect of Noise on Object Visibility." The visibility threshold, especially for low-contrast objects, is very noise dependent. In principle, when we reduce image noise, the "curtain" is raised somewhat, and more of the low-contrast objects within the body become visible.

  Question to consider: If the noise level can be adjusted for a specific imaging procedure, then why not reduce it to its lowest possible level for maximum visibility?
Although it is true that we can usually change imaging factors to reduce noise, we must always compromise. In x-ray imaging, the primary compromise is with patient exposure and dose; in MRI and nuclear imaging, the primary compromise is with imaging time. There are also compromises between noise and other image characteristics, such as contrast and blur. In principle, the user of each imaging method must determine the acceptable level of noise for a specific procedure and then select imaging factors that will achieve it with minimum exposure, imaging time, or effect on other image quality characteristics.

 

QUANTUM NOISE


   X-ray photons impinge on a surface, such as an image receptor, in a random pattern. No force can cause them to be evenly distributed over the surface. One area of the receptor surface might receive more photons than another area, even when both are exposed to the same average x-ray intensity.

   In all imaging procedures using x-ray or gamma photons, most of the image noise is produced by the random manner in which the photons are distributed within the image. This is generally designated quantum noise. Recall that each individual photon is a quantum (specific quantity) of energy. It is the quantum structure of an x-ray beam that creates quantum noise.

   Let us use the illustration below to review the quantum nature of radiation and see how it produces image noise. Here we see the part of an x-ray beam that forms the exposure to one small area within an image. Remember that an x-ray beam is a shower of individual photons. Because the photons are independent, they are randomly distributed within an image area, somewhat like the first few drops of rain falling on the ground. At some points there might be clusters of several photons (drops), and there are also areas where only a few photons are collected. This uneven distribution of photons shows up in the image as noise. The amount of noise is determined by the variation in photon concentration from point to point within a small image area.



The Concept of Quantum Noise

 

   Fortunately we can control, to some extent, the photon fluctuation and the resulting image noise. The illustration above shows two 1-mm square image areas that are subdivided into nine smaller square areas. The difference between the two areas is the concentration of photons (radiation exposure) falling within the area. The first has an average of 100 photons per small square, and the second an average concentration of 1,000 photons per small square. For a typical diagnostic x-ray beam, this is equivalent to receptor exposures of approximately 3.6 µR and 36 µR, respectively.

   Notice that in the first large area none of the smaller areas has exactly 100 photons. In this situation, the number of photons per area ranges from a low of 89 photons to a high of 114 photons. We will not, however, use these two extreme values as a measure of photon fluctuation. Because most of the small areas have photon concentrations much closer to the average value, it is more appropriate to express the photon variation in terms of the standard deviation. The standard deviation is a quantity often used in statistical analysis (see the chapter titled, "Statistics") to express the amount of spread, or variation, among quantities. The value of the standard deviation is somewhat like the "average" amount of deviation, or variation, among the small areas. One of the characteristics of photon distribution is that the amount of fluctuation (standard deviation value) is related to the average photon concentration, or exposure level. The square root of the average number of photons per area provides a close estimate for the value of the standard deviation. In this example the standard deviation has a value of ten photons per area. Since this is 10% of the average value, the quantum noise (photon fluctuation) at this exposure has a value of 10%.
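
   Because the photon counts follow Poisson statistics, this experiment is easy to reproduce numerically. The short Python sketch below (an illustration added here, not part of the original figure) draws nine simulated counts with an average of 100 photons per area; the individual counts will differ from those in the illustration, but the fluctuation comes out close to the square root of the average, or about 10%.

      # Simulate nine small image areas receiving an average of 100 photons each.
      # Photon arrival is random, so the counts follow a Poisson distribution.
      import numpy as np

      rng = np.random.default_rng(seed=1)
      mean_photons = 100
      counts = rng.poisson(mean_photons, 9)        # nine small squares

      print("counts:", counts)
      print("estimated standard deviation:", np.sqrt(mean_photons))   # ~10 photons
      print("quantum noise:", 100 / np.sqrt(mean_photons), "%")       # ~10 %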

   Let us now consider the image area on the right, which received an average of 1,000 photons per area. In this example, we also find that none of the small areas received exactly 1,000 photons. In this case, the photon concentrations range from 964 photons to 1,046 photons per area. Taking the square root of the average photon concentration (1,000) gives a standard deviation of approximately 31.6 photons. It appears we have an even higher photon fluctuation, or noise, than in the other area. However, when we express the standard deviation as a percentage of the average photon concentration, we find that the noise level has actually dropped to about 3.2%.

   We have just observed what is perhaps the most important characteristic of quantum noise; it can be reduced by increasing the concentration of photons (i.e., exposure) used to form an image. More specifically, quantum noise is inversely proportional to the square root of the exposure to the receptor.
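
   The inverse square root relationship can be stated as a simple rule of thumb: quantum noise (as a percentage) is 100 divided by the square root of the average number of photons collected per image area. The brief Python sketch below tabulates this for several photon concentrations; the specific values chosen are only examples.

      # Quantum noise assuming it is set entirely by photon statistics:
      # noise (%) = 100 / sqrt(N), where N is the average photon count per area.
      import math

      for photons_per_area in (100, 400, 1000, 10000):
          noise_percent = 100 / math.sqrt(photons_per_area)
          print(f"{photons_per_area:6d} photons/area -> {noise_percent:4.1f}% quantum noise")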

   The relationship between image noise and required exposure is one of the issues that must be considered by persons setting up specific x-ray procedures. In most situations, patient exposure can be reduced, but at the expense of increased quantum noise and, possibly, reduced visibility. It is also possible, in most situations, to decrease image noise, but a higher exposure would be required. Most x-ray procedures are conducted at a point of reasonable compromise between these two very important factors.

 

RECEPTOR SENSITIVITY


   The photon concentration, or exposure, that is required to form an image is determined by the sensitivity of the receptor. The sensitivities of the receptors used in x-ray projection imaging (radiography and fluoroscopy) vary over a considerable range, as shown in the illustration below. This chart shows the approximate values used for specific imaging applications.


Receptor Sensitivity Values Used in X-Ray Imaging


 

   Screen-Film Radiography


   The sensitivity of a radiographic receptor (cassette) is determined by characteristics of the screen and the film and the way they are matched. The factors that affect receptor sensitivity do not necessarily alter the quantum noise characteristics of the receptor. The major factors that affect radiographic receptor sensitivity are film sensitivity, screen conversion efficiency, and screen absorption efficiency. The quantum noise level is determined by the concentration of photons actually absorbed by the receptor, rather than the concentration of photons delivered to it. Increasing receptor sensitivity by changing any factor that decreases the number of photons actually absorbed will increase the quantum noise.

   The receptor exposure required to form an image (receptor sensitivity) can be changed by modifying several factors, as indicated in the illustration below. Film sensitivity, which is shown to the right in the illustration, determines the amount of light required to produce the desired film density. If film sensitivity is increased to reduce the amount of light required, this, in turn, will reduce the number of x-ray photons that must be absorbed in the screen. The result would be an image with increased quantum noise. Recall that the effective sensitivity of a particular film and screen combination depends on the matching of the spectral sensitivity characteristics of the film to the spectral characteristics of the light produced by the screen. When the two characteristics are closely matched, maximum sensitivity and maximum quantum noise are produced. In radiography, changing the film sensitivity (i.e., changing type of film) is the most direct way to adjust the quantum noise level in images. Quantum noise is usually the factor that limits the use of highly sensitive film in radiography.


Relationship of Radiation Quantities within an Intensifying Screen-Film Receptor



   Conversion efficiency is the characteristic of an intensifying screen that is, in effect, the fraction of absorbed x-ray energy actually converted into light. The conversion efficiency value for a particular screen is determined by its composition and design. It cannot be changed by the user. In principle, a high conversion efficiency increases receptor sensitivity and reduces patient exposure. Unfortunately, an increase in conversion efficiency decreases the quantity of x-radiation that must be absorbed in the screen, and this, in turn, increases quantum noise. Therefore, a high conversion efficiency is not always a desirable characteristic for intensifying screens. It should be adjusted by the manufacturer to a value that produces a proper balance between receptor sensitivity and quantum noise.

   The only way to increase radiographic receptor sensitivity without increasing quantum noise is to increase the absorption efficiency. An increase in absorption efficiency does not change the amount of radiation that must be absorbed to produce an image. It does, however, reduce the required incident exposure since a greater proportion of the radiation is absorbed.

   Recall that several factors determine absorption efficiency: namely, screen composition, screen thickness, and photon energy spectrum. The relationship between radiographic receptor sensitivity and quantum noise can be summarized as follows. The amount of quantum noise in a properly exposed image is determined by the amount of x-ray energy actually absorbed in the intensifying screen. Changing factors that affect absorption efficiency, such as the type of screen material, screen thickness, and kVp (photon energy spectrum), alters the overall receptor sensitivity without changing the quantum noise level. On the other hand, changing film sensitivity, spectral matching, or the conversion efficiency of the intensifying screen changes the quantum noise along with the receptor sensitivity.
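
   This summary can be made concrete with a toy numerical model. The Python sketch below (the efficiency values and the amount of light "needed" by the film are arbitrary illustrations, not data from any real screen-film system) shows that doubling the absorption efficiency halves the required incident exposure without changing the noise, whereas doubling the conversion efficiency halves the exposure but increases the noise.

      # Toy model: the film needs a fixed amount of light; the screen absorbs a
      # fraction of the incident photons (absorption efficiency) and converts a
      # fraction of the absorbed energy into light (conversion efficiency).
      import math

      def receptor(light_needed, absorption_eff, conversion_eff):
          absorbed = light_needed / conversion_eff   # photons the screen must absorb
          incident = absorbed / absorption_eff       # photons that must reach the receptor
          noise = 100 / math.sqrt(absorbed)          # quantum noise, percent
          return incident, noise

      cases = [
          ("baseline",               receptor(100, absorption_eff=0.3, conversion_eff=0.1)),
          ("higher absorption eff.", receptor(100, absorption_eff=0.6, conversion_eff=0.1)),
          ("higher conversion eff.", receptor(100, absorption_eff=0.3, conversion_eff=0.2)),
      ]
      for name, (incident, noise) in cases:
          print(f"{name:24s} relative exposure {incident:6.0f}   noise {noise:4.1f}%")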

   Two screen-film combinations with the same sensitivity are shown below. One system uses a relatively thick high-speed screen and a film with conventional sensitivity. The other system uses a thinner detail-speed screen and a more sensitive film. The images produced by these two systems differ in two respects. The system using the thicker screen has more blur but less quantum noise than the system using the more sensitive film. The reduction in noise comes from the increased absorption efficiency of the thicker screen and from its increased blur.


Comparison of Image Quality between Two Screen-Film Combinations


 

   Intensified Radiography


   Quantum noise is sometimes more significant in intensified radiography done with fluoroscopic systems (both video and spot films) than in screen-film radiography because of generally higher receptor sensitivity values (i.e., lower receptor exposures). This higher sensitivity is obtained by using image intensifier tubes as described in the chapter on fluoroscopy. With such systems, the quantum noise level can be adjusted by the engineer.

   Digital Radiography


    There is a distinct difference between film-screen and digital radiographic receptors with respect to quantum noise.  As we have just seen, the noise level in film-screen radiography is determined primarily by the receptor sensitivity (or speed).  That is determined by the design characteristics of the intensifying screens and film used.

When using a film-screen receptor, the exposure must be set to match the sensitivity of the receptor or the result will be either an underexposed (light film) or an overexposed (dark film) image.  Therefore, all acceptable films, from an exposure and contrast perspective, will be produced with a receptor exposure that is determined by the sensitivity characteristic of the receptor.  The noise level can only be changed by changing the receptor, typically by changing the film to one with a different sensitivity (speed).

Digital radiographic receptors do not have a fixed sensitivity like film-screen receptors.  One of the valuable characteristics of digital receptors is a wide exposure dynamic range, as illustrated below.  This means that images with good contrast characteristics can be produced over a wide range of exposure values.  It is not like radiographs recorded on film, where any deviation from the correct or optimum exposure results in under- or overexposed films.

There are definite advantages to this wide exposure dynamic range.  Exposure errors do not result in images with loss of contrast, as they do with film.  Another advantage is the ability to capture the full range of exposure coming from the patient's body where there are large variations in body density and penetration, such as in the chest.  When the full exposure range is captured, digital processing can then be used to enhance and optimize the contrast.  This is the normal procedure in digital radiography.

Excessive quantum noise is a potential problem in digital radiography because it is possible to produce images with low exposures that still look good as far as contrast is concerned.  This condition is illustrated below by the image on the left, near the lower end of the exposure dynamic range.  The contrast is still good, but the noise is too high.

In principle, a digital radiographic system sets its sensitivity (speed) after the exposure is made so that it corrects for the actual exposure.

In digital radiography it is important that appropriate exposure and technique factors be used for each procedure.  An optimum (correct) exposure is one that produces an image with an acceptable noise level without unnecessary or excessive exposure to the patient.

Digital radiographic systems display, along with the image, an indication of the amount of exposure used to form the image.  Different factors are used by the various manufacturers to display this exposure information.  The "S" factor, as used by one manufacturer, is illustrated below.  The "S" value displayed with an image indicates the effective sensitivity (speed) used by the system for that specific image.

A high S factor (such as 1,000) indicates that the image was formed with a low exposure, and excessive noise would be expected.  A low S factor (such as 50) indicates that an unnecessarily high exposure was used.  The image quality is good because of the low noise, but the patient was subjected to unnecessary exposure.
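
   The behavior of an "S"-type exposure indicator can be sketched in a few lines of Python. The relationship below assumes only that S is inversely proportional to the receptor exposure; the constant of 200 and the exposure values in the example are illustrative assumptions, not the formula of any particular manufacturer.

      # Illustrative only: an exposure indicator of the general form S = k / exposure.
      def s_factor(exposure_mR, k=200):
          return k / exposure_mR

      for exposure in (0.2, 1.0, 4.0):    # low, intended, and excessive exposures (mR)
          print(f"receptor exposure {exposure:4.1f} mR -> S value about {s_factor(exposure):5.0f}")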

 

Images Produced with Different Exposures Throughout the Wide Dynamic Range of a Digital Radiographic Receptor.

 

   Fluoroscopy


  
The receptor sensitivity of a conventional fluoroscope is typically in the range of 1 µR to 10 µR per image frame. This relatively low exposure produces images with considerable quantum noise. In normal fluoroscopic viewing, however, we do not see one image frame at a time but an average of several frames, as discussed below.

Some fluoroscopic systems can be switched into a low-noise mode, which will improve the visibility of low-contrast detail. In the low-noise mode, the receptor sensitivity is reduced, and more exposure is required to form the image.  This is usually known as the HLC or high-level control.

It is possible to develop receptor systems that would have greater sensitivity and would require less exposure than those currently used in x-ray imaging. But, there is no known way to overcome the fundamental limitation of quantum noise. The receptor must absorb an adequate concentration of x-ray photons to reduce noise to an acceptable level.

 

GRAIN AND STRUCTURE NOISE


   Although the quantum structure of the x-ray beam is the most significant noise source in most x-ray imaging applications, the structure of the film, intensifying screens, intensifier tube screens, or digital receptors can introduce noise into images.
   An image recorded on film is composed of many opaque silver halide crystals, or grains. The grains in radiographic film are quite small and are not generally visible when the film is viewed in the conventional manner. The grainy structure sometimes becomes visible when an image recorded on film is optically enlarged, as when projected onto a screen. Whenever it is visible, film grain is a form of image noise.
   Film-grain noise is generally a more significant problem in photography than in radiography, especially in enlargements from images recorded on film with a relatively high sensitivity (speed).

   Image-intensifying screens and the screens of intensifier tubes are actually layers of small crystals. An image is formed by the production of light (fluorescence) within each crystal. The crystal structure of screens introduces a slight variation in light production from point to point within an image. This structure noise is relatively insignificant in most radiographic applications.

 

ELECTRONIC NOISE


   Video images often contain noise that comes from various electronic sources. Video (TV) image noise is often referred to as snow. Some of the electronic components that make up a video system can be sources of electronic noise. The noise is in the form of random electrical currents often produced by thermal activity within the device. Other electrical devices, such as motors and fluorescent lights, and even natural phenomena within the atmosphere generate electrical noise that can be picked up by video systems.
   The presence of noise in a video system becomes especially noticeable when the image signal is weak. Most video receivers have an automatic gain (amplification) circuit that increases the amount of amplification in the presence of a weak signal. This amplifies the noise and causes it to become quite apparent within the image. This effect can be easily observed by tuning a TV (video) receiver to a vacant channel or a channel with a weak signal. The presence of excessive electronic noise in a fluoroscopic image is often the result of a weak video signal because of system failure or misadjustment.

 

EFFECT OF CONTRAST ON NOISE


   The noise in an image becomes more visible if the overall contrast transfer of the imaging system is increased. This must be considered when using image displays with adjustable contrast, such as some video monitors used in fluoroscopy and the viewing window in CT, MRI, and other forms of digital imaging. High-contrast film also increases the visibility of noise.

 

EFFECT OF BLUR ON NOISE


  The visibility of image noise can often be reduced by blurring because noise has a rather finely detailed structure. The blurring of an image tends to blend each image point with its surrounding area; the effect is to smooth out the random structure of the noise and make it less visible.

   The use of image blurring to reduce the visibility of noise often involves a compromise because the blurring can reduce the visibility of useful image detail. High-sensitivity (speed) intensifying screens generally produce images showing less quantum noise than detail screens because they produce more image blur. The problem is that no screen gives both maximum noise suppression and visibility of detail.

   A blurring process is sometimes used in digital image processing to reduce image noise, as described in the next section.
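
   A simple digital blurring operation is shown in the Python sketch below (using NumPy and SciPy; the simulated flat, noisy image is an assumption for illustration). A uniform "box" filter replaces each pixel with the average of its 3 x 3 neighborhood, and the pixel-to-pixel fluctuation drops by roughly a factor of three, at the cost of added blur.

      # Smooth a simulated noisy flat-field image with a 3 x 3 averaging filter.
      import numpy as np
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(seed=0)
      noisy = rng.poisson(100, size=(256, 256)).astype(float)

      blurred = uniform_filter(noisy, size=3)

      print("relative noise before blur:", 100 * noisy.std() / noisy.mean(), "%")      # ~10 %
      print("relative noise after blur: ", 100 * blurred.std() / blurred.mean(), "%")  # ~3-4 %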

 

IMAGE INTEGRATION


   Integration is the process of averaging a series of images over a period of time. Since most types of image noise have a random distribution with respect to time, the integration of images can be quite effective in smoothing an image and reducing its noise content. Integration is, in principle, blurring an image with respect to time, rather than with respect to space or area. The basic limitation of using this process is the effect of patient motion during the time interval.

   Integration requires the ability to store or remember a series of images, at least for a short period of time. Several devices are used for image integration in medical imaging.

 

   Human Vision


   The human eye (retina) responds to average light intensity over a period of approximately 0.2 seconds. This integration, or averaging, is especially helpful when viewing fluoroscopic images.

   The conventional fluoroscopic display is a series of individual video images. Each image is displayed for one thirtieth of a second. Because a relatively low receptor exposure (less than 5 µR) is used to form each individual image, the images are relatively noisy. However, since the eye does not "see" each individual image, but an average of several images, the visibility of the noise is reduced. In effect, the eye is integrating, or averaging, approximately six video images at any particular time. The noise actually visible to the human eye is not determined by the receptor exposure for individual fluoroscopic images but by the total exposure for the series of integrated images.

 

   Video Camera Tubes


   Certain types of video camera tubes used in fluoroscopy have an inherent lag, or slow response, to changes in an image. This lag is especially significant in vidicon tubes. The effect of the lag is to average, or integrate, the noise fluctuations and produce a smoother image. The major disadvantage in using this type of tube for fluoroscopy is that moving objects tend to leave a temporary trail in the image.

 

   Digital Processing


   When a series of images is acquired and stored in a digital memory, the images can be averaged to reduce the noise content. This process is frequently used in DSA and MRI.
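
   The effect of frame averaging on random noise can be demonstrated with simulated frames, as in the Python sketch below (an added illustration; the photon counts and frame size are arbitrary). Averaging N independent frames reduces the random noise by approximately the square root of N; with six frames, roughly what the eye integrates during fluoroscopy, the noise drops by a factor of about 2.4.

      # Average six independent noisy frames and compare the noise levels.
      import numpy as np

      rng = np.random.default_rng(seed=0)
      n_frames = 6
      frames = rng.poisson(100, size=(n_frames, 128, 128)).astype(float)

      single = frames[0]
      integrated = frames.mean(axis=0)

      print("single-frame noise:", 100 * single.std() / single.mean(), "%")          # ~10 %
      print("integrated noise:  ", 100 * integrated.std() / integrated.mean(), "%")  # ~10 / sqrt(6) %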

 

IMAGE SUBTRACTION


   There are several applications in which one image is subtracted from another. A specific example is DSA. A basic problem with any image subtraction procedure is that the noise level in the resulting image is higher than in either of the two original images. This occurs because of the random distribution of the noise within each image.
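
   The increase in noise can be seen in the brief Python sketch below (simulated flat images rather than real DSA data). Because the random fluctuations in the two images are independent, their variances add, and the standard deviation of the difference image is larger than that of either original by roughly a factor of the square root of two.

      # Subtract two independent noisy images and compare the noise levels.
      import numpy as np

      rng = np.random.default_rng(seed=0)
      image_a = rng.poisson(1000, size=(128, 128)).astype(float)
      image_b = rng.poisson(1000, size=(128, 128)).astype(float)

      difference = image_a - image_b

      print("noise in A:         ", image_a.std())      # ~sqrt(1000) ~ 31.6
      print("noise in B:         ", image_b.std())      # ~sqrt(1000) ~ 31.6
      print("noise in difference:", difference.std())   # ~sqrt(2000) ~ 44.7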

   Relatively high exposures are used to create the original images in DSA. This partially compensates for the increase in noise produced by the subtraction process.