General: A Small Glossary of Photography Terms

On this page, I provide a brief glossary of terms that I use in various areas of the photo section of this Website. I offer this glossary in order to avoid redundancy, which has crept in because of the several cameras that I cover on my site. On this page, I only want to offer brief definitions and provide links to more extended information on the Internet for those who want to learn more.

Overview of Terms

Note: Planned terms are listed without a link.

Glossary

Aberrations

The optimum aperture of a lens, that is, the aperture at which it is sharpest, varies from lens to lens. As a general rule, it lies between one and three stops down from the maximum aperture for the center of the field. No lens is perfect; all lenses have aberrations that reduce their performance. Classically, there are five so-called "Seidel" aberrations:

• Spherical aberration
• Coma
• Astigmatism
• Field curvature
• Distortion

All lenses have these aberrations, and they are worse in fast lenses. Stopping down a lens greatly reduces spherical aberration and, to a lesser extent, reduces the effects of coma, astigmatism, and field curvature on image sharpness. Distortion is unaffected by aperture. A sixth aberration, chromatic aberration, is, to a first approximation, also unaffected by aperture.
(From Bob Atkins: Optimum Aperture - Format size and diffraction, adapted)

Airy Disk

The Airy disk might be called the "circle of confusion" caused by diffraction. For an ideal circular aperture, the 2-D diffraction pattern is called an "Airy disk," after its discoverer George Airy. The width of the Airy disk is used to define the theoretical maximum resolution for an optical system (defined as the diameter of the first dark circle of the diffraction pattern). It can be calculated from the working aperture of a lens and the wavelength of the light used (see below).

When the diameter of the Airy disk's central peak becomes large relative to the pixel size in the camera or the maximum tolerable circle of confusion, it begins to have a visual impact on the image. Once two Airy disks come closer than half their width, they are also no longer resolvable (Rayleigh criterion).
(From Sean McHugh: Lens Diffraction & Photography, Cambridge in Colour, adapted)

Note: For images of the Airy disk, including overlapping Airy disks, see the tutorial Lens Diffraction & Photography on the "Cambridge in Colour" Website by Sean McHugh.

Calculating the Airy Disk Diameter

The diameter of the Airy disk can be calculated as follows:

•  D = 2.43932 * λ * N (D = diameter of the Airy disk in mm; λ = wavelength in mm, e.g. 546 nm = 0.000546 mm; N = focal ratio of the telescope or working f-number of the lens)

Example: For a focal ratio of f/4 and a wavelength of 546 nm, D = 0.00533 mm; this comes close to the circle of confusion for small-sensor cameras.

Another formula gives the angular diameter of the Airy disk:

•  A = 7200 * arctan (1.21966 * λ / d) (A = angular diameter of the Airy disk in arc seconds; d = diameter of the telescope's main mirror in mm)
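As an illustration, both formulas can be put into a small script (a sketch in Python; the function names and the 546 nm default wavelength are my own choices):

```python
import math

def airy_disk_diameter_mm(f_number, wavelength_mm=0.000546):
    """Linear diameter of the Airy disk (first dark ring) in mm:
    D = 2.43932 * lambda * N."""
    return 2.43932 * wavelength_mm * f_number

def airy_disk_angular_arcsec(mirror_diameter_mm, wavelength_mm=0.000546):
    """Angular diameter of the Airy disk in arc seconds:
    A = 7200 * arctan(1.21966 * lambda / d). The factor 7200 doubles
    the radius angle and converts degrees to arc seconds (2 * 3600)."""
    return 7200 * math.degrees(math.atan(1.21966 * wavelength_mm / mirror_diameter_mm))

# The example from above: f/4 at 546 nm
print(round(airy_disk_diameter_mm(4), 5))  # 0.00533
```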

Airy Disk and Pixels...

Knowing the diffraction limit requires knowing how much detail a camera could resolve under ideal circumstances. With a perfect sensor, this would simply correlate with the size of the camera sensor's pixels. However, real-world sensors are a bit more complicated (for details and "complications" see here):

• Anti-aliasing (AA) filter: As a result of the sensor's anti-aliasing (AA) filter (and the Rayleigh criterion), an Airy disk can have a diameter of about 2-3 pixels before diffraction limits resolution (assuming an otherwise perfect lens). However, diffraction will likely have a visual impact prior to reaching this diameter. Note that some cameras, like the Leica M (Typ 240) and the Ricoh GR, do not have an AA filter.
• Wavelength of light: Since the size of the Airy disk also depends on the wavelength of light, each of the three primary colors will reach its diffraction limit at a different aperture.
• Bayer array: Sensors utilizing a Bayer array allocate twice the fraction of pixels to green as to red or blue light and then interpolate these colors to produce the final full-color image. This means that, as the diffraction limit is approached, the first signs will be a loss of resolution in green and in pixel-level luminosity. Blue light requires the smallest apertures (highest f-numbers) before diffraction reduces its resolution.
The end result is that the camera's resolution is better than one would expect if each 2 x 2 block containing all three colors represented a pixel, but not quite as good as each individual pixel. The exact resolution is really a matter of definition* (e.g. extinction resolution, artifact-free resolution; for the definitions see here).
The "real-world resolution limit" of a Bayer array is typically around 1.5 x as large as the individual pixels (a good rough estimate based on actual measurements).

*) Strictly speaking, the width of the Airy disk needs to be at least ~3 x the pixel width in order for diffraction to limit artifact-free, grayscale resolution on a Bayer sensor, although it will likely become visible when the Airy disk width is near 2 x the pixel width.
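The pixel-width rule of thumb above can be turned around to estimate the f-number at which diffraction becomes a concern for a given sensor (a sketch; the 4.8 µm pixel pitch in the example is hypothetical):

```python
def diffraction_limited_fnumber(pixel_pitch_mm, pixels=2.0, wavelength_mm=0.000546):
    """Smallest f-number at which the Airy disk diameter spans `pixels`
    pixel widths. Per the rule of thumb above, diffraction likely becomes
    visible around 2 pixel widths and limits artifact-free resolution on
    a Bayer sensor around 3 pixel widths."""
    return pixels * pixel_pitch_mm / (2.43932 * wavelength_mm)

# Hypothetical sensor with a 4.8 micron (0.0048 mm) pixel pitch:
print(round(diffraction_limited_fnumber(0.0048), 1))  # 7.2
```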

Aliasing (Alias Frequencies)

The term aliasing denotes

• Low frequency artifacts, sometimes quite disturbing, that appear when the system receives significant signal energy above the Nyquist frequency (from so-called alias frequencies). (From: Aliasing, Imatest, adapted)
• An effect that causes different signals to become indistinguishable (or aliases of one another) when sampled. It also refers to the distortion or artifact that results when the signal reconstructed from samples is different from the original continuous signal. (From: Wikipedia, adapted)

Aliasing can occur in signals sampled in time, for instance digital audio, and is referred to as temporal aliasing. Aliasing can also occur in spatially sampled signals, for instance digital images. Aliasing in spatially sampled signals is called spatial aliasing. (From: Wikipedia, adapted)

In photography, "system" typically means an image sensor. Color aliasing in Bayer sensors can be particularly troublesome. "Moiré fringing" is also a type of aliasing. In the audio domain, alias frequencies appear as disturbing noises. (From: Aliasing, Imatest, adapted)
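A minimal numeric sketch of how a signal frequency above the Nyquist frequency folds back to a lower, alias frequency (the 10 kHz sampling rate is just an example value):

```python
def alias_frequency(f_signal, f_sample):
    """Apparent frequency after sampling: components above the Nyquist
    frequency (f_sample / 2) fold back into the range 0..f_sample/2."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A 7 kHz tone sampled at 10 kHz (Nyquist = 5 kHz) appears at 3 kHz:
print(alias_frequency(7000, 10000))  # 3000
```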

Anti-Aliasing, Anti-Aliasing (AA) Filters

Aliasing is controlled by means of so-called anti-aliasing (lowpass) filters. In the photography domain, the effect is that they blur the image slightly (a classic tradeoff). Cameras with the same sensor but without an AA filter are sharper in direct comparison. (From: Aliasing, Imatest, adapted)

Aperture

In optics, an aperture is a hole or an opening through which light travels. More specifically, the aperture and focal length of an optical system determine the cone angle of a bundle of rays that come to a focus in the image plane. The aperture determines how much light reaches the image plane (the narrower the aperture, the darker the image for a given exposure time), controls the depth of field, reduces vignetting, and reduces lens aberrations.

In photography and astronomy, the term "aperture" refers to the diameter of the aperture stop rather than the physical stop or the opening itself. For example, in a telescope the aperture stop is typically the edges of the objective lens or mirror (or of the mount that holds it). One then speaks of a telescope as having, for example, a 100 millimeter aperture.
In photography, it is the hole or opening formed by the metal leaf diaphragm inside the lens, through which light passes to expose the film or sensor.

Note that the aperture stop is not necessarily the smallest stop in the system. Magnification and demagnification by lenses and other elements can cause a relatively large stop to be the aperture stop for the system.

Aperture size is usually indicated by f-numbers (or f-stops). The f-number is defined as:

• f-number = (focal length of the lens) / (diameter of the opening).

As the formula shows, the larger the f-number, the smaller the lens opening (for the same focal length); the smaller the f-number, the larger the opening. On the other hand, depending on the focal length, the same f-number corresponds to different diameters of the opening. Here are some examples:

• RX100 (f/11): diameter = 10.4/11 = 0.95 mm
• Leica X Vario (f/11): diameter = 18/11 = 1.64 mm
• Zeiss Sonnar 50 mm (f/11): diameter = 50/11 = 4.55 mm

F-numbers are calibrated in powers of the square root of 2, but rounded for convenience (f/22, f/16, f/11, f/8, f/5.6, f/4, f/2.8, f/2, f/1.4, ...). The (full) f-number steps correspond to doubling (or halving) the amount of light.
(From Wikipedia, Photography - The Resource Page, and A Glossary of Photographic Terms; largely adapted)
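The example diameters above follow directly from the f-number definition; a minimal sketch:

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    """Solve the definition f-number = focal length / diameter for the
    diameter of the opening."""
    return focal_length_mm / f_number

# The three examples from above, all at f/11:
for name, focal in [("RX100", 10.4), ("Leica X Vario", 18), ("Zeiss Sonnar", 50)]:
    print(name, round(aperture_diameter_mm(focal, 11), 2), "mm")
```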

For an introduction to aperture and depth of field, you may want to read A Beginner's Guide to Aperture and Depth of Field on www.photographytalk.com.

Bayer Filter, Bayer Sensor

A Bayer filter mosaic (named after its inventor, Bryce Bayer; patented 1975) is a color filter array (CFA) for arranging RGB color filters on a square grid of photosensors. Its particular arrangement of color filters is used in most single-chip digital image sensors used in digital cameras, camcorders, and scanners to create a color image. The filter pattern is 50% green, 25% red and 25% blue.

A camera sensor that uses a Bayer filter is called a Bayer sensor.

Circle of Confusion (CoC)

The circle of confusion (CoC) is an - arbitrary - convention for defining acceptable sharpness. It depends on visual acuity, viewing conditions, and the amount of enlargement. For example, if you make a huge enlargement for print, the technically defined circle of confusion may be too large to let the printed photo look "acceptably sharp."

According to Wikipedia, a circle of confusion (CoC) is

• an optical spot caused by a cone of light rays from a lens not coming to a perfect focus when imaging a point source

Other names are: disk of confusion, circle of indistinctness, blur circle, or blur spot.

The circle of confusion is used to define the part of an image that is "acceptably sharp" and depends on visual acuity, viewing conditions, and the amount of enlargement. It is used in calculations of the hyperfocal distance and the depth of field. (From Wikipedia, adapted)

A standard value of CoC is often associated with sensor sizes, for example:

• Full frame format: 0.028 to 0.029 mm (older lenses base their DOF scales even on a CoC of 0.03 to 0.035; the "original" size for 35 mm film was set to 1/30 mm)
• DX format (APS-C with crop factor of 1.5): 0.019 to 0.02 mm (DOFMaster lists a value of 0.025 mm)

How the Original 1/30 mm was Calculated (35 mm Film)

The original definition of the circle of confusion was based on the resolving power of the human eye, which was assumed to be unable to resolve a spot smaller than one quarter of a millimeter in diameter on a piece of paper 250 millimeters from the eye. Merklinger writes:

• The human eye is said to be capable of resolving a spot no smaller than one quarter of a millimeter in diameter on a piece of paper 250 millimeters from the eye. If this spot were on an 8 by 10 inch photograph made from a 35 mm negative, the enlargement factor used in making the print would have been about eight. Thus if spots smaller than one-quarter millimeter are unimportant in the print, then spots smaller than one-thirty-second of a millimeter in diameter are unimportant in the negative. The usual standard used in depth-of-field calculations is to permit a circle-of-confusion on the negative no larger than one-thirtieth of a millimeter in diameter. (From Merklinger)

How the Zeiss Paper "Schärfentiefe und Bokeh" Explains the Size of the Circle of Confusion

In his paper Schärfentiefe und Bokeh, H. H. Nasse from Zeiss explains the size of the circle of confusion somewhat differently, though along similar lines as above. Again, the resolving power of the human eye is the starting point. The resolving limit for a "normal" human observer was found to be about 8 pairs of lines per millimeter when a periodic black-and-white pattern is viewed from a distance of 250 mm. You can also use angular values if you want to describe the resolving power independently of the distance: the physiological critical angle of the human eye is at least one arc minute.

If you compare this with what Merklinger writes, you will find that this leads to a diameter of half the size of the one given above, namely 1/3000 of the image diagonal. This size is regarded as the "strictest" requirement for the circle of confusion that makes practical sense. Therefore, the criterion is relaxed to a CoC of 1/1500 of the image diagonal (or 2 arc minutes) for an "acceptable" sharpness. This corresponds to the requirement of 0.03 mm for the CoC for 35 mm film. Now we have arrived at the same criterion!

Zeiss Formula

The Zeiss formula is a supposed formula for computing a circle of confusion (CoC) criterion for depth of field (DOF) calculations. The formula is c = d/1730, where d is the diagonal measure of a camera format, film, sensor, or print, and c the maximum acceptable diameter of the circle of confusion. (From Wikipedia, adapted)

Interestingly, Zeiss themselves state two versions of their formula:

• A certain amount of blur is supposed to be tolerable. According to international standards the degree of blur tolerable is defined as 1/1000th of the camera format diagonal, as the normally satisfactory value. With 35 mm format and its 43 mm diagonal only 1/1500th is deemed tolerable, resulting in 43 mm/1500 ≈ 0.030 mm = 30 μm of blur. (From "Depth of Field – An Insider's Look" (PDF). Camera Lens News #1. Carl Zeiss AG. 1997)
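Both the 1/1500 convention and the Zeiss d/1730 formula are easy to check numerically (a sketch; passing the divisor as a parameter is my own choice):

```python
import math

def coc_mm(width_mm, height_mm, divisor=1500):
    """Circle of confusion as a fraction of the format diagonal.
    divisor=1500 gives the common "acceptable" criterion, 1730 the
    "Zeiss formula," and 1000 the more tolerant older standard."""
    return math.hypot(width_mm, height_mm) / divisor

# 35 mm full frame (36 x 24 mm, diagonal ~43.27 mm):
print(round(coc_mm(36, 24), 3))        # 0.029 (d/1500)
print(round(coc_mm(36, 24, 1730), 3))  # 0.025 (Zeiss formula)
```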

Depth of Field (DOF, Near Limit, Far Limit)

Depth of field (DOF) is the distance between the nearest (near limit) and farthest (far limit) objects in a scene that appear acceptably sharp in an image. Although a lens can precisely focus at only one distance at a time, the decrease in sharpness is gradual on each side of the focused distance, so that within the DOF, the unsharpness is imperceptible under normal viewing conditions. (From Wikipedia, adapted)

Note: Depth of field tables and programs for calculating such tables suggest that there are two precise planes, the near limit and the far limit, between which everything is rendered with optimal sharpness. In reality, depth of field is based on "allowed fuzziness," represented by the circle of confusion, that is, on a convention, and is therefore arbitrary. Sharpness changes continuously with the distance between object and camera (and may also depend on the content of the photo and other factors) - it is not constant between the two limits and also does not end abruptly at them. Moreover, most programs are based on the model of light cones and circles of confusion, which is an idealization of what really goes on in a lens. Chromatic aberration, color, and diffraction are all neglected in the model. All in all, depth of field tables and the formulae used to calculate them are just practical guidance, nothing more. (After: Schärfentiefe und Bokeh by H. H. Nasse, Zeiss, 2010; in German)

For an introduction to aperture and depth of field, you may want to read A Beginner's Guide to Aperture and Depth of Field on www.photographytalk.com.

Diffraction

Diffraction can be described as "the spreading out of a light beam when it's 'squeezed' through a small aperture." The smaller the aperture, the more the light spreads out. (From Bob Atkins: Optimum Aperture - Format size and diffraction)

A more thorough definition is: Diffraction is an optical effect which limits the total resolution of your photography - no matter how many megapixels your camera may have. It happens because light begins to disperse or "diffract" when passing through a small opening (such as your camera's aperture). This effect is normally negligible, since smaller apertures often improve sharpness by minimizing lens aberrations. However, for sufficiently small apertures, this strategy becomes counterproductive - at which point your camera is said to have become diffraction limited. Knowing this limit can help maximize detail, and avoid an unnecessarily long exposure or high ISO speed. (From Sean McHugh: Lens Diffraction & Photography, Cambridge in Colour)

Diffraction thus sets a fundamental resolution limit that is independent of the number of megapixels, or the size of the film format. It depends only on the f-number of your lens, and on the wavelength of light being imaged (see formula for calculating the airy disk). One can think of it as the smallest theoretical "pixel" of detail in photography. Furthermore, the onset of diffraction is gradual; prior to limiting resolution, it can still reduce small-scale contrast by causing airy disks to partially overlap. (From Sean McHugh: Lens Diffraction & Photography, Cambridge in Colour)

Note: For more information on diffraction, including graphics that visualize diffraction effects, see the tutorial Lens Diffraction & Photography on the "Cambridge in Color" Website by Sean McHugh (there is also a second page covering more advanced topics).

Disk of Confusion (after Merklinger)

The disk of confusion (DoC; diameter d) is, according to Merklinger, an exact analog of the circle of confusion (CoC; diameter c) to describe depth of field. The disk lies, however, in the object field, that is, in the scene to be photographed. Merklinger explains it as follows:

• Let's think of it another way. Suppose we have in our camera, located on the film, a very tiny but bright source of light: a very tiny "star". That star will project its light through our camera lens (acting now as a projector). Wherever that starlight falls on a flat surface we will see a disk of light. The size of that disk of light will depend upon where the surface is relative to where the lens is and where it is focused. If the surface happens to be right where the lens is focused, we will see only a tiny bright point of light. A little ways in front of or behind where the lens is focused, we would see a small disk of light. ... We'll call this disk the disk of confusion.

According to Merklinger, "the disk of confusion is about the size of the smallest object which will be recorded distinctly in our image. Smaller objects will be smeared together; larger objects will be outlined clearly - though the edges may be a bit soft."

The size of the disk of confusion is easily estimated for two specific distances:

• At half the distance from the camera to the point of exact focus, the disk is half the working diameter of our lens.
• At twice the distance to the point of focus, the disk is equal to the lens diameter.
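These two estimates are special cases of a simple proportionality: the disk grows with the relative distance from the plane of exact focus. A sketch (this similar-triangles form matches Merklinger's two estimates but is my own condensation):

```python
def disk_of_confusion_mm(focal_length_mm, f_number, focus_dist, object_dist):
    """Diameter of Merklinger's disk of confusion in the scene: the
    working aperture diameter (f / N) scaled by the object's relative
    distance from the focused plane (distances in any common unit)."""
    working_aperture = focal_length_mm / f_number
    return working_aperture * abs(focus_dist - object_dist) / focus_dist

# 50 mm lens at f/8 (working aperture 6.25 mm), focused at 10 m:
print(disk_of_confusion_mm(50, 8, 10, 5))   # 3.125 (half the lens diameter)
print(disk_of_confusion_mm(50, 8, 10, 20))  # 6.25  (the full lens diameter)
```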

For details, see here and the extended version.

Exif Data

Exif (often incorrectly written as EXIF) stands for "exchangeable image file format" and is a standard that specifies the formats for images, sound, and ancillary tags used by digital cameras (including smartphones), scanners, and other systems handling image and sound files recorded by digital cameras.

There is a lot of useful information in the Exif tags, but some of the useful data is "hidden" in manufacturer-specific tags that are usually not well documented.

Exif Tools

To inspect Exif data, you need a tool that displays them. Many photo applications, for example, do so, but most applications and tools display only a subset of the Exif data. Moreover, camera manufacturers hide manufacturer-specific data in Exif fields that are not documented publicly. Therefore, it is difficult to decipher this data, even though it may be of primary importance for judging what "happened" to certain photos.

ExifTool

Probably the largest set of Exif data is displayed by the ExifTool application provided by Phil Harvey. This is a command-line tool that can be installed on Unix, Windows, and Apple Macintosh computers. For Windows, there is also a GUI shell available, which makes working with ExifTool easier. For the Apple Macintosh, see here.

Hyperfocal Distance

The hyperfocal distance is useful for shooting with manual focus and aperture. There are two commonly used definitions of hyperfocal distance, leading to values that differ slightly:

• Definition 1: The hyperfocal distance is the closest distance at which a lens can be focused while keeping objects at infinity acceptably sharp. When the lens is focused at this distance, all objects at distances from half of the hyperfocal distance out to infinity will be acceptably sharp.
• Definition 2: The hyperfocal distance is the distance beyond which all objects are acceptably sharp, for a lens focused at infinity.

The distinction between the two meanings is rarely made, since they lead to almost identical values. The value computed according to the first definition exceeds that from the second by just one focal length. (From Wikipedia, adapted)

On this site, I use the formula according to definition 1 for calculating hyperfocal distances* and depth of field ranges. This is also typically the definition of the hyperfocal distance that you find on the Internet.

*) Note: Hyperfocal distance calculations are based on "allowed fuzziness," represented by the circle of confusion. Hyperfocal distance tables and the formulae used to calculate them are just practical guidance, nothing more. (After: Schärfentiefe und Bokeh by H. H. Nasse, Zeiss, 2010; in German)

Focusing the camera at the hyperfocal distance results in the largest possible depth of field for a given f-number. Focusing beyond the hyperfocal distance does not increase the far DOF (which already extends to infinity), but it does decrease the DOF in front of the subject, decreasing the total DOF. Some photographers consider this wasting DOF; however, there is also a rationale for doing so (see Merklinger approach). Focusing on the hyperfocal distance is a special case of zone focusing in which the far limit of DOF is at infinity. (From Wikipedia, adapted)

If the lens includes a DOF scale, the hyperfocal distance can be set by aligning the center of the infinity mark on the distance scale with the far limit mark on the DOF scale corresponding to the f-number to which the lens is set*. Some cameras with automatic lenses like the Ricoh GXR and GR display a depth of field scale that allows you to set the hyperfocal distance (or an approximation of it). You can also use calculators on the Internet or calculator apps that allow you to determine the hyperfocal distance, which depends on three factors:

• Focal length (the exact length, not the equivalent length),
• Aperture (f-number),
• Circle of confusion (CoC), which is a technical measure of "sharpness."
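Putting the three factors together, the hyperfocal distance according to definition 1 can be computed as H = f² / (N · c) + f (a sketch; the 28 mm / f/8 / 0.02 mm example values are hypothetical):

```python
def hyperfocal_mm(focal_length_mm, f_number, coc_mm):
    """Hyperfocal distance per definition 1: H = f^2 / (N * c) + f."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

# Hypothetical example: 28 mm lens at f/8 with a CoC of 0.02 mm (APS-C):
print(round(hyperfocal_mm(28, 8, 0.02) / 1000, 2), "m")  # 4.93 m
```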

For some cameras, like the Ricoh GR, the Leica X Vario, and the Sony RX100 M1, you can determine the hyperfocal distance "after the fact" if you are using ExifTool. This program shows the hyperfocal distance in the "Composite" section. This is data that ExifTool calculates from other data.

*) This is not true if you use full frame lenses on an APS-C camera. In this case, use the far limit marker of an aperture that is one stop larger than the one you set (exactly, it would be 1.3 to 1.5 f-stops).

Moiré

The term moiré effect refers to an apparent coarse grid that arises from the superposition of regular, finer grids. The resulting pattern, whose appearance is similar to interference patterns, is a special case of aliasing due to sub-sampling. (From: Moiré-Effekt - Wikipedia, adapted and translated)

When alias signals occur during the scanning of images with varying spatial frequencies, one speaks of a moiré effect - for example, in garments such as wool sweaters or jackets with thin stripes, or in pictures of tiled roofs. Moiré effects are also often seen in the TV picture when such textures are displayed. This is due to a superposition of the spectra of the sampling function, whose output signals are periodic with the sampling frequency. (From: Alias-Effekt - Wikipedia, adapted and translated)

Nyquist Frequency

The Nyquist frequency (fN), named after electronic engineer Harry Nyquist, is

• half of the sampling rate of a discrete signal processing system (Wikipedia).
• the bandwidth of a sampled signal, and is equal to half the sampling frequency of that signal (Praat Manual, Nyquist Frequency).

In other (more formal) words:

• The Nyquist sampling theorem states that if a signal is sampled at a rate dscan and is strictly band-limited at a cutoff frequency fC no higher than dscan/2, the original analog signal can be perfectly reconstructed. The frequency fN = dscan/2 is called the Nyquist frequency. (From: The Nyquist sampling theorem and aliasing, Norman Koren)

In the context of digital camera sensors, the Nyquist frequency (fN) is

• The highest spatial frequency where a digital sensor can capture real information.
Nyquist frequency fN = 1 / (2 * pixel spacing) = 0.5 cycles/pixel (Nyquist frequency, Imatest)

Example: In a digital camera with 5 micron pixel spacing, dscan = 200 pixels per mm (200 x 5 microns = 1 mm) or 5080 pixels per inch.
Nyquist frequency fN = 100 line pairs per mm or 2540 line pairs per inch (Norman Koren).
(Thus, one line pair (two lines) corresponds to one wavelength of the Nyquist frequency, which in turn corresponds to two pixels.)
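The pixel-spacing relation above is a one-liner (a sketch reproducing Norman Koren's example):

```python
def nyquist_lp_per_mm(pixel_pitch_mm):
    """Nyquist frequency in line pairs per mm: one line pair spans two
    pixels, so fN = 1 / (2 * pixel pitch)."""
    return 1 / (2 * pixel_pitch_mm)

# 5 micron (0.005 mm) pixel spacing, as in the example above:
print(nyquist_lp_per_mm(0.005))  # 100.0
```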

Nyquist Frequency, Aliasing, Anti-Aliasing (Filter)

Signal energy above fN is aliased - it appears as artificial low frequency signals in repetitive patterns, typically visible as Moiré patterns. In non-repetitive patterns aliasing appears as jagged diagonal lines - "the jaggies." Grain, which is random, can appear as large jagged clumps. (Norman Koren)

Any information above fN that reaches the sensor is aliased to lower frequencies, creating potentially disturbing Moiré patterns. Aliasing can be particularly objectionable in Bayer sensors in digital cameras, where it appears as color bands. The ideal lens/sensor system would have MTF = 1 below Nyquist and MTF = 0 above it. Unfortunately this is not achievable in optical systems; the design of anti-aliasing (lowpass) filters always involves a tradeoff that compromises sharpness.

A large MTF response above fN can indicate potential problems with aliasing, but the visibility of the aliasing is much worse when it arises from the sensor (Moiré patterns) than when it arises from sharpening (jagged edges; not as bad). It is not easy to tell from MTF curves exactly which of these effects dominates. The Nyquist sampling theorem and aliasing contains a complete exposition. (Nyquist frequency, Imatest)

Zone Focusing

Zone focusing derives its name from the fact that there is a "zone" around the focus/distance that you set on a lens, which appears "acceptably sharp." The basic zone focusing procedure for a lens with a depth of field scale is as follows:

• Decide on the minimum and maximum distances (distance range) at which the subjects that you want to photograph are likely to appear (for example, between 3 m and 10 m).
• Set a middle-to-small aperture (f/8 or f/11) on the lens and identify the corresponding marks on the lens's depth of field scale.
• Turn the focusing ring so that the distance range lies within the depth of field range for the selected aperture. If the distance range does not fit, either stop the lens down further or choose a smaller distance range.

This sets the lens into a "snapshot" mode that does not require further focusing.
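Whether a chosen distance range fits into the depth of field can also be checked numerically with the standard DOF formulas based on the hyperfocal distance (a sketch; the thin-lens convention and the example values are my own assumptions):

```python
def dof_limits_mm(focal_mm, f_number, coc_mm, focus_dist_mm):
    """Near and far limits of the depth of field (thin-lens convention),
    using the hyperfocal distance H = f^2 / (N * c) + f."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    s = focus_dist_mm
    near = h * s / (h + (s - focal_mm))
    far = h * s / (h - (s - focal_mm)) if s - focal_mm < h else float("inf")
    return near, far

# Hypothetical check: 28 mm lens at f/11, CoC 0.02 mm, focused at 5 m --
# does the 3 m to 10 m zone from the example above fit?
near, far = dof_limits_mm(28, 11, 0.02, 5000)
print(round(near / 1000, 2), "m to",
      "infinity" if far == float("inf") else round(far / 1000, 2))
```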

Note: If you use a full frame lens (for example, a rangefinder lens) on an APS-C camera, you have to stop down the lens one f-stop (or better, 1.3 to 1.5 f-stops).

For details, see here.

References

Alias Frequencies, Nyquist Frequency, and More

gerd (at) waloszek (dot) de
01.08.2018