On this page, I provide a brief glossary of terms that I use in various areas of the photo section of this Website. I offer this glossary to avoid redundancy, which has crept in because I cover several cameras on this site. On this page, I only want to offer brief definitions and provide links to more extended information on the Internet for those who want to learn more.
Note: Planned terms are listed without a link.
The optimum aperture of a lens, that is, the aperture at which it is sharpest, varies from lens to lens. As a general rule, it lies between one and three stops down from the maximum aperture for the center of the field. No lens is perfect; all lenses have aberrations that reduce their performance. Classically, there are five so-called "Seidel" aberrations:
All lenses have these aberrations, and they are worse in fast lenses. Stopping down a lens greatly reduces spherical aberration and, to a lesser extent, reduces the effects of coma, astigmatism, and field curvature on image sharpness. Distortion is unaffected by aperture. A sixth aberration, chromatic aberration, is, to a first approximation, also unaffected by aperture.
(From Bob Atkins: Optimum Aperture - Format size and diffraction, adapted)
The Airy disk might be called the "circle of confusion" caused by diffraction. For an ideal circular aperture, the 2-D diffraction pattern is called an "Airy disk," after its discoverer, George Airy. The width of the Airy disk is used to define the theoretical maximum resolution of an optical system (it is defined as the diameter of the first dark circle of the diffraction pattern). It can be calculated from the working aperture of a lens and the wavelength of the light used (see below).
When the diameter of the Airy disk's central peak becomes large relative to the pixel size in the camera or the maximum tolerable circle of confusion, it begins to have a visual impact on the image. Once two Airy disks become any closer than half their width, they are also no longer resolvable (Rayleigh criterion).
(From Sean McHugh: Lens Diffraction & Photography, Cambridge in Colour, adapted)
Note: For images of the Airy disk, including overlapping Airy disks, see the tutorial Lens Diffraction & Photography on the "Cambridge in Colour" Website by Sean McHugh
The diameter of the Airy disk can be calculated as follows:
Example: For a focal ratio of f/4 and a wavelength of 546 nm, D = 0.00533 mm; this comes close to the circle of confusion for small-sensor cameras.
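This calculation can be sketched in a few lines of Python (the constant 2.44 comes from the standard Airy disk formula d = 2.44 x wavelength x f-number; the function name is my own):

```python
# Diameter of the Airy disk up to the first dark ring: d = 2.44 * wavelength * N
def airy_disk_diameter_mm(f_number, wavelength_nm=546):
    wavelength_mm = wavelength_nm / 1_000_000  # convert nm to mm
    return 2.44 * wavelength_mm * f_number

# The example from the text: f/4 at a wavelength of 546 nm
print(f"{airy_disk_diameter_mm(4):.5f} mm")  # 0.00533 mm
```

Note that the diameter grows linearly with the f-number: stopping down from f/4 to f/8 doubles the Airy disk.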
Another formula for the diameter of the Airy disk is:
Knowing the diffraction limit requires knowing how much detail a camera could resolve under ideal circumstances. With a perfect sensor, this would simply correlate with the size of the camera sensor's pixels. However, real-world sensors are a bit more complicated (for details and "complications" see here):
*) Strictly speaking, the width of the Airy disk needs to be at least ~3 x the pixel width in order for diffraction to limit artifact-free, grayscale resolution on a Bayer sensor, although it will likely become visible when the Airy disk width is near 2 x the pixel width.
(From Sean McHugh: Lens Diffraction & Photography, Cambridge in Colour, adapted; for details see also here)
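The rule of thumb above can be turned into a small check (a Python sketch; the 2x and 3x thresholds are the ones quoted above, while the pixel pitch in the example is an arbitrary assumption):

```python
def airy_disk_diameter_um(f_number, wavelength_nm=546):
    """Airy disk diameter in microns: d = 2.44 * wavelength * N."""
    return 2.44 * wavelength_nm / 1000 * f_number

def diffraction_status(f_number, pixel_pitch_um, wavelength_nm=546):
    """Compare the Airy disk width with the pixel pitch (thresholds from the text)."""
    d = airy_disk_diameter_um(f_number, wavelength_nm)
    if d >= 3 * pixel_pitch_um:
        return "diffraction-limited"
    if d >= 2 * pixel_pitch_um:
        return "diffraction likely visible"
    return "not diffraction-limited"

# Illustrative values: a sensor with a 5 micron pixel pitch
print(diffraction_status(8, 5.0))   # Airy disk ~10.7 um -> diffraction likely visible
print(diffraction_status(22, 5.0))  # Airy disk ~29.3 um -> diffraction-limited
```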
The term aliasing denotes
Aliasing can occur in signals sampled in time, for instance digital audio, and is referred to as temporal aliasing. Aliasing can also occur in spatially sampled signals, for instance digital images. Aliasing in spatially sampled signals is called spatial aliasing. (From: Wikipedia, adapted)
In photography, "system" typically means an image sensor. Color aliasing in Bayer sensors can be particularly troublesome. "Moiré fringing" is also a type of aliasing. In the audio domain, alias frequencies appear as disturbing noises. (From: Aliasing, Imatest, adapted)
Aliasing is controlled by means of so-called anti-aliasing (lowpass) filters. In the photography domain, the effect is that they blur the image slightly (a classic tradeoff). Cameras with the same sensor but without an AA filter are sharper in direct comparison. (From: Aliasing, Imatest, adapted)
In optics, an aperture is a hole or an opening through which light travels. More specifically, the aperture and focal length of an optical system determine the cone angle of a bundle of rays that come to a focus in the image plane. The aperture determines how much light reaches the image plane (the narrower the aperture, the darker the image for a given exposure time), controls the depth of field, prevents vignetting, and reduces lens aberrations.
In photography and astronomy, the term "aperture" refers
to the diameter of the aperture stop rather than the physical stop or
the opening itself. For example, in a telescope the aperture stop is typically
the edges of the objective lens or mirror (or of the mount that holds it).
One then speaks of a telescope as having, for example, a 100 millimeter aperture.
In photography, it is the hole or opening formed by the metal leaf diaphragm inside the lens, through which light passes to expose the film.
Note that the aperture stop is not necessarily the smallest stop in the system. Magnification and demagnification by lenses and other elements can cause a relatively large stop to be the aperture stop for the system.
Aperture size is usually indicated by f-numbers (or f-stops). The f-number is defined as:
As the formula shows, the larger the f-number, the smaller the lens opening (for the same focal length), the smaller the f-number, the larger the opening. On the other hand, depending on the focal length, the same f-number corresponds to different diameters of the opening. Here are some examples:
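As an illustration, since the f-number is N = f/D, the diameter of the opening is D = f/N. A minimal Python sketch (the focal lengths are chosen only for illustration) shows how the same f-number corresponds to different physical openings:

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    """Physical diameter of the opening: D = f / N."""
    return focal_length_mm / f_number

# The same f-number corresponds to different opening diameters:
for fl in (28, 50, 200):  # illustrative focal lengths in mm
    print(f"{fl} mm lens at f/4: D = {aperture_diameter_mm(fl, 4):.1f} mm")
```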
F-numbers are calibrated as powers of the square root of 2, rounded for convenience (f/22, f/16, f/11, f/8, f/5.6, f/4, f/2.8, f/2, f/1.4, ...). Each (full) f-number step corresponds to doubling (or halving) the amount of light.
(From Wikipedia, Photography - The Resource Page, and A Glossary of Photographic Terms; largely adapted)
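The progression of full stops can be generated numerically; a short Python sketch (rounded here to one decimal, whereas lens markings round further, e.g. to 5.6 and 11):

```python
import math

# Full f-number stops are successive powers of sqrt(2); the values marked
# on lenses are rounded further for convenience (5.7 -> 5.6, 11.3 -> 11).
full_stops = [round(math.sqrt(2) ** k, 1) for k in range(9)]
print(full_stops)  # [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0]

# One full step changes the amount of light by a factor of 2, since the
# pupil area is proportional to 1 / N^2.
```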
A Bayer filter mosaic (named after its inventor, Bryce Bayer;
patented 1975) is a color filter array (CFA) for arranging RGB color filters
on a square grid of photosensors. Its particular arrangement of color filters
is used in most single-chip digital image sensors used in digital cameras,
camcorders, and scanners to create a color image. The filter pattern is 50%
green, 25% red and 25% blue.
(From Wikipedia, adapted)
A camera sensor that uses a Bayer filter is called a Bayer sensor.
The circle of confusion (CoC) is an - arbitrary - convention for defining acceptable sharpness. It depends on visual acuity, viewing conditions, and the amount of enlargement. For example, if you make a huge enlargement for print, the circle of confusion that was defined technically may be too large for the printed photo to look "acceptably sharp."
According to Wikipedia, a circle of confusion (CoC) is
Other names are: disk of confusion, circle of indistinctness, blur circle, or blur spot.
The circle of confusion is used to define the part of an image that is "acceptably sharp" and depends on visual acuity, viewing conditions, and the amount of enlargement. It is used in calculations of the hyperfocal distance and the depth of field. (From Wikipedia, adapted)
A standard value of CoC is often associated with sensor sizes, for example:
The original definition of the circle of confusion was based on the resolving power of the human eye: the circle was assumed to be no smaller than one quarter of a millimeter in diameter on a piece of 8 x 10 paper viewed at 250 millimeters from the eye. Merklinger writes:
In his paper Schärfentiefe und Bokeh, H. H. Nasse from Zeiss explains the size of the circle of confusion a little differently, though along similar lines to the above... Again, the resolving power of the human eye is the starting point. The resolving limit for a "normal" human observer was found to be about 8 pairs of lines per millimeter when a periodic black-and-white pattern is viewed from a distance of 250 mm. You can also use angular values if you want to describe the resolving power independently of the distance; this yields a physiological critical angle of the human eye of at least one arc minute.
If you compare this with what Merklinger writes, you will find that this leads to a diameter of half the size of the one given above, namely 1/3000 of the image diagonal. This size is regarded as the "strictest" requirement for the circle of confusion that makes practical sense. Therefore, the criterion is relaxed to a CoC of 1/1500 of the image diagonal (or 2 arc minutes) for an "acceptable" sharpness. This corresponds to the requirement of 0.03 mm for the CoC for 35 mm film. Now we have arrived at the same criterion!
The Zeiss formula is a supposed formula for computing a circle of confusion (CoC) criterion for depth of field (DOF) calculations. The formula is c = d/1730, where d is the diagonal measure of a camera format, film, sensor, or print, and c the maximum acceptable diameter of the circle of confusion. (From Wikipedia, adapted)
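The two conventions - d/1500 and d/1730 - are easy to compare numerically for the 35 mm format (36 x 24 mm); a small Python sketch:

```python
import math

diagonal = math.hypot(36, 24)  # 35 mm frame diagonal, about 43.27 mm

coc_1500 = diagonal / 1500  # the "relaxed" d/1500 criterion
coc_1730 = diagonal / 1730  # the so-called Zeiss formula

print(f"d/1500: {coc_1500:.3f} mm")  # 0.029 mm, i.e. the familiar ~0.03 mm
print(f"d/1730: {coc_1730:.3f} mm")  # 0.025 mm
```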
Interestingly, Zeiss themselves state two versions of their formula:
Depth of field (DOF) is the distance between the nearest (near limit) and farthest (far limit) objects in a scene that appear acceptably sharp in an image. Although a lens can precisely focus at only one distance at a time, the decrease in sharpness is gradual on each side of the focused distance, so that within the DOF, the unsharpness is imperceptible under normal viewing conditions. (From Wikipedia, adapted)
Note: Depth of field tables and programs for calculating such tables suggest that there are two precise planes, the near limit and the far limit, between which everything is rendered with optimal sharpness. In reality, depth of field is based on "allowed fuzziness," represented by the circle of confusion, that is, on a convention, and is therefore arbitrary. Sharpness changes continuously with the distance between object and camera (and may also depend on the content of the photo and other factors) - it is not constant between the two limits, nor does it end abruptly at them. Moreover, most programs are based on the model of light cones and circles of confusion, which is an idealization of what really goes on in a lens; chromatic aberration, color, and diffraction are all neglected in this model. All in all, depth of field tables and the formulae used to calculate them are just practical guidance, nothing more. (After: Schärfentiefe und Bokeh by H. H. Nasse, Zeiss, 2010; in German)
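With this caveat in mind, the light-cone model still yields a practical guide. A Python sketch using the common approximations near = H*s/(H+s) and far = H*s/(H-s), where H is the hyperfocal distance and s the focusing distance (the example values are arbitrary):

```python
def hyperfocal_mm(f, n, c):
    """Hyperfocal distance: H = f^2 / (N * c) + f (all lengths in mm)."""
    return f * f / (n * c) + f

def dof_limits_mm(f, n, c, s):
    """Approximate near/far limits: near = H*s/(H+s), far = H*s/(H-s)."""
    h = hyperfocal_mm(f, n, c)
    near = h * s / (h + s)
    far = h * s / (h - s) if s < h else float("inf")
    return near, far

# Illustrative values: 50 mm lens at f/8, CoC 0.03 mm, focused at 3 m
near, far = dof_limits_mm(50, 8, 0.03, 3000)
print(f"DOF from {near / 1000:.2f} m to {far / 1000:.2f} m")  # 2.33 m to 4.21 m
```

When the focusing distance reaches the hyperfocal distance, the far limit goes to infinity.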
Diffraction can be described as "the spreading out of a light beam when it's 'squeezed' through a small aperture." The smaller the aperture, the more the light spreads out. (From Bob Atkins: Optimum Aperture - Format size and diffraction)
A more thorough definition is: Diffraction is an optical effect which limits the total resolution of your photography - no matter how many megapixels your camera may have. It happens because light begins to disperse or "diffract" when passing through a small opening (such as your camera's aperture). This effect is normally negligible, since smaller apertures often improve sharpness by minimizing lens aberrations. However, for sufficiently small apertures, this strategy becomes counterproductive - at which point your camera is said to have become diffraction limited. Knowing this limit can help maximize detail, and avoid an unnecessarily long exposure or high ISO speed. (From Sean McHugh: Lens Diffraction & Photography, Cambridge in Colour)
Diffraction thus sets a fundamental resolution limit that is independent of the number of megapixels, or the size of the film format. It depends only on the f-number of your lens, and on the wavelength of light being imaged (see formula for calculating the airy disk). One can think of it as the smallest theoretical "pixel" of detail in photography. Furthermore, the onset of diffraction is gradual; prior to limiting resolution, it can still reduce small-scale contrast by causing airy disks to partially overlap. (From Sean McHugh: Lens Diffraction & Photography, Cambridge in Colour)
Note: For more information on diffraction, including graphics that visualize diffraction effects, see the tutorial Lens Diffraction & Photography on the "Cambridge in Colour" Website by Sean McHugh (there is also a second page covering more advanced topics).
The disk of confusion (DoC; diameter d) is, according to Merklinger, an exact analog of the circle of confusion (CoC; diameter c) to describe depth of field. The disk lies, however, in the object field, that is, in the scene to be photographed. Merklinger explains it as follows:
According to Merklinger, "the disk of confusion is about the size of the smallest object which will be recorded distinctly in our image. Smaller objects will be smeared together; larger objects will be outlined clearly - though the edges may be a bit soft."
The size of the disk of confusion is easily estimated for two specific distances:
Exif, often incorrectly written EXIF, stands for "exchangeable image file format." It is a standard that specifies the formats for images, sound, and ancillary tags used by digital cameras (including smartphones), scanners, and other systems that handle image and sound files recorded by digital cameras.
There is a lot of useful information in the Exif tags, but some of the useful data is "hidden" in manufacturer-specific tags that are usually not well documented.
To inspect Exif data, you need a tool that displays them. Many photo applications, for example, do so, but most applications and tools display only a subset of the Exif data. Moreover, camera manufacturers hide manufacturer-specific data in Exif fields that are not documented publicly. Therefore, it is difficult to decipher this data, even though it may be of primary importance for judging what "happened" to certain photos.
Probably the largest set of Exif data is displayed by the ExifTool application provided by Phil Harvey. This is a command-line tool that can be installed on Unix, Windows, and Apple Macintosh computers. For Windows, there is also a GUI shell available, which makes working with ExifTool easier. For the Apple Macintosh, see here.
The hyperfocal distance is useful for shooting with manual focus and aperture. There are two commonly used definitions of hyperfocal distance, leading to values that differ slightly:
The distinction between the two meanings is rarely made, since they lead to almost identical values. The value computed according to the first definition exceeds that from the second by just one focal length. (From Wikipedia, adapted)
On this site, I use the formula according to definition 1 for calculating hyperfocal distances* and depth of field ranges. This is also typically the definition of the hyperfocal distance that you find on the Internet.
*) Note: Hyperfocal distance calculations are based on "allowed fuzziness," represented by the circle of confusion. Hyperfocal distance tables and the formulae used to calculate them are just practical guidance, nothing more. (After: Schärfentiefe und Bokeh by H. H. Nasse, Zeiss, 2010; in German)
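That the two definitions differ by exactly one focal length is easy to verify numerically; a Python sketch with illustrative values (28 mm lens, f/8, CoC 0.03 mm):

```python
def hyperfocal_def1_mm(f, n, c):
    """Definition 1: H = f^2 / (N * c) + f (all lengths in mm)."""
    return f * f / (n * c) + f

def hyperfocal_def2_mm(f, n, c):
    """Definition 2: H = f^2 / (N * c)."""
    return f * f / (n * c)

# Illustrative values: 28 mm lens at f/8 with a CoC of 0.03 mm
h1 = hyperfocal_def1_mm(28, 8, 0.03)
h2 = hyperfocal_def2_mm(28, 8, 0.03)
print(f"{h1:.0f} mm vs. {h2:.0f} mm")  # 3295 mm vs. 3267 mm; difference = f = 28 mm
```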
Focusing the camera at the hyperfocal distance results in the largest possible depth of field for a given f-number. Focusing beyond the hyperfocal distance does not increase the far DOF (which already extends to infinity), but it does decrease the DOF in front of the subject, decreasing the total DOF. Some photographers consider this wasting DOF; however, there is also a rationale for doing so (see Merklinger approach). Focusing on the hyperfocal distance is a special case of zone focusing in which the far limit of DOF is at infinity. (From Wikipedia, adapted)
If the lens includes a DOF scale, the hyperfocal distance can be set by aligning the center of the infinity mark on the distance scale with the far limit mark on the DOF scale corresponding to the f-number to which the lens is set*. Some cameras with automatic lenses like the Ricoh GXR and GR display a depth of field scale that allows you to set the hyperfocal distance (or an approximation of it). You can also use calculators on the Internet or calculator apps that allow you to determine the hyperfocal distance, which depends on three factors:
For some cameras, like the Ricoh GR, the Leica X Vario, and the Sony RX100 M1, you can determine the hyperfocal distance "after the fact" if you are using ExifTool. This program shows the hyperfocal distance in the "Composite" section. This is data that ExifTool calculates from other data.
*) This is not true if you use full frame lenses on an APS-C camera. In that case, use the far limit marker of an aperture that is one stop greater than the one you set (exactly, it would be 1.3 to 1.5 f-numbers).
The term moiré effect refers to an apparent coarse grid that arises from the superposition of regular, finer grids. The resulting pattern, whose appearance is similar to interference patterns, is a special case of aliasing due to sub-sampling. (From: Moiré-Effekt - Wikipedia, adapted and translated)
Alias signals occur when image content with changing spatial frequencies is sampled; one then speaks of a moiré effect, for example with garments such as wool sweaters or jackets with thin stripes, or with pictures of tiled roofs. Moiré effects are also often seen in TV pictures when such textures are displayed. This is due to a superposition of the spectra of the sampled function, whose output signals are periodic with fsample. (From: Alias-Effekt - Wikipedia, adapted and translated)
The Nyquist frequency (fN), named after electronic engineer Harry Nyquist, is
In other (more formal) words:
In the context of digital camera sensors, the Nyquist frequency (fN) is
Example: In a digital camera with 5 micron pixel spacing, dscan = 200 pixels per mm (200 x 5 microns = 1 mm), or 5080 pixels per inch. The Nyquist frequency is fN = 100 line pairs per mm, or 2540 line pairs per inch (Norman Koren).
(Thus, 1 line pair (or two lines) correspond(s) to 1 wavelength of the Nyquist frequency, which corresponds to two pixels)
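The example can be reproduced with a few lines of Python (the pixel pitch is the only input; 25.4 mm per inch):

```python
def nyquist_lp_per_mm(pixel_pitch_um):
    """One line pair spans two pixels, so fN = 1 / (2 * pixel pitch)."""
    return 1 / (2 * pixel_pitch_um / 1000)

fn = nyquist_lp_per_mm(5)  # 5 micron pixel spacing, as in the example
print(f"{fn:.0f} lp/mm, {fn * 25.4:.0f} lp/in")  # 100 lp/mm, 2540 lp/in
```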
Signal energy above fN is aliased - it appears as artificial low frequency signals in repetitive patterns, typically visible as Moiré patterns. In non-repetitive patterns aliasing appears as jagged diagonal lines - "the jaggies." Grain, which is random, can appear as large jagged clumps. (Norman Koren)
Any information above fN that reaches the sensor is aliased to lower frequencies, creating potentially disturbing Moiré patterns. Aliasing can be particularly objectionable in Bayer sensors in digital cameras, where it appears as color bands. The ideal lens/sensor system would have MTF = 1 below Nyquist and MTF = 0 above it. Unfortunately this is not achievable in optical systems; the design of anti-aliasing (lowpass) filters always involves a tradeoff that compromises sharpness.
A large MTF response above fN can indicate potential problems with aliasing, but aliasing is much more visible when it arises from the sensor (Moiré patterns) than when it arises from sharpening (jagged edges; not as bad). It is not easy to tell from MTF curves exactly which of these effects dominates. The Nyquist sampling theorem and aliasing contains a complete exposition. (Nyquist frequency, Imatest)
Zone focusing derives its name from the fact that there is a "zone" around the focus/distance that you set on a lens, which appears "acceptably sharp." The basic zone focusing procedure for a lens with a depth of field scale is as follows:
This sets the lens into a "snapshot" mode that does not require further focusing.
Note: If you use a full frame lens (for example, a rangefinder lens) on an APS-C camera, you have to stop down the lens one f-stop (or better, 1.3 to 1.5 f-stops).
For details, see here.