Centre de Visió per Computador - Universitat Autònoma de Barcelona


Camera Calibration Methods


Measuring the spectral sensitivity of a trichromatic digital camera

We directly measured the spectral sensitivity of the Sigma SD10 sensor in order to use it as a photometric/colorimetric device. The measurements were conducted using a set of 31 spectrally narrowband interference filters (10 nm bandwidth, with peaks from 400 to 700 nm spanning the visible part of the electromagnetic spectrum), a Macbeth ColorChecker card and a TopCon SR1 spectroradiometer. This technique allowed us to characterize the camera's output in terms of a device-independent colour space (such as the CIE XYZ colour space) for each pixel. However, since the recovery of CIE XYZ trichromatic values from the camera's RGB output is an ill-posed problem, constraints had to be imposed to restrict the space of possible solutions to the most plausible ones. The camera picture was created from the raw camera output (thus avoiding any embellishments applied to the image by the camera manufacturer). Our results were tested using real data obtained from camera pictures under both natural and incandescent illumination. The camera settings used during the calibration are summarised below.
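The per-filter procedure can be sketched as follows. This is an illustrative outline only: the data values, dark-level handling and normalisation are assumptions, not the study's actual measurements, but the idea is the same one described above, namely dividing the camera's response through each narrowband filter by the radiance measured through the same filter.

```python
# Sketch: estimating a sensor's relative spectral sensitivity from
# narrowband-filter measurements (hypothetical data; the real study used 31
# interference filters, 400-700 nm, and a TopCon SR1 spectroradiometer).

peaks_nm = list(range(400, 701, 10))        # filter peak wavelengths

# Hypothetical measurements per filter: camera raw response (grey-levels,
# dark level subtracted) and the radiance measured through the same filter.
camera_response = {400: 120.0, 410: 180.0}  # ... one entry per filter
radiance = {400: 0.8, 410: 1.1}             # W m-2 sr-1 through each filter

def relative_sensitivity(response, radiance_in):
    """Per-filter ratio of camera output to incoming energy, normalised so
    the peak equals 1 -- an estimate of S(lambda) up to a scale factor."""
    ratios = {nm: response[nm] / radiance_in[nm] for nm in response}
    peak = max(ratios.values())
    return {nm: r / peak for nm, r in ratios.items()}

S = relative_sensitivity(camera_response, radiance)
```

With the toy numbers above, the 410 nm filter gives the largest response-to-radiance ratio, so the estimated sensitivity peaks there.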

A simple trichromatic camera model

We modeled our trichromatic imaging device as a set of three sensors (the first, linear module), each responding to a given portion of the electromagnetic spectrum, with spectral sensitivities that largely overlap. This first module produces outputs labeled R, G and B (see figure). The second transformation corresponds to a non-linear module which converts the sensors' output (RGB) into r, g and b values (grey-levels) to create the three final components of a colour image. At this stage we can think of R, G and B as internal representations of the total energy captured by each sensor, and r, g and b as the grey-levels that define each individual pixel in the final image.

We selected the camera settings so that the integration time (referred to here as T, the inverse of the shutter speed) is calculated automatically by the camera to keep the amount of energy falling onto the sensors within their dynamic range (most pixels are neither "too dark" nor "saturated"). This integration time T is recorded in the picture header. The figure below shows the schematics of this model.

The criterion by which the second (non-linear) module modifies the output of the first module depends on the manufacturer (who may consider primarily the cosmetic appearance of the final image and the characteristics of the final presentation device, e.g. printer, monitor, etc.). There is also some kind of illumination compensation mechanism, usually called "white balance", so that the final picture reflects approximately the observer's impression (which is defined by our brain's tendency to discount the colour of the illumination). This mechanism may be automatic or manual (i.e. the user must enter the type of illumination under which the picture was taken) and generally adds a differential "gain factor" to each of the RGB channels in the second module to approximately match the characteristics of the presentation device and provide a cosmetically acceptable image. The algorithms used by the camera manufacturer to reach the final output are in general unknown. Assuming that the overall behaviour of the sensors with intensity is the same, we can just look at one of the three sensors (e.g. the middle-wavelength or green) and use it as a template. The relationship between the green sensor's output and the light shining on it can be expressed as:
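The per-channel "gain factor" idea mentioned above can be sketched as follows. The gain values and the white patch are hypothetical, and real cameras use proprietary algorithms, but the principle is the same: scale each channel so a known white surface comes out neutral.

```python
# Sketch of a "white balance" step: per-channel gains applied in the second
# (non-linear) module. Gains are derived here from a known white patch
# (hypothetical values); actual camera algorithms are proprietary.

def white_balance_gains(white_rgb):
    """Scale each channel so the white patch becomes neutral (R = G = B),
    anchoring on the green channel as is conventional."""
    r, g, b = white_rgb
    return (g / r, 1.0, g / b)

def apply_gains(rgb, gains):
    return tuple(v * k for v, k in zip(rgb, gains))

# A white card photographed under incandescent light looks reddish:
white_under_tungsten = (200.0, 150.0, 80.0)
gains = white_balance_gains(white_under_tungsten)
balanced = apply_gains(white_under_tungsten, gains)   # -> (150.0, 150.0, 150.0)
```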

g = f(E)   (Equation 1: grey-level as a function of captured energy)

where g is the grey-level value of a given pixel plane (e.g. the green pixel plane) and E is the energy captured by the sensor (which in turn depends on its spectral sensitivity, the integration time and some geometrical factors of the camera optics such as aperture and focal length). In this analysis we will ignore small artifacts (such as spatial inhomogeneity of the image caused by lens aberrations, the MTF of the lens, etc.), assuming that R, G and B depend strictly on the incoming spectral radiance. These artifacts will be dealt with by the subsequent tuning of the model's parameters to experimental data.
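The dependence of E on the sensor's spectral sensitivity can be made concrete with a small numerical sketch: the captured energy is, up to the geometric and temporal factors mentioned above, the overlap integral of S(λ) with the incoming spectral radiance. The wavelength grid and spectra below are illustrative values, not measured data.

```python
# Sketch: the energy E captured by one sensor as the overlap integral of its
# spectral sensitivity S(lambda) with the incoming spectral radiance,
# approximated by the trapezoidal rule (illustrative spectra only).

def captured_energy(wavelengths_nm, sensitivity, radiance):
    """Trapezoidal approximation of  E ~ integral of S(lambda)*L(lambda)."""
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dlam = wavelengths_nm[i + 1] - wavelengths_nm[i]
        avg = (sensitivity[i] * radiance[i]
               + sensitivity[i + 1] * radiance[i + 1]) / 2
        total += avg * dlam
    return total

lam = [500, 510, 520, 530]        # nm
S_g = [0.3, 0.8, 1.0, 0.7]        # green-sensor sensitivity (illustrative)
L_in = [1.0, 1.0, 1.0, 1.0]       # equal-energy radiance
E = captured_energy(lam, S_g, L_in)   # -> 23.0
```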

If the green sensor sensitivity S(λ) were the same as the CIE (1931) Y colour matching function, we could write E in terms of luminance (L).

if S(λ) = Y(λ), then:

E = πLT / (4Kf²)   (Equation 2: captured energy in terms of luminance)

where E is the energy flux (Joules m-2) incident on the sensor, f is the camera's aperture ratio (the ratio between its focal length l and aperture a, represented by the camera’s F-stop number), T is the integration time as obtained from the pictures' headers, K is the maximum luminous efficacy of radiant power (683 lm/W) and L is luminance.
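Assuming Equation 2 takes the standard photometric form E = πLT/(4Kf²), the conversion is a one-liner; the luminance, integration time and f-number below are illustrative values, while the optics (35 mm focal length, 5 mm aperture) match the setup described later.

```python
import math

# Sketch of Equation 2: converting measured luminance L (cd/m2) into the
# radiometric energy flux E (J/m2) at the sensor, assuming the standard
# relation E = pi * L * T / (4 * K * f**2) and S(lambda) = Y(lambda).

K = 683.0                       # maximum luminous efficacy (lm/W)

def energy_flux(L, T, f_number):
    """Energy flux at the sensor for luminance L (cd/m2), integration
    time T (s) and aperture ratio (F-stop number) f_number."""
    return math.pi * L * T / (4 * K * f_number ** 2)

# Example with the optics used here: focal length 35 mm, aperture 5 mm -> f = 7
E = energy_flux(L=100.0, T=1.0, f_number=35 / 5)
```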

A simplification like the one proposed above would make our life much simpler because then:

g = f(πLT / (4Kf²))   (Equation 3)

g would be just a function of Luminance. But life is usually more complicated than that.

First estimation of the camera’s non-linearities

The main problem with our estimation of the relationship f is the dependence of E (the total energy captured by the sensor) on the sensor's sensitivity S(λ), which is an unknown function of wavelength. Suppose we take a series of pictures of a given target (of known radiance) at different illumination levels and plot g as a function of radiance. We still do not know what part of this radiance spectrum is captured by the sensor! If we had an equal-energy illuminant (the same spectral radiance at every wavelength) we might be able to estimate how much each R, G or B sensor collects relative to the others, but we still would not know their spectral sensitivities.

However, it is possible to estimate the shape of the relationship between sensor output and light intensity by assuming that the green sensor's Sg(λ) has a shape similar to the CIE (1931) Y colour matching function and spans a similar part of the visible spectrum. This assumption seems reasonable as a starting point, since it would make sense for camera manufacturers to approximate the output of the middle-wavelength sensor to a linear function of luminance. However, our experimental setup (see below) uses an incandescent light source, which radiates more energy in the long-wavelength (reddish) part of the visible spectrum than in the short-wavelength (bluish) part, and without knowing the corresponding sensitivities Sr(λ) and Sb(λ) it is not possible to work out their exact relationships to light intensity.

To estimate the relationship between light intensity and camera output, we illuminated a Macbeth ColorChecker colour rendition card with incandescent light (see figure on the left and detailed methods below) and obtained both the radiometric measures of the central part of the bottom row of squares and the average pixel values (with corresponding standard deviations). Ten pictures were taken at ten different integration times (ranging from 0.32 to 2.59 s). Saturated or dark (noisy) values were discarded. Following the initial assumption that the camera's green sensor collects energy in about the same spectral region as the CIE (1931) Y colour matching function, we estimated the energy flux E (in Joules m-2) captured by the sensor using Equation 2.

In our particular case, a (the camera's aperture) was 5 mm, l (the focal length) was 35 mm, and K is the maximum luminous efficacy of radiant power (683 lm/W). L is the luminance measured by the spectroradiometer and T (the integration time) was obtained from the pictures' headers. The camera settings were chosen for convenience, since a small aperture provides the greater depth of field needed in most naturalistic photography. The figure above shows the relationship between the measured energy flux (E) and the corresponding pixel grey-levels. Notice that despite the manufacturer's claim that the camera provides 16-bit output per sensor, we could not obtain values of g larger than 10,000 (less than 14 bits) because of saturation. Still, this is significantly larger than the usual 8 bits per sensor of commercial digital cameras.
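The data-cleaning step mentioned above (discarding saturated or dark readings) can be sketched as follows. The dark floor and the safety margin below saturation are illustrative assumptions; the ~10,000 ceiling is the empirical saturation level reported for this camera.

```python
# Sketch of the data-cleaning step: discard grey-levels that are saturated
# (near the ~10,000 ceiling observed, well below the nominal 16-bit range)
# or too dark to be reliable. Thresholds here are illustrative.

SATURATION = 10_000      # empirical ceiling observed for this camera
DARK_FLOOR = 50          # below this, readings are dominated by noise

def usable(grey_levels, low=DARK_FLOOR, high=SATURATION):
    """Keep only readings safely inside the sensor's working range,
    with a 2% guard band below the saturation ceiling."""
    return [g for g in grey_levels if low < g < high * 0.98]

readings = [12, 480, 3200, 9_950, 10_000]
kept = usable(readings)   # -> [480, 3200]
```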

The following function was fitted to the measured data:

[Equation 4: grey-level G of the green sensor as a function of energy flux E, with free parameters a, b, c and d]

where G is the picture grey-level for the green sensor, E is the energy flux and a, b, c and d are free parameters. As a first step, we assumed a similar relationship to be valid for the other two sensors (red and blue). This function (which is similar to the "gamma function" of CRT monitors) was chosen not only because it fits the data appropriately but also because it is invertible (see next steps).
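The invertibility requirement can be illustrated with a stand-in response function. The four-parameter, gamma-like form below is a hypothetical choice (the actual fitted Equation 4 is not reproduced here); it only demonstrates the property the calibration needs, namely that an energy flux can be recovered from a grey-level.

```python
# Hypothetical stand-in for Equation 4: a gamma-like response with four free
# parameters (a, b, c, d). The actual fitted form differs; this sketch only
# illustrates the invertibility that motivated the choice of function.

def response(E, a, b, c, d):
    """Grey-level as a function of energy flux (gamma-like, illustrative)."""
    return a * (E + b) ** c + d

def inverse_response(G, a, b, c, d):
    """Recover the energy flux that produced grey-level G."""
    return ((G - d) / a) ** (1 / c) - b

params = (9000.0, 0.001, 0.45, 20.0)   # illustrative parameter values
E = 0.0023
G = response(E, *params)
E_back = inverse_response(G, *params)  # round-trips to E
```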

Equation 4 gives us an estimation (remember that it was obtained by supposing that the spectral sensitivity of our sensor is the same as the CIE Y colour matching function!) of the dependency between grey-level and energy flux for the green sensor of the Foveon SD10. However, it will allow us to carry on with the next step of our camera calibration.

1. Macbeth ColorChecker rendition card. This is a typical image of the Macbeth card used here, illuminated by our incandescent light source (hence it looks reddish). More details...

2. The TopCon SR1 spectroradiometer. The spectroradiometer used for all radiometric measures was the SR1. It is an old model controlled by a PC through a National Instruments GPIB-32 controller card. It can provide full spectroradiometric measures in the range 380-720 nm at 1, 5 or 10 nm intervals. Full specifications downloadable here.

3. A conventional image sensor. A typical image sensor has colour filters applied to a single layer of pixel sensors in a tiled mosaic pattern (Bayer pattern). As a result, the filters let only one band of light - R (red), G (green) or B (blue) - pass through to any given pixel location.
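The tiling described above can be sketched in a few lines; the RGGB layout below is the standard Bayer arrangement.

```python
# Sketch of the Bayer (RGGB) mosaic: each pixel location records only one of
# the three channels, so the other two must later be interpolated
# ("demosaicing").

def bayer_channel(row, col):
    """Channel recorded at (row, col) in a standard RGGB tiling."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

mosaic = [[bayer_channel(r, c) for c in range(4)] for r in range(4)]
# Each 2x2 tile contains one R, two G and one B sample -- twice as many
# green samples, mirroring the eye's greater sensitivity to green.
```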

4. The Foveon(R) X3(TM) image sensor. The Foveon X3 image sensor features three layers of pixel sensors embedded in silicon, taking advantage of the fact that red, green and blue light penetrate silicon to different depths, allowing colour to be measured at every pixel.

5. Spectral transmittance of the narrowband interference filters. The spectral transmittance of the 31 narrowband interference filters is shown below. Their peaks were separated by about 10 nm. All measurements were made in 1 nm steps.


[Figure: spectral transmittance of the 31 interference filters]

6. Spectral radiance of the incandescent light. The spectral radiance of the lamp (Osram HLX 64657FGX-24V, 250 W) used in these measurements was measured in 1 nm steps. Units are W.m-2.nm-1.sr-1. All traces of IR light were removed by an IR-blocking filter placed in front of the light source.

[Figure: spectral radiance of the incandescent light source]

7. Bristol University: camera characterisation laboratory. Laboratory optical bench for characterising digital cameras. (a) Constant-current power supply. (b) Constant light source (tungsten). (c) Reflector box with white target disk. (d) Filter holder. (e) Camera/spectroradiometer location. (f) Filters. (g, h) Climpex and optical-bench apparatus ensuring stability of the equipment. Photo courtesy of George Lovell.

[Figure: the camera characterisation optical bench]

8. Calibration camera settings. The camera settings during the calibration were as follows (unless specified otherwise).

'ISO FILM SPEED' * '100'
'CAMERA SERIAL' '02002380'
'EXPOSURE TIME' (varies)***
'SHUTTER SPEED' (varies)***
'EXPNET' '0.125'
'DRIVE' '2S'
'LENS MODEL' '145'
'APERTURE' '7.02501'
'LENS FOCUS RANGE' '28 to 70'

* Refers to its analog photography equivalent.
** Does not influence the raw image output.
*** Not constant throughout the calibration.
All values were obtained directly from the picture headers.
