Complete Color Integrity
Gamma-encoding replaces each channel intensity with a power of itself, so $(R, G, B)$ is stored as $(R^\gamma, G^\gamma, B^\gamma)$. Multiplying the encoded channels by a constant $a$ gives $(aR^\gamma, aG^\gamma, aB^\gamma)$, and let $a = c^\gamma$, then we have $(c^\gamma R^\gamma, c^\gamma G^\gamma, c^\gamma B^\gamma) = ((cR)^\gamma, (cG)^\gamma, (cB)^\gamma)$. That is, scaling the encoded pixels is exactly equivalent to scaling the scene intensities by $c$, so adding or removing black survives gamma-encoding.
Adding white does not. Working directly on the encoded pixels computes $(R^\gamma, G^\gamma, B^\gamma) + k\,(R_w^\gamma, G_w^\gamma, B_w^\gamma) = (R^\gamma + kR_w^\gamma,\ G^\gamma + kG_w^\gamma,\ B^\gamma + kB_w^\gamma)$, and that result is not equal to the required $((R+kR_w)^\gamma,\ (G+kG_w)^\gamma,\ (B+kB_w)^\gamma)$.
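A quick numeric check makes the asymmetry concrete. This is a sketch in Python; the intensity values are arbitrary illustrations, not taken from any image:

    # With gamma-encoded values x**g, scaling the encoded pixels by a = c**g
    # is exactly the same as scaling the linear intensities by c, but adding
    # encoded values does not correspond to adding linear intensities.
    g = 1 / 2.2                       # typical encoding exponent
    R, Rw, c, k = 0.18, 1.0, 0.5, 0.25

    print((c ** g) * (R ** g))        # scale the encoded value...
    print((c * R) ** g)               # ...identical: "adding black" survives

    print(R ** g + k * (Rw ** g))     # "adding white" on encoded pixels
    print((R + k * Rw) ** g)          # what is actually required: different

The first pair of numbers match to machine precision; the second pair do not, which is exactly the failure the Photoshop discussion below keeps running into.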
Trying to Deal With Color Integrity in Photoshop
Adding Black and Adding White in Photoshop
When we try to "add black" or "add white" in Photoshop we immediately run into a problem.
The (R,G,B) values are typically (almost always) "gamma-encoded." To really understand
the reason for this requires mathematics, but the gist is that traditionally most digital images
have been 24-bits per pixel (which is the same as 8 bits/channel). When pixel values are
expressed directly in terms of light intensities for each of red, green, and blue and crammed
into 24 bits many visible colors get pushed out the edges and are lost. To cram the pixel
values into the 24 bits without losing visible colors the pixel values had to be compressed, the
same general idea as compressing zip files. This was done using a process called gamma-
encoding. Gamma-encoding was convenient because it was already in widespread use in the
analog video world for an entirely different reason. Unfortunately, Photoshop and most other
image editing programs generally try to work directly on the encoded pixels rather than on the
intensity values that they represent. Sometimes this actually works satisfactorily. Often, and
to varying degrees, it does not. Unlike the earlier days, Photoshop now offers the capability
for images to have more than 24 bits per pixel. Although this makes gamma-encoding
unnecessary Photoshop still uses gamma-encoding in almost all cases and worse, still works
directly on the encoded pixels.
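The space-saving argument is easy to demonstrate. A minimal sketch in Python, assuming a simple power-law encoding rather than any particular profile:

    # Quantize the darkest tenth of the intensity range to 8 bits two ways.
    # Direct linear quantization starves the shadows of codes; quantizing
    # the gamma-encoded values preserves many more distinct dark tones.
    g = 1 / 2.2
    shadows = [i / 1000 for i in range(100)]       # intensities 0 to 0.099

    linear_codes = {round(x * 255) for x in shadows}
    gamma_codes = {round((x ** g) * 255) for x in shadows}
    print(len(linear_codes))   # 26 distinct 8-bit codes for these dark tones
    print(len(gamma_codes))    # roughly three times as many survive encoding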
One place where working on gamma-encoded images does work satisfactorily in many cases
is with "adding black." Photoshop has several tools which perform a function equivalent to
adding black, but none of them are obvious. The preferred tool is Levels, where the
"highlights" portion of the tool does the correct action of adding black or removing black over
much of its range. The shadow and mid-range functions of Levels destroy color integrity and
are to be avoided.
The fact that the Levels highlights control properly adds or removes black from a gamma-encoded image is pure serendipity. Although most working profiles are gamma-encoded, some, such as sRGB, are not, and new working profiles are coming into use which use L* encoding rather than gamma encoding. For these profiles, Levels highlights does not accurately add or remove black.
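To see why the serendipity is limited to pure power-law encodings, compare ratios. Levels highlights is in effect a multiply on the encoded pixels; a sketch (Python, using the standard sRGB encoding function) shows that the multiplier which works at one intensity fails at another:

    # For a pure gamma curve, encode(c*x)/encode(x) = c**g for every x, so a
    # single Levels-highlights multiplier scales all intensities by the same c.
    # sRGB's piecewise curve (linear toe plus offset power law) has no such
    # constant ratio, so one multiplier cannot scale all intensities alike.
    def gamma_encode(x, g=1/2.2):
        return x ** g

    def srgb_encode(x):
        return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

    c = 0.5
    for x in (0.002, 0.02, 0.2, 0.9):
        print(gamma_encode(c * x) / gamma_encode(x),  # constant, ~0.73
              srgb_encode(c * x) / srgb_encode(x))    # drifts from 0.50 to 0.73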
When we try to "add white" the situation is considerably worse. The Levels shadow or
blackpoint tool might have served this purpose just as the Levels highlight tool will correctly
add and remove black in many cases. But the Levels shadows tool works directly on the
gamma-encoded image and in this case it simply does not work, often destroying color
integrity quite noticeably. For digital camera images the ACR (Adobe Camera RAW) plug-in
"shadows" slider (earlier versions) or "black" slider (later versions) appears to make the
correct adjustment for removing white, but it does not provide for adding white or for
working with tinted white. You might think that putting in a separate layer of white and then
adjusting its percentage transparency should work, but the additions of layers appear to be
done to the already gamma-encoded image, so even that does not work (see at the end of
Color I ntegrity from the Viewpoint of Basic Physics and Mathematics for details on this).
Technically it would be possible to convert the gamma-encoded image to a "linear" image, make the "add white" adjustments, and later convert back, but that is tedious. It depends upon Photoshop to correctly convert the image encoding twice as well as to make the adjustment for adding white, all without applying hidden "improvements" along the way, never a good bet.
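Written out, the round-trip is only a few lines. This is a sketch under the assumption of a pure power-law working space; it ignores the profile-conversion machinery (and the possible hidden "improvements") that make the real operation risky:

    # Add k parts of (possibly tinted) white the correct way: decode the
    # gamma-encoded pixel to linear intensities, add the white, re-encode.
    G = 1 / 2.2

    def add_white(pixel, white, k, g=G):
        linear = [v ** (1 / g) for v in pixel]               # decode
        white_linear = [v ** (1 / g) for v in white]
        flared = [x + k * w for x, w in zip(linear, white_linear)]
        return [min(x, 1.0) ** g for x in flared]            # clip, re-encode

    # Example: 10% neutral veiling flare over a mid-tone pixel.
    print(add_white([0.4, 0.5, 0.6], [1.0, 1.0, 1.0], 0.10))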
To avoid whatever baggage might be hidden in a "profile conversion," one could choose to do the gamma encoding and decoding using the Levels middle-gray adjustment, which is widely known to be a gamma adjustment. But Levels middle gray deviates from being a true gamma adjustment, and the deviations are greatest just where they will do the most harm to this decoding-encoding.
(See http://www.c-f-systems.com/Docs/ColorIntegrityCFS-243.pdf page 18 for details.)
Hidden somewhere among the plethora of Photoshop tools there may be one or two that are capable of adding or removing white accurately, but as I write this I have not found any. Also as I write this, the ColorNeg and ColorPos plug-ins do correctly add and remove black with the "lightness" slider and correctly add and remove white with the "shadow" slider. Technically, when adding white with the shadow slider you also need to add a little black with the lightness slider to account for the colored areas that the white "paint" covers. In the future ColorNeg, ColorPos, and perhaps a new plug-in will make more complete use of what has been learned in the study that led to this web page.
Color Integrity and Color Balance: A Few Examples
People who are new to the concept of color integrity tend to confuse it with color balance.
We have explained and used both of these concepts in several ways on this Color Integrity
Complete page. Here we will give just a few examples. I make no claim that these are good
photographs or that they are good illustrations of what I mean by lack of color integrity. See
the section Why We Give Few Illustrative Examples of Color Integrity to understand.
First,
The image on the left has color integrity. You may not agree with its color balance, but if you
don't it is an easy matter to change it to your liking. The image on the right does not have
color integrity. You may not agree with its color either, but it will not respond well to
changes in color balance. In fact, you would find it difficult to adjust it by any means and
really get it right.
Similarly,
Again, the image on the left has reasonably good color integrity. And again you may disagree with its color balance; in fact I often think it is too red myself. And since it has color integrity, its color balance can be easily adjusted. The image on the right does not have good color integrity. If you compare the skin tones in the two images you will find that they are very similar, while the ocean is very different in color. It would be nearly impossible to get the image on the right to look right, to look natural. For both the elephant and the beach walkers the differences are in adjustments that have been made to the entire image. In neither case was a selection made and part of the image treated differently.
Sometimes getting the color balance right can require very exacting adjustments of the colors
of an image. We present here an image in which three different color balances have been
applied:
In this image there really is no object whose precise color the viewer can be certain of, yet most people (photographers, at least) viewing these three versions will definitely prefer one to the other two. However, not all people will choose the same one of the three images, and some will think none of the three is really correctly color balanced. These images are really quite close in color balance (plus or minus 5CC), especially compared to what you see below. The principal reason for the sensitivity to color balance is the rock. We know it is a gray rock, but it likely is not precisely gray. Our differing experiences lead to disagreement as to exactly what tint it might take on. Seen by themselves rather than side by side, any one of the three images might look properly color balanced.
The next image is at the other end of the color balance sensitivity range. This is a scan of a
print which I made years ago using wet darkroom processes; the reddish spot at lower right is actually a defect of aging.
The image on the left is approximately as it would have been white balanced. The image on
the right shows a very different color balance which most people would accept or even prefer
to the white balance. Even placing these two images side by side does not make one or the
other really jump out as wrong. But we can go farther than that:
This shows that the same scene can be color balanced much more to the red and still not be
objectionable to most people, especially if not seen in comparison to the others. We do not
wish to give the impression that this is all a warm-cool, red-blue effect, so here is one that has
been shifted to the green:
This extreme case works as well as it does for two reasons. First, the color of the lighting in a
hazy sunset is ambiguous. The eye expects this and furthermore it expects the lighting to be
changing rapidly with time at sunset. But this still would not work well if the image did not
have reasonably good color integrity to start with. In each case the colors running from dark
to light blend as they would in natural lighting.
Comments on Calibrating Digital Images
Calibration is the process of characterizing the camera or the film scanner so that the images it
produces correspond to the colors and variations of color in the original scene with acceptable
accuracy. Note that this does not mean the image will be what we want as a final product,
only that it accurately represents the original scene.
We find that it is best to calibrate a camera or scanner using a target having a stepped grayscale for which the RGB pixel values are known for each step. The quality of the grayscale is important, as the steps should be uniformly gray with no tinting. The calibration results from comparing the camera's image of this grayscale with the known values for each step. This comparison involves aligning three elements for each of the three channels (R,G,B) so that the camera image and the known grayscale values best match. The three elements are 1) a white balance adjustment to account for the lighting used to take the camera image, 2) a blackpoint adjustment, and 3) a "curve" which describes how the camera or film responds to increasing light levels. Of these three elements, the first is specific to the lighting and will be different for different lighting conditions. The second typically results from a combination of several factors and will be different for images taken under different conditions. Only the third element depends just on the digital camera or film being tested, and so only the third actually is a calibration of the digital camera or film. These third corrections, the "curves," need to be applied to all images from that digital camera or that type of film. Blackpoint adjustments and white balance (color balance) differ from image to image. These cannot be corrected automatically as part of the calibration but must be determined as required for each different photographic situation, sometimes for each image. As we discuss in several other places in this document, the first adjustment, establishing color balance, is done by "adding and removing black" from the color channels and so does not affect color integrity. The second adjustment, blackpoint, is done by "adding or removing white" from the color channels and so it also does not affect color integrity. Since neither of these adjustments affects color integrity, only the third adjustment actively causes the calibration to establish color integrity in the image. Calibration methods currently in use often make the mistake of including the first and/or second element as part of the calibration. The first and second elements need to be accounted for while calculating the calibration, but should not form part of the calibration itself.
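As a concrete illustration of the alignment just described, here is a sketch in Python. The grayscale step values and channel parameters are invented, and the crude grid search stands in for a proper optimizer; the point is only that white balance and blackpoint must be solved for and set aside, so that just the curve remains as the calibration:

    import numpy as np

    targets = np.array([0.03, 0.09, 0.18, 0.36, 0.72])  # known step intensities

    # Model one channel's reading as (scale * target + black) ** g, where
    # 'scale' is the white balance, 'black' the blackpoint, and g the curve.
    def fit_channel(readings, targets):
        best = None
        for g in np.linspace(0.3, 1.0, 71):
            for black in np.linspace(0.0, 0.05, 26):
                lin = readings ** (1 / g) - black       # undo curve, blackpoint
                scale = np.sum(lin * targets) / np.sum(targets ** 2)
                err = np.sum((lin - scale * targets) ** 2)
                if best is None or err < best[0]:
                    best = (err, g, black, scale)
        return best[1:]                                 # (g, black, scale)

    # Fake red-channel readings generated with g=0.45, black=0.02, scale=1.2:
    readings = (1.2 * targets + 0.02) ** 0.45
    print(fit_channel(readings, targets))               # recovers the model

Only the recovered curve exponent would be kept as the calibration; the scale and blackpoint vary from image to image and are solved for each time.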
We have a web page <http://www.c-f-systems.com/DunthornCalibration.html> that goes into detail on grayscale calibration and describes how it can be done in Photoshop. Our ColorNeg and ColorPos plug-ins <http://www.c-f-systems.com/Plug-ins.html> for Photoshop provide several methods of grayscale calibration, using known grayscales as described above, but also using grayscales in which the target pixel values for the steps of the grayscale are not known, and even a "grayscale" selected from the natural grays available in many images. The plug-ins also have a non-grayscale calibration method called "FilmType" which exploits the fact that most people will select the image with color integrity as the most natural from a series of images with varying loss of color integrity. The curves that produce the most natural image are thus a good estimate of the calibration curves. The calibration methods in these plug-ins pre-date the work that led to this Complete Color Integrity page and will likely be improved as a result of this work, but in their current state they still work reasonably well with the new concepts.
All of our calibrations use gamma curves for the calibration of films, cameras, and scanners.
It is certainly possible to use other more complicated curves for calibration and as gamma
curves are nearly as old as photography itself, it certainly is valid to question using them in
this age of computers. But our experience is that using a more complicated form rarely results
in improved performance of the system. With film the actual calibration curves are never
known with high precision due to variations in processing and handling. The use of a system
of gamma curves is much more forgiving of variations in the film than is the use of more
complicated curves, particularly where the complicated curve shape is different for Red,
Green, and Blue. As for most digital cameras, if the image data is available in truly raw form, the "curve" does not depart much from a straight line (a gamma of 1) except possibly for the very brightest part of the image (a small range of 1/3 stop or so), and there is very little that can be done about that brightest, somewhat unstable part in any event. In fact, the main problem with digital cameras is getting the raw camera image data into a computer image file without having the program doing the transfer "improve" the image as the transfer is made.
We will have a web page dealing with this problem very soon, we hope. Meanwhile, the
literature on our ColorPos plug-in has some suggestions.
One obvious question about the above is why use just a grayscale? This is a color calibration; don't you need color patches to do an adequate calibration? In the past we have explained why using color patches is not necessary. In a digital image gray is composed of equal amounts of Red, Green, and Blue. We calibrate the camera or film so that the responses to Red, Green, and Blue are equal on each step of a grayscale. This fully defines the way the film or sensor responds to Blue light as it goes from dark to light, and the same is true for Green light and for Red light. But as you will learn below, it is also true that you cannot do a better calibration than using a grayscale, although many (probably most) digital camera makers claim to.
"Acceptable Accuracy" in Calibration
The key to proper calibration is in understanding exactly what "acceptable accuracy" must
mean for calibration to work properly. Cameras, digital or film, are simply not capable of producing a completely visually accurate record of the colors in a scene, but a camera can produce scenes with colors that are completely visually realistic. This is because the sensitivity of the eye to colored light is different from the camera's sensitivity to colored light. Generally the two see colors as nearly the same, often visually identical, but quite often there is a visible difference. An object you see as a
particular shade of blue the camera might record as a slightly more greenish blue. But this
difference is not consistent. There may be another, different object that you see as an
identical shade of blue to the first blue object, but the camera records that object as a slightly
more reddish blue than you see. It is important to realize that these differences cannot be
corrected by adjusting the camera image to be slightly less green to agree with the first object
because then the second object would have an even more pronounced reddish cast. Likewise,
making the camera image slightly less red would correct the second object, but then the first object would appear even more greenish. The phenomenon that leads to these differences
in the way the camera sees and the way you see is called metamerism. It is the same
phenomenon that sometimes causes an item of clothing to be a different color under store
lighting than it is when you get it out in daylight. If you want to more fully understand
metamerism, you can do no better than looking in the standard textbook on color, Roy S. Berns' "Billmeyer and Saltzman's Principles of Color Technology."
Just above, metamerism made the camera see as two different shades a pair of color patches that we see as matching. The reverse is just as common. The camera can see two color patches as identical in color while to us one patch may be slightly more red than the other. Again, it is impossible to "fix" this situation. If the two patches have an identical color in the camera's image, there is no way to tell whether that color came from a color patch that we see as more red, and so there is no way to know whether the color should be adjusted to be more red. That is, there is no way in general to know which sort of color patch a given color came from. If we are taking a picture of a target with colored patches, it is possible to use our special knowledge to tell exactly which color patch is which. And that is where camera calibrators get into trouble. They can "calibrate" the colored patches to be different from what the camera actually sees.
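A toy calculation shows how this happens. The five-band "spectra" and sensitivity curves below are invented solely for illustration; real spectra and sensitivities are continuous functions of wavelength:

    import numpy as np

    # Two reflectance spectra sampled in five wavelength bands.
    patch_a = np.array([0.2, 0.4, 0.6, 0.5, 0.3])
    patch_b = np.array([0.2, 0.5, 0.4, 0.6, 0.3])   # a different spectrum

    # Red sensitivity of the camera and of the eye (made-up shapes).
    camera_red = np.array([0.0, 0.2, 0.3, 0.4, 0.4])
    eye_red    = np.array([0.0, 0.0, 0.1, 0.6, 0.4])

    print(camera_red @ patch_a, camera_red @ patch_b)  # 0.58 0.58  identical
    print(eye_red @ patch_a, eye_red @ patch_b)        # 0.48 0.52  different

The camera records the two patches identically, so no adjustment applied to the camera's color can restore the difference the eye sees.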
Metamerism is a natural phenomenon. The eye routinely has to deal with metameric effects
as the character of the light changes in a scene. Even daylight has large changes in character
(technically "in spectrum") as cloud conditions change, as natural reflections alter the light, as
time of day alters the light, etc. And so metameric color shifts are usually seen as natural
unless the shift is uncommonly large or there is direct comparison between the image and the
original scene. Another property of metameric color shifts is that they are consistent whether
the patch is brightly or dimly lighted. If the camera sees a particular blue object in bright light
as slightly redder than you do, then in dim light from the same light source it will also see the
same object as slightly redder than you do. This consistency in the light/dark behavior of
metamerism is important. As a result metamerically shifted colors will behave as is required
for color integrity and will look natural even though they may not match the colors of the
original scene. So, for an image to have color of "acceptable accuracy" the key is to have the
image give visually the same color for a given colored object whether the object is brightly or
more dimly lit in the scene. This means that in calibration of a camera or film, metameric
effects can be ignored and in fact it is best to do so. As we have shown above, trying to
compensate for metameric effects at best just decreases the metameric color shifts of some
colors at the expense of increasing the metameric shifts of other colors. Worse, as we show in
the section on profiling, some methods of "calibration" that are in common use can actually
destroy color integrity in such a way that it is irretrievable.
Calibration and Color Profiling
In the four years since this document was originally written I have made extensive
studies of the world of digital camera calibration, working with Christoph Oldendorf.
To our amazement we have found that the standards used for the calibration of
digital cameras are seriously flawed. These flaws have been carried through into the
calibrations of all or nearly all digital cameras. Being seriously flawed, the
calibrations have failed to perform well, resulting in a chaotic situation in which all the
different parties involved have applied myriad ad hoc "corrections" to the basic
calibration in seeking acceptable results. This section and the section below were written long before this was known to me. At this time I am not prepared to disclose the details of our research into digital camera calibration and so I cannot rewrite these sections based upon what we now know. At the same time I do not want to delete these sections and distort the past record of my research. The reasoning and concepts in these sections are sound but are based on the premise that the published technical descriptions and standards of digital camera calibration accurately modeled what was being done in practice. That premise has proven to be false. Therefore I
am leaving these sections as originally published, but with this added paragraph so
that the reader will not be misled.
In the present state of digital imaging nearly all attempts at calibration of digital cameras or
film are referred to as profiling. This is because most of the methods used are based on the
ICC Color Management (http://www.color.org) system in which profiles are used to
characterize various imaging devices. There are very serious problems with this approach.
This is not a fault of the ICC Color Management system itself, which is in fact well-designed,
ingenious, and very successfully applied in color managing displays, printers, and commercial
printing. The problem is that when it comes to cameras the ICC system is widely
misunderstood and has often been grossly misapplied.
The Pitfalls of Using Profiling as Camera Calibration
Given a digital image, that is, a computer file representing an image, the purpose of the ICC Color Management system is to make that file visually appear as similar as possible when rendered on various devices: CRT displays, LCD displays, ink-jet printers, laser printers,
commercial printing services, etc. It does this by using profiles that describe how each such
device responds to image data to produce color images. "Device" as used here is a very rigid
term. When you use a different type of paper in a printer it becomes a different "device" and
requires a different profile even though you think of it as the same printer. As a CRT ages its
behavior changes so that it becomes a different "device" and needs a different profile to make
its displayed colors acceptably accurate. ICC profiling also extends to scanners. Scanners
have a built-in light source that is controlled by the scanner. A page that is scanned in a
scanner today can be expected to produce the same result if it is scanned in that same scanner
next week. So, the scanner is a "device" and can be profiled. If we use ICC profiles to scan a
page on a scanner, view it on a display, and print it on an ink-jet printer we can expect that the
displayed and the printed page will look very much like the original paper page that was
scanned.
In trying to extend this to cameras we first find that a camera is really not a "device." The
source of illumination is not constant as is the case with scanners. While some photographic
studio or photographic laboratory situations can be exceptions, for general photography the
light can come from many different sources at many different intensities and can be altered by
sky conditions, reflection and in many other ways before it reaches the subject. Properly
applied, the ICC system would require a different profile for each one of these myriad light
sources. Since that is impossible, the typical approach has been to pick one light source and
profile for just that one source. This is given an official appearance by choosing an official-looking light source, typically D50 or D65, and going to great pains to be exact about it. But neither D50 nor D65 illumination matches any real lighting condition, and so any real image that is "calibrated" by using a D50 camera profile will not be accurate and will be very inaccurate for images taken in, say, daylight. To compensate for this, fudge factors that have nothing to do with ICC profiling are introduced, but the whole package is still called "profiling" so that it appears to be under the wing of the ICC. Apparently this makes everyone feel better.
This would be bad enough, but there is another current trend that is even worse. In dealing
with cameras, film or digital, or scanners we are (nearly) always dealing with systems of three
primary colors, additive Red, Green, Blue or sometimes the subtractive Cyan, Magenta,
Yellow set. The ICC has profile formats which are designed to deal with the special
considerations required for three-primary systems. But the big triumph of ICC Color
Management has been in dealing with printers, both desktop and large printing service systems. Color printers are commonly based on multi-color systems rather than three-primary systems. The most common multi-color system is CMYK (Cyan-Magenta-Yellow-blacK), but printers often use even more colors. The ICC has profile formats which are designed to deal with the
special considerations of these multi-color systems. These multi-color profiles naturally have
a lot of control over color matching, and can allow quite precise choice of inks which best
match the various colors and tones throughout much of the visible range. This is possible
because when there are more than three colors of ink, some visible colors and tones typically
can be represented by many different combinations of the inks and in other cases colors and
tones can be represented that would not be possible if just three of the ink colors were used.
Where multiple choices are possible the choice may be made on economic grounds.
At the start, traditional standard color charts were used for checking and calibrating digital cameras. These charts had, in one form or another, numerous patches of color as well as a gray scale, and colorimetric measurements had been made on the various color patches which defined each of their colors, and thereby which RGB values each patch ideally should have in a computer representation. When using specified lighting and one of these color charts to calibrate a digital camera to a three-primary-color ICC profile, it was possible to get a reasonably close match to most of the color RGB values, but naturally, due to metamerism, some patches might closely match while some others might be visibly different. At some time in the development of digital cameras someone had the bright idea of using the ICC multi-color profiles intended for printers with these three-primary RGB systems. By choosing
one of the most detailed multi-color profiles they found they could actually "calibrate" the
camera so that every one of the color patches and gray scales was an exact match! Thus the
color chart would be duplicated exactly and of course the "superior" color ability of the
system became a major selling point. The problem is that this approach is completely
fallacious. Given correct lighting conditions the resulting profile will indeed make images of
the color chart that are perfect matches to the original. At the same time, the profile will
make the color response of the camera worse for nearly everything other than the color chart
itself. With a camera profiled in this patchwork-quilt manner colors can shift several times going from darker to brighter illumination; color integrity is not only lost but becomes nearly impossible to regain.
The serious problem described above really should be obvious, at least to persons with a mathematical background, as presumably would be the case for persons setting up profiling calibrations. We can be generous and assume it really must not be that obvious rather than
conclude that we have a lot of disingenuous people who have programmed calibration profiles
for digital cameras. Being generous, let me try to explain the problem. Nearly all digital
cameras have a single sensor chip with an array of identical light sensitive cells. To make the
chip color sensitive, a tiny red, green, or blue filter is placed over each sensor cell. Since all
the sensors are identical that means that the response to light as it goes from darker to lighter
is identical for all cells regardless of color. In addition, the chip is normally designed so that
the output it produces is proportional to the light intensity it sees (that is, it is linear) over all
or nearly all of the range of interest. Therefore, any calibration should deviate very little from
linear for all three color channels. Furthermore, there is no crosstalk between the color
channels that is of a predictable nature. The red channel simply measures the amount of red
light and that measurement is not influenced in any predictable way by the amount of light
that the blue and green sensors see. For this reason any "calibration" which tries to adjust the
red value of a pixel differently for different readings in the corresponding blue and green
pixels is a false calibration. This in effect changes the near-linear "curve" of the red (or other
color) channel to what might be described as lumpy, and with a lumpiness that varies
according to the color levels sensed in the other channels. This plays havoc with color
integrity, which demands a smooth transition from dark to light.
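A sketch of the difference, with invented numbers. A legitimate calibration applies one smooth curve to each channel, so a single scene color keeps consistent channel ratios from dark to light; a cross-channel "correction" makes the ratio wander up and down the tonal scale:

    import numpy as np

    ramp = np.linspace(0.05, 0.8, 4)        # one scene color, dim to bright
    r, g = 0.9 * ramp, 0.5 * ramp           # fixed chromaticity: r/g = 1.8

    smooth_r = r ** 0.95                    # per-channel 1-D curve
    lumpy_r = r * (1 + 0.3 * np.sin(6 * g)) # red nudged according to green

    print(smooth_r / g)  # drifts smoothly and monotonically along the ramp
    print(lumpy_r / g)   # wobbles up and then down: color shifts with level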
This can be confusing because the Bayer interpolation usually used on digital camera images does adjust the red value of a pixel differently according to the readings of nearby blue and green pixels. But this is done according to the geometric relationship of several surrounding pixels and the values each pixel is sensing. Geometry within the image is the key. A specific pair of blue and green pixel values will influence the red pixel value in different ways depending upon the geometry of placement of the pixel values within the image. The calibration form we treat above requires that the red pixel be changed the same way any time a particular (R,G,B) color is found. In fact Bayer interpolation generally does damage to color integrity, but Bayer interpolation has its largest influence at edges within the image and does little to the larger areas of similar color to which the eye is most sensitive. Bayer interpolation is necessary and the effect it has is not very harmful visually.
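A toy example of the geometric nature of the adjustment, using bilinear demosaicing (the simplest scheme) on an RGGB mosaic; the sensor readings are invented:

    import numpy as np

    # 4x4 raw mosaic, RGGB pattern: rows alternate R G R G / G B G B.
    raw = np.array([[10., 5., 12., 6.],
                    [4., 2., 5., 3.],
                    [11., 6., 14., 7.],
                    [5., 3., 6., 4.]])

    # Missing red at two green sites, filled from the red neighbors that the
    # geometry provides: left/right on a red row, above/below on a blue row.
    red_at_green_row0 = (raw[0, 0] + raw[0, 2]) / 2   # -> 11.0
    red_at_green_row1 = (raw[0, 0] + raw[2, 0]) / 2   # -> 10.5
    print(red_at_green_row0, red_at_green_row1)

The red estimate depends on where the pixel sits in the mosaic, not on a fixed rule keyed to the pixel's own (R,G,B) value, which is what distinguishes it from the false calibration above.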
Finally, and primarily because I get asked about this, a comment on the transfer matrices
which play a part in ICC profiles. These are linear transformations which are intended to
accurately convert colors from their expression in one set of three primaries, r, g, b, to another
set of three primaries r', g', b'. We will assume for the moment that the primaries r', g', b' are
known for the target system. In order for the transformation to be meaningful, the three
primaries r, g, b of our camera or film also need to be known. They aren't. So, the use of
transfer matrices with film or digital cameras is basically meaningless and can be harmful.
To explain, a CRT display uses phosphors which radiate the red, green, and blue colors each
at a very specific wavelength of light. This direct connection between the primaries and
specific wavelengths gives the CRT very specific values of r,g,b. With film and digital
cameras there is no direct connection between the primaries and a specific wavelength of
light. For example, the red filters used in a digital camera pass a wide band of wavelengths of
light throughout the red region of the spectrum. When single-wavelength "primaries" are given for a digital camera they represent averages (integral averages) over the band of wavelengths actually passed by the filters, and moreover the average is weighted according to a standard light source (D50, for example). If the many different light sources which the camera will actually experience were used in the calculation instead of D50, the result would be a whole collection of different r,g,b "primary" values, each of which is as valid a characterization of the camera primaries as any other. All of the resulting r,g,b systems are very similar, and it makes much more sense to just use the target r',g',b' as the primaries for the camera rather than converting to r',g',b' from some artificially averaged r,g,b for the camera that is bound to be incorrect most of the time. The situation is basically the same for film cameras; see <http://www.c-f-systems.com/Docs/ColorNegativeFAQ.html#CM> for more detail.
By this I do not mean that transforms via transfer matrices are never useful. In situations where the difference in primaries is larger and/or the source has a fixed lighting system, transfer matrices are very useful. But for a typical general-use camera or film the transform is wishful thinking at best and may well serve to degrade the image rather than improving its accuracy.
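For reference, this is all a transfer matrix does. The 3x3 matrix below is purely illustrative, not any real camera's or standard's matrix:

    import numpy as np

    # A transfer matrix is a fixed linear map taking linear (r, g, b) in one
    # set of primaries to (r', g', b') in another. It must be applied to
    # linear intensities, not to gamma-encoded pixel values.
    M = np.array([[0.90, 0.10, 0.00],
                  [0.05, 0.90, 0.05],
                  [0.00, 0.10, 0.90]])

    rgb_linear = np.array([0.4, 0.3, 0.2])
    print(M @ rgb_linear)
    # The argument above: for a general-use camera no single M is correct,
    # because the effective source primaries shift with the scene illuminant.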