How to Prepare Images For Publication – Part One

The post How to Prepare Images For Publication – Part One appeared first on Digital Photography School. It was authored by Herb Paynter.

Images viewed on computer monitors don’t always match what comes out of inkjet printers. This is because the color pixels captured by digital cameras are defined quite differently from the pixels portrayed on the computer monitor, and the monitor’s pixels differ significantly from the ink patterns that are literally sprayed onto the paper.

But even though inkjet printers and printing presses both use CMYK inks, images printed on inkjet printers usually don’t match the appearance of the same images printed in publications. Why?

Color images are displayed differently on each device because the technologies for each medium use different processes; monitors (left), inkjet (middle), and halftones (right).

The answer to this mystery eludes many of today’s magazine publishers and even many publication printers. It is a problem that the digital imaging community (photographers, image editors, and pre-press operators) has struggled with for decades. Color Management Professionals (CMPs) undergo rigorous color science training to learn how to maintain a consistent look in color images reproduced on different substrates and across a variety of printing processes. Since you may want to see your images in print, we’ll look at a synopsis of the challenges and some surefire ways to produce the results you’re looking for.

First and foremost, cameras and monitors capture and project color images as RGB light, but all ink-based printers must convert these RGB colors into CMYK colors behind the scenes. Even though you send RGB files to your inkjet printer, the printer doesn’t use RGB inks to produce the colors in the prints. RGB colors are for projecting color; CMYK colors are for printing it.

Projected colors are always viewed in RGB, while printed colors are always produced from some formulation of CMYK inks. That’s simply how color science works. Printers don’t print RGB colors directly. While you send RGB images to your inkjet printer, it converts those colors into some form of CMYK during the printing process. Even when you send an RGB file to an eight-color printer, the base CMYK colors are merely augmented by small amounts of Photo Cyan, Photo Magenta, Red, and Green inks. One notable exception, the Océ LightJet, did produce color prints directly from RGB, but it didn’t use printing inks… it was a photographic printer that exposed photographic paper and film with RGB light. That printer is no longer manufactured.

Each printing process utilizes a unique pattern to express the variable tones between solid and white.

Vive la différence

The inkjet printing process is completely different from the publication printing process. As a matter of fact, the two systems are overtly dissimilar. If your images are headed for print and you are not sure which printing process will be used, you might be headed for trouble. Here’s why.

The possible surfaces for inkjet printing vary widely and include everything from paper to wood, from metal to fabric, and virtually every surface and texture in-between. To accommodate this range of printing applications, inkjet “inks” are liquid rather than solid, so they can be applied to varied surfaces and substrates.

Dots versus spots. The peanut butter consistency of press inks and the well-defined shapes of the halftone dots used by the printing industry differ significantly from the liquid inks and less defined “micro-dot” dithering used by the inkjet printing process.

The color spots produced by inkjet printing systems may include more than a dozen colors and are liquid so they can coat almost any surface. Printing press dots are well-defined, symmetrical shapes with a much thicker consistency to withstand high-speed transfer to paper. Both inks are translucent because they must blend to create other colors.

The extremely small inkjet droplets appear more like a mist than a defined pattern; each pixel value (0-255) generates a metered amount of microscopic spots so small that the human eye perceives them as continuous tone. Because of the smoothness of the tones and gradations of color, inkjet images require a bit of sharpening to deliver detail (detail, remember, is a product of contrast, and contrast is not a natural inkjet strength).

Dot structure of halftone images (left) and color dither pattern (right).

Both the inkjet and publication systems convert the RGB (red, green, and blue) values of each pixel into equivalent CMYK (cyan, magenta, yellow, and black) values before printing those colors onto paper. However, after the color conversion, the two processes take decidedly different paths to deliver ink on paper.
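The core of that RGB-to-CMYK conversion can be sketched with the classic naive formula below. Real conversions go through ICC color profiles with black-generation and gray-component-replacement curves, so treat this purely as an illustration of the idea:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0.0-1.0) conversion.

    Production conversions use ICC profiles; this shows only the
    core idea: ink coverage is the inverse of projected light,
    and the shared gray component moves into the black channel.
    """
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0  # pure black: 100% K ink
    # Normalize and invert: more light means less ink
    c = 1 - r / 255
    m = 1 - g / 255
    y = 1 - b / 255
    # Pull the shared gray component out into the black (K) channel
    k = min(c, m, y)
    c = (c - k) / (1 - k)
    m = (m - k) / (1 - k)
    y = (y - k) / (1 - k)
    return c, m, y, k

# A pure red pixel prints with magenta and yellow ink, no cyan
print(rgb_to_cmyk(255, 0, 0))  # (0.0, 1.0, 1.0, 0.0)
```

The `min(c, m, y)` step is the simplest possible black generation; presses tune this curve per paper stock, which is one reason inkjet and publication CMYK conversions diverge.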

While printing presses use grid-based, well-defined dots that are impressed into paper surfaces, inkjet printers utilize micro-dot patterns sprayed onto surfaces. The same image may appear in several different forms during the reproduction process. Original image (far left), digital pixels (near left), printed halftone (near right), and inkjet dither (far right).

Publications use the geometric structure of halftone dots to interpret pixel values as tonal values on paper. Each pixel produces up to four overprinted color halftone dots. These halftone dots translate darker values of each color into larger dots and lighter values into smaller dots. Across the full darkest-to-lightest range, the dots vary in size depending on the press and paper being used.
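The relationship between a pixel value and its halftone dot is simple geometry: the dot’s *area* matches the ink coverage the tone calls for, so its diameter grows with the square root of coverage. A minimal sketch (the function name and unit cell are illustrative):

```python
import math

def halftone_dot_diameter(pixel_value, cell_size=1.0):
    """Diameter of a round halftone dot for an 8-bit pixel value.

    Darker pixels (lower values) get larger dots. Dot area is
    proportional to ink coverage, so diameter grows with the
    square root of coverage. cell_size is the halftone cell width.
    """
    coverage = 1 - pixel_value / 255          # 0 = paper white, 1 = solid ink
    area = coverage * cell_size ** 2          # area of the cell to be inked
    return 2 * math.sqrt(area / math.pi)      # diameter of a round dot

# Middle gray (value 128) needs a dot roughly 0.8 cells wide;
# paper white (255) needs no dot at all
print(round(halftone_dot_diameter(128), 3))
print(halftone_dot_diameter(255))
```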

To avoid the visually annoying conflict that occurs when geometric grids collide (called a moiré pattern), each CMYK grid is set at a carefully calculated angle. One advantage inkjet images have over halftone images is that the resolution required for inkjet prints is significantly lower than the resolution required by the halftone process used for publication images.
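The screen angles used in conventional offset printing are standardized: black takes the least visible 45° position, yellow (the faintest ink) takes the most visible 0° position, and the strong inks sit 30° apart to keep moiré at bay:

```python
# Conventional offset-press screen angles, in degrees from horizontal.
# Black gets 45 degrees (least visible to the eye); yellow, the
# faintest ink, gets 0 degrees, where a visible pattern matters least.
SCREEN_ANGLES = {"cyan": 15, "magenta": 75, "yellow": 0, "black": 45}

# The three strong inks (cyan, black, magenta) sit 30 degrees apart,
# the separation that keeps their grids from producing visible moiré
strong = sorted(SCREEN_ANGLES[ink] for ink in ("cyan", "black", "magenta"))
print([b - a for a, b in zip(strong, strong[1:])])  # [30, 30]
```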

However, the most important issues to address with print have to do with color fidelity and tonal reproduction. The difference in the way inkjet images and publication images are prepared makes a huge difference in the way the images appear when they come out the delivery end of the process.

Inkjet printers are like ballet dancers, while printing presses are more like Sumo wrestlers; chamber music versus a thunder roll. One is quiet, graceful, and articulate; the other noisy, violent, and powerful.

The biggest difference between the two processes can be seen in the highlight and shadow areas. Inkjet inks are sprayed onto substrates through a very controlled matrix of 720-1440 spots per inch in a slow, measured, inches-per-minute process. Publication presses smash ink into the paper under extreme pressure, at speeds measured in images per minute, translating the entire tonal range into a limited geometric matrix of just 150 variable-size dots per inch. Publication presses are huge, high-speed rotary rubber stamps.
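That resolution gap can be quantified. A standard rule of thumb says a halftone screen can render roughly (device dpi ÷ screen lpi)² + 1 distinct gray levels, because each halftone cell is (dpi/lpi) device spots on a side:

```python
def halftone_gray_levels(printer_dpi, screen_lpi):
    """Approximate gray levels a halftone screen can reproduce.

    Each halftone cell is (dpi/lpi) device spots on a side, so it
    can be filled in (dpi/lpi)**2 steps, plus one for empty (white).
    """
    cell = printer_dpi // screen_lpi
    return cell ** 2 + 1

# A 2400 dpi platesetter driving a 150 lpi screen yields 257 levels,
# just enough for the full 256-tone range of an 8-bit channel
print(halftone_gray_levels(2400, 150))  # 257
```

This is why publication output devices image at very high dpi even though the visible dot grid is only 150 lines per inch: the extra resolution is spent building tonal steps, not detail.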

Inkjet printers carefully step the paper through the machine in an extremely precise manner while the printing press shows no such restraint. Presses display an amazing ability to control the placement and transfer of images in spite of the blazing speed of the process.

You might be able to dress a hippopotamus in a tutu, but you can’t expect it to pirouette. There are simply physical limitations. At production speeds, shadow details suffer, delicate highlights tend to drop off rather abruptly, and the middle tones print darker. The printing industry is aware of these dot gain issues and compensates for them with G7 process controls and compensation plate curves, but the beast remains a beast.

There’s a pretty good chance that both color and tonal detail will be unwittingly lost in the printing process if nominally prepared images are sent to press. Having spent many years of my career in both photo labs and the pressroom, I can assure you that detail in both the lightest portions and darkest areas (and placement of the middle tones) will need special attention to transfer all the detail on the press. Highlights get flattened, and shadows get closed more easily because of the high speeds and extreme pressures involved.

This means that images destined for print must exhibit more internal contrast in the quarter tones (between middle tones and highlights) and three-quarter tones (between middle tones and shadows), as well as a slight adjustment to the middle tones, to reproduce at their best. I’m sure I will hear some disagreement about this from some publishers, but as a former pressman, I know that images that don’t get this special attention usually print somewhat flat.
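The adjustment described above amounts to a tone curve that leaves the endpoints and middle gray alone while bending the quarter and three-quarter tones. Here is a minimal sketch of that idea for 8-bit values; the boost amounts and the function name are arbitrary illustrations, not a published press curve:

```python
def press_compensation(value, quarter_boost=10, shadow_lift=12):
    """Bend an 8-bit tone toward press-friendly internal contrast.

    A rough sketch: open up the quarter tones (above middle gray)
    and lift the three-quarter tones (below middle gray) so that
    highlight and shadow detail survive dot gain. The weight is a
    parabola that peaks mid-way through each half and falls to
    zero at black, middle gray, and white, leaving them unchanged.
    """
    if value > 128:   # quarter tones: push lighter values lighter
        t = (value - 128) / 127
        return min(255, round(value + quarter_boost * t * (1 - t) * 4))
    else:             # three-quarter tones: lift darker values
        t = value / 128
        return min(255, round(value + shadow_lift * t * (1 - t) * 4))

print(press_compensation(192))  # quarter-tone center: 192 -> 202
print(press_compensation(64))   # three-quarter center: 64 -> 76
print(press_compensation(128))  # middle gray untouched: 128
```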

The image on the left might look good as a print, but it would reproduce poorly on a press. The shadow areas would get even darker and lose all detail. The image on the right will darken slightly in the lower tones producing an excellent result in print. White balance is also critical in publication printing. Compensating for the unavoidable effects of the press always pays off.

There is a cardinal rule in printed publications: even the whitest whites and darkest darks must contain dots. The only “paper white” should be specular highlights (light reflecting from glass or chrome), and even pure black doesn’t print solid black; everything contains dots. Unlike inkjet printers, printing presses cannot hold (or print) dots smaller than a 2-3% value (roughly level 247 of 255). Dots smaller than this never make it onto the paper. This is why additional internal contrast is needed at both ends of the tonal range.
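Digitally, that cardinal rule amounts to compressing the tonal endpoints so nothing falls outside the press’s printable dot range. A sketch, assuming 8-bit values and a 3%-97% printable range (actual limits vary by press and paper):

```python
def clamp_to_printable(value, min_dot=0.03, max_dot=0.97):
    """Remap an 8-bit tone so every area carries a printable dot.

    The whole range is compressed so no tone calls for a dot below
    min_dot (which the press cannot hold) or above max_dot (which
    would plug up solid on press).
    """
    lightest = round(255 * (1 - min_dot))   # 3% minimum dot -> level 247
    darkest = round(255 * (1 - max_dot))    # 97% maximum dot -> level 8
    return round(darkest + value / 255 * (lightest - darkest))

print(clamp_to_printable(255))  # 247: "white" still carries a 3% dot
print(clamp_to_printable(0))    # 8: even black keeps a hint of open paper
```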

Photographers certainly know their way around cameras and software (Lightroom or Photoshop), and they understand color and tonality as it relates to mechanical prints. They are also accustomed to references to RGB (red, green, and blue) colors and may even understand how inkjet printers work, but very few are familiar with the behavior and limitations of huge printing presses. The analogy of ballet dancers versus Sumo wrestlers is an accurate one.

Photographers understand fine art prints and image editing software though few see their photos through the eyes of pressmen. But perhaps they should!

There is a significant difference between preparing photos for inkjet printers and preparing images for publication presses. The publication RGB-to-CMYK conversion differs significantly from the inkjet conversion in color gamut, image saturation, and tonal reproduction.

When an image is captured, it can potentially possess more than 4,000 tones per (RGB) channel (4,096 in a 12-bit capture). That’s a whole bunch of possible colors. But the sobering factor is that every printing process reduces those 4,000-plus tones down to a mere 256 tones per channel before any ink hits the paper. Obviously, the post-processing tone and color shaping of camera images is super-critical. Simply put, how the photographer shapes all that data before it goes to print determines how much detail and clarity makes it onto the pages of the magazine.
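The arithmetic behind that reduction is simple bit-depth math; a 12-bit capture holds 4,096 tones per channel, while print output keeps only 256:

```python
RAW_BITS, PRINT_BITS = 12, 8

raw_tones = 2 ** RAW_BITS      # 4096 tones per channel in a 12-bit RAW
print_tones = 2 ** PRINT_BITS  # 256 tones per channel at output

# Every printed tone must stand in for 16 captured tones, so the
# editor's curve decides which distinctions survive and which collapse
print(raw_tones, print_tones, raw_tones // print_tones)  # 4096 256 16
```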

Once again, the top picture would print great on an inkjet printer but would lose very critical detail on a press. Compensation for the unavoidable effects of the press is always advised. In Part 2 of this series, I’ll show you exactly what adjustments were made to this photo. Additional sharpening also helps compensate for the slight blurriness of the halftone process.

The old adage “start with the end in mind” comes clearly into focus here. No matter how much data is captured by the digital camera, the publication press is the ultimate arbiter of tones and colors, and deserves the loudest voice in the conversation. The color gamut of CMYK conversion is even more restricted than the basic sRGB gamut of Internet images, making this post-processing exercise perhaps the most precarious scenario of them all. If you ignore the special attention needed for magazine images, don’t expect the images to pop off the page. Ignore the press’s advice, and you’ll pay the price in both detail and color reproduction.

In the follow-up article entitled “Preparing Images for Publication Part 2,” I’ll reveal the literal “trade secrets” for producing great publication images.




Understanding Imaging Techniques: the Difference Between Retouching, Manipulating, and Optimizing Images

The post Understanding Imaging Techniques: the Difference Between Retouching, Manipulating, and Optimizing Images appeared first on Digital Photography School. It was authored by Herb Paynter.

Understanding Imaging Techniques

Three distinct post-production processes alter the appearance of digital photographs: Retouching, Manipulating, and Optimizing. These terms may sound similar enough to be synonymous at first glance, but they are entirely different operations. Once you understand the difference between these three processes, your image editing will take on new meaning, and your images will deliver powerful results.

Image retouching

Photo retouching is image alteration that intends to correct elements of the photograph that the photographer doesn’t want to appear in the final product. This includes removing clutter from the foreground or background and correcting the color of specific areas or items (clothing, skies, etc.). Retouching operations make full use of cloning and “healing” tools in an attempt to idealize real life. Unfortunately, most retouching becomes necessary because we don’t have (or take) the time to plan out our shots.

Our brain tends to dismiss glare from our eyes, but the camera sees it all. A slight change of elevation and a little forethought can save a lot of editing time.

Planning a shot in advance will eliminate much of this damage control, but it involves a certain amount of pre-viewing: scouting out the area and cleaning up items before the camera captures them. This includes “policing” the area… cleaning mirrors and windows of fingerprints, dusting off surfaces, and general housekeeping chores. It also includes putting things away (or in place), previewing and arranging the available lighting, supplementing that lighting with flash units and reflectors where required, checking for reflections, etc.

Benjamin Franklin coined the phrase “an ounce of prevention is worth a pound of cure,” which pretty much sums up these cleanup chores. We also use the phrase “preventive maintenance”: fixing things before they break and need repair.

Admittedly, we don’t often have the luxury of time required to primp and polish a scene before we capture it, and retouching is our only option. However, sometimes all we need to do is evaluate the scene, move around and see the scene from another angle, or wait for the distraction to move out of the scene.

Sometimes a small reposition can lessen the amount of touchup and repair needed.

We can’t always avoid chaos, but we can limit the retouching chore with a little forethought. It takes just a fraction of a second to capture an image, but it can take minutes to hours to correct the problems captured with it.

Image manipulation

Manipulation is a bit different, though it is occasionally compounded with retouching. When we manipulate a photo, we truly step out of reality and into fantasyland. When we manipulate an image, we override reality and get creative: moving or adding elements in a scene, or changing its size and dimensions. We become a “creator” rather than simply an observer of a scene. This is quite appropriate when creating “art” from a captured image, and is ideal for illustrations, but perhaps shouldn’t become a regular post-capture routine.

Photo-illustration is an excellent use of serious manipulation and can be quite effective for conveying abstract concepts.

Earlier in my career, I worked as a photoengraver in a large trade shop in Nashville, Tennessee, during the early days of digital image manipulation. The shop handled pre-press chores for many national accounts and international publications. On one occasion in 1979, we were producing a cover for one of these magazines. On the cover was a picture of Egypt’s President Anwar Sadat set against one of the great pyramids. Unfortunately, the pyramid was positioned where it interfered with the titles on the magazine’s cover.

While this is not the exact picture used in the magazine, you see the challenge.

The magazine’s Art Director sent instructions for us to shift the pyramid in the picture so that it would not interfere with the titles. Moving that thing was an amazing feat back then. Normal airbrushing would have left obvious evidence of visual trickery, but digital manipulation opened a whole new potential for near-perfect deception. We were amazed at the possibilities but a bit nervous about the moral implications of wielding this power.

This venture was accomplished (over a decade before Photoshop) on an editing system called a Scitex Response, a workstation backed by a very powerful minicomputer. Nobody outside that small building knew that, from Nashville, we had pushed an Egyptian pyramid across the desert floor; it wasn’t revealed until years later. Shortly thereafter, digitally altered images began to be barred from use as evidence in courts of law. Today, this level of manipulation lets you routinely alter reality and play god on a laptop, sitting on a park bench.

Manipulation is powerful stuff and should be used with serious restraint, not so much for legal reasons, but because it diminishes our regard for nature and reality. Fantasyland is fun, but reality is where we live. We quite regularly mask skies and replace boring clouds with blue skies, dramatic clouds, and even sunsets, all without hesitation. We can move people around a scene and clone them with ease using popular photo editing software. Reality has become anything but reality. Photo contests prohibit manipulation in certain categories, and though a skillful operator can cover their digital tracks and fool the general public, savvy judges can usually tell the difference.

Typical manipulation: a clouded sky added to replace lost detail.

Personal recommendation: keep the tricks and photo effects to a minimum. Incorporating someone else’s pre-set formulas and interpretations into your photos usually compromises your personal artistic abilities. Don’t define your style by filtering your images through someone else’s interpretation. Be the artist, not the template. Take your images off the assembly line and deal with them individually.

Image optimization

Photo optimization is an entirely different kind of editing, and the one I practice in my professional career. I optimize photos for several city magazines in South Florida. Preparing images for the printed page isn’t the same as preparing them for inkjet printing. Publication printing uses totally different inks, transfer systems, papers, and production speeds than inkjet printers do. Each process requires a different distribution of tones and colors.

Since my early days in photoengraving, I’ve sought to squeeze every pixel for all the clarity and definition it can deliver. The first rule (of my personal discipline) is to perform only global tonal and color adjustments. Rarely should you have to rely on pixel editing to reveal the beauty and dynamics of a scene. Digital photography is all about light. Think of light as your paintbrush and the camera as nothing more than the canvas your image is painted on. Learn to control light during capture, and your post-production chores will diminish significantly. Dodging, burning, and other local edits should be needed rarely, if at all.

Both internal contrast and color intensity (saturation) were adjusted to uncover lost detail.

Even the very best digital camera image sensors cannot discern what is “important” information within each image’s tonal range. The camera’s sensors capture an amazing range of light from the lightest and the darkest areas of an image, but all cameras lack the critical element of artistic judgment concerning the internal contrast of that light range.

If you capture your images in RAW format, all that amazing range packed into each 12-bit image (4,096 tones per channel, nearly 69 billion possible colors) can be interpreted, articulated, and distributed to unveil the critical detail hiding between the shadows and the highlights. I’ve edited tens of thousands of images over my career, and very few couldn’t reveal additional detail with just a little investigation. There are five distinct tonal zones (highlights, quarter tones, middle tones, three-quarter tones, and shadows) in every image, and each can be individually pushed, pulled, and contorted to reveal the detail contained therein. While a printed image is always distilled down to 256 tones per color, this editing process lets you, the artist, decide how the image is interpreted.

Shadow (dark) tones quite easily lose their detail and print too dark if not lightened selectively by internal contrast adjustment. The Shadows slider (Camera Raw and Lightroom) was lightened.

The real artistry of editing images comes not from imagination but from investigation and discernment. No amount of embellishment can come close to the beauty revealed by merely uncovering reality. The reason most photos don’t show the full dynamic of natural light is that the human eye can interpret detail in a scene while the camera can only record the overall dynamic range. Only when we (photographers, editors, image optimizers) take the time to uncover the power and beauty woven into each image can we come close to producing what our eyes and our brain’s visual cortex experience all day, every day.

Personal Challenge

Strive to extract the existing detail in your images rather than paint over and repair their initial appearance. There is usually amazing detail hiding just below the surface. After you capture all the potential range with your camera (balancing your exposure between the navigational beacons of your camera’s histogram), go on an expedition to explore everything that your camera has captured. Your job is to discover the detail, distribute the detail, and display that detail to the rest of us.

Happy hunting.


Photo Finishing – Challenge Yourself to Reveal the Personality in Every Image You Capture

The post Photo Finishing – Challenge Yourself to Reveal the Personality in Every Image You Capture appeared first on Digital Photography School. It was authored by Herb Paynter.

Many folks think that photography takes place in the camera, but that’s not the whole truth. Photography is a two-part process that involves 1) capturing the light from a scene, and 2) shaping that captured light into a form that matches what your mind saw when you took the picture. The capture process does happen inside the camera, but the shaping part happens on your computer.

The Capture, or Photo Process

We give the camera credit for things it doesn’t actually do. Don’t get me wrong: capturing all the light in a scene is a monumental undertaking. Keeping track of millions of points of light is a critical and specialized responsibility. However, the camera is not so much an artistic tool as a capture device with a single purpose: to accurately record the light reflected from the surfaces of objects in a scene. While that purpose can get complicated by lighting challenges, the camera is still just a box with a round glass eye and a single function: to record light.

When the light of a scene enters the camera lens, it is dispersed over the surface of the camera’s image sensor, a postage-stamp-size electronic circuit containing millions of individual light receptors. Each receptor measures the intensity of the light striking it and records that light value as a color pixel.

The camera’s image processor reads the color and intensity of the light striking each photoreceptor and maps each image from those initial values, producing a reasonable facsimile of the original scene. When this bitmap of pixels gets viewed from a distance, the eye perceives the composite as a digital image.

The real magic happens after the light is stored on the memory card. The image that first appears when you open the file is the image processor’s initial attempt at interpreting the data recorded by the camera’s sensor. Most times, this initial (JPEG) interpretation is an acceptable record of the original scene, though not always.


Your camera provides several pre-set programs that adjust the three settings in the camera that affect exposure: aperture, shutter speed, and ISO.

Three main controls determine your exposure: the shutter speed, the aperture, and the ISO. The camera presets (P, A, S, and M) allow you to determine the depth of field and/or the speed with which the camera captures the light.

The A (aperture priority) mode allows you to set the size of the lens opening (f-stop) while the camera automatically sets the shutter speed. The S (shutter priority) mode lets you set the duration of the exposure (shutter speed) while the camera adjusts the size of the lens opening. The P (program mode) lets you shift between balanced combinations of aperture and shutter speed while the camera maintains the correct exposure. The M (manual mode) gives you complete control over all settings but requires you to balance the overall exposure yourself.

Your camera’s variable ISO (named for the International Organization for Standardization) setting adjusts the light sensitivity of the camera’s image sensor; the higher the number, the more sensitive the light receptors become, allowing you to capture images in lower levels of light.
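The three settings trade off against one another: one stop of change in any of them can be offset by one stop in another. This is easy to verify with the standard exposure value (EV) formula:

```python
import math

def exposure_value(aperture, shutter_seconds, iso=100):
    """Exposure value referenced to ISO 100, from the standard
    formula EV = log2(N^2 / t), adjusted by the ISO in use."""
    return math.log2(aperture ** 2 / shutter_seconds) - math.log2(iso / 100)

# f/8 at 1/125s and f/5.6 at 1/250s admit the same amount of light:
# closing the shutter one stop while opening the aperture one stop
print(round(exposure_value(8, 1 / 125)))    # 13
print(round(exposure_value(5.6, 1 / 250)))  # 13
```

This equivalence is exactly what the P (program) mode exploits when it shifts aperture/shutter pairs while holding the exposure constant.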

The Histogram

Your camera provides a small graph that roughly indicates how well the camera is set to correctly capture the light in the current scene.

This graph displays the range of light coming through the lens and approximates the light distribution that would be captured under the current settings. By adjusting the three settings mentioned above, you can shift and, to a degree, redistribute this range to best record the full range of light.
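What the camera displays is just a frequency count of pixel brightness values. The same graph can be computed from any image’s luminance values; here is a minimal sketch assuming 8-bit grayscale pixels (the function name and sample data are illustrative):

```python
def luminance_histogram(pixels, bins=16):
    """Count 8-bit pixel values into coarse brightness bins.

    Bin 0 collects the deepest shadows and the last bin the
    brightest highlights; a pile-up at either end warns that
    detail is being clipped at that end of the range.
    """
    counts = [0] * bins
    for value in pixels:
        counts[min(value * bins // 256, bins - 1)] += 1
    return counts

# A tiny "image": one shadow pixel, mostly midtones, two blown highlights
sample = [30, 100, 120, 128, 130, 140, 200, 255, 255]
print(luminance_histogram(sample, bins=4))  # [1, 2, 3, 3]
```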

Color balancing the light

Every scene’s color cast is influenced by the temperature of the light illuminating it. When a scene is captured outdoors, the Sun’s position in the sky and the influence of cloud cover alter the color of the light. Your camera offers at least two ways to compensate for these differences in color temperature: Auto White Balance and pre-set white balance settings.

Auto White Balance

The Auto White Balance (AWB) function in your camera seeks a prominent white or neutral subject in the scene and shifts the entire color balance of the scene in an effort to neutralize that element. But AWB assumes that you want the current lighting rendered perfectly neutral.

Any clouds interfering with the sunlight will have a slight influence on the neutrality of 6500K (natural daylight) lighting. AWB takes that slight shift out of the equation. Most of the time, this is a great idea. However, if you want to record early morning or late afternoon (golden hour) lighting accurately, AWB will neutralize those warm colors and completely lose that warm mood.

Pre-Set White Balance Settings

Your camera offers several pre-sets to offset known color casts caused by specific lighting situations. These settings appear in every digital camera’s settings display, though perhaps in a slightly different order or wording. Daylight sets the camera to record scenes under typical mid-day outdoor lighting. Cloudy/Overcast shifts the colors toward orange to compensate for the bluish cast caused by light filtering through nominal cloud cover.

Shade offers a stronger orange shift to compensate for completely overcast (stormy) skies. Flash provides lighting of a very similar color temperature to Daylight and is intended to prepare the image sensor for artificial daylight or “Speedlight”-type flash devices.

Tungsten/Incandescent shifts the colors toward the blue end of the color range to compensate for the warmer shift of incandescent lights. Fluorescent attempts to compensate for the greenish cast of gas-charged fluorescent lights.

Kelvin/Custom permits the user to set a custom color balance setting, essentially teaching the camera what “neutral” gray color looks like. All of these pre-sets attempt to correct non-neutral lighting conditions.
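The presets above can be summarized as the approximate color temperatures each one assumes. Exact values vary by camera maker, so treat these as typical ballpark figures rather than a specification:

```python
# Approximate color temperatures (Kelvin) assumed by common white
# balance presets. These are ballpark figures; every manufacturer
# tunes its own values, and Fluorescent also adds a magenta shift.
WB_PRESETS = {
    "Tungsten/Incandescent": 3200,  # warm light: camera shifts toward blue
    "Fluorescent": 4000,            # greenish light: magenta added as well
    "Daylight": 5500,               # mid-day sun, the neutral reference
    "Flash": 5500,                  # speedlights mimic daylight
    "Cloudy": 6500,                 # bluish light: camera shifts toward orange
    "Shade": 7500,                  # strongest blue cast, strongest warm shift
}

# The further a preset sits above the daylight reference, the warmer
# the correction the camera applies (and vice versa below it)
for preset, kelvin in sorted(WB_PRESETS.items(), key=lambda kv: kv[1]):
    direction = "warms" if kelvin > 5500 else "cools" if kelvin < 5500 else "neutral"
    print(f"{preset}: ~{kelvin}K ({direction})")
```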

The Sculpting, or Finishing Process

While the camera does capture the full range of reflected light in a scene, it has no way of knowing the best tonal curve to apply to each image. Many times, the five tonal ranges (highlights, quarter tones, middle tones, three-quarter tones, and shadows) need to be reshaped to best interpret the light captured at the scene. This tonal contouring is the magic of sculpting light into a meaningful visual image.

This little fella perched outside my front door and caught me off guard. I didn’t have time to fiddle with the controls to optimize the lighting. My first click got his attention, and the second got this expression. Fortunately, I capture my images in both JPEG and RAW formats simultaneously, which allowed me to post-process the tones and show you what I actually saw that morning.

I use the term “sculpting” when talking about image editing because it best describes the rearranging of tones in a digital image. Only ideally balanced lighting looks great when rendered as a “stock” JPEG camera image.

This sculpting or finishing process amounts to the clarification of tones and colors in a digital image: making the image appear in its final form the way the human mind perceived the original scene. While the color balancing aspect of this process is more obvious, the tonal recovery is actually more critical to the final presentation.

The digital camera cannot capture all of the dynamics of the visible spectrum on a sunny day, nor can it determine the best balance of those tones. The camera’s image sensor simply captures all the light possible and presents the data to the camera’s image processor to sort out. Under perfectly balanced lighting, this works out just fine, but occasionally detail hides in the shadows or gets lost in the highlights, requiring help from the photographer/editor to balance the tones.

This is where the individual tone zones come into play, and the sliders available in RAW processing software (Camera Raw, Lightroom, ON1 Photo RAW, Exposure X4) are invaluable. The internal contrast of every image (Whites, Highlights, Midtones, Shadows, Blacks) can be pushed around and adjusted in a very non-linear manner (in no particular order) to reveal detail that would otherwise remain hidden.


Photo finishing isn’t complete until both color and tones are adjusted for maximum effect, matching the emotion of the original scene. Only then is your image ready for viewing. Challenge yourself to squeeze the detail out of every image you capture and reveal its potential personality. It’s well worth the extra effort.


What Your Camera Can’t See

The post What Your Camera Can’t See appeared first on Digital Photography School. It was authored by Herb Paynter.

For all the incredible technology packed into cameras, there is one missing element that will remain missing perhaps forever. The missing element? The combination of human eyesight and the brain’s image processor called the Visual Cortex.


The Visual Cortex

The Visual Cortex is located in the lower rear of your brain. It is here the real color perception magic happens – magic that goes way beyond the analysis capabilities of any camera on the planet. As you understand this human version of the camera’s image processor, your understanding of the photo process will come into clearer focus.

Medical experts tell us that more than 80% of what we experience enters our brain through our vision. Your eyes capture light’s amazing array of colors as the eye’s lens focuses light beams onto the panoramic viewing screen in the back of your eyeball called the retina.


Your brain is very forgiving. It focuses light entering through your eyes, and automatically color corrects almost every lighting condition and color cast en route to the Visual Cortex. Within seconds, your eyes and brain adjust to a wide range of lighting intensities and color influences and deliver very believable images to your mind. And it all happens without you even realizing it. No white balance to set, no color shifts to neutralize. Your brain’s magic intuition and forgiving nature do a crazy-good job of color correction for you.

Your camera records colors a bit more objectively. However, even when shooting RAW files, decisions about color still have to be made in the editing process. Your camera simply doesn’t have cognitive or reasoning skills and thus must be tutored to interpret what it “sees” accurately. You might say that your camera sees, but it doesn’t observe.

White balance and memory colors

When you visually observe a white sheet of paper in daylight (preferably outside, in natural light), the paper looks… white. Even when you observe that same white paper indoors under tungsten light, your brain recognizes that the paper is really white. This is because the human brain possesses what we call “memory colors”: a basic set of colors so familiar that even lighting variances cannot confuse them.

Your camera cannot remember what color white is when it is captured under different types of lighting. It must be told every time. What your camera calls “memory” isn’t the same “memory” that your human brain possesses.


When you set your camera’s White Balance to Daylight and take a picture of the white paper outside, it indeed appears white. That is merely the way the camera’s image sensor is biased to record light under daylight (6500 Kelvin) color conditions. However, when you move inside and shoot the same white paper under tungsten lighting (using the same Daylight WB), the paper appears to the camera to be somewhat yellow.


Auto White Balance (left) and Tungsten (right)

Change the camera’s WB setting to Auto White Balance (AWB) and shoot the paper under a typical table lamp, and the picture still appears slightly yellow. Even when you set the camera’s WB to Tungsten, the paper fails to appear perfectly neutral white, though it comes much closer.
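Under the hood, a white balance setting boils down to per-channel gain. The sketch below is a simplified illustration of the principle, not a camera’s actual math:

```python
# Simplified sketch of what a White Balance correction does behind the
# scenes: scale the R, G, B channels so a patch known to be neutral (the
# white paper) comes out with equal channel values. Real cameras use far
# more sophisticated models, but the principle is the same.
def white_balance_gains(neutral_rgb):
    """Compute per-channel gains that map a measured neutral to gray."""
    r, g, b = neutral_rgb
    target = (r + g + b) / 3.0          # preserve overall brightness
    return (target / r, target / g, target / b)

def apply_gains(rgb, gains):
    return tuple(min(255.0, c * k) for c, k in zip(rgb, gains))

# White paper shot under tungsten light reads yellowish (strong R, weak B).
paper_under_tungsten = (230.0, 200.0, 140.0)
gains = white_balance_gains(paper_under_tungsten)
corrected = apply_gains(paper_under_tungsten, gains)
```

The camera’s problem is that it has to guess which patch is neutral; your brain’s “memory colors” make that guess for you automatically.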


The truth is, there are colors in the visual spectrum that digital cameras record differently than film cameras did in the past. And neither technology captures and records the exact colors that the human eye sees or the mind perceives. This is why most captured images, for all their beauty, still lack the full sense of authenticity and depth that the human mind experiences from light observed in every scene.

Technically (and spectrally), in each case, the camera is telling the truth, just not the “truth” that we perceive with our eyes. This is, of course, a good example of why we shoot in raw format. When captured in raw format, all regimented color categories get ignored. Any color shifts can be corrected and lighting variances addressed in the post-processing stage.

As mentioned earlier, the camera can’t see the white paper as white (the way our eyes do) regardless of the lighting situation because it doesn’t have an onboard reference registry of “memory colors” the way our brains do.


The brain automatically remaps each scene’s color cast to your brain’s “memory colors.” Think of these memory colors as preference presets in your brain’s color interpreter. These memory colors automatically compensate for variable lighting situations. The infinite Look Up Table (LUT) variables that would be needed for a camera to replicate this basic, natural brain function would have to be both immense and incredibly complex. No matter how smart digital devices become, they’ll never replace the magic of human interpretation.


So what have we learned? Your camera, for all its sophistication, cannot automatically correct color casts. It simply isn’t human. That means your camera ultimately benefits from your understanding of the behavior of light and color. Armed with this knowledge, you’ll produce images that more closely replicate color as your mind perceived it. Photography is a two-part process that requires the camera to do its job and you to do yours. What the clinical term “post-processing” describes is merely finishing the job that your camera started.

Moreover, this is a good thing. Your judgment and interpretation of the colors your mind saw when you captured the image can guide you as you tweak and make minor adjustments to your images. Don’t think of this as a burden. Recognize this as a gift. You, the photographer, are the producer of the image. Your camera is merely a tool that provides all the “raw” materials you’ll need to share what your mind observed when you captured the scene.

This is why photography is an art, and why this art requires an artist. You are that artist.

Celebrate the partnership you have with your camera. Together, you produce visual beauty.


You Are Your Own Best Teacher – Learning From Your Photography Mistakes

The post You Are Your Own Best Teacher – Learning From Your Photography Mistakes appeared first on Digital Photography School. It was authored by Herb Paynter.

Personal experience is the very best teacher. Reading tutorials, studying the professionals, and mastering the fundamentals will certainly improve your photographic skills incrementally, but you’ll grow exponentially by learning from your photography mistakes. You only truly learn when you make a mistake and understand why.

Learning from your photography mistakes

Conversely, if you don’t seriously study the shots that you captured from each outing (both good and bad), you’ll be more prone to make those mistakes again and again and never clearly understand why. Discovering how camera settings and scene lighting produced specific results can give you real insights that even a private tutor may not deliver. You are your own best teacher because this kind of lesson is concentrated on you alone and concerns you alone. You aren’t competing with anybody else, nor are you being judged by anyone else.

Metadata and EXIF Information

Metadata is the techno-term for the settings your camera uses to capture digital pictures, which include File Properties and EXIF (camera capture data). Every camera collects facts that describe just about everything your camera knows about the pictures it takes.

Metadata and EXIF information accompany every image captured and are disclosed by a variety of software applications; they are exhaustively disclosed in Adobe’s Bridge software. The illustrated examples in this article were captured from Bridge. While Lightroom delivers a small subset of this information, Bridge lists virtually everything and acts as a “bridge” (clever name) between the files and other Adobe software to catalog and process the images.


Metadata reveals that this photo was set up in Auto mode with AWB (Auto White Balance) and Matrix metering, which opened the aperture to f/3.5, evenly exposing the scene and allowing the camera to correctly balance the colors based on the neutral gray elements in the scene.


This shot illustrates the danger of setting the camera for full Manual operation but incorrectly selecting Tungsten lighting as the light source, which biases the colors toward the cooler (blue) side of the spectrum. The Tungsten setting expects the yellow cast of tungsten lights; however, the outdoor lighting was shaded sunlight. The aperture was set manually to f/22, which did not allow enough light to expose the darkened scene.
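This kind of post-mortem can even be scripted. The sketch below uses a plain dictionary with made-up field names (not real EXIF tag names) to flag the suspect combinations described above:

```python
# Hypothetical sketch: flag suspect setting combinations in shot metadata.
# Field names here are illustrative, not actual EXIF tag names.
def diagnose(meta):
    warnings = []
    if meta.get("white_balance") == "Tungsten" and meta.get("lighting") != "tungsten":
        warnings.append("Tungsten WB under non-tungsten light biases colors blue")
    if meta.get("aperture", 0) >= 22 and meta.get("lighting") == "shade":
        warnings.append("f/22 in shaded light risks underexposure")
    return warnings

# The problem shot: Manual mode, Tungsten WB, f/22 in shaded sunlight.
shot = {"white_balance": "Tungsten", "lighting": "shade", "aperture": 22.0}
issues = diagnose(shot)
```

Even a checklist this crude makes the point: the camera recorded everything you need to reconstruct why a shot failed.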

Discover what works and what doesn’t

Be hard on yourself and discover what works and what doesn’t. Then try to repeat the results you received from your best shots. If you make this exercise a habit, and seriously analyze why some shots worked and others didn’t, you’ll improve with every outing. Learn to appreciate the “keepers” but don’t view the rejects as failures… they are merely lessons from which to learn.

Note the difference that the time of day makes and the angles (and severity) of the shadows produced during different hours of the day. Take notes on why some shots are 5-star picks, and some others are rejects. Become a student of your work and watch your learning curve shorten.

This metadata also teaches you the limitations and restrictions of specific settings. Sometimes processes that fail are caused by equipment failure rather than judgment error. Here’s an example of the camera being set up for a flash image but encountering an entirely different lighting condition when the flash failed to fire. The ripple effect of a flash misfire caused a massive failure in the camera’s exposure, focus, and color.


The metadata reveals that this image was captured correctly. All processes functioned as expected, resulting in a color-correct, well-exposed picture.


The metadata in this file reveals why the image is overexposed, grossly discolored, and blurry. The flash was instructed to fire, but it failed (probably because it wasn’t yet fully charged). As a result, the camera’s settings (Aperture Priority and Auto exposure) forced it to compensate for the lack of flash lighting with an extremely slow shutter speed. The yellow cast was the result of tungsten lighting in the room while the image sensor’s color balance expected daylight (flash temperature) settings.

Develop a routine

Develop a routine and a personal discipline that forces you to shoot during the same time of day for a full week. Note that I said “force,” rather than try. Personal discipline is a wonderful trait and one that can improve your photographic skills very quickly. Who knows, it might actually affect other areas of your life that need improvement too.

If you only shoot occasionally, you’ll develop skills at a slower pace. Moreover, if you only critically review your work occasionally, you’ll learn at a snail’s pace. Make the review process a regular exercise, and it becomes habit… a good one. I once had a professor who stated in almost every class, “repetition is the exercise of your mental muscle.” The advice sounded strange back then, but it makes perfect sense now.

Every session you shoot produces winners and losers. Make it a habit to examine all metadata from your session to deduce what went right and what didn’t. More importantly, you’ll learn why. Take ownership of your mistakes, especially errors in judgment. You only grow when you recognize a mistake and work to overcome it. While you’ll always be very proud of the great shots you take, you’ll learn more from the shots that didn’t work!


The metering used in this shot was Pattern or Matrix, which averages light readings from the entire frame to influence the shutter speed. The average exposure was based on middle-tone (18%) gray. The sunlight reflecting from the sand on the ground and the black feathers in the bird’s wings established the outer parameters of the exposure, producing an unacceptably dark overall exposure. Had I chosen Spot metering, the camera would have considered only the tones in the middle of the frame, thus lightening the overall exposure.
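The difference between the two metering modes can be modeled in a few lines. This is an illustrative simplification, not a camera’s real metering algorithm:

```python
# Sketch of Matrix vs. Spot metering: Matrix averages the whole frame
# toward middle gray, while Spot considers only the center, so bright
# sand at the edges no longer darkens the exposure.
import math

MIDDLE_GRAY = 0.18

def matrix_meter(frame):
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def spot_meter(frame):
    h, w = len(frame), len(frame[0])
    center = [frame[r][c] for r in range(h // 3, 2 * h // 3)
                          for c in range(w // 3, 2 * w // 3)]
    return sum(center) / len(center)

def exposure_compensation(metered):
    """Stops the camera shifts exposure to pull the metered value toward gray."""
    return math.log2(MIDDLE_GRAY / metered)

# Bright sand (0.9) around the edges, darker bird (0.1) in the center.
frame = [[0.9] * 6 for _ in range(6)]
for r in range(2, 4):
    for c in range(2, 4):
        frame[r][c] = 0.1
```

With this frame, Matrix metering reads bright and pushes exposure down (darkening the bird), while Spot metering reads the dark center and pushes exposure up.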

More often than not, this examination shows you how your camera reacts to specific lighting in a scene. It sometimes produces profound shifts in exposure from small differences in the framing of a scene. Weird but true. While cameras are thought to have “intelligence,” in reality they have no intelligence or judgment capabilities of their own. They merely run algorithms that adjust settings based on the lighting observed in the scene.


The camera angle was shifted to reduce the amount of sunlight reflection in the frame which, in turn, changed the lighting ratio and lightened the resulting exposure. Reviewing this result taught me to carefully evaluate a scene for content before choosing a metering system.

There are many ways to learn

There are many ways to learn. Taking courses online, reading tutorials and technique books, and tips and tricks columns all teach us a little something more. Years ago I decided to learn how to play the game of golf. After shooting some very embarrassing and humbling rounds, I realized that I desperately needed help. I bought many golf magazines and tried to mimic the stance and swings pictured in the exercises. I watched a large number of video tutorials and listened to advice from everybody, but my game remained poor.

Nothing improved, and I only became discouraged. I continued to fail simply because I didn’t analyze (and learn from) my mistakes. It was only when I practiced the disciplines on a regular basis and took serious notes on what worked and why that my game began to improve. You learn a lot when you expose yourself to the valuable experience of others, but you’ll only truly grow in your photography skills after you study your own results. So here’s an exercise:

An exercise to help you learn

Open any of the excellent software packages that display both the Metadata (aperture, metering type, ISO, color mode, and shutter speed) and Camera Data, or Exif information (exposure mode, white balance, focal length, lens used, light source, flash behavior, etc.) from both RAW and formatted photos.

Set the View in the software so that you can observe the images in browser or catalog mode, allowing you to see thumbnail views of the files in each session. Also, set the window to display the settings for each image as you step from one image to another.

Whether you shoot in Manual, Aperture or Shutter priority, or even Auto mode, the software lists the individual camera settings exhaustively for each image.

Next: note the variations in lighting between the images and recognize what changes in the camera settings cause the small shifts in the results. Each variation gets linked to one or more of the camera settings; sometimes just a small shift in ISO.

If you allow Auto to control any aspect of your shots, the camera makes subtle changes to shutter speed, ISO, or aperture. Using Auto can be very beneficial in this learning stage because you’ll see how each of these controls affects the appearance.

Make a short columned note card and enter the basic settings for the keepers. Add the weather and lighting conditions that existed at the time of the shot.

Keep this note card in your camera bag and try to replicate the results from the keepers.

Repeat this exercise regularly and watch your results, judgment, and predictability improve.


You are your own best teacher, and the metadata and EXIF information recorded automatically with every shot is your notebook. Your confidence and efficiency should improve along with your photography when you study your notes. Who knows, this could be the shot-in-the-arm that pushes you forward.

Share with us how you have learned from your own mistakes in the comments below.


RAW Files: Digital Manifestations of the Emperor’s New Clothes

What’s all the fuss and hype about RAW files? Let’s look at a little story as a comparison.

The Emperor’s New Clothes

The Hans Christian Andersen story of an incredibly vain King is an amusing tale with an interesting moral.

One day the king, who was very fond of fine clothing, was approached by two slick-talking swindlers. They posed as weavers, and they said they could weave the most magnificent fabrics imaginable. Not only were their fabrics uncommonly fine, but clothes made of this fabric were invisible to anyone who was unfit for office, or who was unusually stupid.

“Those would be just the clothes for me”, thought the Emperor. “If I wore them I would be able to discover which men in my empire were unfit for their posts. And I could tell the wise men from the fools.” As the story goes, the king bought into the story and the clothes. As a result, the people of the kingdom discovered more about their king than they ever cared to know.


Ignorance of the truth sometimes comes at an embarrassing price.

RAW Files

The truth occasionally gets lost in marketing hype, even in photography. How many times have you heard the claim that a vast amount of visual information can be seen in RAW image files? There’s a major problem with that claim, the same problem that “exposed” the king in all his vanity. The claim ain’t exactly accurate.

RAW files do indeed contain all the information collected by a digital camera’s image sensor. But the file’s information itself cannot be viewed because the RAW data is not an image at all, it’s merely numbers.

Only when these numbers are parsed (interpreted) as colors and tones by special software can they display any visual information. RAW Interpreter software builds an initial visual image from the data in the file.

The RAW image, just like the ill-informed Emperor’s clothes, doesn’t actually exist until the file data is interpreted. There is no such thing as a RAW image, only RAW data.


RAW Interpreter software includes Adobe’s Camera Raw and Lightroom, ON1’s Photo Raw 2018, and Alien Skin’s Exposure 3, among others.

When you do open a RAW file in Camera Raw, Lightroom, ON1 Photo Raw, Alien Skin Exposure 3, etc., the image you initially see on the screen is actually based on the camera’s built-in JPEG expression of the RAW data; a mere rough draft of the file’s potential information. The camera’s exposure settings (recorded along with the RAW image) determine the file’s initial appearance on the computer monitor.

Once this initial image appears on the monitor, each RAW Interpreter software provides a fairly exhaustive array of color and tonal sliders that can shape the data into a variety of interpretations. Each interpretation can be saved in JPEG format and published for others to see. Folks who shoot and publish JPEGs directly out of the camera are really shortchanging the file’s potential and leaving important color and detail on the cutting room floor.


The RAW Truth

The term RAW is not an acronym for some technical phrase nor is it a reference to some uncooked food. It is merely a coined word describing the collection of undeveloped (latent) image data from the camera’s image sensor. This data file contains all the raw chroma and luminous data extracted from millions of light buckets called image receptors located on the camera’s image sensor. Each light bucket is covered by a blue, green, or red filter.


Individual image sensors are like small light meters, each covered by a red, green, or blue filter. The Bayer filter array uses more green filters than red and blue, relying on the camera’s image processor to interpret the correct light color and intensity for each pixel.

These RGB filters split the incoming light into three channels of information. Each receptor records the strength of the filtered light as an individual color that will eventually form a single pixel in the image.

While the grid of receptors is covered with more green-filtered buckets than red or blue, the reason for this imbalance is a bit too complicated for this article. Suffice it to say, the image processor in the camera performs some very complicated math to determine each pixel’s color value and brightness.
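For the curious, a toy version of that math looks like this. Real image processors use far more elaborate, edge-aware interpolation; this sketch only shows the basic idea of filling in a pixel’s two missing channels from its neighbors:

```python
# Toy sketch of demosaicing an RGGB Bayer mosaic: each sensor site
# records one channel; the missing two are interpolated from neighbors.
def bayer_channel(r, c):
    """Which filter covers site (row, col) in an RGGB pattern."""
    if r % 2 == 0:
        return "R" if c % 2 == 0 else "G"
    return "G" if c % 2 == 0 else "B"

def demosaic_pixel(mosaic, r, c):
    """Bilinear estimate of (R, G, B) at one interior red site."""
    def avg(sites):
        vals = [mosaic[y][x] for y, x in sites]
        return sum(vals) / len(vals)
    here = mosaic[r][c]
    if bayer_channel(r, c) == "R":
        g = avg([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
        b = avg([(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1), (r + 1, c + 1)])
        return (here, g, b)
    # G and B sites follow the same pattern and are omitted for brevity.
    raise NotImplementedError

# Uniform gray scene: every site reads 100, so interpolation returns gray.
mosaic = [[100] * 4 for _ in range(4)]
rgb = demosaic_pixel(mosaic, 2, 2)   # (2, 2) is a red site
```

The takeaway: two-thirds of every pixel’s color is computed, not captured, which is why the processor’s interpretation matters so much.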


A digital camera’s image processor sends RGB pixel values to the camera’s viewing screen to preview each scene prior to capturing the image.

This light capture process begins even before the display is visible on the back of the camera. Every time you reposition the camera to frame your shot, the image processor does its magic again and delivers a new preview of the composition. If your camera is set to display a pre-capture histogram of the scene, this processor data is used to simulate the graph on the histogram.

But the real heavy lifting happens when you push the shutter button and the image is captured. Once all the individual colors are recorded on the sensor and delivered to the processor, the final image information is preserved on the camera’s memory card.


The individual tonal values (luminosity) of the RAW file were fine-tuned in Adobe’s Camera Raw software to reveal detail not visible in the JPEG file.

In a RAW file, the value of each pixel can be extensively adjusted for hue (color), saturation (intensity), and luminance (brightness). JPEG files record pixels with the same initial color values, but the JPEG file format significantly restricts the latitude for adjusting those values in the editing process.


The controls in Alien Skin’s Exposure3 Raw Interpreter software provide extensive control over hue and saturation color adjustments.

File Types

JPEG files record each color pixel as an initial luminance (brightness level) and chroma (color) value. When all the pixels on the grid (bitmap) are collectively interpreted in imaging software, a visible replica of the original scene appears on the monitor. If that same image is also captured as RAW information, the values of luminance and chroma are captured in the context of a larger color space and can be interpreted in a wide variety of expressions of the original scene.


Color negatives are produced from latent images when exposed films are fully developed in photo chemical solutions.

RAW files have been likened to photographic color film negatives in that when they are “developed” (viewed in RAW Interpreter software), the image can be “printed” (published) in a number of unique color and tonal versions.

But the truth is that because a RAW file is not an image per se, but a record of the light characteristics captured by each of the camera’s light buckets, the original image data contained in the file never gets altered; it only gets interpreted.

The interpretations are records of the luminous and chroma adjustments made to the RAW bitmap pixels. These interpretations are what gets saved as JPEG images.

Unlike the yarn spun by the king’s “couturiers,” RAW data files deliver custom-tailored results and can make you look really smart in a couple of ways. Dress your images for success.

The post RAW Files: Digital Manifestations of the Emperor’s New Clothes appeared first on Digital Photography School.

Tips for Ensuring You Get Sharp Photos Every Time

How many times have you captured an image that looks great as a thumbnail only to lose that sharpness when it is enlarged? If you’re like me, TOO MANY times. It happens to all of us all too often, but it doesn’t have to. You probably know the reasons why and how to avoid the problem, but let’s review them all in one sitting so you can get sharp photos every time.


There are several known contributors to soft photos and specific ways to prevent them.

First and foremost – clean the lens


Fingerprints and dust on the lens are the most obvious hindrances to sharp pictures and among the most commonly overlooked causes. Carry a small clean microfiber cloth (or packets of lens cleaning wipes) in your camera bag at all times, and keep the lens cap on the lens when it’s not in use.

Become a clean freak with your lenses.

Aperture Settings

While shooting with the aperture wide open does allow you to use higher shutter speeds, it can also have an adverse effect on image sharpness because of an issue called spherical aberration.

Simply put, light rays travel in straight lines. When they pass through a lens, the curve of the glass bends the rays and diffuses their focus. The more the rays are bent, the softer the focus. When the entire rounded surface of the lens is utilized (as when using a wide-open aperture), the light-bending increases and sharpness at the outer edges of the picture is somewhat softened.

This aberration issue is most evident in less expensive lenses.


It is widely known that an aperture 2-3 stops down from wide open produces the sharpest results. If your shot doesn’t require an extremely shallow depth of field to blur the background, close the lens down a stop or two and compensate the exposure with a slower shutter speed or higher ISO.

But be aware that extremely small aperture openings (f/22 and higher) present their own problem called diffraction. When light is forced through a very small opening, the outer rays bend to get past the small opening, which can soften the image and require a longer exposure time.

Lesson learned: either aperture extreme will cause a slight softening of the image. Except for special applications, stay in the middle of the road!

Lens Quality

It’s always good advice to buy the best glass you can afford. It is a known fact that the most critical equipment in your camera bag is not your fancy camera body but the quality of the glass in front of it.


Save your money and invest in quality lenses (f/2.8 or faster). Most of us carry at least one zoom lens, but these lenses, because of the complex grouping of internal glass, are seldom faster than f/2.8, and many are as slow as f/4.5 – f/5.6. The lower the number, the more light that passes through the lens. An f/1.4 prime (fixed length) lens always produces sharper images, though it costs more money.


Atmospheric Conditions

Believe it or not, the cleanliness (or dirtiness) of the air can have a significant impact on your photography, especially long-range shots like landscapes. Both heat waves rising from the hot ground and floating particles of dust and pollutants (what we lovingly call atmosphere) bend the light waves, dull the saturation, and blur the focus of your pictures.


Living on the “beach coast” of Florida, I enjoy steady breezes that come in off the ocean. They are refreshing on a hot summer day, but they carry serious amounts of salt. This salt haze can be seen for miles in the distance while driving down the coastline. The saltwater mist hangs in the air and has an adverse effect on both metallic surfaces and photographic subjects.

The ideal weather for shooting razor-sharp pictures comes in those delightful hours right after it rains. In Florida, that happens like clockwork: almost every afternoon, a nature-shower lasts less than an hour and leaves the air sparkling clear for all kinds of outdoor activities. Thankfully, these daily showers scour the air and rinse the salt from both nature and automobiles.

Depth of Field

Choose an f-stop that will keep your entire subject in sharp focus. If you want to keep your subject in full focus while blurring the background, do the math to figure out the depth of field that will remain in full focus at a particular distance.

Each focal length lens has its own “pocket of precision” or focal zone for each subject-lens distance. Take the time to explore your lens’s capabilities so that you will be prepared.


The depth of field is particularly critical in macro photography. The very nature of the process limits the actual focus on subjects to a very shallow distance. Sometimes this works out well and sometimes it just doesn’t.

Learn the limits of each macro lens’s “pocket” before you make your shot. If your camera allows you to preview the depth of field, use it religiously. Very small changes in the lens-to-subject distance have a very big effect on the focal distance.

Use the One-Third, Two-Thirds Rule

All photographers know that higher number f-stops mean greater depth of field, but maybe some don’t realize that there is an important ratio involved in the field of focus. This ratio must be considered when choosing the f-stop for a particular shot.

While the length of the lens affects how much of the subject will be in total focus, where you set your focus point is also critically important.

This is true whether you are using Automatic, Spot or Manual focusing. Learn to divide the desired focus area into thirds and set the focus one-third into that distance. When you focus on a particular spot, two-thirds of the focal range behind that spot will remain in focus while only one-third of the area in front of that spot will remain sharp.

This is why portrait photographers set their focus on the subject’s eyes. This way the distance from the nose to the ears remains in focus.
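The one-third/two-thirds split falls out of the standard depth-of-field formulas, sketched below. The circle-of-confusion value of roughly 0.03mm is an assumption for a full-frame sensor; crop sensors use a smaller value:

```python
# Standard depth-of-field formulas, showing how the in-focus zone
# distributes itself in front of and behind the focus point.
# c_mm is the circle of confusion (~0.03 mm assumed for full frame).
def dof_limits(focal_mm, f_number, subject_mm, c_mm=0.03):
    H = focal_mm ** 2 / (f_number * c_mm) + focal_mm   # hyperfocal distance
    near = H * subject_mm / (H + (subject_mm - focal_mm))
    far = H * subject_mm / (H - (subject_mm - focal_mm))
    return near, far

# A 50mm lens at f/8 focused at 3 meters:
near, far = dof_limits(50.0, 8.0, 3000.0)
front = 3000.0 - near      # sharp zone in front of the focus point
behind = far - 3000.0      # sharp zone behind it
```

Running the numbers for this example puts roughly a third of the sharp zone in front of the focus point and two-thirds behind it, which is the rule of thumb in action (the exact split drifts as the subject approaches the hyperfocal distance).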

Autofocus Versus Manual Focus


Unless your subject has a high level of contrasting edges and is located in the middle of your field of view, you might want to consider using manual focus. Autofocus is a life-saver most of the time, but any higher contrast item in the scene could very well steal the camera’s attention.

Camera autofocus is designed to zero in on high contrast, and those areas in the scene will always set the camera focus. If your subject is located in subdued lighting, try switching to manual focus instead.

Shutter Speed

Slow shutter speeds in hand-held conditions always present problems. No matter how still you hold the camera, your body is always in motion.

The simple fact that you breathe and have a heartbeat means that slight motion will most likely become an issue at slow shutter speeds. Even the slight motion of pushing the shutter button contributes. I personally make it a point not to go below 1/125th of a second when shooting hand-held. Bracing yourself against a stable surface or using a tripod is always advisable.


Use a tripod and a remote trigger. The ultimate preparation for capturing detailed and sharp photos is to take human motion out of the equation altogether.

Once you mount your camera on a tripod, frame the scene, set the focus, set the appropriate f-stop for the depth of field, and switch to the electronic shutter (if available on your camera). Set up a remote trigger using either a cable release or a smartphone app. Then sit back and be ready to pull the trigger when the scene is right.

Compensate ISO for Shutter Speed

If your shot requires a shallow depth of field or lower f-stops, try dialing up more light sensitivity (increased ISO). Most ideal lighting situations accommodate ISO 200-400, but low-light scenarios may require you to set the camera to a significantly higher ISO.

But keep in mind that ISO determines how sensitive the image sensor is to light. A very high ISO will yield higher levels of electronic noise in your picture. Noise is the polar opposite of “signal.” Choose your ISO carefully if the image is to be enlarged at all.
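The trade-off between ISO and shutter speed follows simple reciprocity: each doubling of ISO buys one stop of shutter speed at the same aperture. A rough sketch (the function name is illustrative, not from any camera API):

```python
def equivalent_shutter(base_shutter_s, base_iso, new_iso):
    """Shutter time (seconds) giving the same exposure at the same
    f-stop when the ISO changes. Simple reciprocity sketch."""
    return base_shutter_s * (base_iso / new_iso)

# 1/30 s at ISO 200 needs only 1/120 s at ISO 800 (two stops faster,
# close to the 1/125 s hand-held threshold mentioned earlier).
print(1 / equivalent_shutter(1 / 30, 200, 800))  # -> 120.0
```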

About Image Sharpening

Tack Sharp photos - Smart Sharpen

Nominal sharpening usually takes place at the time the photo is taken. However, sometimes additional sharpening may be necessary. Be aware that image sharpening should always be the last step in image preparation.

Most photos are intended to be sharp and detailed. But refrain from sharpening your images in the editing process in a last-ditch effort to bring out more detail. Image sharpening artificially simulates sharpness and can actually degrade the digital image. Unless you use a sharpening brush, every time you sharpen an image in post-production you also enhance the non-subject elements in the scene.

So make sharpening for detail a last resort.


Make it a habit to capture the highest level of detail in the original shot. Take the time to learn each of these precautions and consider them briefly before you take your shot. If you discipline yourself to go through this checklist the next half-dozen times you shoot, it will become a mental muscle memory that you run through subconsciously.

Exercise your good habits and you’ll come home with more sharp photos and become a sharpshooter.

The post Tips for Ensuring You Get Sharp Photos Every Time appeared first on Digital Photography School.

The Illusion of Photography and the Miracle of Sight

The photographic process is a grand illusion from top to bottom. Think about it. Everything about the process is visual trickery. Photography provides a reasonable facsimile of real-life perception, and luckily, your brain’s visual cortex is very forgiving and willing to play along with this ruse.

Here’s what I mean.

Illusion of Photography CameraLensEye

Your brain is pretty smart; way smarter, adaptable and intelligent than we sometimes give it credit for. Human perception takes place on a completely different level than photography (or even videography). But photographic science and advanced camera functions do hold certain advantages over nature’s system. Here’s a look at a comparison of the two systems.

Motion / Still Life

Your camera is able to capture slices of time and literally freeze motion in its tracks. Camera shutter speeds slice and dice life into instances of time lasting just thousandths of a second each. Only when we fail to set the shutter speed and ISO sensitivity properly are objects in motion recorded as a blur.

Illusion of Photography Motion Blur

Your eyes, on the other hand, have rarely seen anything absolutely still, unless it is a rock formation or building. Even then, our view is constantly changing simply because our body moves continually.

While your eyes capture thousands of frames each second, they process the images quite differently than your camera. They stream high-speed snapshots to your brain’s visual cortex – two at a time (right and left views) providing dimension and shape. And they do this all day, every day. No batteries or memory cards are required.

Your eyes shift and refresh their view thousands of times every second to paint complete three-dimensional moving scenes in your mind. This is perpetual “streaming” at the speed of light.


The illusion of moving pictures (or movies) comes close to replicating what human eyes perceive as motion. The action portrayed in motion pictures is accomplished when single-frame images are flashed onto a screen sequentially at the same speed that they were recorded. The process works effectively to simulate what the human eye processes at much higher rates.

The major difference is the processing speed. Video codecs (a computer term for the compression/decompression process) involve industry-standard capture/playback speeds (frames per second) designed to match the processing power of various playback systems. Videos are recorded and played back at speeds up to 60 fps to trick the eye into perceiving motion instead of seeing individual frames flickering by.

Autofocus and Blurred Backgrounds

The camera focuses on a single plane or depth of field and blurs the rest of the picture. You have the option to automatically focus on all subjects in the scene or select specific pinpoint areas.

If you set the camera to autofocus, you must remember that the camera always seeks and focuses on the objects with the highest contrast ratio in the scene. To control this you may select between face detection, autofocus tracking, multiple focus points (zone focus) or overall scene settings to tell the camera your preference.

Camera focus is all about managing the blur; making the eye concentrate on a particular part of the scene.

Illusion of Photography Autofocus Blur

Your eyes don’t really see blurs at all. They automatically focus on the single subject of your attention and gradually defocus and separate the view of the non-subject areas. This is quite different than camera “bokeh.” Close one eye and view a scene in the room, then switch eyes and notice how the background shifts.

The human eye displaces subjects in the background while the camera attempts to blur them. We’ve been conditioned to accept photo blurs as if they are a part of real life, even though they aren’t!

Two Versus Three-Dimensionality

Single-lens cameras capture only two-dimensional images, with height and width. Items in focus are limited to a single defined “plane,” or distance from the camera. The dimension of depth is simulated by blurring objects that are not in perfect focus.

Your eyes never observe scenes in only two dimensions; they see every scene in three dimensions, through two converged horizontal viewpoints, your left, and right eyes. Your eyes adjust and shift focal length almost instantaneously. Only recently has Hollywood caught onto the 3-D trick.

Dimension, like depth, is perceived visually by slightly defocusing and horizontally shifting the two scenes behind the object in focus. This differs significantly from the camera’s method of simply blurring and softening the background. While depth can be simulated, dimension cannot be. Dimension requires a process called parallax, a word derived from the French “parallaxe” meaning “fact of seeing wrongly.”

Depth of Field

The camera uses its single lens to capture subjects from a direct frontal view. With the camera, you can also determine how much of the scene you want in focus by managing the depth of field (DOF), blurring both the foreground and background for emphasis.

You can’t do this with your eyes. If you concentrate on an object close to you, pretty much everything behind the subject will automatically be defocused.

Illusion of Photography DOF

Each of your two eyes sees that same subject from a slightly horizontally-offset angle, which is a very good thing! This overlapping, crisscross view allows you to see enough of the sides of each subject to sense dimension, judge distances, and safely navigate your way around obstacles. When the eye’s two views are combined, they provide a unique depth and dimension to your perception.

Try walking around when viewing the scene ONLY through your camera’s viewfinder and you’ll notice the difference.

Sphere of Focus

Camera lenses all have one thing in common. When they focus on an object a measured distance from the lens, everything else in the scene (the same distance from the lens) is also in focus. The optical nature of the spherical shape of the lens makes this happen. When you employ a wide angle lens, you can see everything in the scene in near-perfect focus.

Illusion of Photography Sphere of Focus

The human eye is quite different. Our focus on a subject is actually limited to a very small radius of view, roughly 7-10° wide. Everything outside that window appears defocused; not blurred, just out of sharp focus.

While our peripheral vision spans nearly 180°, only a very tight circle of view appears totally focused. We perceive entire scenes because our eyes constantly shift, sending patches of focus to the cerebral cortex, which paints a momentary scene in our mind.

Try staring at one word on this screen. You’ll notice that unless your attention shifts slightly, the words on either side of that word aren’t really “in focus.” The real magic is that both of your eyes have this agility and they both work in perfect unison, viewing the same exact spot and shifting together at precisely the same moment.


All digital cameras are able to record images using only the luminosity channel, producing “black-and-white” images. Monochrome photographic images discard all chrominance (color) information and rely only on single-color contrast (luminance) to portray the scene.

Photography’s earliest roots are in black and white photography, as early film emulsions could capture only luminance (monochrome) values with their light-sensitive silver halide particles. Even color films used this same monochromatic process but added color filters to capture the individual RGB light waves.

Illusion of Photography Monochrome

Your eyes have never experienced this phenomenon except in photographic reproduction. The eye’s rods and cones that make up the image receptors interpret every scene in full color. Red, green, and blue receptors in your eyes perform this same service for your vision.

This characteristic of photography is perhaps the most bizarre example of visual forgiveness, though the eye’s rods (more receptive to the green frequencies of light) are best able to perceive forms and shapes under very low lighting conditions. This is why identifying colors in low light is so difficult. Not coincidentally, the green channel of color digital photography captures the most realistic monochromatic information.

Zoom, Wide Angle, and Telephoto

You probably own a zoom lens, a fixed-focal-length telephoto, or a wide-angle lens for your camera. These lenses allow you to capture scenes either closer or farther away than your eyes typically see. Your human eyes are “fixed” at a 1:1 or “real-time” vantage point.

If you want to see a subject at a different distance, you have to adjust your personal distance to the subject or view the world through magnifying lenses like binoculars.


Illusion of Photography Resolution

Here’s another area where photographic systems hold an advantage over human vision. When ultra-sharp lenses are coupled with high megapixel image sensors, the number of pixels available to publish a photo far exceeds the size and magnification capabilities of human vision. When pixels are displayed small enough to escape detection (roughly 100 per inch), image projection and reproduction sizes are nearly limitless.
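That pixels-per-inch threshold makes maximum reproduction size a one-line calculation. A sketch using the ~100 ppi figure cited above and a hypothetical 24-megapixel sensor:

```python
def max_print_inches(px_wide, px_high, ppi=100):
    """Largest print (width, height in inches) before individual pixels
    become visible, at the given pixels-per-inch threshold."""
    return px_wide / ppi, px_high / ppi

# A 24-megapixel sensor (6000 x 4000 pixels) at 100 ppi:
print(max_print_inches(6000, 4000))  # -> (60.0, 40.0), a 60 x 40 inch print
```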


Your camera can capture a scene in which everything is in near-perfect focus. From an object just feet away to a mountain five miles away, everything is sharp and clear. It is impossible for your eyes to view entire scenes in perfect focus, though photographic prints depend on the brain’s forgiving acceptance of this abnormal interpretation.

Your eyes very rarely maintain the same focus for any period of time. Your brain stays hungry for visual information, and your eyes know how to satisfy that appetite. Your eyes shift their attention rapidly to maintain focus on moving objects.

Try staring at this page for more than 15 seconds and you’ll probably notice your eyes shifting briefly before returning to the word you were reading. Your eyes and brain have an insatiable visual appetite and a boundless curiosity.

Pixels, Dots, and Spots

Illusion of Photography Pixel Dots Spots

And then there’s the whole pixel/halftone illusion itself. Your eyes register nature’s colors as continuous tones, colors that have no stages or gradations, a feat we graphic illusionists have never been able to reproduce. Every image we reproduce has to be broken down into minuscule particles of color so small that human vision cannot readily identify them individually (I’ve exaggerated the pixels and halftone dot sizes for those who don’t know the trick).

Something to think about

For all the similarities between the camera and the human eye, there are just as many (if not more) differences.

But in spite of those differences, we would be much the poorer without the precision of the human eye and the features of the digital camera. Appreciate both systems for what they add to your perception of life.

The post The Illusion of Photography and the Miracle of Sight appeared first on Digital Photography School.

How Color Balance Can Kill Your Color

Sometimes taking a neutral position on things like color balance isn’t really the safe or smart thing to do – sometimes it’s downright dangerous!

Gray Balance Versus White Balance

The camera term for color balance is White Balance, although we measure gray cards rather than white surfaces. Why? The difference isn’t about semantics, it’s about math.

Color Checker Gray - How Color Balance Can Kill Your Color

This is the bottom row of patches from the full ColorChecker chart published (now) by X-Rite.

Neutral gray colors (yes, gray is a color) are all composed of equal measurable parts of each RGB color, while pure white contains no measurable color at all. Photographic gray cards are absolutely color-neutral. We don’t use white cards simply because you can’t measure data that doesn’t exist.

What we perceive as white in a photograph more often than not contains trace amounts of red, green, or blue. Just enough to throw the color balance of the photo way off center if used as a reference (try it and you’ll see).

The Gray Balance tools in Photoshop and Lightroom will neutralize whatever color you click on, so always pick a gray patch rather than a white one. The ColorChecker includes a row of neutral gray patches, none of them being pure white.
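What the eyedropper effectively does can be sketched in a few lines: compute per-channel gains that force the sampled patch to equal R=G=B values. This is a simplified illustration (real tools work in a linear color space and handle clipping far more carefully). Note that a blown-out white patch at (255, 255, 255) would yield gains of 1.0 and correct nothing, which is exactly why a gray patch is used:

```python
def gray_balance_gains(sampled_rgb):
    """Per-channel gains that force a sampled gray patch to equal R=G=B.
    Simplified sketch of the gray-balance eyedropper; hypothetical helper."""
    r, g, b = sampled_rgb
    target = (r + g + b) / 3.0           # preserve overall brightness
    return (target / r, target / g, target / b)

def apply_gains(pixel, gains):
    """Apply the gains to one pixel, clipping at 255 and rounding for display."""
    return tuple(round(min(255.0, c * k), 2) for c, k in zip(pixel, gains))

# A patch that should be neutral but reads slightly blue:
gains = gray_balance_gains((118, 120, 134))
print(apply_gains((118, 120, 134), gains))  # -> (124.0, 124.0, 124.0)
```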

Eye Versus Camera

The human eye is very forgiving in this respect. It perceives white in a very assumptive manner. White paper viewed under color light still appears white because of what we call memory colors, a cognitive database of repeated experience. If we associate a color with an object often enough, we establish a link between the two.

Not so with the camera. Its sensors have no such recollection and are not so forgiving. This is why you must balance color in Photoshop and Lightroom by referencing known neutral gray elements in the photo to known values.

Auto White Balance

Most shooters rely on their camera’s Auto White Balance (AWB) because of the flawed assumption that cameras recognize light the way we humans do. In reality, cameras are dumb electronic devices that evaluate light more clinically than our eyes do. Our brain’s cerebral cortex parses the hues of light according to our memory-color catalog.

Memory Colors

White Balance Memory Colors - How Color Balance Can Kill Your Color

Memory colors are logged into our brains. These include grass (green), sky (blue), paper (white), orange (orange), etc.

Whether under candlelight or sunlight, fluorescent or tungsten, sunset or noonday, a white sheet of paper will always appear white because your brain retains the associative reference. Your brain compensates for almost every color of light, delivering a believable impression of what you’ve come to think of as reality.

No matter when you see these memory color items, your brain registers these colors and in a sense, overrides the actual color of the light. Unfortunately, this is not true for (digital or film) cameras.

White Balance Symbols - How Color Balance Can Kill Your Color

How it works

Trusting that the camera’s AWB will correctly diagnose light and set the proper color interpretation is a flawed and risky assumption fraught with problems.

First, in the language of RGB color, equal values of red, green, and blue (like red 128, green 128, and blue 128) light produce an absolutely neutral gray color. This is an absolute of color science.

In order for the camera’s AWB algorithm to deliver accurate color, it must assume that there exists a detectable and absolutely neutral gray component in the scene. A pretty wild assumption, considering that a 24-bit digital image can record over 16.7 million distinct colors.

The camera then examines the light reflecting from objects in the scene and locks onto the cluster of pixels whose RGB values are closest to equal (however dissimilar they may actually be). The AWB mandate then forces those colors to an absolutely neutral value while shifting all other colors in the scene in the same manner.

This is all well and good IF that cluster of pixels in the captured scene actually is, in reality, neutral (gray) in color. The corrected values will then actually balance the colors in the image and produce an image that looks “real”.
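The AWB behavior described above can be caricatured in a few lines of code. This is a toy sketch only; real cameras use far more sophisticated scene statistics:

```python
def auto_white_balance(pixels):
    """Toy AWB following the logic described above: lock onto the pixel
    whose RGB channels are closest to equal, then rescale every pixel so
    that reference becomes perfectly neutral. Illustration only."""
    ref = min(pixels, key=lambda p: max(p) - min(p))   # "most nearly gray"
    avg = sum(ref) / 3.0
    gains = tuple(avg / c for c in ref)
    return ref, [tuple(c * k for c, k in zip(p, gains)) for p in pixels]

# Bluish snow (not truly neutral) is the closest-to-equal cluster, so it
# gets forced to gray and the whole scene shifts warm along with it.
scene = [(180, 190, 215), (40, 90, 160), (200, 120, 60)]
ref, balanced = auto_white_balance(scene)
print(ref)          # -> (180, 190, 215), the bluish "snow" pixel
print(balanced[0])  # each channel is now ~195, forced neutral
```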

The issue

But, if the scene does not have an absolutely neutral component – if there is a bluish, somewhat-gray item in the scene that is not truly neutral gray (like the snow scene below) – then the image processor in your camera will dutifully and obediently change that bluish color to neutral gray, and shift all the other colors in the scene in the same direction on the color wheel.

While your eyes and your cerebral cortex use memory colors to forgive any color cast in a scene, they do not extend that same corrective assumption to photographic images. If the collection of pixels or printed dots produces off-color results, your perception will register and report “bad color.”

You are smarter than your camera

Your camera is not smart; it is simply efficient and obedient. It will do whatever you tell it to do. It’s a machine, not a volitional entity. It has no reasoning power of its own, only scripted color-compensating algorithms.

Your camera may claim to have “intelligence,” but that intelligence is merely scripted logic, sometimes labeled artificial intelligence (the keyword here is “artificial”). You are the only one with actual intelligence. You must tell the camera what to do, NOT the other way around.

Take control of the situation and set your camera’s white balance setting according to the current lighting conditions. Your options include manual pre-sets for all typical lighting situations: Daylight, Cloudy, Shade, Tungsten, Fluorescent, Flash, and usually a couple of custom setups.

White Balance Genoa Cathedral - How Color Balance Can Kill Your Color

These two images were captured within 5 minutes of one another, under identical lighting. AWB (left) neutralized the color but destroyed the richness of the scene. The camera’s Shade color balance (right) added a slight amount of warmth and captured more closely what my eyes observed.

Color Balance Tools

There is a time to use your white balance tools to reference true neutral gray in the scene to set the gray balance in your photos, and there is a time to keep those items in your camera bag. The truth is, neutralizing every image can literally suck the natural color right out of a scene.

White Balance Tools - How Color Balance Can Kill Your Color

White Balance Tools: A) Digital Grey Card, B) DataColor SpyderCube, C) X-Rite ColorChecker Passport, D) Photoshop Levels, E) Camera Raw, F) Lightroom.

A gray balance tool placed in the scene (for an initial test shot) will serve as the gray balance reference for correcting any color casts in images captured in that scene.

This correction takes place after the capture when the test image is opened in Adobe Lightroom, Camera Raw, or Photoshop. When the White Balance tool is applied to a reference gray in the test image, all photos open at the time can be color corrected automatically.

This is truly a great way to accurately set the lighting balance within a series of photos taken during a single session.

White Balance Sunset Fence - How Color Balance Can Kill Your Color

The sunset light reflecting off this wooden fence would be scuttled if the colors were neutral balanced.


The exception is scenes that contain “emotional” light such as candlelight, sunrise/sunset, late afternoon or early morning light, nightlife/neon, etc. If the scene to be captured contains this kind of emotional (or mood) lighting, the very mood can effectively be neutered by the white balance process. Shooter beware.

White Balance Disney - How Color Balance Can Kill Your Color

Late afternoon Florida sun added a very warm and rich appearance to the shot on the left. I used the Neutral Balance eyedropper (choosing the most neutral colored surface I could find) to set the White Balance. As a result, the process destroyed the warmth that attracted me to capture the image in the first place.

White Balance Alaska - How Color Balance Can Kill Your Color

This snowy night shot was taken in Fairbanks, Alaska, on December 28th at 10 pm, capturing the surreal natural lighting that occurs in Alaska at that time of year.

The cool shadows evident in the image on the left are typical of moonlight reflecting off the snow. Setting the camera’s color mode to Daylight allowed the tungsten lamplight to register as warmth amidst the cold snow, recording the scene exactly as I experienced it.

In the picture on the right, I set the camera’s White Balance to AWB, assuming that the “automatic” setting would capture the colors of the scene faithfully. Oops! In truth, AWB lost the shivering cold lighting altogether.

In both of the above cases, white/neutral balance routines were employed, and the ambiance of both scenes was dutifully destroyed. By forcing each unique lighting to be neutralized, both the warmth of the sun and the frigid look of the night snow were lost.


There is no single, always-right color balance setting on the camera. In fairness, most times the AWB setting in the camera and gray balance in the editing software work out very nicely.

But occasionally the “intelligent” camera and the powerful editing software need smarter input. That means you. Using a known neutral color element in the picture as a reference allows you to become the color expert.

White Balance Kids - How Color Balance Can Kill Your Color

Using the aluminum window panel (top right) as a gray reference allowed me to automatically color correct this picture with a single click.

So what have we learned? There is a time for White Balance just as there is a time for political correctness. BUT to force the strict application of either in every situation can destroy the spirit of free expression.

Use gray balance only when emotional/mood lighting isn’t present and when a good gray component is in the scene. Too many dramatic scenes get neutered (or neutralized) in the name of neutrality.

The post How Color Balance Can Kill Your Color appeared first on Digital Photography School.

Color Management Can Be Easy

Color Management is the starchy, techie term assigned to a complex set of issues facing photographers every day: how to accurately capture the colors in a scene, display those same colors on a computer monitor, and then print those colors successfully on paper.

While this is a very complicated challenge (on the level of herding cats), the answer is a lot easier than you might think.

The Problem in a Nutshell

Color photography is a visual communications system that attempts to equalize the differences between three utterly different technologies.

Red Green Blue - Color Management

Imagine three people trying to discuss a difficult topic while speaking different languages. Words and phrases in one tongue have no equivalence in the others. Cultures and behaviors clash as convictions and meanings get misinterpreted. The result is frustration. This scenario pretty well describes the complications of color reproduction.

Chromacity Luminosity - Color Management

Cameras record light in one color language, monitors interpret that same light in a different language, and printers try to explain the monitor’s interpretation in yet another language. All three are doing their level best, but collectively they aren’t communicating.

Is it any wonder why accurate color reproduction sounds more like an oxymoron than a truthful description?

Further, cameras are influenced by the color of the light in a scene, monitor colors appear different based on technologies and brands, and printing inks and papers alter how colors are reproduced. Cameras record light frequencies, monitors transpose those frequencies into numbers and printers translate the numbers into colored dots and spots. There is unity but not harmony.

Learn more with the Datacolor complimentary Color Management eBook.

Vive la Différence

camera monitor printer - Color Management

Just as foreign languages and international currencies require accurate translation and timely exchange rates, cameras, monitors, and printers interpret colors uniquely. Like both spoken languages and currencies, color reproduction requires an accurate translation of values.

It would be wonderful if all the systems spoke the same visual language, but they simply don’t.


World history notes that in 1887 an attempt was made to bridge all national languages with a new common language called “Esperanto.” This proposal was initiated by an ophthalmologist named L. L. Zamenhof in an effort to reduce the “time and labor we spend in learning foreign tongues” and to foster harmony between people from different countries.

While the concept is quite noble and though the movement still exists, the monumental undertaking to reduce all spoken languages into a single world language has proven impractical.

Color Management - Color Management

Accurately translating the varied languages of color is a challenge, but one that can be easily handled by adopting a straightforward process. That process is called color management.

The Gray Standard

Every conflict can be resolved when all differences are accurately acknowledged and clearly defined. In the case of color, defined standards have now been established that align the capture, display, and printing processes so that they individually recognize and pledge allegiance to a single corporate “Gray Standard.”

When each stage in the process has been internally aligned to this universal standard and all three processes are linked, then true color consistency is achieved. It really is that simple.

All color issues, at each of the three stages of color reproduction, revolve around this single color of neutral gray. The utter simplicity of the concept of color balance rests on the unbiased and “uncolored” tint of gray. The science of color is based on the fact that all photographic images are recorded as three channels of colored light: red, green, and blue.

Color Wheel Neutral Gray - Color Management

When these three colors are produced (captured, displayed, and published) in equal values, the result is the combined color of neutral (no color cast) gray. Gray is the Holy Grail standard of all color. In the middle of the color wheel, between all the primary (RGB) and secondary (CMY) colors is the color neutral gray.

When this balance is maintained in a color photograph, all colors remain “balanced,” the ultimate goal of color management. While the complexity of the process is immense, the control involves only a three-stage process, and the system itself is quite elegant and simple.

Once your camera recognizes neutral gray, all the other colors in the visible spectrum will be recorded accurately. When your computer monitor is taught how to display this same neutral gray (as well as an extended range of primary and secondary colors), it will display the full range of spectral colors.

While the range of print technologies, inks, and papers available today is staggering, all printing devices can be taught to produce quite consistent and pleasing results – all focused on printing a patch of colored inks that appears colorless.

Here’s how it all works.

Camera Capture

The first commandment of color photography:

Thou shalt faithfully capture balanced lighting.

Balanced light is all about neutrality; respecting non-color. When the camera recognizes gray, it automatically orients all the other colors in the scene. Color always obeys gray. Items like automobile tires and shadows cast on white buildings are examples of reliably neutral color.

All digital cameras are predisposed to see colors accurately during daylight conditions, generally between 9 am and 4 pm. Under this lighting, any neutral-colored objects are recorded faithfully.

CheckrCapturePro High - Color Management

The light that illuminates each scene influences the colors captured by the camera. But light is always changing. Even natural sunlight changes (color) temperature constantly.

Each time clouds pass overhead, the 5500K–6500K color of daylight shifts slightly. When alternative light sources are used (incandescent, fluorescent, halogen, etc.), the colors can change drastically, ranging from 2500K to 6500K. These measurements are expressed in kelvins (K), with higher numbers indicating cooler-looking (bluer) light.

SCK100 Product SpyderCheckr dooropened highres - Color Management

There are several ways to ensure that colors are captured accurately in the camera. You can use the camera presets (daylight, overcast, cloudy, incandescent, flash, fluorescent, etc.), include a reference “gray card” in a target shot to establish color balance in post-processing, or establish a custom color balance (also using a gray reference card).

Monitor Profiling

Computer monitors, like TVs, have a mind of their own. A variety of video technologies use ultra-mini RGB pixels in LCD (liquid crystal display), plasma, LED (light-emitting diode), and OLED (organic light-emitting diode) flatscreen displays. Each technology delivers light and color uniquely and has its own spectral qualities.

In addition to the delivery systems, individual monitors of the same technology can display colors slightly differently. There is simply no guarantee that your computer monitor will automatically deliver accurate color straight out of the box, and even less so after it ages a bit.

monitor and device - Color Management

But there is a surefire way to tune-up each of these displays so that they will produce accurate color. The tune-up involves a monitor colorimeter device; a mouse-size instrument that analyzes the color of light as it gives the monitor a visual exam.

spyder5 - Color Management

This colorimeter dangles in front of the monitor while special software makes the monitor flash dozens of variations of RGB light on the screen. The device reads the color temperature and intensity of each of these flashes as it records the three-minute light show.

After the show, the software automatically compares the results of the monitor’s performance to a reference table of ideal readouts. This comparison reveals the difference between what the monitor should deliver and what is actually delivered. The two lists are juxtaposed and a visual color personality or “profile” of the monitor is generated.

This profile contains precision adjustments to the normal monitor output and adjusts the monitor’s display signals to compensate for any abnormalities. The monitor’s color “guns” are monitored and adjusted on the fly to deliver color-accurate signals to the display. What once just looked pretty now looks pretty accurate. It’s pretty nifty!
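The “juxtaposition” step can be pictured with a toy example: build a per-channel correction curve (a lookup table) that bends the monitor’s measured response onto the target response. This sketch assumes the measurement is summarized by a single gamma value, which real profiling software does not do; it measures dozens of color patches:

```python
def correction_lut(measured_gamma, target_gamma=2.2, size=256):
    """Build a per-channel lookup table remapping a monitor's measured
    tone response onto the target response. A simplified stand-in for
    the adjustments a real monitor profile stores."""
    lut = []
    for i in range(size):
        x = i / (size - 1)
        # Pre-distort the signal so the monitor's native response
        # (x ** measured_gamma) ends up matching x ** target_gamma.
        lut.append(round(255 * x ** (target_gamma / measured_gamma)))
    return lut

# A monitor measured at gamma 1.8 gets pulled toward the 2.2 standard:
lut = correction_lut(1.8)
print(lut[128])  # mid-gray input is darkened slightly -> 110
```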

Printer Profiles

Printers face a multitude of variables based on three factors: printing technologies, ink brands, and paper surfaces. Each of these factors has a significant effect on the way colors print.

There are currently three distinct kinds of color printers that can deliver photographic-quality results: inkjets, laser printers, and dye-sublimation. Each of these technologies deals with a very different “ink.” I use the word ink loosely because only one of these actually uses ink, as we know it.


Laser printing toner-based geometric dots (left) versus inkjet stochastic-style liquid ink pattern (right).

Laser printers deal with toner, which is a colored powder that gets fused into the paper. Dye-sublimation overlays dry sheets of variable-density colored dye which get baked on top of each other. Inkjets are the only printers that actually spray microscopic particles of multi-colored liquid ink onto the paper.

The colorants (inks) used by each of these printing devices can be purchased from multiple suppliers, so the consistency of color from one batch to another is a concern. Paper shades and surfaces also affect the appearance of colors printed on them. Ink tends to sit on top of coated papers but absorbs into the fibers of uncoated papers, which changes the way light reflects from the surface and alters the color saturation values.


For this reason, printer manufacturers usually provide “printer profiles” embedded into the printer drivers (the software that controls the printer when files are sent from the computer).


Side view of paper surfaces. The two top dots illustrated here demonstrate how differently inkjet inks behave when printed on uncoated (top) and coated (middle) papers. The bottom dot shows that laser toner particles are “baked” onto every paper surface.

Printer profiles are color correction “prescriptions” for specific paper and ink combinations. Because printer profiling is a very specialized process requiring specialized equipment, manufacturers usually provide individual profiles for their own brand of papers and inks.


They test each of their papers and inks for reproduction accuracy and then supply you with the “prescription” color correction files for those papers. When you select the correctly profiled paper from the print driver, the printer generally delivers accurate color.

Here’s how the profiling process works

A special file is sent to the printer containing thousands of very specific color patches that get printed on a specific paper. A very specialized device called a spectrophotometer then reads the printed patches. It analyzes each patch and compares the measured results to the original color values in the file.

Profiling software then uses the difference between the two readings to create a profile: a set of instructions that tells the printer how to color correct any image file printed on that paper.
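To make the “prescription” idea concrete, here is a toy one-channel sketch. The measured numbers are invented, and a real ICC profile maps full CMYK combinations rather than a single gray ramp, but the principle is the same: given what the printer actually produced for known inputs, work backwards to find the input that yields the tone you want.

```python
# A toy one-channel "profile": the printer was asked for these input levels,
# and a spectrophotometer measured what actually came back (hypothetical data).
inputs   = [0, 64, 128, 192, 255]   # values sent to the printer
measured = [0, 48, 110, 180, 255]   # values the print actually produced

def correct(target, inputs=inputs, measured=measured):
    """Find the input level that yields the desired printed level,
    by linear interpolation over the measured curve (the 'profile')."""
    for i in range(len(measured) - 1):
        lo, hi = measured[i], measured[i + 1]
        if lo <= target <= hi:
            frac = (target - lo) / (hi - lo)
            return round(inputs[i] + frac * (inputs[i + 1] - inputs[i]))
    return target

# This printer prints dark, so to get a true mid-gray of 128 on paper,
# the profile tells the driver to send a lighter value instead:
print(correct(128))  # → 144
```

When you pick a paper type in the print dialog, you are really selecting which of these pre-measured correction curves the driver should apply.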

Color Management Simplified

So here’s the bottom line to controlling (managing) the colors in your photographic process.

  1. Camera – Take note of the color of light illuminating your photo scene and set the camera accordingly.
  2. Monitor – Purchase an inexpensive colorimeter device and run a three-minute tune-up process every 60 days on your computer monitor.
  3. Printer – Take note of the paper you load in your printer and choose the proper profile when you print your pictures.

Color management is a very complicated science, but thanks to some great products and information from Datacolor, controlling that science is pretty simple. All it takes is an awareness of the issues and three simple actions.

Don’t be intimidated by technical information – learn all you need to know from the Datacolor Color Management eBook. Sign up to download the free eBook here. Each dPS reader who signs up for the Datacolor FREE eBook will receive one chapter per month and will be signed up for the Datacolor informational newsletter.


Disclaimer: Datacolor is a paid partner of dPS

The post Color Management Can Be Easy appeared first on Digital Photography School.
