Understanding the Basics of Color

The post Understanding the Basics of Color appeared first on Digital Photography School. It was authored by Herb Paynter.

You will never realize your full potential as a photographer until you understand the basic elements of color and luminosity (tonality). I know this sounds scary, a bit geeky, and just plain over-the-top – but hear me out.

Color photography is built on the structure of B/W photography.

How is it that some photographers seem to consistently produce great pictures?

Most likely because they understand how to control the primary element in photography – light! You can certainly take great pictures without knowing color theory, and you can get good results by learning to operate your camera, but if you wish to consistently produce powerful and visually-moving images, you’ll need to get a handle on the basic issues of color and light. Capturing light, like capturing anything else in the wild, requires an understanding of habits and behavior.

Pictures versus photographs

There is a difference between documenting an occurrence (shooting a picture) and capturing the emotion of a scene (taking a photograph). Shooting a picture requires little more than pushing a button on a camera, but taking a photograph involves a working knowledge of how light behaves and how illumination builds emotion.

Your camera doesn’t take pictures; it merely captures light. You, the photographer, take the pictures.

Learning to master light lets you trigger a variety of psychological responses in the viewer's mind. Color, light intensity, angle of view, depth of field, internal contrast, and the placement of highlights, shadows, and midtones all empower photographers to control emotion and tell stories with great impact. This is why one good picture can be more powerful than a thousand words.

The contrasting colors of green and magenta are opposed on the color wheel, which is why this image delivers subliminal psychological impact.

The color wheel is the most elementary form of color science and demonstrates the basis for all color correction. When a photograph displays a color cast, that cast can be removed by adding an amount of the color located directly across the color wheel. The additive primary colors that our eyes and cameras see are based on red, green, and blue (RGB) light. The three colors directly opposite these on the wheel are called the subtractive primaries – cyan, magenta, and yellow (CMY) – and they form the basis for all printed pictures.
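For the mathematically inclined, the "directly across the wheel" relationship is easy to see in code. Here's a minimal Python sketch (my own illustration, not part of the original article): each subtractive primary is simply white minus one additive primary, so the complement of any RGB color is its channel-by-channel inverse.

```python
def complement(rgb):
    """Return the color directly opposite on the RGB color wheel."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

# Pure green's complement is magenta -- the pairing noted in the
# photo caption above. Red's complement is cyan.
print(complement((0, 255, 0)))   # magenta: (255, 0, 255)
print(complement((255, 0, 0)))   # cyan:    (0, 255, 255)
```

This is why adding the opposite color cancels a cast: the cast and its complement sum back toward neutral.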

In today’s world, we are so immersed in saturated colors that we sometimes forget the important part light plays in the process. Dull color is not colorful at all. Color without the proper balance of light has no life…it just lies there on the page.

There are three basic components of color – hue, saturation, and brightness (HSB). The brightness element is the life and sparkle element of good color. In essence, good color is all about the quality of the light. Poorly lit subjects don’t hold the viewer’s interest. This doesn’t mean that all pictures must be bright and cheery, but all pictures must be purposely illuminated to deliver the desired reaction.

Moods are set by shaping light

It’s hard to convey good color in poor or insufficient light. Low-key lighting is ideal for creating somber moods just as high-key lighting tends to convey positive and uplifting thoughts. Learn to capture scenes that deliver a specific emotional message. Make it a point to walk around your subject and observe the light striking it from different angles, especially when shooting nature.

The warmth of the orange skies delivers the beauty, calm, and warm stillness of the ocean at the close of the day.

Make it your purpose to set the tenor (or meaning) with each photo, not to simply take a pretty picture. Look at each scene for a theme or message that will address or elicit a human response.

Colors appeal to each of us not only because they are pretty or because they blend, but because each color has a subtle psychological overtone that affects how we perceive the scene. Bright, cheery colors convey lighthearted and positive thoughts, while darker hues can evoke melancholy and even sad thoughts. “Shooting” is a process that involves aiming a weapon at a target while creating a photograph involves conveying a thought and expressing a purpose. Every time you pick up your camera, you have a choice; you can either document an event or convey an emotion.

Chrominance and Luminance

Color is an emotional impression that is comprised of both chrominance (hue and saturation) and luminance. It is luminance that provides the structure to a photograph. Together, chrominance and luminance deliver the full emotional message.

The two elemental building blocks of color photography are hue (the color value) and saturation (the purity of that color). Together, these two aspects form the chrominance portion of an image. The third building block is luminance, or tonality, which is perhaps the most critical aspect of all, because it is the structural framework on which the colors (chroma) are built. Hue and saturation offer no form whatsoever; only luminance provides the framework, or form, of a photograph. Balancing these three aspects of HSL (hue, saturation, and luminance) is essential to success in color photography.
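If you'd like to see the three HSL components pulled apart, Python's standard `colorsys` module can do it. A quick sketch (my illustration; note that `colorsys` orders its tuple as H, L, S rather than H, S, L):

```python
import colorsys

def hsl_components(r, g, b):
    """Return (hue, saturation, luminance) for RGB values in the 0..1 range.
    colorsys returns (h, l, s), so we reorder to the familiar HSL."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return h, s, l

# Pure red: hue 0, fully saturated, mid luminance.
print(hsl_components(1.0, 0.0, 0.0))  # (0.0, 1.0, 0.5)

# A mid-gray has zero saturation -- chrominance gone, only luminance left.
print(hsl_components(0.5, 0.5, 0.5))  # (0.0, 0.0, 0.5)
```

Notice that the gray pixel keeps its luminance value even with no hue or saturation at all, which is exactly the "structural framework" point above.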

The Visible Spectrum

All color is light energy and white is the combined result of all other colors in the visible spectrum.

The visible spectrum is the portion of the electromagnetic spectrum that human eyes can see. It is visual energy. The light receptors in our eyes (rods and cones) can observe only a limited subset of this energy, and these same lightwaves are captured by your digital camera’s image sensor. The colors of the visible spectrum cascade in a particular order, and for a logical reason. ROYGBIV is the acronym for this order: red, orange, yellow, green, blue, indigo, and violet. All visible colors of light are perceptible because they travel through space at unique frequencies. All colors are basically vibrations, or wavelengths, of energy – the only energy visible to human eyesight. The lowest (slowest) frequencies, with the longest wavelengths, are perceived as warm colors, while the highest (fastest) frequencies, with the shortest wavelengths, are perceived as cool colors. The colors appear in this order because of the steadily increasing frequency of the light waves they represent.
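The relation between wavelength and frequency is just f = c/λ, and a two-line Python check (my own back-of-envelope sketch) makes the ordering concrete: red, at roughly 700 nm, sits at the low-frequency end, while violet, at roughly 400 nm, sits at the high-frequency end.

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def frequency_thz(wavelength_nm):
    """Frequency in terahertz for a given wavelength in nanometres."""
    return C / (wavelength_nm * 1e-9) / 1e12

red = frequency_thz(700)     # ~428 THz -- the warm, long-wavelength end
violet = frequency_thz(400)  # ~749 THz -- the cool, short-wavelength end
print(round(red), round(violet))
```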

The Electromagnetic Spectrum includes both ultraviolet and infrared frequencies, which are technically not colors simply because they are not visible to the human eye. Each individual color in the visible spectrum is energy that oscillates at a specific frequency. The eye receives these frequencies, and the visual cortex in the brain interprets each as a particular color.

The Electromagnetic Spectrum

The electromagnetic spectrum is the full known span of radiant energy, including all measurable energy on both sides of the visible spectrum. The visible colors appear in every rainbow and every refraction of white light. Occasionally you’ll see a beveled glass edge in a window or table catch a strong beam of white light and cast it onto another flat surface. That beveled edge acts as a prism, splitting the white light into its component parts – always in the same ROYGBIV order. When all these component colors are combined at full strength, you see pure white light. All color is just an individual expression of white light: without light there is no color, and all colors have their origin in pure white light.

Hue is the color of color. It is what distinguishes red from green or blue.

Red is the bookend on one end of the visible spectrum just inside the infrared frequency. Violet is the other, located just inside the ultraviolet frequency. Both infrared and ultraviolet are frequencies just beyond and outside the visible portion of the energy spectrum. Both of these wavelengths can be read by instruments but are beyond the scope of the human eye.

Saturation is the strength of color expressed as a range between pure color and no color. The opposite of saturated is colorless or gray.

The warmer side of the spectrum (reds, oranges, and yellows) contains the longest wavelengths in the spectrum and presents a particular challenge in photography when the balance between saturation and luminance is not carefully monitored.

Warm colors are easy to oversaturate, and when oversaturated, the luminance values are seriously challenged.

This is a critical issue because it is the luminance aspect that delivers the detail in a photo. The cooler colors – blue, indigo (purplish), and violet (toward magenta) – are much easier to control in both saturation and tonality. These shorter-wavelength, “denser” colors can handle the rigors of color editing more robustly than the warmer colors.

Luminance is expressed as brightness, ranging from dark to light.

Color balance

When you think of color balance, you must get beyond the elementary issue of white/gray balance – the neutralizing of colors to eliminate tints and color shifts.

Color balance embraces a much wider issue that is largely governed by tonality, or luminance. Neutralizing a cast is as easy as using the eyedropper tool in editing software to identify a neutral gray, but tonality shapes the entire framework of the photo and clarifies detail throughout the range between highlights and shadows. It is quite possible to produce a technically correct, temperature-balanced picture that loses detail in the shadow areas and softens the snap in the highlights. Tonality and chroma are equally critical to the accurate reproduction of color photos.
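To make the gray-eyedropper idea concrete, here is a minimal Python sketch of what that neutralizing step does under the hood (my own simplified illustration, not any particular editor's algorithm): scale each channel so that a pixel you know should be gray comes out equal in R, G, and B.

```python
def gray_balance(pixel, sampled_gray):
    """Rebalance one RGB pixel so that sampled_gray maps to neutral gray."""
    target = sum(sampled_gray) / 3  # the gray level we want all channels to hit
    return tuple(
        min(255, round(value * target / channel))
        for value, channel in zip(pixel, sampled_gray)
    )

# A warm cast: the "gray" card in the scene read (140, 128, 110).
# Neutralizing it pulls red down and lifts blue across the whole image.
print(gray_balance((140, 128, 110), (140, 128, 110)))  # (126, 126, 126)
```

Note that this fixes only the cast (chrominance); the tonal shaping the paragraph above describes is a separate job.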

Color pictures are a combination of form, color, and luminance. Digital color images rely on all three of these elements to deliver the illusion of what we call photography.

Conclusion

A clear understanding of the basics of color will open up a world of expression for you. Yeah, color science is a little geeky, but it certainly delivers results.

If you want to show your uniqueness as a photographer, invest a little time with color science. Anybody with a camera can publish their pictures across the planet in an instant, but if you want your pictures (and your reputation) to outlast your friends and likes on Facebook…grow your knowledge of color as much as you grow your camera and editing skills!


Image Resolution Explained – Seeing the Big Picture

The post Image Resolution Explained – Seeing the Big Picture appeared first on Digital Photography School. It was authored by Herb Paynter.

The very first thing you must understand about photography is that it is totally based on illusion; you choose to believe what you perceive. This concept didn’t originate with photography’s pixels and dots; it is the very basis of human sight. Your brain chooses to believe something to be true well beyond what your eyes can verify. The very word “resolution” points to this concept. The resolving power of a lens is its ability to distinguish small elements of detail, and the same is true of the human eye perceiving images on a computer screen or the printed page. Each of these “interpretations” relies on a mechanism to carry out an illusion: the eye uses rods and cones, cameras use photoreceptors, computer screens use pixels, and printing machines use spots and halftone dots. The degree to which each device succeeds in its illusionary quest depends on the resolution of the mechanism and the resolving power of the device.

Each system requires two elements – a transmitter and a receiver. Just as a magic trick requires both a salesman (the magician) and a customer (the viewer), each “visual” process requires a good presenter and a willing observer. The common phrases “seeing is believing” and “perception is reality” pretty much define the benchmark of success. Now let’s get image resolution explained and show you where it is most effectively used.

Image resolution

There comes a finite distance when viewing any image at which your eye can no longer distinguish individual elements of detail. Beyond that point, your brain must sell the idea that the detail still exists. The detail you see when viewing an object at close range continues to be perceived long after the object is too far away for your eye to verify it. There are limits to the resolving power of the human eye, with “normal” defined as 20/20 vision.

In the image reproduction process, delivering an image with excess resolution is useless when that extra resolution serves no purpose. Thus, the gauge of all visual resolution must ultimately be framed by the resolving capabilities of the human eye. Producing more image resolution than the eye can perceive doesn’t increase the detail or improve the definition; it just creates bigger files.

While you may feel more confident passing massive numbers of pixels on to your printer, your printer doesn’t appreciate the excess; it throws all those extra pixels away. More ain’t better; it’s just more.

Dots, Pixels, Lines, and Spots

Beware of the numbers game played by manufacturers in the imaging industry. There is ample misinformation and misused terminology floating around that causes significant confusion about imaging resolution. Allow me to clear some very foggy air, beginning with terminology.

DPI (Dots per inch)

The term DPI is probably the most misconstrued acronym in the digital imaging world, as it is loosely cast about and applied to just about every device. DPI, or dots per inch, refers to a printing device’s resolution and describes the dots and spots that each technology uses in various combinations to simulate tones. Dots are neither pixels nor halftone dots. We’d all be a bit better off not using this term, as it has little practical application.

PPI (pixels per inch)

The basic structure of every digital image is the pixel. Pixels are the square blocks of tone and color that you see when images are enlarged on computer screens (see the Eye illustration below). The number of those pixels in a linear inch determines an image’s resolution and should always be expressed as PPI, or pixels per inch. This setting is controlled in the Image Size dialog box of your editing software. The higher the number of pixels per inch, the higher the image resolution. Scanners, digital cameras, and paint programs all use PPI terminology.
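The arithmetic behind the Image Size dialog is worth internalizing: the same pixels can print at many physical sizes, and PPI is the ratio that ties pixel count to inches. A tiny Python sketch (my illustration, with made-up example dimensions):

```python
def print_size_inches(pixels_wide, pixels_high, ppi):
    """Physical print size for a given pixel grid at a given resolution."""
    return pixels_wide / ppi, pixels_high / ppi

# 3000 x 2400 pixels at 300 PPI prints 10 x 8 inches;
# the very same file at 150 PPI covers 20 x 16 inches.
print(print_size_inches(3000, 2400, 300))  # (10.0, 8.0)
print(print_size_inches(3000, 2400, 150))  # (20.0, 16.0)
```

Changing PPI without resampling changes only the print size, never the pixels themselves.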

Of all the resolution terms in the industry, this is one that deserves top billing. While the rest of the terms need to be recognized, rarely will they have to enter the conversation.

When viewed in imaging software, these squares are referred to as pixels and are defined in values of pixels per inch (PPI). This particular dialog defines the size of the “Eye” picture in this article. Internet images are defined by pixel count alone – the linear measure of horizontal (and vertical) pixels in the image.

LPI (lines per inch)

LPI refers to the halftone dot structure used by laser printers and the offset printing process to simulate the continuous tones of photographic images. LPI refers to the number of “lines” of halftone dots used by various printing processes. “Lines” is a throwback reference to the days when actual lines were etched in glass plates to interpret photographic tones in early printing processes.

This LPI number is specific to the printing industry. Lower numbers refer to larger, more visible halftone dots (newspapers) while higher numbers refer to much smaller and less visible dots (magazines and artwork). I’ll get into the numbers later.

Spots and SPI (spots per inch)

A spot is a rarely used term that refers to both inkjet and imagesetter processes. In inkjet printing, it is the measure of the micro-droplets of ink sprayed onto the paper. SPI, or spots per inch, is a user-selectable resolution setting on some inkjet printers. Higher SPI settings also affect print quality by slowing the speed at which paper is fed through the printer. For platesetters and imagesetters, the spot (“marking”) size determines the quality of the halftone dot shapes produced, and it applies only to high-end lithographers and service bureaus.

Device real-world requirements for optimal resolution

Now we’ll look at each device’s real-world requirements for optimal resolution. How much is too little and how much is too much? The answers require a bit of explanation because there are some variables involved in the projects and the printing devices. First I’ll clarify some misconceptions about digital camera files, then I’ll address three specific printing technologies and give you some concrete examples.

Digital Cameras

The most common reference to camera resolution relates to the camera’s image sensor. These sensors contain a grid of cells called photosites, each cell measuring the light striking it during an exposure. The actual number of cells in an image sensor varies with the camera model. Multiplying the number of horizontal cells by the number of vertical cells defines the “size” of the sensor. The Nikon D300 sensor, for example, measures 4,288 x 2,848, or 12,212,224 pixels, making it a 12.3 mega (million) pixel camera.
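That sensor arithmetic is a one-liner. A quick Python sketch (my illustration; note that cameras are marketed on their slightly larger "effective pixel" count, so the recorded-pixel math lands just below the headline number):

```python
def megapixels(width, height):
    """Sensor size in millions of pixels: horizontal cells x vertical cells."""
    return width * height / 1_000_000

# 4,288 x 2,848 recorded pixels is ~12.2 million;
# the camera is marketed as 12.3 MP based on effective pixels.
print(round(megapixels(4288, 2848), 1))  # 12.2
```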

The individual cells in the image sensor are covered by either a red, green, or blue filter called a Bayer array. Each cell records the filtered light, converting the combined values into individual pixel colors.

These pixels can produce any number of different size pictures for various purposes. Each printing process requires a different number of pixels per inch (PPI) to deliver optimal quality prints at a given size. This is because the technology used for each type of printing is different. For example, high-quality inkjet printers spray liquid inks onto paper using very small nozzles (usually 1440 spots per inch).

Laser printers

Most laser printers are either 600 or 1200 DPI devices, meaning that a solid horizontal line will be composed of either 600 or 1200 dots per inch. Type is printed using all of these dots, while halftone images can be effectively reproduced from 220-300 pixel-per-inch (PPI) images.

Inside these laser printers is a raster image processor (RIP) that generates halftone dots from square pixels. The value of each image pixel gets transposed into a halftone cell. The formula for exchanging this grid of square pixels into a diagonal pattern of variable-size dots goes way beyond explanation in this article, but it’s kind of like magic.

Laser printers simulate gray tones using the halftoning process provided by the printer’s RIP.

Inkjet printers

Inkjet printers use totally different technology to translate color pixels into printed images. Tiny spray nozzles deliver ink to specific parts of the image to create their version of the imaging illusion. The resolution (PPI) required for accurate inkjet images differs from laser printers because inkjets do not use the geometric mechanism of halftone cells but instead spray microscopic amounts of each ink at precise locations determined by the pixel values.

Inkjet printers require significantly fewer pixels per inch (PPI) than laser printers to carry the illusion. Typically 150-200 PPI is quite sufficient.

Lithographic printing

Offset printing includes newspapers, magazines, and brochures. Each requires a slightly different lines-per-inch (LPI) pattern of dots. Newspapers are typically 85 LPI, magazines are 150 LPI, and high-end brochures and other collateral material require up to 200 LPI resolution.

Each line screen value is produced by a different PPI formula. While all these types of printing can be produced from 300 PPI files, all that resolution is certainly not required and is technically overkill. Even high-end brochures don’t require that much resolution, but the early-adopted myth of 2xLPI persists even today. The actual requirement for all high-end printing is only 1.4xLPI (the 1.4 factor is really the square root of 2). Any extra resolution simply gets discarded by the platesetter’s RIP.

In this calculation, newspapers (85LPI) need only 120 PPI, magazines require only 212 PPI, and even the best quality print is ideally produced with just 283 PPI.

In case you’re thinking that this is splitting hairs and irrelevant, consider this… using the 1.4 rule totally meets the mathematical requirement and saves a whopping 50% of the file size in storage real estate and transfer time.
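For the skeptics, the math is easy to verify. Here's a short Python sketch (my illustration) of the 1.4x rule: the factor is really the square root of 2, and because file size grows with the square of resolution, dropping from 2xLPI to 1.4xLPI cuts the pixel count roughly in half.

```python
import math

def required_ppi(lpi, factor=math.sqrt(2)):
    """Minimum image resolution for a given halftone line screen."""
    return round(lpi * factor)

# Newsprint, magazine, and high-end brochure line screens:
for lpi in (85, 150, 200):
    print(lpi, "LPI ->", required_ppi(lpi), "PPI")  # 120, 212, 283

# File-size saving versus the 2xLPI myth, since size scales with PPI squared:
saving = 1 - (math.sqrt(2) / 2) ** 2
print(f"{saving:.0%}")  # 50%
```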

I fully expect to hear some pushback about these numbers, but science and math don’t lie. Phobias about resolution are long entrenched, respected, and expected. However, in the end, it really doesn’t matter that much.

No-nos

There are two unforgivable sins in preparing your images for proper resolution. Low-res and up-res.

Low-res

The biggest sin of all is sending files to the printer/publisher with too little resolution.

That is a certain formula for poor results, and it shows up in the form of soft detail and jagged, “bitmapped” edges that normal sharpening only makes more visible.

Every form of print technology requires a minimum of pixels to produce fully-detailed and sharp images. So do not shortchange your project in this respect.

Remember, size your images for their final appearance and assign the PPI at that final size. If you want an 8”x10” image to appear in print, make sure you address the PPI in the Image Size dialog before you save the file.

Monitor the Image Size dialog carefully when you make changes. When resampling an image, watch the file-size figure at the top of the dialog and try never to let it increase. You can get away with a small increase, but do so only when necessary.

Up-res

Make it a rule never to increase your image size as it is a sure-fire recipe for disaster. You can’t create detail; you can only destroy it. Whatever size file (pixel count) you begin with is the largest pixel count you should print unless you’re okay with soft images.

Pixels are not rubber, and you cannot stretch them to a larger size without sacrificing the sharpness of the image. Your digital camera most likely provides ample original pixels to print most projects; try to stay within that original pixel count.

You can increase the image size, but you can’t increase its detail. Every time you enlarge an image, you distort the pixels. So if you want to print sharp images, don’t enlarge them!

The major advantage to maintaining higher resolution files for an archive is that if an image ever needs to be cropped or enlarged, that extra resolution will undoubtedly come in handy.

It remains standard operating procedure in the printing industry to send all files to the printer with 300 PPI resolution. Cloud services, backup systems, and storage media sales folks certainly want you to continue the 300 PPI trend and rent more parking space on their sites.

Final thought

Make it your goal to make the best of this visual illusion called photography. Your camera, your computer, and your printer provide all the tools you need to perform your magic with great success. Enjoy.



Don’t Create Detail, Just Reveal It – How to Reveal the Hidden Details in Your Photos

The post Don’t Create Detail, Just Reveal It – How to Reveal the Hidden Details in Your Photos appeared first on Digital Photography School. It was authored by Herb Paynter.

Just as cleaning the lenses of your eyeglasses clarifies what you see, cleansing your pictures of dull lighting will put the sparkle in your photos.

Have you noticed how many individual tools are available in your favorite editing software for changing the values of pixels? The array is dazzling, and most of this editing involves “localized” procedures (dodging, burning, painting, cloning, masking, etc.) affecting specific areas.

But here’s something to consider.

Unless the image you are working on is either damaged (either completely blown-out highlights, plugged-up shadows) or just contains too much unwanted clutter, you rarely need to create specific detail with these tools. The detail is usually right there just below the surface waiting for discovery. You need only make global adjustments to the tones within the darker and lighter ends of the range to achieve pretty amazing results.

When I took this shot of my wife Barbara fifteen years ago, I put it in the reject file because it was so dark. But carefully adjusting and lightening the shadow and middle tones in the picture separated the deep shadow tones from the middle tones. Now both she and the picture are definite keepers. No local editing was necessary, and there is no tell-tale evidence of a touchup. The image contained all the necessary lighter tones – they simply had to be uncovered.

Push tones instead of pixels

Post-processing digital images is usually a process of subtraction; removing the visual obstacles that are covering the underlying detail in a photographic image. This detail will reveal itself if you merely nudge the tonal ranges instead of the pixels.

The fact is…all the detail in every subject has been duly captured and is hiding in either the shadows or the highlights, waiting to be discovered.

The digital camera’s image sensor sees and records the entire range of tones from black to white within every image it captures. What is hiding within this massive range of tones is the detail. Unfortunately, the camera sensor has no way of knowing the detail that may be under (or over) exposed within that range. It simply captures everything it sees inside the bookends of dark and light.

Camera image sensors can capture a range of up to 16,000 levels of tone between solid color and no color. This doesn’t mean that all 16,000 tone values are actually present in every picture; it just means that between the darkest and lightest tones there is an enormous amount of room for the significant detail hiding in the middle.

Adjustments made to the image in Alien Skin’s Exposure X4.5 revealed detail in the sunlit walkway and darkened archway that appeared lost in the original capture. No painting or cloning tools were necessary.

The purpose of this article is not to get geeky about the science, but to assure you that there is an amazing amount of detail that you can recover from seemingly poor images.

A basic JPEG image can display 256 tones in each color channel. While that doesn’t sound like much, you should know that the human eye can only perceive a little over 100 distinct levels of each color. No kidding! Technically, 256 tones are already more than the eye needs.
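Those tone counts come straight from bit depth: the number of levels is two raised to the bits per channel. A tiny Python sketch (my illustration):

```python
def tone_levels(bits):
    """Number of distinct tone levels per channel at a given bit depth."""
    return 2 ** bits

print(tone_levels(8))   # 256   -- JPEG, monitors, and print
print(tone_levels(12))  # 4096  -- many raw files
print(tone_levels(14))  # 16384 -- the "up to 16,000 levels" figure above
```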

The balancing act

Here’s a sobering truth: your camera can capture more detail than your eye can detect and more tones than your monitor can display. As a matter of fact, it can capture up to 16,000 levels of tone and color. That’s more than any publishing resource (computer monitor, inkjet printer, the Internet, or any printed publication) can reveal; each of these outlets is limited to reproducing just 8 bits (256 levels) of each color. The camera’s light-capture range is even beyond the scope of human vision, and its light-to-dark range is immense compared to any reproduction process. What this means is that the editing part of the photography process needs MUCH more attention than the image-capture process.

This introduces a complex but interesting phenomenon. Your post-production challenge is to emphasize the most important details recorded inside the tones captured by your camera and then distinguish them sufficiently for the printer, your monitor, or the Internet to reveal.

Your camera captures an incredible amount of detail in each scene that isn’t initially visible. However, with the right software, this detail can be uncovered just as an electron microscope can reveal detail buried deep inside things that the naked eye cannot perceive.

Image editing is all about discovering and revealing what is hiding in plain sight.

Image clarity

Bringing a picture to life doesn’t always require additional touchup procedures. Sometimes just massaging the existing detail does the trick. The Highlights, Shadows, and Clarity sliders were all that were required to take this shot from average to special.

Clarity is the process of accentuating detail. The dictionary defines clarity as “the quality of being easy to see or hear; sharpness of image or sound.” When we clarify something, we clear it up. We understand it better. We view an issue from a different perspective.

Many image editing software packages have a slider called “Clarity.” Its function is to accentuate minor distinctions between lighter and darker areas within the image. The other tone sliders (Exposure, Contrast, Highlights, Shadows, Whites, Blacks, and Dehaze) each perform a similar clarifying process on a specific tonal range.

The real beauty of shooting with a 12/14-bit camera is the level of access you receive to the detail captured in each image. If you want to think “deep,” you can start with the editing process of your digital images. You’ll be amazed at what you will find when you learn to peel away the microlayers of distracting information in well-exposed photos.


Adobe Camera Raw controls reveal significant detail in the darker portions of the image by simply adjusting the Basic slider controls.

Learning to expose images correctly

Excellent teaching resources like Digital Photography School teach you how to set your equipment correctly to capture a variety of subjects and scenes. Study the articles in this amazing collection and learn to shoot with the basic tenets of good exposure in mind. Poorly captured images will hinder your discovery of detail, but correctly exposed images will reward you not only with beautiful color but with access to an amazing amount of detail.

Learn to harness the power of light correctly for the challenge that each scene presents by balancing the camera controls of ISO, Aperture, and Shutter Speed. The more balanced your original exposure, the less post-processing will be necessary.

Conclusion

Every scene presents a unique lighting situation and requires a solid understanding of your camera’s light-control processes to capture all possible detail. Any camera can capture events and document happenings, but it takes a serious student of photography to faithfully capture each scene in a way that allows all that information to be skillfully sculpted into a detailed image.



How to Prepare Images For Publication – Part Two

The post How to Prepare Images For Publication – Part Two appeared first on Digital Photography School. It was authored by Herb Paynter.

In part one of this series, I presented the reasons why images printed in magazines and publications can appear lackluster, dark, and dull rather than detailed and vibrant as when printed on an inkjet printer. In this follow-up article, I address the unique requirements and limitations of printing presses and some ways to produce rich and detailed images in print.

Fine Tuning the Process for Print

Paper surfaces

The depth and detail that a press can reproduce in the darkest (shadow) portions of an image are limited by several print-related factors, with the paper grade (quality) being the biggest factor. Printing papers come in various grades, textures, and shades of white.

White is a relative term, and newspapers are a prime example. Newsprint isn’t actually pure white and the ink printed on it never appears black.

Printing inks

Newspaper inks are nearly liquid compared to inks used in other forms of print. The tack level (stickiness) of these inks must remain very low because newsprint is quite soft; full-bodied inks printed at high speed would tear the paper apart. Instead of black ink on white paper, newspapers deliver something closer to charcoal-colored ink on light-gray paper. This factor alone diminishes the visual contrast in pictures. Newsprint also absorbs ink like a paper towel, which is why pictures in the newspaper lack contrast, punch, and depth.

Magazine paper surfaces

Publication (magazine) presses fare much better. However, they still have limitations. Paper grades for publications are still lower quality than those of brochures and coffee-table books because of project economics. Most publication stocks are made from recycled paper, in which many of the whitening agents and glossy coatings used in higher-grade papers are absent. The result is a less reflective surface and varying shades of off-white. While recycled paper is good news for the environment, it’s bad news for print quality.

The challenge

High-speed presses must also reduce the tack level of their inks to keep these papers flowing through the presses. When the tack level goes down, so does the opacity of the inks, and when the tack level of translucent inks is reduced, the contrast in the images (and image detail) is also reduced. You can see where this is headed…

The creative solution

Thus the challenge is to maintain as much apparent contrast in each image as possible under less than ideal circumstances. Here is where the creative magic of contrast “compensation” enters the picture. Prior to the era of digital editing, this creative level of tonal manipulation was simply out of reach. While adjusting the overall contrast (white, middle, and black points) of printed images has always been possible, serious contour shaping was not. But within current digital image editing software, the entire internal range of tones can be tuned and cajoled with great precision. Success simply takes a clear understanding of the limitations and a good knowledge of the tools in the digital tool chest.

The sun backlit the subjects in this photo, causing the darker areas to hide significant detail. If sent to press without compensating adjustments, the printed results would have looked even darker, and important detail would have been lost.

Pictured here are the settings that produced the Civil War reenactment photo above. Information contained in the middle tones and shadow tones was recovered through powerful tonal adjustments available in each of the four software applications. Very similar settings produced very similar results. The panels (from left to right) are Adobe Camera Raw, Adobe Lightroom, On1 Photo RAW, and Alien Skin Exposure X4.5. Camera Raw and Lightroom produced identical results from identical settings for obvious reasons, while the development engineers at On1 and Alien Skin used unique routines and algorithms to effect very similar results.

 

The secret to success in adjusting the internal contrast of an image is developing a distinct visual difference between the whites and highlights and the shadows and black tones. The six major tonal sliders provided by most RAW editing software (Lightroom, Camera Raw, On1 Photo RAW, and Alien Skin’s Exposure X4.5) best address this.

Don’t let the term RAW scare you away. These editors can open and process just about every image file type (RAW, JPEG, TIFF, etc.). Each package provides very similar tonal-area adjustments (Exposure, Contrast, Highlights, Shadows, Whites, and Blacks), though each maintains a slightly different range for each control. Additional controls for fine-tuning tonal values include the Tone Curve adjustments: Highlights, Lights, Darks, and Shadows.
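To make the idea of zone-targeted sliders concrete, here is a toy sketch; the function name and weighting curves are my own invention, and commercial RAW editors use proprietary, far more sophisticated math. The point it illustrates is that each slider mostly affects its own region of the tonal scale:

```python
def adjust_tones(t, shadows=0.0, highlights=0.0):
    """Toy zone-weighted tonal adjustment on a normalized value in [0, 1].

    A Shadows move is weighted toward the dark tones and a Highlights
    move toward the light tones, so each slider leaves the opposite end
    of the scale nearly untouched.
    """
    shadow_w = (1.0 - t) ** 2      # strongest influence in the dark tones
    highlight_w = t ** 2           # strongest influence in the light tones
    t = t + 0.25 * shadows * shadow_w + 0.25 * highlights * highlight_w
    return min(1.0, max(0.0, t))
```

With the Shadows control at maximum (shadows=1.0), a dark tone of 0.1 lifts to about 0.30 while a light tone of 0.9 barely moves, which is exactly the selective behavior these sliders are prized for.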

The beauty of all these controls is that they are nonlinear, meaning they can be adjusted in any order and at any time during initial and follow-up editing sessions. Using these editing packages, truly non-destructive edits can be made to RAW, TIFF, and JPEG image files.

Backlighting and a black cat provided a serious challenge in this image. These adjustments were needed even if the picture was not going to press.

Three aspects of tonal controls

Familiarize yourself with these three general aspects of tonal controls to prepare your photos specifically for the printing press.

One

Since camera image sensors capture very little shadow detail, digital images require significant internal contrast adjustments to the lower portion of the tonal scale.

Shadow tones are the most challenging areas of each image to print cleanly on press. Therefore, you must create a sharp distinction between the darkest darks (Blacks slider) and the three-quarter tones (Shadows slider).

Use the Exposure slider in conjunction with the Blacks slider to bring out all the detail in the darkest portion of the image. Reference the histogram to gauge the actual pixels that will print darkest.
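If you prefer numbers to eyeballing the histogram, a quick tally of near-black pixels tells you how much of the image sits in press-endangered territory. The function name and the threshold of 25 (roughly 10% of the 0-255 scale) are illustrative choices, not a published standard:

```python
def shadow_fraction(pixels, threshold=25):
    """Fraction of 8-bit pixel values at or below `threshold`.

    A large fraction warns that much of the image lives in tones a
    press tends to plug up; lift the Blacks and Exposure sliders until
    the shadow detail separates.
    """
    dark = sum(1 for v in pixels if v <= threshold)
    return dark / len(pixels)
```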

Two

Lighten the middle tones and accent the difference between the quarter tones and the highlights.

Use the Curves tool to affect the middle tones while adjusting the Shadows and Highlights sliders to define the middle tones further.

Three

Reference the histogram again to monitor the lightest tones (Whites slider). “White” is a misnomer in the labeling of this slider, as its influence is on the extreme highlight tones. Draw a distinction between the light tones and absolute white by using the Highlights slider and the Whites slider.

The Exposure slider and the Contrast slider play an important part in this tonal ballet. Choreograph these controls to achieve the best balance of internal tones and check your progress by occasionally tapping the “P” key to preview the composite effects of all your adjustments against the original image.

Seemingly lost detail in the darker areas was completely recovered by some severe adjustments to individual tonal areas throughout the tonal range. The image was recovered with only the use of the sliders shown. No further editing (dodging, burning, etc.) was required.

This article is hardly an exhaustive explanation of how to prepare images for publication, inasmuch as it does not address the critical issues of color, sharpening, resolution, etc. But it will get you started on the most critical task: tone-sculpting images for reproduction. In every example shown, only global adjustments to the seven sliders were required to bring full life back into lackluster photos. The most critical aspect of post-production is an image’s internal tonality.

Shape each image’s internal contrast specifically for the press and paper being used. If you don’t, the printed images will probably hide shadow detail, lose their “snap” in the highlights, and produce muddy middle tones. Slight but deliberate accenting of the tone curve will produce significantly better images in print.

Working on images in these RAW Interpreter software applications provides amazing latitude in recovering both shadow and highlight detail. This example shows how On1 Photo RAW found significant detail in what appeared to be blown out highlights of a JPEG image.

Chasing light

At the core of the issue is light.

Everything about photography concerns light, and that includes viewing photos in print.

Images appear more vibrant and colorful on a monitor because the background “white” is projected light, not paper. Images printed on paper will ALWAYS appear less vibrant. Paper is only as white as the light reflecting from it; the darker the paper and the dimmer the reflecting light, the less bright the picture appears. Images in print will never look as good as images on your monitor simply because reflected light cannot compete with projected light.

Conclusion

Preparing images to print correctly is a serious challenge, but one that delivers an amazing result. If you want to test your image editing skills, it doesn’t get more challenging than this. The reward for all your print-editing efforts will last a whole lot longer than a post on the Internet and will be seen by thousands (if not millions) more than a print hanging in a gallery. People collect well-produced publications and display them for others to see.

Virtually all images deserve thoughtful preparation before presentation. The camera can’t evaluate tonality balance by human standards. Learning the reproduction habits and limitations of different devices and understanding how to best compensate for each will pay serious visual dividends.

Of course, the final challenge in preparing images for publication is converting the color mode from RGB to CMYK. Check with your publication about this matter before you arbitrarily choose CMYK from the Image/Mode menu. There are a number of workflows that publications use to produce their final files for the printer. I suggest you leave the color conversion decision up to the magazine’s production staff. The conversion process is a complex issue that deserves much more attention than I’m addressing in this series.

Please feel free to comment and question what you’ve just read. Life is a collaborative effort, and we’re all learning.


How to Prepare Images For Publication – Part One

The post How to Prepare Images For Publication – Part One appeared first on Digital Photography School. It was authored by Herb Paynter.

Images viewed on computer monitors don’t always match what comes out of inkjet printers. This is because the color pixels captured by digital cameras are defined quite differently from the pixels portrayed on the computer monitor, and the monitor’s pixels differ significantly from the ink patterns that are literally sprayed onto the paper.

Yet even though inkjet printers and printing presses both use CMYK inks, images printed on inkjet printers usually don’t look the same when printed in publications. Why?

Color images are displayed differently on each device because the technologies for each medium use different processes; monitors (left), inkjet (middle), and halftones (right).

The answer to this mystery eludes many of today’s magazine publishers and even many publication printers. This is a problem that the digital imaging community (photographers, image editors, and pre-press operators) have struggled with for decades. Color Management Professionals (CMPs) undergo rigorous color science studies to understand how to maintain the same look in color images that are reproduced on different substrates and a variety of printing processes. Since you may want to produce your images in print, we’ll look at a synopsis of what the challenges are and some surefire ways to produce the results you’re looking for.

First and foremost, cameras and monitors capture and project color images as RGB light but all ink-based printers must convert these RGB colors into CMYK colors behind the scene! Even though you send RGB files to your inkjet printer, the printer doesn’t rely on RGB inks to produce all the colors in the prints. RGB colors are for projecting colors while CMYK colors are used to print colors.

Projected colors are always viewed in RGB, while printed colors are always produced from some formulation of CMYK inks. That’s simply how color science works. Printers don’t print RGB colors directly; when you send an RGB image to your inkjet printer, it converts those colors into some form of CMYK during the printing process. Even when you send an RGB file to an eight-color printer, the base CMYK colors are merely augmented by slight amounts of photo cyan, photo magenta, red, and green. One notable exception, the Océ LightJet, produced color prints directly from RGB, but it didn’t use printing inks; it was a photographic printer that exposed photographic paper and film with RGB light. It is no longer manufactured.

Each printing process utilizes a unique pattern to express the variable tones between solid and white.

Vive la différence

The inkjet printing process is completely different from the publication reproduction process. As a matter of fact, the two systems are overtly dissimilar. If your images are headed for print and you are not sure which printing process will be utilized, you might be headed for trouble. Here’s why.

The possible surfaces for inkjet printing vary wildly and include everything from paper to wood, from metal to fabric, and on virtually every surface and texture in-between. To accommodate this range of printing applications, inkjet “inks” are liquid rather than solid, so they can be applied to varied surfaces and substrates.

Dots versus spots. The peanut butter consistency of press inks and the well-defined shapes of the halftone dots used by the printing industry differ significantly from the liquid inks and less defined “micro-dot” dithering used by the inkjet printing process.

The color spots produced by inkjet printing systems may include more than a dozen colors and are liquid to accommodate almost any surface. Printing-press dots are well-defined, symmetrical shapes of much thicker consistency to accommodate the high-speed transfer to paper. Both types of ink are translucent because they must blend to create other colors.

The extremely small inkjet droplets appear more like a mist than a defined pattern; each pixel value (0-255) creates a metered amount of microscopic spots so small that the human eye perceives them as continuous tone. Due to the smoothness of the tones and gradations of color, inkjet images require a bit of sharpening to deliver detail (detail, remember, is a product of contrast, and contrast is not a natural inkjet strength).
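The classic textbook version of this kind of dithering is Floyd–Steinberg error diffusion, sketched below; actual inkjet drivers use proprietary multi-level screening, so treat this only as an illustration of how scattered on/off dots can fake continuous tone:

```python
def floyd_steinberg(gray):
    """Binarize a 2D grayscale image (values 0-255) with error-diffusion
    dithering, the family of technique inkjet systems use to simulate
    continuous tone with discrete droplets."""
    h, w = len(gray), len(gray[0])
    img = [list(row) for row in gray]       # working copy (accumulates error)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= 128 else 0  # snap to nearest printable level
            out[y][x] = new
            err = old - new
            # Push the quantization error onto not-yet-visited neighbors.
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out
```

A uniform mid-gray patch comes out as an alternating dot pattern whose average matches the input, just as a smooth gray area in a print is really a field of discrete droplets.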

Dot structure of halftone images (left) and color dither pattern (right).

Both the inkjet and publication systems convert the RGB (red, green, and blue) values of each pixel into equivalent CMYK (cyan, magenta, yellow, and black) values before printing those colors onto paper. However, after the color conversion, the two processes take decidedly different paths to deliver ink on paper.
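The simplest textbook form of that conversion pulls the component shared by cyan, magenta, and yellow into the black (K) channel. A production workflow goes through ICC profiles, ink limits, and gray-component replacement instead, so the formula below is only a directional sketch:

```python
def rgb_to_cmyk(r, g, b):
    """Textbook RGB-to-CMYK conversion (0-255 in, ink fractions 0-1 out).

    Black (K) replaces the component common to all three subtractive
    inks; real prepress conversion is profile-driven and more complex.
    """
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)
    return tuple((x - k) / (1 - k) for x in (c, m, y)) + (k,)
```

Pure red, for example, maps to full magenta plus full yellow with no cyan or black, which matches how the subtractive inks overprint to build color.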

While printing presses use grid-based, well-defined dots that are impressed into paper surfaces, inkjet printers utilize micro-dot patterns sprayed onto surfaces. The same image may appear in several different forms during the reproduction process. Original image (far left), digital pixel (near left), printed halftone (near right) and Inkjet dither (far right)

Publications use the geometric structure of halftone dots to interpret pixel values as tonal values on paper. Each pixel produces up to four overprinted color halftone dots. These halftone dots translate darker values of each color into larger dots and lighter values into smaller dots. Across the full darkest-to-lightest range, the dots vary in size depending on the press and paper being printed.

To avoid the visually annoying conflict that occurs when geometric grids collide (called a moiré pattern), each CMYK grid is set at a carefully calculated angle. One advantage inkjet images have over halftone images is that the image resolution required for inkjet prints is significantly lower than that required by the halftone process used for publication images.

However, the most important issues to address with print have to do with color fidelity and tonal reproduction. The difference in the way inkjet images and publication images are prepared makes a huge difference in the way the images appear when they come out the delivery end of the process.

Inkjet printers are like ballet dancers while printing presses are more like Sumo wrestlers; not unlike chamber music versus thunder roll. One is quiet, graceful and articulate, the other noisy, violent and powerful.

The biggest difference between the two processes can be seen in the highlight and shadow areas. Inkjet inks are sprayed onto substrates through a very controlled matrix of 720-1440 spots per inch using a slow and measured inches-per-minute process. Publication presses smash ink into the paper under extreme pressure, at speeds measured in images-per-minute, translating the entire tonal range into a limited geometric matrix of just 150 variable-size dots per inch. Publication presses are huge, high-speed, rotary rubber stamps.
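The arithmetic behind that tonal squeeze is simple: a halftone cell is (device dots per inch ÷ screen lines per inch) dots on a side, and a cell of n × n dots can render n² + 1 distinct tones. A sketch (the function name is mine):

```python
def halftone_levels(device_dpi, screen_lpi):
    """Distinct tones a conventional halftone screen can render.

    Each halftone cell is (dpi // lpi) device dots on a side; a cell of
    n*n dots can be filled in 0..n*n ways, giving n*n + 1 gray levels.
    """
    n = device_dpi // screen_lpi
    return n * n + 1
```

A 2400-dpi platesetter at a 150-line screen yields 16 × 16 cells, or 257 levels, which is why roughly 256 tones per color is the practical ceiling of the halftone process.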

Inkjet printers carefully step the paper through the machine in an extremely precise manner while the printing press shows no such restraint. Presses display an amazing ability to control the placement and transfer of images in spite of the blazing speed of the process.

You might be able to dress a hippopotamus in a tutu, but you can’t expect it to pirouette. There are simply physical limitations. At production speeds, shadow details suffer, delicate highlights tend to drop off rather abruptly, and the middle tones print darker. The printing industry is aware of these dot-gain issues and compensates for them with G7 process controls and compensation plate curves, but the beast remains a beast.

There’s a pretty good chance that both color and tonal detail will be unwittingly lost in the printing process if nominally prepared images are sent to press. Having spent many years of my career in both photo labs and the pressroom, I can assure you that detail in both the lightest portions and darkest areas (and placement of the middle tones) will need special attention to transfer all the detail on the press. Highlights get flattened, and shadows get closed more easily because of the high speeds and extreme pressures involved.

This means that images destined for print must exhibit more internal contrast in the quarter tones (between middle tones and highlights) and three-quarter tones (between middle tones and shadows), as well as a slight adjustment to the middle tones, to reproduce at their best. I’m sure I will hear some disagreement about this from some publishers, but as a former pressman, I know that images that don’t get this special attention usually print somewhat flat.

The image on the left might look good as a print, but it would reproduce poorly on a press. The shadow areas would get even darker and lose all detail. The image on the right will darken slightly in the lower tones producing an excellent result in print. White balance is also critical in publication printing. Compensating for the unavoidable effects of the press always pays off.

There is a cardinal rule in printed publications: even the whitest whites and darkest darks must contain dots. The only “paper white” should be specular highlights (light reflecting from glass or chrome), and even pure black doesn’t print solid black; everything contains dots. Unlike inkjet printers, printing presses cannot hold (or print) dots smaller than a 2-3% value (a pixel value of about 247). Dots smaller than this never make it onto the paper, which is why additional internal contrast is needed at both ends of the tonal range.
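That rule can be expressed as a clamp on dot area. The function below is a hypothetical sketch (prepress systems apply this through transfer curves, not per-pixel code), but the numbers line up with the rule of thumb above: a 3% minimum dot corresponds to a pixel value of about 247:

```python
def clamp_to_press(value, min_dot=0.03, max_dot=0.97):
    """Map an 8-bit tone so its halftone dot stays within what a press
    can hold (illustrative values: 3% minimum dot, 97% maximum)."""
    dot = 1 - value / 255                  # dot area: 0 = paper, 1 = solid ink
    dot = min(max(dot, min_dot), max_dot)  # keep dots the press can carry
    return round(255 * (1 - dot))
```

Pure white (255) clamps down to 247 and pure black (0) lifts to about 8, so every printed area carries a dot the press can actually hold.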

Photographers certainly know their way around cameras and software (Lightroom or Photoshop), and they understand color and tonality as it relates to mechanical prints. They are also accustomed to references to RGB (red, green, and blue) colors and may even understand how inkjet printers work, but very few are familiar with the behavior and limitations of huge printing presses. The analogy of ballet dancers versus Sumo wrestlers is an accurate one.

Photographers understand fine art prints and image editing software though few see their photos through the eyes of pressmen. But perhaps they should!

There is a significant difference between preparing photos for inkjet printers and preparing images for publication presses. Publication RGB-to-CMYK conversion differs significantly from inkjet conversion in color gamut, image saturation, and tonal reproduction.

When an image is captured, it can potentially possess more than 4,000 tones per (RGB) channel. That’s a whole bunch of possible colors. But the sobering factor is that every printing process reduces those roughly 4,096 possible tones down to a mere 256 tones per channel before any ink hits the paper. Obviously, the post-processing tone and color shaping of camera images is super-critical! Simply put, how the photographer shapes all that data before it is ready for print determines how much detail and clarity will reach the pages of the magazine.
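The tonal reduction is plain bit arithmetic: a 12-bit RAW channel holds 2¹² = 4,096 levels, print output gets 2⁸ = 256, so every 16 captured tones collapse into one printable tone. A sketch:

```python
RAW_BITS, PRINT_BITS = 12, 8
RAW_LEVELS = 2 ** RAW_BITS      # 4096 tones per channel in a 12-bit RAW
PRINT_LEVELS = 2 ** PRINT_BITS  # 256 tones per channel in print

def quantize_12_to_8(v12):
    """Collapse a 12-bit value (0-4095) to 8 bits by discarding the low
    4 bits: 16 captured tones merge into each printable tone, which is
    why tonal shaping BEFORE this reduction decides what detail survives."""
    return v12 >> (RAW_BITS - PRINT_BITS)
```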

Once again, the top picture would print great on an inkjet printer but would lose very critical detail on a press. Compensation for the unavoidable effects of the press is always advised. In Part 2 of this series, I’ll show you exactly what adjustments were made to this photo. Additional sharpening also helps compensate for the slight blurriness of the halftone process.

The old adage “start with the end in mind” comes clearly into focus here. No matter how much data is captured by the digital camera, the publication press is the ultimate arbiter of tones and colors, and deserves the loudest voice in the conversation. The color gamut of CMYK conversion is even more restricted than the basic sRGB gamut of Internet images, making this post-processing exercise perhaps the most precarious scenario of them all. If you ignore the special attention needed for magazine images, don’t expect the images to pop off the page. Ignore the press’s advice, and you’ll pay the price in both detail and color reproduction.

In the follow-up article entitled “Preparing Images for Publication Part 2,” I’ll reveal the literal “trade secrets” for producing great publication images.

 



Understanding Imaging Techniques: the Difference Between Retouching, Manipulating, and Optimizing Images

The post Understanding Imaging Techniques: the Difference Between Retouching, Manipulating, and Optimizing Images appeared first on Digital Photography School. It was authored by Herb Paynter.

Understanding Imaging Techniques

Three distinct post-production processes alter the appearance of digital photographs: Retouching, Manipulating, and Optimizing. These terms may sound similar enough to be synonymous at first glance, but they are entirely different operations. Once you understand the difference between these three processes, your image editing will take on new meaning, and your images will deliver powerful results.

Image retouching

Photo retouching is image alteration that intends to correct elements of the photograph that the photographer doesn’t want to appear in the final product. This includes removing clutter from the foreground or background and correcting the color of specific areas or items (clothing, skies, etc.). Retouching operations make full use of cloning and “healing” tools in an attempt to idealize real life. Unfortunately, most retouching becomes necessary because we don’t have (or take) the time to plan out our shots.

Our brain tends to dismiss glare from our eyes, but the camera sees it all. A slight change of elevation and a little forethought can save a lot of editing time.

Planning a shot in advance will eliminate the need for much of this damage control, but it involves a certain amount of pre-viewing: scouting out the area and cleaning up items before the camera captures them. This includes “policing” the area… cleaning mirrors and windows of fingerprints, dusting off surfaces, and general housekeeping chores. It also includes putting things away (or in place), previewing and arranging the available lighting and supplementing it with flash units and reflectors where required, checking for reflections, etc.

Benjamin Franklin coined the phrase “an ounce of prevention is worth a pound of cure,” which pretty much sums up the cleanup chores. We also use the phrase “preventative maintenance;” fixing things before they break and need repair.

Admittedly, we don’t often have the luxury of time required to primp and polish a scene before we capture it, and retouching is our only option. However, sometimes all we need to do is evaluate the scene, move around and see the scene from another angle, or wait for the distraction to move out of the scene.

Sometimes a small reposition can lessen the amount of touchup and repair needed.

We can’t always avoid chaos, but we can limit the retouching chore with a little forethought. It takes just a fraction of a second to capture an image, but it can take minutes to hours to correct the problems captured.

Image manipulation

Manipulation is a bit different, though it is occasionally compounded with retouching. When we manipulate a photo, we truly step out of reality and into fantasyland: we override reality and get creative, moving or adding elements to a scene or changing sizes and dimensions. We become a “creator” rather than simply an observer of a scene. This is quite appropriate when creating “art” from a captured image and is ideal for illustration, but it perhaps shouldn’t be a regular post-capture routine.

Photo-illustration is an excellent use of serious manipulation, and can be quite effective for conveying abstract concepts and illustrations.

Earlier in my career, I worked as a photoengraver in a large trade shop in Nashville, Tennessee, during the early days of digital image manipulation. The shop handled the pre-press chores for many national accounts and international publications. On one occasion in 1979, we were producing a cover for one of these magazines. On the cover was a picture of Egypt’s President Anwar Sadat set against one of the great pyramids. Unfortunately, the pyramid was positioned where it interfered with the titles on the magazine’s cover.

While this is not the exact picture used in the magazine, you see the challenge.

The Art Director for the magazine sent instructions for us to shift the pyramid in the picture so that the titles would not interfere with it. Moving that thing was an amazing feat back then. Normal airbrushing would have left obvious evidence of visual trickery, but digital manipulation opened a whole new potential for near-perfect deception. We were amazed at the potential but a bit nervous about the moral implications of using this power.

This venture was accomplished (over a decade before Photoshop) on an editing machine called a Scitex Response, a workstation supported by a very powerful minicomputer. Nobody outside that small building knew that, from Nashville, we had pushed an Egyptian pyramid across the desert floor, a feat not revealed until years later. Shortly thereafter, digitally altered images were prohibited from use as evidence in a court of law by the Supreme Court of the United States. Today, this level of manipulation lets you routinely alter reality and play god on a laptop, sitting on a park bench.

Manipulation is powerful stuff and should be used with serious restraint; not so much for legal reasons, but because of diminishing regard for nature and reality. Fantasyland is fun, but reality is where we live. We quite regularly mask skies and replace boring clouds with blue skies and dramatic clouds, and even sunsets – all without hesitation. We can move people around a scene and clone them with ease using popular photo editing software. Reality has become anything but reality. Photo contests prohibit photo manipulation in certain categories, though a skillful operator can cover their digital tracks and fool the general public. However, savvy judges can always tell the difference.

Typical manipulation: a clouded sky added to replace lost detail.

Personal recommendation: keep the tricks and photo optics to a minimum. Incorporating someone else’s pre-set formulas and interpretation into your photos usually compromises your personal artistic abilities. Don’t define your style by filtering your image through someone else’s interpretation. Be the artist, not the template. Take your images off the assembly line and deal with them individually.

Image optimization

Photo optimization is an entirely different kind of editing altogether and the one that I use in my professional career. I optimize photos for several City Magazines in South Florida. Preparing images for the printed page isn’t the same as preparing them for inkjet printing. Printing technology uses totally different inks, transfer systems, papers, and production speeds than inkjet printers. Each process requires a different distribution of tones and colors.

Since my early days in photoengraving, I’ve sought to squeeze every pixel for all the clarity and definition it can deliver. The first rule (of my personal discipline) is to perform only global tonal and color adjustments. Rarely should you have to rely on pixel editing to reveal the beauty and dynamic of a scene. Digital photography is all about light. Think of light as your paintbrush and the camera as nothing more than the canvas that your image is painted on. Learn to control light during the capture and your post-production chores will diminish significantly. Dodging, burning and other local editing should be required rarely, if at all.

Both internal contrast and color intensity (saturation) were adjusted to uncover lost detail.

Even the very best digital camera image sensors cannot discern what is “important” information within each image’s tonal range. The camera’s sensors capture an amazing range of light from the lightest and the darkest areas of an image, but all cameras lack the critical element of artistic judgment concerning the internal contrast of that light range.

If you capture your images in RAW format, all that amazing range packed into each 12-bit image (4,096 tonal levels per channel, nearly 69 billion possible color combinations) can be interpreted, articulated, and distributed to unveil the critical detail hiding between the shadows and the highlights. I’ve edited tens of thousands of images over my career, and very few cannot reveal additional detail with just a little investigation. There are five distinct tonal zones (highlights, quarter tones, middle tones, three-quarter tones, and shadows) in every image, and each can be individually pushed, pulled, and contorted to reveal the detail contained therein. While a printed image is always distilled down to 256 tones per color, this editing process lets you, the artist, decide how the image is interpreted.
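For orientation, the five zones can be pictured as an even split of the 8-bit 0-255 scale. The even division and the function name here are illustrative choices; editors draw the zone borders differently:

```python
ZONES = ["shadows", "three-quarter tones", "middle tones",
         "quarter tones", "highlights"]

def tonal_zone(value, levels=256):
    """Name the tonal zone of an 8-bit pixel value (0 = darkest),
    using an even five-way split of the scale for illustration."""
    return ZONES[min(4, value * 5 // levels)]
```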

Shadow (dark) tones quite easily lose their detail and print too dark if not lightened selectively by internal contrast adjustment. The Shadows slider (Camera Raw and Lightroom) was lightened.

The real artistry of editing images is not accomplished by the imagination, but rather by investigation and discernment. No amount of image embellishment can come close to the beauty that is revealed by merely uncovering reality. The reason most photos don’t show the full dynamic of natural light is that the human eye can interpret detail in a scene while the camera can only record the overall dynamic range. Only when we (photographers/editors/image-optimizers) take the time to uncover the power and beauty woven into each image can we come close to producing what our eyes and our brain’s visual cortex experience all day, every day.

Personal Challenge

Strive to extract the existing detail in your images more than you paint over and repair the initial appearance. There is usually amazing detail hiding there just below the surface. After you capture all the potential range with your camera capture (balancing your camera’s exposure between the navigational beacons of your camera’s histogram), you must then go on an expedition to explore everything that your camera has captured. Your job is to discover the detail, distribute the detail, and display that detail to the rest of us.

Happy hunting.

The post Understanding Imaging Techniques: the Difference Between Retouching, Manipulating, and Optimizing Images appeared first on Digital Photography School. It was authored by Herb Paynter.

Photo Finishing – Challenge Yourself to Reveal the Personality in Every Image You Capture

The post Photo Finishing – Challenge Yourself to Reveal the Personality in Every Image You Capture appeared first on Digital Photography School. It was authored by Herb Paynter.

Many folks think that photography takes place in the camera, but that’s not the whole truth. Photography is a two-part process that involves 1) capturing the light from a scene, and 2) shaping that captured light into a form that matches what your mind saw when you took the picture. The capture process does happen inside the camera, but the shaping part happens on your computer.

The Capture, or Photo Process

We give the camera credit for things that it doesn’t actually do. Don’t get me wrong, capturing all the light in a scene is a monumental undertaking. Keeping track of millions of points of light is a very critical and specialized responsibility. However, the camera is not so much an artistic tool as it is a capture device with a single purpose – to accurately record the light from the surfaces of objects in a scene. While that purpose can get complicated with lighting challenges, the camera is still just a box with a round glass eye and a single function: to record light.

When the light of a scene enters the camera lens, it gets dispersed over the surface of the camera’s image sensor, a postage-stamp-sized electrical circuit containing millions of individual light receptors. Each receptor measures the strength of the light striking it, and each records its light value as a color pixel.

The camera’s image processor reads the color and intensity of the light striking each photoreceptor and maps each image from those initial values, producing a reasonable facsimile of the original scene. When this bitmap of pixels gets viewed from a distance, the eye perceives the composite as a digital image.

The real magic happens after the light is stored on the memory card. The image that first appears when you open the file is the image processor’s initial attempt at interpreting the data recorded by the camera’s image sensor. Most times, this initial (JPEG) interpretation is an acceptable record of the original scene, though not always.

Presets

Your camera provides several pre-set programs that adjust the three settings in the camera that affect exposure: aperture, shutter speed, and ISO.

Three main controls determine your exposure: the shutter speed, the aperture, and the ISO. The camera presets (A, S, P, and M) allow you to determine the depth of field and/or the speed with which the camera captures the light.

The A (aperture priority) mode allows you to set the size of the lens opening (f-stop) while the camera automatically sets the shutter speed. The S (shutter priority) mode lets you set the duration of the exposure (shutter speed) while the camera adjusts the size of the lens opening. The P (program mode) lets you shift the mix of aperture and shutter speed while the camera retains the correct balance of light for the exposure. The M (manual mode) gives you complete control over all settings but requires you to balance the overall exposure yourself.
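The trade-off between these settings can be expressed as a single exposure value (EV): open the aperture one stop and halve the shutter duration, and the overall exposure stays put. A small sketch of that arithmetic (using the standard EV formula referenced to ISO 100; marked f-numbers are nominal, so equivalent settings differ only by rounding):

```python
import math

def exposure_value(f_number, shutter_s, iso):
    """EV referenced to ISO 100: higher EV means less light recorded."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Two nominally equivalent settings: opening the aperture one stop
# (f/8 -> f/5.6) while halving the shutter duration (1/125 -> 1/250)
# leaves the exposure essentially unchanged.
ev_a = exposure_value(8.0, 1 / 125, 100)
ev_b = exposure_value(5.6, 1 / 250, 100)
print(round(ev_a, 2), round(ev_b, 2))
```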

Your camera’s variable ISO (named for the International Organization for Standardization) setting adjusts the light sensitivity of the camera’s image sensor, allowing you to capture scenes in dim or bright light: the higher the number, the more sensitive the light receptors become, allowing you to capture images in lower levels of light.

The Histogram

Your camera provides a small graph that roughly indicates how well the camera is set to correctly capture the light in the current scene.

This graph displays the range of light coming through the lens and approximates the distribution of light that would be captured under the current settings. By adjusting the three settings mentioned above, you can shift this distribution to best record the scene’s full range of light.
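What the histogram does can be sketched as a simple bucketing of brightness values into the five tonal zones; the pixel data below is invented for illustration:

```python
# A luminance histogram like the camera's, sketched for an 8-bit image
# flattened to a list of 0-255 brightness values (hypothetical data).
def histogram(pixels, bins=5):
    """Bucket pixel values into tonal zones, darkest to lightest."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    return counts

# An underexposed frame: most values crowd the shadow end of the graph.
pixels = [10] * 600 + [70] * 250 + [130] * 100 + [200] * 40 + [250] * 10
zones = histogram(pixels)
print(zones)  # shadows, 3/4-tones, mid-tones, 1/4-tones, highlights
```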

Color balancing the light

Every scene’s color cast is influenced by the temperature of the light illuminating that scene. When the scene is captured outside, the Sun’s position in the sky and the influence of cloud cover alters the color of the light. Your camera offers at least two ways to compensate for the differences in color temperature (Auto White Balance and Pre-set Color Balance).

Auto White Balance

The Auto White Balance (AWB) sensor in your camera seeks any prominent white or neutral subject in the scene and shifts the entire color balance of the scene in an effort to neutralize that element. But there is an assumption with AWB that you desire the current lighting to be perfectly neutral in color.

Any clouds interfering with the sunlight will slightly shift the neutrality of 6500K (natural daylight) lighting, and AWB takes that shift out of the equation. Most of the time, this is a great idea. However, when recording early morning or late afternoon (golden hour) scenes, AWB will neutralize those warm colors and completely lose that “warm” mood.
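One classic way to approximate what AWB does is the “gray-world” assumption: scale each channel so the scene averages to neutral. This sketch shows the principle only; actual in-camera AWB algorithms are proprietary and far more sophisticated:

```python
# Gray-world auto white balance: assume the scene should average to
# neutral gray, then scale each RGB channel to make it so.
def gray_world(pixels):
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3
    gains = [gray / a for a in avg]
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A warm (tungsten-tinted) patch of nominally white paper:
warm = [(250, 220, 180)] * 4
print(gray_world(warm))  # each pixel pulled toward neutral gray
```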

Pre-Set White Balance Settings

Your camera offers several pre-sets to offset any known color casts caused by specific lighting situations. These settings appear in every digital camera “Settings” display and may appear in a slightly different order or wording. Daylight sets the camera to record scenes under typical mid-day outdoor lighting. Cloudy/Overcast shifts the colors toward orange to compensate for the bluish cast caused by light filtering through nominal cloud cover.

Shade offers a stronger orange shift to compensate for completely overcast (stormy) skies. Flash provides a very similar color temperature lighting as Daylight and is intended to prepare the image sensor for artificial daylight or “Speed light” type flash devices.

Tungsten/Incandescent shifts the colors toward the blue end of the color range to compensate for the warmer shift of incandescent lights. Fluorescent attempts to compensate for the greenish cast of gas-charged fluorescent lights.

Kelvin/Custom permits the user to set a custom color balance setting, essentially teaching the camera what “neutral” gray color looks like. All of these pre-sets attempt to correct non-neutral lighting conditions.

The Sculpting, or Finishing Process

While the camera does capture the full range of reflected light in a scene, it has no way of knowing the best tonal curve to apply to each image. Many times the five tonal ranges (highlight, quarter, middle, three-quarter, and shadow) need to be reshaped to best interpret the light captured at the scene. This tonal contouring process is the magic of sculpting the light into a meaningful visual image.

This little fella perched outside my front door and caught me off guard. I didn’t have time to fiddle with the controls to optimize the lighting situation. My first click got his attention and the second got this expression. Fortunately, I capture my images in both JPEG and RAW formats simultaneously. Doing so allowed me to post-process the tones and show you what I actually saw that morning.

I use the term “sculpting” when talking about image editing because it best describes the rearranging of tones in a digital image. Only a scene with ideal lighting balance looks great when rendered as a “stock” JPEG camera image.

This sculpting or finishing process amounts to the clarification of tones and colors in a digital image; making the image appear in final form the way the human mind perceived it in the original scene. While the color balancing aspect of this process is a bit more obvious, the tonal recovery is actually more critical to the final presentation.

The digital camera cannot capture all of the dynamics of the visible spectrum on a sunny day, nor can it determine the best balance of those tones. The camera’s image sensor simply captures all the light possible and presents the data to the camera’s image processor to sort out. Under perfectly balanced lighting, this works out just fine, but occasionally detail hides in the shadows and gets lost in the highlights, requiring help from the photographer/editor to balance out the tones.

This is where the individual tone-zones come into play, and the sliders available in RAW processing software (Camera Raw, Lightroom, On1 Camera Raw, Exposure X4) are invaluable. The internal contrast of every image (Whites, Highlights, Middle tones, Shadows, Blacks) can be pushed around and adjusted in a very non-linear manner (in no particular order) to reveal detail that otherwise remains hidden.
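The effect of a Shadows-style slider can be sketched as a tone curve that lifts dark values while anchoring pure black and pure white in place. This gamma-style curve is an illustrative stand-in, not Adobe’s actual slider math:

```python
# A shadow-lift tone curve: raise dark values while pinning black (0)
# and white (1) in place. Values are normalized to the 0..1 range.
def lift_shadows(value, amount=0.6):
    """amount < 1 lifts shadows; amount > 1 would deepen them."""
    return value ** amount

# Deep shadows gain the most; highlights barely move.
for v in (0.0, 0.1, 0.5, 1.0):
    print(v, "->", round(lift_shadows(v), 3))
```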

Conclusion

Photo finishing isn’t complete until both color and tones are correctly adjusted for maximum effect, matching the emotion of the original scene. Only then is your image ready for viewing. Challenge yourself to squeeze the detail and reveal the potential personality out of every image you capture. It’s well worth the extra effort.

The post Photo Finishing – Challenge Yourself to Reveal the Personality in Every Image You Capture appeared first on Digital Photography School. It was authored by Herb Paynter.

What Your Camera Can’t See

The post What Your Camera Can’t See appeared first on Digital Photography School. It was authored by Herb Paynter.

For all the incredible technology packed into cameras, there is one missing element that will remain missing perhaps forever. The missing element? The combination of human eyesight and the brain’s image processor called the Visual Cortex.

The Visual Cortex

The Visual Cortex is located in the lower rear of your brain. It is here the real color perception magic happens – magic that goes way beyond the analysis capabilities of any camera on the planet. As you understand this human version of the camera’s image processor, your understanding of the photo process will come into clearer focus.

Medical experts tell us that more than 80% of what we experience enters our brain through our vision. Your eyes capture light’s amazing array of colors as the eye’s lens focuses light beams onto the panoramic viewing screen in the back of your eyeball called the retina.

Your brain is very forgiving. It focuses light entering through your eyes, and automatically color corrects almost every lighting condition and color cast en route to the Visual Cortex. Within seconds, your eyes and brain adjust to a wide range of lighting intensities and color influences and deliver very believable images to your mind. And it all happens without you even realizing it. No white balance to set, no color shifts to neutralize. Your brain’s magic intuition and forgiving nature do a crazy-good job of color correction for you.

Your camera records colors a bit more objectively. However, even when shooting RAW files, decisions about color still have to be made in the editing process. Your camera simply doesn’t have cognitive or reasoning skills and thus must be tutored to interpret what it “sees” accurately. You might say that your camera sees, but it doesn’t observe.

White balance and memory colors

When you visually observe a white sheet of paper in daylight (preferably outside, in natural light), the paper looks… white. Even when you observe that same white paper indoors under tungsten light, your brain recognizes that the paper is really white. This is because the human brain possesses what we call “memory colors”: a basic set of colors so familiar that even lighting variances cannot confuse them.

Your camera cannot remember what color white is when it is captured under different types of lighting. It must be told every time. What your camera calls “memory” isn’t the same “memory” that your human brain possesses.

When you set your camera’s White Balance to Daylight and take a picture of the white paper outside, it indeed appears white. That is merely the way the camera’s image sensor is biased to record light under daylight (6500K) color conditions. However, when you move inside and shoot the same white paper under tungsten lighting (using the same Daylight WB), the paper appears to the camera to be somewhat yellow.

Auto White Balance (left) and Tungsten (right)

Changing the camera’s WB setting to Auto White Balance (AWB) and shooting the paper under a typical table lamp, the picture still appears slightly yellow. Even when you set the camera’s WB to Tungsten, the paper still fails to appear perfectly neutral white, though it comes much closer.

The truth is, there are colors in the visual spectrum that digital cameras record differently than film cameras did in the past. And neither technology captures and records the exact colors that the human eye sees or the mind perceives. This is why most captured images, for all their beauty, still lack the full sense of authenticity and depth that the human mind experiences from light observed in every scene.

Technically (and spectrally), in each case, the camera is telling the truth, just not the “truth” that we perceive with our eyes. This is, of course, a good example of why we shoot in raw format. When captured in raw format, all regimented color categories get ignored. Any color shifts can be corrected and lighting variances addressed in the post-processing stage.

As mentioned earlier, the camera can’t see the white paper as white (the way our eyes do) regardless of the lighting situation because the camera doesn’t have an onboard registry of “memory colors” the way our brains do.

The brain automatically remaps each scene’s color cast to your brain’s “memory colors.” Think of these memory colors as preference presets in your brain’s color interpreter. These memory colors automatically compensate for variable lighting situations. The infinite Look Up Table (LUT) variables that would be needed for a camera to replicate this basic, natural brain function would have to be both immense and incredibly complex. No matter how smart digital devices become, they’ll never replace the magic of human interpretation.

Conclusion

So what have we learned? Your camera, for all its sophistication, cannot automatically correct color casts. It simply isn’t human. That means that your camera ultimately benefits from and makes use of your understanding of the behavior of light and color. Armed with this knowledge, you’ll produce images that more closely replicate color as your mind perceived it. Photography is a two-part process that requires the camera to do its job and for you to do yours. What is defined by the clinical term as “post-processing” is merely finishing the job that your camera started.

Moreover, this is a good thing. Your judgment and interpretation of the colors your mind saw when you captured the image can guide you as you tweak and make minor adjustments to your images. Don’t think of this as a burden. Recognize this as a gift. You, the photographer, are the producer of the image. Your camera is merely a tool that provides all the “raw” materials you’ll need to share what your mind observed when you captured the scene.

This is why photography is an art, and why this art requires an artist. You are that artist.

Celebrate the partnership you have with your camera. Together, you produce visual beauty.

The post What Your Camera Can’t See appeared first on Digital Photography School. It was authored by Herb Paynter.

You Are Your Own Best Teacher – Learning From Your Photography Mistakes

The post You Are Your Own Best Teacher – Learning From Your Photography Mistakes appeared first on Digital Photography School. It was authored by Herb Paynter.

Personal experience is the very best teacher. Reading tutorials, studying the professionals, and mastering the fundamentals will certainly improve your photographic skills incrementally, but you’ll grow exponentially when you learn from your photography mistakes. This is especially true when you study those mistakes: you only truly learn when you make a mistake and understand why.

Learning from your photography mistakes

Conversely, if you don’t seriously study the shots that you captured from each outing (both good and bad), you’ll be more prone to make those mistakes again and again and never clearly understand why. Discovering how camera settings and scene lighting produced specific results can give you real insights that even a private tutor may not deliver. You are your own best teacher because this kind of lesson is concentrated on you alone and concerns you alone. You aren’t competing with anybody else, nor are you being judged by anyone else.

Metadata and EXIF Information

Metadata is the techno-term for the settings your camera uses to capture digital pictures, which include File Properties and Exif (camera capture data). Every camera collects facts that describe just about everything it knows about the pictures it takes.

Metadata and Exif information accompanies every image captured and is disclosed by a variety of software applications; it is exhaustively disclosed in Adobe’s Bridge software. The illustrated examples in this article were captured from Bridge. While Lightroom delivers a small subset of this information, Bridge lists virtually everything and acts as a “bridge” (clever name) between the files and other Adobe software to catalog and process the images.

Metadata reveals that this photo was set up in Auto mode with AWB (Auto White Balance) and Matrix metering which opened the Aperture to 3.5, evenly exposing the scene and allowing the camera to correctly balance the colors based on the neutral gray elements in the scene.

This shot illustrates the danger of setting the camera for full Manual operation but incorrectly selecting Tungsten lighting as the light source, which biases the colors toward the cooler (blue) side of the spectrum. The Tungsten setting expects the yellow cast of tungsten lights; however, the outdoor lighting was shaded sunlight. The aperture was set manually to f/22, which did not allow enough light to properly expose the scene.

Discover what works and what doesn’t

Be hard on yourself and discover what works and what doesn’t. Then try to repeat the results of your best shots. If you make this exercise a habit and seriously analyze why some shots worked and others didn’t, you’ll improve with every outing. Learn to appreciate the “keepers,” but don’t view the rejects as failures… they are merely lessons from which to learn.

Note the difference that the time of day makes and the angles (and severity) of the shadows produced during different hours of the day. Take notes on why some shots are 5-star picks, and some others are rejects. Become a student of your work and watch your learning curve shorten.

This metadata also teaches you the limitations and restrictions of specific settings. Sometimes a failed shot is caused by equipment failure rather than judgment error. Here’s an example of the camera being set up for a flash image but encountering an entirely different lighting condition when the flash failed to fire. The ripple effect of the flash misfire caused a massive failure in the camera’s exposure, focus, and color.

The metadata reveals that this image was captured correctly. All processes functioned as expected, resulting in a color-correct, well-exposed picture.

The metadata in this file reveals why the image is overexposed, grossly discolored, and blurry. While the flash was instructed to fire, it failed (probably because it was not yet fully charged). The camera’s settings (Aperture Priority and Auto exposure) forced it to compensate for the lack of flash lighting with an extremely slow shutter speed. The yellow cast was the result of tungsten lighting in the room while the image sensor’s color balance expected daylight (flash temperature) settings.

Develop a routine

Develop a routine and a personal discipline that forces you to shoot during the same time of day for a full week. Note that I said “force,” rather than try. Personal discipline is a wonderful trait and one that can improve your photographic skills very quickly. Who knows, it might actually affect other areas of your life that need improvement too.

If you only shoot occasionally, you’ll develop skills at a slower pace. Moreover, if you only critically review your work occasionally, you’ll learn at a snail’s pace. Make the review process a regular exercise, and it becomes habit… a good one. I once had a professor who stated in almost every class, “repetition is the exercise of your mental muscle.” The advice sounded strange back then, but it makes perfect sense now.

Every session you shoot produces winners and losers. Make it a habit to examine all metadata from your session to deduce what went right and what didn’t. More importantly, you’ll learn why. Take ownership of your mistakes, especially errors in judgment. You only grow when you recognize a mistake and work to overcome it. While you’ll always be very proud of the great shots you take, you’ll learn more from the shots that didn’t work!

The metering used in this shot was Pattern or Matrix, which averages light readings from the entire frame to influence the shutter speed. The average exposure was based on middle-tone (18%) gray. The sunlight reflecting from the sand on the ground and the black feathers in the bird’s wings established the outer parameters of the exposure, producing an unacceptably dark overall exposure. Had I chosen Spot metering, the camera would have metered only the tones in the middle of the frame, thus lightening the overall exposure.

More often than not, this examination shows you how your camera reacts to specific lighting in a scene. Small differences in the framing of a scene can sometimes produce profound shifts in exposure. Weird but true. While cameras are thought to have “intelligence,” in reality they have no intelligence or judgment of their own; they merely run algorithms that adjust settings based on the light observed in the scene.

The camera angle was shifted to reduce the amount of sunlight reflection in the frame which, in turn, changed the lighting ratio and lightened the resulting exposure. Reviewing this result taught me to carefully evaluate a scene for content before choosing a metering system.

There are many ways to learn

There are many ways to learn. Taking courses online, reading tutorials and technique books, and following tips-and-tricks columns all teach us a little something more. Years ago, I decided to learn the game of golf. After shooting some very embarrassing and humbling rounds, I realized that I desperately needed help. I bought many golf magazines and tried to mimic the stances and swings pictured in the exercises. I watched a large number of video tutorials and listened to advice from everybody, but my game remained poor.

Nothing improved, and I only became discouraged. I continued to fail simply because I didn’t analyze (and learn from) my mistakes. Only when I practiced the disciplines on a regular basis and took serious notes on what worked and why did my game begin to improve. You learn a lot when you expose yourself to the valuable experience of others, but you’ll only truly grow in your photography skills after you study your own results. So here’s an exercise:

An exercise to help you learn

Open any of the excellent software packages that display both the Metadata (aperture, metering type, ISO, color mode, and shutter speed) and Camera Data, or Exif information (exposure mode, white balance, focal length, lens used, light source, flash behavior, etc.) from both RAW and formatted photos.

Set the View in the software so that you can observe the images in browser or catalog mode, allowing you to see thumbnail views of the files in each session. Also, set the window to display the settings for each image as you step from one image to another.

Whether you shoot in Manual, Aperture or Shutter priority, or even Auto mode, the software lists the individual camera settings exhaustively for each image.

Next: note the variations in lighting between the images and recognize what changes in the camera settings cause the small shifts in the results. Each variation gets linked to one or more of the camera settings; sometimes just a small shift in ISO.

If you allow Auto to control any aspect of your shots, the camera makes subtle changes to shutter speed, ISO, or aperture. Using Auto can be very beneficial in this learning stage because you’ll see how each of these controls affects the appearance.

Make a short columned note card and enter the basic settings for the keepers. Add the weather and lighting conditions that existed at the time of the shot.

Keep this note card in your camera bag and try to replicate the results from the keepers.

Repeat this exercise regularly and watch your results, judgment, and predictability improve.
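The note-card habit can even be automated. This sketch groups a session’s shots by their settings and reports a keeper rate per combination; the metadata records and field names below are invented for illustration, not actual Exif tag names:

```python
# Group a session's shots by exposure settings and compare keeper
# rates, using hypothetical metadata records (field names illustrative).
from collections import defaultdict

shots = [
    {"aperture": "f/8",  "shutter": "1/250", "iso": 100, "keeper": True},
    {"aperture": "f/8",  "shutter": "1/250", "iso": 100, "keeper": True},
    {"aperture": "f/22", "shutter": "1/60",  "iso": 100, "keeper": False},
    {"aperture": "f/22", "shutter": "1/60",  "iso": 100, "keeper": False},
]

by_settings = defaultdict(list)
for s in shots:
    by_settings[(s["aperture"], s["shutter"], s["iso"])].append(s["keeper"])

for settings, keepers in by_settings.items():
    rate = sum(keepers) / len(keepers)
    print(settings, f"keeper rate: {rate:.0%}")
```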

Conclusion

You are your own best teacher, and the metadata and EXIF information recorded automatically with every shot is a notebook of detailed information about every image you capture. Study those notes, and your confidence and efficiency should improve along with your photography. Who knows, this could be the shot-in-the-arm that pushes you forward.

Share with us how you have learned from your own mistakes in the comments below.

The post You Are Your Own Best Teacher – Learning From Your Photography Mistakes appeared first on Digital Photography School. It was authored by Herb Paynter.

RAW Files: Digital Manifestations of the Emperor’s New Clothes

What’s all the fuss and hype about RAW files? Let’s look at a little story as a comparison.

The Emperor’s New Clothes

The Hans Christian Andersen story of an incredibly vain King is an amusing tale with an interesting moral.

One day the king, who was very fond of fine clothing, was approached by two slick-talking swindlers. They posed as weavers, and they said they could weave the most magnificent fabrics imaginable. Not only were their fabrics uncommonly fine, but clothes made of this fabric were invisible to anyone who was unfit for office, or who was unusually stupid.

“Those would be just the clothes for me”, thought the Emperor. “If I wore them I would be able to discover which men in my empire were unfit for their posts. And I could tell the wise men from the fools.” As the story goes, the king bought into the story and the clothes. As a result, the people of the kingdom discovered more about their king than they ever cared to know.

Ignorance of the truth sometimes comes at an embarrassing price.

RAW Files

The truth occasionally gets lost in marketing hype, even in photography. How many times have you heard the claim that a vast amount of visual information can be seen in RAW image files? There’s a major problem with that claim: the same problem that “exposed” the king in all his vanity. The claim ain’t exactly accurate.

RAW files do indeed contain all the information collected by a digital camera’s image sensor. But that information cannot itself be viewed, because RAW data is not an image at all; it’s merely numbers.

Only when these numbers are parsed (interpreted) as colors and tones by special software can they display any visual information. RAW Interpreter software builds an initial visual image from the data in the file.

The RAW image, just like the ill-informed Emperor’s clothes, doesn’t actually exist until the file data is interpreted. There is no such thing as a RAW image, only RAW data.

RAW Interpreter software includes Adobe’s Camera Raw and Lightroom, ON1’s Photo Raw 2018 and Alien Skin’s Exposure3, among others.

When you do open a RAW file in Camera Raw, Lightroom, ON1 Photo Raw, Alien Skin Exposure 3, etc., the image you initially see on the screen is actually based on the camera’s built-in JPEG expression of the RAW data; a mere rough draft of the file’s potential information. The camera’s exposure settings (recorded along with the RAW image) determine the file’s initial appearance on the computer monitor.

Once this initial image appears on the monitor, each RAW Interpreter software provides a fairly exhaustive array of color and tonal sliders that can shape the data into a variety of interpretations. Each interpretation can be saved in JPEG format and published for others to see. Folks who shoot and publish JPEGs directly out of the camera are really shortchanging the file’s potential and leaving important color and detail on the cutting room floor.

The RAW Truth

The term RAW is not an acronym for some technical phrase nor is it a reference to some uncooked food. It is merely a coined word describing the collection of undeveloped (latent) image data from the camera’s image sensor. This data file contains all the raw chroma and luminous data extracted from millions of light buckets called image receptors located on the camera’s image sensor. Each light bucket is covered by a blue, green, or red filter.

Individual image sensors are like small light meters, each covered by a red, green, or blue filter. The Bayer filter array uses more green filters than red and blue, relying on the camera’s image processor to interpret the correct light color and intensity for each pixel.

These RGB filters split the incoming light into three channels of information. Each receptor records the strength of the filtered light as an individual color that will eventually form a single pixel in the image.

While the initial grid of receptors is covered with more green-filtered buckets than red or blue, the purpose of this imbalance is a bit too complicated for this article. Suffice to say, the image processor in the camera performs some very complicated math to determine each pixel’s color value and brightness.
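To make the idea concrete, here is a toy demosaic of an RGGB mosaic that collapses each 2x2 cell (one red, two green, one blue bucket) into a single RGB pixel. Real demosaicing interpolates a full-resolution color value for every single pixel and is far more sophisticated than this sketch:

```python
import numpy as np

# Toy demosaic of an RGGB Bayer mosaic: each 2x2 cell contributes one
# R sample, two G samples (averaged), and one B sample, producing one
# RGB pixel per cell.
def demosaic_rggb(mosaic):
    h, w = mosaic.shape
    r = mosaic[0:h:2, 0:w:2]                            # top-left of each cell
    g = (mosaic[0:h:2, 1:w:2] + mosaic[1:h:2, 0:w:2]) / 2  # two greens averaged
    b = mosaic[1:h:2, 1:w:2]                            # bottom-right
    return np.dstack([r, g, b])

# A uniform gray patch: every filtered bucket reads the same level.
mosaic = np.full((4, 4), 128.0)
rgb = demosaic_rggb(mosaic)
print(rgb.shape)   # one RGB pixel per 2x2 Bayer cell
print(rgb[0, 0])
```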

A digital camera’s image processor sends RGB pixel values to the camera’s viewing screen to preview each scene prior to capturing the image.

This light capture process begins even before the display is visible on the back of the camera. Every time you reposition the camera to frame your shot, the image processor does its magic again and delivers a new preview of the composition. If your camera is set to display a pre-capture histogram of the scene, this processor data is used to simulate the graph on the histogram.

But the real heavy lifting happens when you push the shutter button and the image is captured. Once all the individual colors are recorded on the sensor and delivered to the processor, the final image information is preserved on the camera’s memory card.

The individual tonal values (luminosity) of the RAW file were fine-tuned in Adobe’s Camera Raw software to reveal detail not visible in the JPEG file.

In a RAW file, the value of each pixel can be extensively adjusted for hue (color), saturation (intensity), and luminance (brightness). JPEG files record pixels with the same initial color values but the JPEG file format significantly restricts the ability to adjust those values in the editing process. The latitude of JPEG adjustments is significantly limited.
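That limited latitude can be shown numerically: lift deep shadows by three stops in 8-bit versus 12-bit data and count how many distinct tones survive. This is a sketch assuming linear tonal values:

```python
# Why RAW survives editing better: lift deep shadows by 3 stops (x8)
# in 8-bit versus 12-bit data and count the distinct tones that remain.
def lifted_tones(bit_depth, stops=3):
    levels = 2 ** bit_depth
    # the darkest 1/64th of the tonal range, as stored integer values
    shadows = range(0, levels // 64)
    # amplify, then re-quantize to 8-bit output as a JPEG export would
    out = {min(255, round(v * 2 ** stops * 255 / (levels - 1))) for v in shadows}
    return len(out)

print("8-bit shadows after +3 stops: ", lifted_tones(8))   # few tones: banding
print("12-bit shadows after +3 stops:", lifted_tones(12))  # many more survive
```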

The controls in Alien Skin’s Exposure3 Raw Interpreter software provide extensive control over hue and saturation color adjustments.

File Types

JPEG files record each color pixel as an initial luminance (brightness level) and chroma (color) value. When all the pixels on the grid (bitmap) are collectively interpreted in imaging software, a visible replica of the original scene appears on the monitor. If that same image is also captured as RAW information, the values of luminance and chroma are captured in the context of a larger color space and can be interpreted in a wide variety of expressions of the original scene.

Color negatives are produced from latent images when exposed films are fully developed in photo chemical solutions.

RAW files have been likened to photographic color film negatives in that when they are “developed” (viewed in RAW Interpreter software), the image can be “printed” (published) in a number of unique colors and tonal versions.

But the truth is that because a RAW file is not an image per se, but a record of the light characteristics captured by each of the camera’s light buckets, the original image data contained in the RAW file never gets altered; it only gets interpreted.

The interpretations are records of the luminous and chroma adjustments made to the RAW bitmap pixels. These interpretations are what gets saved as JPEG images.

Unlike the yarn spun by the king’s “couturiers,” RAW data files deliver custom-tailored results and can make you look really smart in a couple of ways. Dress your images for success.

The post RAW Files: Digital Manifestations of the Emperor’s New Clothes appeared first on Digital Photography School.
