Filed Under News: Digital Photography Review
Calibration company Datacolor has updated the Spyder5 software package that accompanies its Spyder5 monitor calibration devices for Pro and Elite customers. The update sees the packages renamed Spyder5Pro+ and SpyderElite+ to indicate that both now feature automatic ambient light switching and what the company calls 1-Click Calibration. The ambient light feature relies on the Spyder5 calibration device detecting that the lighting conditions in the room have changed, and switching to a different monitor profile accordingly. In previous versions, users had to remember to make the adjustment manually.
The new 1-Click Calibration system applies to monitors that the device has already calibrated, allowing them to be re-calibrated with a single click without having to go into the menu system. Both versions of the software also provide more in-depth control of saved profiles through an extra profile management tool.
The Elite package additionally offers a better soft-proofing workflow as well as Enhanced StudioMatch which helps users to calibrate multiple monitors so they all look the same.
For more information, including pricing for upgrades for each of the packages, see the Datacolor website.
Datacolor Announces Spyder5+ Software Upgrade with Enhanced Display Calibration Features
Datacolor®, a global leader in color management solutions, today announced the release of Spyder®5+, the next generation software expertly designed to build upon its popular color calibration tools for photographers, designers, videographers and imaging professionals.
“Spyder5 is already an amazing tool for getting the best color out of your display. With the Spyder5+ upgrade, Datacolor has added several really nice features that are an absolute no-brainer for the price,” said David Cardinal, professional photographer and Datacolor Friend with Vision. “I’ve been using the new capabilities and am really pleased with how much time they’ve saved me, as well as the additional productivity they’ve provided.”
The software upgrade is now available for all existing and new Spyder5 customers, with the option to purchase Spyder5PRO+ or Spyder5ELITE+. Spyder5+ adds unique features to the Spyder5 calibration tools by enhancing users’ digital color workflow, including:
* Automatic Room Light Switching ensures users’ monitor profile changes as the room light conditions shift, with no user interaction required
* 1-Click Calibration streamlines a user’s workflow with a single click to start the calibration without having to re-select saved settings
* Profile Management Tool gives users the ability to edit, remove, rename, locate, and activate each display profile for ultimate control and flexibility
Users who purchase the Spyder5ELITE+ upgrade will have access to all of the above features, in addition to:
* Spyder SoftProof improves “Screen-to-Output” matching with a new workflow to simulate how photos will look on any printer or device – including home printers, online or retail printers, and certain mobile/tablet devices
* Enhanced StudioMatch verifies precise monitor matching and takes the guesswork out of making all connected displays look the same – including a new visual verification step that assists you by fine-tuning your results
“We’re very excited to add this upgrade to our Spyder5 product line. This new software offers unique tools to ensure color management across all devices, so our customers can remain confident in their decision to choose Datacolor for their color calibration needs,” said Stefan Zrenner, Director Global Sales & Marketing Imaging, Datacolor. “With a competitive set of features, Spyder5+ is the perfect tool for creatives that rely on consistency in their work.”
New and existing Spyder5 customers wishing to purchase the Spyder5+ software add-on can find out more and buy via the Datacolor website. Upon software purchase, customers will receive a software serial number and a step-by-step guide for easy download.
Filed Under Digital Photography School, Photography Tips and Tutorials
Juxtaposition – it’s one of my favorite words, and also one of the most important aspects of successful photography. It’s used in portraiture, outdoor adventure, and frequently in travel photography. In images of the landscape, however, juxtaposition is often overlooked.
I say overlooked because many photographers integrate juxtaposed elements in their landscapes without even being aware of them. You see, juxtaposition, or the way different elements conflict and contrast, is a key feature in most good landscape photographs.
Though there are a dozen or more different ways juxtaposition can occur in an image, in this article I’m going to concentrate on three: color, texture, and subject matter.
Juxtaposition – Color
You are probably familiar with the color wheel. Likely you were introduced to the concept in grade school when you learned the difference between primary and secondary colors. More recently, if you have selected a new font color on your word processing program you’ve likely seen some form of the color wheel.
Simply put, a color wheel shows the three primary colors (red, blue, and yellow) occupying three slices of the circle, with all the mixing iterations of color blending together between them. The result is a continuous blur of colors, encompassing just about everything on the visible spectrum.
Many landscape images will have multiple juxtapositions. In this case, color is foremost with the warm tones on the salt mounds against the deeper blues of the water and sky. But the shape and texture also stand out. (Salar de Uyuni, Bolivia).
Colors that are opposite one another (complementary) on the wheel, like blue and yellow, red and green, or orange and purple, will juxtapose. That is, they will stand out from one another; some in a pleasing way, some in a conflicting way. Both can work in photography, depending on your goal, but you need to be aware of the way colors communicate in an image to ensure your final result is what you intend.
In this aerial image of the Baird Mountains in northwest Alaska, the turquoise tarn in the foreground stands out as the brightest patch of color in the frame, juxtaposed with the muted grays and browns of the mountains.
Reds and blues, for example, are very commonly blended in landscape photography; blue water with sunset sky, red flowers on a bluebird day, autumn colors against a dark backdrop, etc. Color plays an important role in landscape photography, and we recognize pleasing color combinations as soon as we see them. But recognizing WHY they are pleasing, is different from seeing that they are. Look for those relationships in your compositions, and concentrate on their placement. Some colors, red for example, are extremely effective at drawing the eye. But to be most effective, red needs to be counteracted by cooler tones, balancing the image. Mind how the colors are distributed in your image. It matters.
Juxtaposition – Texture
A long exposure softened the water, which creates a juxtaposition with the rough stones of the cliff. (Grand Canyon National Park, Arizona.)
In this image, both the color and the rounded texture of the autumn Bearberry in the foreground create a juxtaposition with the blue sky and the sharp, upright trees in the background.
Juxtaposed textures are abundant in any landscape: spiky bushes against a smooth landscape, water flowing over rough rock, or a jagged boulder in the middle of an otherwise soft, grassy meadow. Textures, like color, are easy to observe in the field. But like bright colors, aggressive textures need to be used in moderation. Like reds and oranges, sharp, rough textures will dominate an image if used too liberally.
The antlers of this caribou skull and the bright white against the dark tundra make the subject leap out from the image.
Overwhelming textures, just like overwhelming colors, might be exactly what you want. Just be aware of that decision when you make the image. Make the harsh textures the point of your image, because aggressive textures placed too dominantly by accident can ruin the balance of an image. Consider how they relate, the story you want to tell with their use, and place them in the frame accordingly.
This is a tough one to put to use because there are no clear rules about texture. You may not always realize when you’ve gotten this balance right, but you’ll definitely know when it’s wrong.
In this image of Denali, in Denali National Park, Alaska, the two rounded forms, one green and spiky, one blue-white and smoother, echo one another, while providing wildly different textures, colors, and implications for the image.
Juxtaposition – Subject
Bright flowers on a gray day on a barren dune. Few things can create more juxtaposition in this image.
Without context, this image would not have an obvious juxtaposition; it’s just an image of a lightning strike. But when I tell you this photo was made on the arctic coastal plain of northern Alaska, where thunderstorms are as rare as unicorns, the juxtaposition of location and lightning becomes clear.
The first two examples, color and texture, are more nebulous and tougher to apply in the field than the subject of the image. In landscapes, juxtapositions within the subject matter are easier to apply, and will almost always add interest to your images.
A rare rain storm in the Altiplano of Bolivia catches the last rays of sunlight. Both color and subject juxtapose here.
As I sat down to write this article, the first thing that came to mind was the weather. Storm light, that rare sunlight that appears despite the dark clouds, is a perfect example of subject juxtaposition. Few things contrast as much as a stormy day and sunlight.
A rainbow in the dry desert: another clear example of the way juxtaposed subject matter can add interest to an image.
Tying weather to elements of the landscape is another way to create juxtapositions. A few years ago, I was hiking in the Chisos Mountains of Big Bend National Park, Texas when I was treated to a rare thunderstorm. As the very brief storm cleared the mountains, a rainbow appeared. The desert landscape, topped by a rainbow against a blue sky, leads to an undeniable juxtaposition.
Similarly, last summer I was leading a wilderness photo tour in the Brooks Range of northern Alaska. On the summer solstice, it snowed four inches overnight, and the following morning the blooming flowers were covered in snow. Summer flowers and fresh snow juxtapose nicely.
Summer flowers the day after a snow storm.
Juxtaposition, the way elements compare and contrast with one another, is as important in landscape photography as it is in any other discipline of the art, even if it is more difficult to use. Pay attention to the way color, texture, and your subject interrelate within your image and you’ll find greater success with your landscapes.
Have you explored juxtapositions in your landscape photographs? Tell me about it in the comments, and share some of your successes.
The post Using Juxtaposition for More Compelling Landscape Photography by David Shaw appeared first on Digital Photography School.
Filed Under News: Digital Photography Review
ExoLens has announced the launch of its new ExoLens Case for iPhone 7, a case designed to protect the phone while also supporting the company's ExoLens PRO with Optics by Zeiss accessory lenses. The case is made from clear impact-resistant materials that, says the company, offer 'high-end aesthetics' while keeping a low profile. The case can be used with and without the Zeiss PRO lenses.
ExoLens PRO owners are able to switch between the line's various lenses without removing the case, which is described as 'ultra durable' with soft black TPU material along the outer edges. ExoLens has launched the case for the iPhone 7 ($49.95 USD) on its website and through select global retailers, and will launch a version for the iPhone 7 Plus model later on in 2017.
Filed Under News: Digital Photography Review
Together with its new Galaxy S8 flagship smartphones, Samsung has also announced an updated version of its Gear 360 spherical camera. A new design, with some controls moved to the handle, allows for a smaller distance between the two 8.4MP sensors with F2.2 fisheye lenses, and therefore better image stitching results. At the bottom of the handle there is now also a standard tripod mount.
On the video side of things, resolution has been upped to 4096 × 2048 at 24 fps. Still images are captured at 15MP. A dedicated app allows for seamless sharing, viewing and editing of your captured content. In addition, the new Gear 360 comes with real-time content sharing and supports live broadcasting and direct uploading to platforms such as Facebook, YouTube or Samsung VR.
In addition to most recent Samsung flagship devices the latest edition of the Gear 360 is now also compatible with iOS devices including the iPhone 7, iPhone 7 Plus, iPhone 6s, iPhone 6s Plus and iPhone SE, as well as Windows and Mac computers.
- Two CMOS 8.4-megapixel fish-eye cameras
- F2.2 apertures
- 15MP still images
- 4096×2048 video at 24fps
- microSD card (Up to 256GB)
- IP53 Certified Dust and Water Resistant
Filed Under News: Digital Photography Review
Samsung has today announced its new flagship smartphones, the Galaxy S8 and S8+, at simultaneous events in London and New York. The new devices' standout feature is the Infinity Display, which combines curved display edges with minimal bezels, allowing for a screen that covers almost the entire front of the devices. This means the home button is now implemented underneath the display, but it works in the same way as before.
Display size is pretty much the only difference between the new models. The S8 comes with a 5.8" screen, and at 6.2" the S8+ is a touch larger. The 2960 x 1440 resolution is the same on both new phones, though.
While the new displays look impressive, the camera department unfortunately has less innovation to show off. From a hardware point of view the S8 generation is, at least on paper, identical to its predecessor. A 1/2.5" 12MP sensor with dual-pixel AF is combined with a fast F1.7 aperture and optical image stabilization.
There is some news on the software side of things, though. A new multi-frame technology captures three photos, selects the clearest image and uses the other two to reduce motion blur. Samsung says the merging of frames also results in better detail and exposures in low light. A new camera user interface allows for easier one-handed operation. The front camera's resolution has been upped from 5 to 8MP and it now also features face-detection AF. At F1.7, its aperture is the same as the main camera's.
In terms of processing power the S8 and S8+ offer the very best. Android 7.0 is, depending on region powered by Qualcomm's latest flagship chipset Snapdragon 835 or Samsung's own Exynos 8895. 4GB of RAM and 64GB of expandable storage are on board as well. The new models are also IP68 certified for environmental protection and come with both a fingerprint reader on the back and an iris scanner for increased security. Samsung's new Bixby voice assistant is on board as well and the optional DeX dock converts the devices into a Windows Continuum-style desktop. The Galaxy S8 and S8+ will be available from April 21st. No pricing information has been made available yet.
- 12MP 1/2.5" CMOS sensor with 1.4-micron pixels
- F1.7 aperture
- On-sensor phase detection
- 4K video
- 1080p@120fps slow-motion
- 8MP, F1.7 front camera with AF
- 5.8" (S8) / 6.2" (S8+) display with 2960x1440 resolution
- Android 7.0 Nougat
- Qualcomm Snapdragon 835 or Samsung Exynos 8895 chipset (depending on region)
- 4GB RAM
- 64GB storage
- microSD-slot up to 256GB
- 3000mAh (S8) / 3500mAh (S8+) battery
- Fingerprint sensor and iris scanner
- IP68 certification
Filed Under News: Digital Photography Review
It's unlikely Kodak's Bryce Bayer had any idea that, 40 years after he patented his 'Color Imaging Array', his design would underpin nearly all contemporary photography and live in the pockets of countless millions of people around the world.
|It seems so obvious, once someone else has thought of it, but capturing red, green and blue information as an interspersed, mosaic-style array was a breakthrough.
Image: based on original by Colin M.L Burnett
The Bayer Color Filter Array is a genuinely brilliant piece of design: it's a highly effective way of capturing color information from silicon sensors that can't inherently distinguish color. Most importantly, it does a good job of achieving this color capture while still capturing a good level of spatial resolution.
However, it isn't entirely without its drawbacks: It doesn't capture nearly as much color resolution as a camera's pixel count seems to imply, it's especially prone to sampling artifacts and it throws away a lot of light. So how bad are these problems and why don't they stop us using it?
There's a limit to how much resolution you can capture with any pixel-based sensor. Sampling theory dictates that a system can only perfectly reproduce signals at half the sampling frequency (a limit known as the Nyquist Frequency). If you think about trying to represent a single pixel-width black line, you need at least two pixels to be sure of representing it properly: one to capture the line and another to capture the not-line.
Just to make things more tricky, this assumes your pixels are aligned perfectly with the line. If they're slightly misaligned, you may get two grey pixels instead. This is taken into account by the Kell factor, which says that you'll actually only reliably capture resolution around 0.7x your Nyquist frequency.
|A sensor capturing detail at every pixel can perfectly represent data at up to 1/2 of its sampling frequency, so 4000 vertical pixels can represent 2000 cycles (or 2000 line pairs as we'd tend to think of it). This is a fundamental rule of sampling theory.
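This 'folding' behaviour of frequencies above the Nyquist limit can be sketched with a minimal, idealized model (the function name and the numbers are purely illustrative, not taken from any camera API):

```python
def apparent_frequency(signal_freq, sample_rate):
    """Frequency an ideal sampler reports for a pure repeating pattern.

    Anything above the Nyquist limit (sample_rate / 2) folds back
    ('aliases') into the representable range.
    """
    nyquist = sample_rate / 2
    folded = signal_freq % sample_rate
    if folded > nyquist:
        folded = sample_rate - folded
    return folded

# 4000 vertical pixels can represent up to 2000 cycles:
print(apparent_frequency(1500, 4000))  # within the limit: reported as 1500
print(apparent_frequency(2600, 4000))  # beyond it: folds back to a spurious 1400
```

A pattern of 2600 cycles is 600 cycles past the limit, so it reappears 600 cycles below it, at 1400 cycles: the lower-frequency 'alias' discussed later in the article.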
But, of course, a Bayer sensor doesn't sample all the way to its maximum frequency because you're only sampling single colors at each pixel, then deriving the other color values from neighboring pixels. This lowers resolution (effectively slightly blurring the image).
So, with these two factors (the limitations of sampling and Bayer's lower sampling rate) in mind, how much resolution should you expect from a Bayer sensor? Since human vision is most sensitive to green information, it's the green part of a Bayer sensor that's used to provide most of the spatial resolution. Let's have a look at how it compares to sampling luminance information at every pixel.
|Counter-intuitive though it may sound, the green channel captures just as much horizontal and vertical detail as the sensor capturing data at every pixel. Where it loses out is on the diagonals, which sample at 1/2 the frequency.
Looking at just the green component, you should see that a Bayer sensor can still capture the same horizontal and vertical green (and luminance) information as a sensor sampling every pixel. You lose something on the diagonals, but you still get a good level of detail capture. This is a key aspect of what makes Bayer so effective.*
|Red and blue information is captured at much lower resolutions than green. However, human vision is more sensitive to luminance (brightness) information than chroma (color) information, which makes this trade-off visually acceptable in most circumstances.
It's a less good story when we look at the red and blue channels. Their sampling resolution is much lower than the luminance detail captured by the green channel. It's worth bearing in mind that human vision is much more sensitive to luminance resolution than it is to color information, so viewers are likely to be more tolerant of this shortcoming.
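The 1/2 green, 1/4 red, 1/4 blue split follows directly from the mosaic's 2x2 repeat unit. A quick sketch, assuming the common RGGB arrangement (other orderings of the same unit give the same fractions):

```python
def bayer_channel_fractions(width, height):
    """Fraction of sensor sites devoted to each color in an RGGB Bayer mosaic."""
    pattern = [["R", "G"], ["G", "B"]]  # the 2x2 repeat unit
    counts = {"R": 0, "G": 0, "B": 0}
    for y in range(height):
        for x in range(width):
            counts[pattern[y % 2][x % 2]] += 1
    total = width * height
    return {color: n / total for color, n in counts.items()}

print(bayer_channel_fractions(8, 6))  # {'R': 0.25, 'G': 0.5, 'B': 0.25}
```

Half the sites sample green, so luminance detail holds up well, while red and blue are each sampled at only a quarter of the sensor's pixel count.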
So what happens to everything above the Nyquist frequency? Well, unless you do something to stop it, your camera will try to capture this information, then present it as something it can represent: a process called aliasing.
Think about photographing a diagonal black stripe with a low resolution camera. Even with a black and white camera, you risk the diagonal being represented as a series of stair steps: a low-frequency pattern that acts as an 'alias' for the real pattern.
The same thing happens with fine repeating patterns that are a higher frequency than your sensor can cope with: they appear as spurious aliases of the real pattern. These spurious patterns are known as moiré. This isn't unique to Bayer, though, it's a side-effect of trying to capture higher frequencies than your sampling can cope with. It will occur on all sensors that use a repeating pattern of pixels to capture a scene.
Sensors that use the Bayer pattern are especially prone to aliasing though, because the red and blue channels are being sampled at much lower frequencies than the full pixel count. This means there are two Nyquist frequencies (a green/luminance limit and a red/blue limit) and two types of aliasing you'll tend to encounter: errors in detail too fine for the sensor to correctly capture the pattern of and errors in (much less fine) detail that the camera can't correctly assess the color of.
'the Bayer pattern is especially prone to aliasing'
To reduce this first kind of error, most cameras have, historically, included Optical Low Pass Filters, also known as Anti-Aliasing filters. These are filters mounted in front of the sensor that intentionally blur light across nearby pixels, so that the sensor doesn't ever 'see' the very high frequencies that it can't correctly render, and doesn't then misrepresent them as aliasing.**
|The point at the center of the Siemens star is too fine for this monochrome camera to represent, so it's produced a spurious diamond-shaped 'alias' at the center instead.
||This second image was shot with a very high resolution camera, blurred to remove high frequencies, then downsized to the same resolution as the first shot. It still can't accurately represent the star, but doesn't alias when failing.
These aren't so strong as to completely prevent all types of aliasing (very few people would be happy with a filter that blurred the resolution down to 1/4 of the pixel height: the Nyquist frequency of red and blue capture), instead they blur the light just enough to avoid harsh stair-stepping and reduce the severity of the false color on high-contrast edges.
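The blur-before-sampling idea can be mimicked in one dimension. The sketch below uses a crude moving-average blur as a stand-in for a real optical filter, with all numbers chosen purely for illustration: a stripe pattern far too fine for the 'sensor' produces a strong spurious alias when sampled directly, and a much weaker one when blurred first.

```python
import math

# A 1-D 'scene': stripes at 0.45 cycles per scene sample, far too fine
# for a sensor that keeps only every 4th value (whose Nyquist limit is
# 0.125 cycles per scene sample).
scene = [math.cos(2 * math.pi * 0.45 * i) for i in range(1000)]

def spread(values):
    """Standard deviation: how strongly the sampled values swing."""
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

# Unfiltered: the sensor samples the stripes directly, rendering them as
# a strong, spurious low-frequency alias pattern.
no_filter = scene[::4]

# 'Filtered': average each point with its three neighbours before
# sampling, so the too-fine stripes never reach the sensor at full strength.
blurred = [sum(scene[i:i + 4]) / 4 for i in range(len(scene) - 3)]
with_filter = blurred[::4]

print(spread(no_filter), spread(with_filter))  # the filtered alias is far weaker
```

The alias isn't eliminated entirely, only attenuated, which mirrors the real trade-off: the filter is tuned to tame the worst artifacts without throwing away too much legitimate detail.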
|With a Bayer filter, you get a fun color component to this aliasing. Not only has the camera tried to capture finer detail than its sensor can manage, you get to see the side-effect of the different resolutions the camera captures each color with.
||Again, if you compare this with a significantly over-sampled image, blurred then downsized, you don't see this problem. However, if you look closely you can still see traces of the false color that occurred at the much higher frequency this camera was shooting at.
This means that, with a camera with an anti-aliasing filter, you shouldn't see as much false color in the high-contrast mono targets within our test scene, but it'll do nothing to prevent spurious (aliased) patterns in the color resolution targets.
|Even with an anti-aliasing filter, you'll still get aliasing of color detail, because the maximum frequency of red or blue that can be captured is much lower.
||This image was shot at the same nominal resolution but with red, green and blue information captured for each output pixel: showing how the target could appear, with this many pixels.
At the silicon level, modern sensors are pretty amazing. Most of them operate at an efficiency (the proportion of light energy converted into electrons) around 50-80%. This means there's less than 1EV of performance improvement to be had in that respect, because you can't double the performance of something that's already over 50% effective. However, before the light can get to the sensor, the Bayer design throws away around 1EV of light, because each pixel has a filter in front of it, blocking out the colors it's not meant to be measuring.
'The Bayer design throws away around 1EV of light'
This is why Leica's 'Monochrom' models, which don't include a color filter array, are around one stop more sensitive than their color-aware sister models. (And, since they can't produce false color at high-contrast edges, they don't include anti-aliasing filters, either.)
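The 'around one stop' figure follows directly from the filter's transmission, since each EV (stop) is a halving of light. A quick check, using the roughly 50% transmission implied above (the 25% case is a hypothetical for comparison):

```python
import math

def stops_lost(transmission):
    """EV (stops) of light lost when only `transmission` passes the filter."""
    return math.log2(1 / transmission)

print(stops_lost(0.5))   # 1.0 -> the 'around 1EV' cost of a Bayer CFA
print(stops_lost(0.25))  # 2.0 -> what a hypothetical 25% filter would cost
```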
It's this light loss component that may eventually spell the end of the Bayer pattern as we know it. For all its advantages, Bayer's long term dominance is probably most at risk if it gets in the way of improved low-light performance. This is why several manufacturers are looking for alternatives to the Bayer pattern that allow more light through to the sensor. It's telling, though, that most of these attempts are essentially variations on the Bayer theme, rather than total reinventions.
These variations aren't the only alternatives to the Bayer design, of course.
Sigma's Foveon technology attempts to measure multiple colors at the same location, so promises higher color resolution, no light loss to a color filter array and less aliasing. But, while these sensors are capable of producing very high pixel-level sharpness, this currently comes at an even greater noise cost (which limits both dynamic range and low light performance), as well as struggling to compete with the color reproduction accuracy that can be achieved using well-tuned colored filters. More recent versions reduce the color resolution of two of their channels, sacrificing some of their color resolution advantage for improved noise performance.
'The worst form... except all those others that have been tried'
Meanwhile, Fujifilm has struck out on its own, with the X-Trans color filter pattern. This still uses red, green and blue filters but features a larger repeat unit: a pattern that repeats less frequently, to reduce the risk of it clashing with the frequency it's trying to capture. However, while the demosaicing of X-Trans by third-party software is improving, and the processing power needed to produce good-looking video looks like it's being resolved, there are still drawbacks to the design.
Ironically, devoting so much of the sensor to green/luminance capture appears to have the side-effect of reducing its ability to capture and represent foliage (perhaps because it lacks the red and blue information required to render the subtle tint of different greens).
Which leaves Bayer in a situation akin to Winston Churchill's take on Democracy as: 'the worst form of Government except all those other forms that have been tried from time to time.'
40 not out
As we've seen before, the sheer amount of effort being put into development and improvement of Bayer sensors and their demosaicing is helping them overcome the inherent disadvantages. Higher pixel counts keep pushing the level of color detail that can be resolved, despite the 1/2 green, 1/4 red, 1/4 blue capture ratio.
And, because the frequencies that risk aliasing relate to the sampling frequency, higher pixel count sensors are showing increasingly little aliasing. The likelihood of you encountering frequencies high enough to cause aliasing falls as your pixel count helps you resolve more and more detail.
Add to this the fact that lenses can't perfectly transmit all the detail that hits them, and you start to reach the point where the lens will effectively filter out the very high frequencies that would otherwise induce aliasing. At present, we've seen filter-less full frame sensors of 36MP, APS-C sensors of 24MP and Four Thirds sensors of 16MP, all of which are sampling their lenses at over 200 pixels per mm, and these only produce significant moiré when paired with very sharp lenses shot wide enough open that diffraction doesn't end up playing the anti-aliasing role.
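The 'over 200 pixels per mm' figure is simple division of horizontal pixel count by sensor width. A quick sketch, using nominal pixel counts and sensor widths chosen for illustration (actual models vary by a few pixels and fractions of a millimetre):

```python
def pixels_per_mm(horizontal_pixels, sensor_width_mm):
    """Linear sampling density of a sensor across its width."""
    return horizontal_pixels / sensor_width_mm

# Nominal figures for the three sensor classes mentioned above:
print(round(pixels_per_mm(7360, 35.9)))  # 36MP full frame: ~205 px/mm
print(round(pixels_per_mm(6000, 23.5)))  # 24MP APS-C: ~255 px/mm
print(round(pixels_per_mm(4608, 17.3)))  # 16MP Four Thirds: ~266 px/mm
```

The smaller formats actually sample their lenses more densely, which is why they lean even harder on lens sharpness and diffraction to keep aliasing at bay.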
So, despite the cost of light and of color resolution, and the risk of error, Bryce Bayer's design remains firmly at the heart of digital photography, more than 40 years after it was first patented.
Thanks are extended to DSPographer for sanity-checking an early draft and to Doug Kerr, whose posts helped inform the article, who inspired the diagrams and who was hugely supportive in getting the article to a publishable state.

* Unsurprisingly, some manufacturers have tried to take advantage of this increased diagonal resolution by effectively rotating the pattern by 45°: this isn't commonplace enough to derail this article with such trickery, so we’ll label them ‘witchcraft’ and carry on as we were.
** The more precocious among you may be wondering 'but wouldn't your AA filter need to attenuate different frequencies for the horizontal, vertical and diagonal axes?' Well, ideally, yes, but it's easier said than done and far beyond the scope of this article.
Filed Under Composition Tips, Digital Photography School, Photography Tips and Tutorials
In photography terms, composition can make the difference between a good image and a fantastic one. Yes, you need all the other components: the light has to be dramatic, the subject compelling, and the colours vibrant. All of these will add to the final result. But if you have all that and your composition is not great, the image will fall flat.
Jay Maisel has a quote that goes like this, “As the photographer, you are responsible for every inch of the frame”. This is true, and one of Jay’s other mantras is that he prefers to speak about framing and not cropping. His view is that framing is done at the time of making the image. Cropping is done afterward in post-production. He maintains that cropping changes the original intent of the image. If you frame an image in a particular way and then crop it afterward, it really is a different image.
Frame your scene correctly in camera
I don’t think Jay is saying that you shouldn’t crop, but rather that you need to compose with intent and purpose, not simply hope for the best and try to “fix” the image later by cropping. Good composition can have a real impact on your image. Changing your composition is free. You don’t need any special equipment or lenses. There’s no need to wait for a specific type of light. You can shoot at any time of day. Composition is the one thing in photography that is easiest to fix, yet it is most often overlooked.
There are many articles on DPS and other sites about composition and the best techniques for improving it, so I won’t try to reinvent the wheel. What I want to talk about here is visual flow. This is more about the visual journey you are taking your viewer on than the destination. In this article, we aren’t going to discuss the rule of thirds and power points, but we will discuss how framing, removing distractions, and light, shape, and texture all contribute to your composition.
We will look at how someone’s eye will travel through your image. You want the viewers of your images to look at them longer, to find them interesting, and to be captivated and inspired by what they see.
Framing not cropping
As the photographer, you need to take responsibility for everything in the frame. That means, you decide what will be in the shot and sometimes more importantly, what will NOT be in the shot. Your subject needs to be in the frame obviously, but what else absolutely needs to be included? Ask yourself if all the elements in the frame are adding to the narrative or story you are trying to tell. If not, get rid of what is not working.
In this case, less is definitely more (and usually better). Be aware of visual clutter in the frame, objects that are distracting or drawing the viewer’s full attention away from the subject. This is really tough to get right and it takes time and practice. But once you become aware of this and work hard on fixing it, it will become much easier.
Focus on your subject
This sounds obvious but is not always easy. There are many things that can cause your viewer to be distracted when they look at your image. Any words in your photograph will automatically draw the eye. Signposts, graffiti, street signs…anything with words or letters will cause the viewer to look at that part of the image. If the wording is not the reason for the image, try to remove that item from the frame as it may be distracting.
Color can cause the eye to wander. If your scene is full of color, that’s great, but if it is largely monochromatic and there is only one color in the frame, that color will become the focal point. Warm colors like yellow or red will very quickly pull the eye across to them, so be aware of the colors in your image.
The human form will also draw the eye. Again, if the person in the frame is a key part of the image, that’s great, leave them in the shot. But if not, then wait until they leave the scene or reframe the scene without them. As humans, we tend to find the human form in an image very quickly and this will become the main focus of the image.
Be aware of distractions: words, powerlines, etc.
Using light, shape and texture
These three elements (there are more) will greatly help you in your visual flow.
Light is key to making any image. Without light, we cannot do photography. Light also informs so much in your image. You can use side light to emphasize texture in your image. You can use back light to create a silhouette, which will emphasize shape. These three elements are important tools in making sure your image compels people to look at it.
Shapes in your image add a dynamic feel. Get in close and emphasize the shape of an object. If it has a curve, make that curve fill the frame. Shapes can make a great subject too; they are all around you, you just have to start looking.
Texture is a great way to emphasize your subject. To get great texture images, your light needs to come from the side. Side light enhances texture and each granular detail can be seen if the light is right. Texture will make your images seem three dimensional. Using texture is a great way to communicate more information about your subject.
Use side light to emphasize texture.
Get in close
To make sure that you get the most out of the scene, you can do a few things. First, move in closer and fill the frame with your subject. This is especially useful if you are doing abstract or creative images. If you are not going to fill the frame, then decide where to put your subject. Yes, you can use the rule of thirds for this (this would be my last choice), but you can also use the Fibonacci Spiral (Golden Ratio) or any number of other compositional techniques.
The most important part of an effective composition is to make sure that your viewer knows what they are supposed to look at in your image. If your subject (the reason for the image) is unclear, your image will have little impact. You have likely seen this happen. You show someone photos from your last trip and they simply glance at them in passing. Then suddenly, something catches their attention in a particular image and they stop and look intently at the scene. That’s when you know your image has hit the mark.
As I said earlier, all the elements need to come together to make a great image, but if you have good light and great exposure paired with bad composition, chances are people will just flip past the image.
Fill the viewfinder with your subject.
So, how else can you improve your composition? It is deceptively simple but easily overlooked. One of the things I do is get inspiration from the top photographers in the genre I want to shoot. If it is street photography, I look at Henri Cartier-Bresson, Jay Maisel, Ernst Haas, and others. If it is landscape photography, I look at Ansel Adams, Charlie Waite, and Koos van der Lende. I look at photographers who inspire me. I also make a point of visiting art galleries whenever I can.
Photography is not even 200 years old as an art form. Many of the techniques we use as photographers were learned from the painters and artists of old. Spend time looking at the composition of the master painters. Look at how they placed subjects in their scenes. See how the light works in their paintings: is it hard light or soft light? Take note of how they used color and shapes in their images. Then go out and apply that to your photographs. Over time you will begin to see your eye and your images improve.
Work hard at improving your compositional eye.
The post Visual Flow – How to Get the Most out of Composition by Barry J Brady appeared first on Digital Photography School.
Filed Under News: Digital Photography Review
Are you shopping for a new camera? Or just looking for some advice about how to use your current favorite model? We've been working on a series of product overview videos for a couple of years, and we've just added a new series of informational videos to our YouTube channel.
Called 'Getting Started Guides', these videos are intended to give you a quick breakdown of the key features of several recent releases, and some quick tips on how to get the most out of them. You can find all of our recent overview and getting started guide videos from the links below, and subscribe to our YouTube channel to ensure you never miss a new video!
Watch our series of product overview videos
Watch our new 'Getting Started Guides'
Filed Under News: Digital Photography Review
Hasselblad has announced that commercial photographer and blogger Ming Thein has been appointed its Chief of Strategy. Thein is known for his popular blog, and is no stranger to Hasselblad as a former ambassador for the company. In addition to his photography chops, Thein brings a degree in Physics from Oxford and years of experience working in finance and private equity firms to Hasselblad. Plus, we think he's got some good ideas about how cameras should function.
Hasselblad has been going through a transitional period lately – the company never denied reports that DJI became a majority stakeholder, and recently announced the departure of CEO Perry Oosting. Certainly Oosting had a hand in modernizing the company's offerings and righting the ship after some unfortunate missteps. There's more work ahead, however, as the company works to meet demand for its X1D mirrorless camera.
Filed Under News: Digital Photography Review
Fujifilm has announced the Instax Mini 9, a new instant camera that has launched in five colors: Lime Green, Flamingo Pink, Smoky White, Ice Blue, and Cobalt Blue. The Instax Mini 9 builds upon the company's Instax Mini 8, bringing with it a selfie mirror as well as a new close-up lens attachment enabling photographers to snap photos as close as 35cm / 14in.
Fujifilm says the 'popular' features from the previous model are rolled over into the Instax Mini 9, including auto exposure. The camera chooses the optimal brightness setting for any given snapshot, highlighting the chosen setting by illuminating one of four lights corresponding to the following settings: Indoors, Cloudy, Sunny (overcast), and Sunny (bright). The user then manually switches the dial to that setting.
Other features include a 0.37x viewfinder with target spot, an automatic film feeding system, a flash with an effective range of 0.6m to 2.7m, and power from two ordinary AA batteries. A pair of AA batteries can power the camera through approximately 10 Instax Mini film packs before needing to be replaced.
The Instax Mini 9 will launch in the U.S. and Canada next month for $69.95 USD and $99.99 CAD, and then in the U.K. in May for £77.99.