This page is an accompaniment for the presentation I made on 13 August 2016. Many of the
images are overlaid: run the mouse cursor over them to see the alternative.
Optimum image quality? I thought this talk was about raw image processing!
The thing is, raw images are just a way to get better image quality. They're nothing of
importance in themselves. Let's look at the issues you have in getting a technically
good image.
Digital image representation
The traditional representation of digital photographs is a series of square dots arranged in
a rectangular pattern. Here's an image at three different scales:
Each dot, called a pixel (“picture
element”), represents the three primary colours red, green and blue, so the format is
called RGB.
Not red, yellow and blue? No. That works for art, but for optics it's red, green and
blue, and it has been since the dawn of colour photography over 100 years ago.
The dimensions of an image are called the “resolution”: the number of pixels on each side of the rectangle. Each pixel is represented as three numbers (one for each colour) between 0 and a maximum value which depends on the format. This maximum is sometimes referred to as the “pixel depth”, and for technical reasons is usually measured in bits. Common values are:
Pixel depth   Used in          Distinct brightnesses per colour   Distinct brightnesses total
6             Monitors         64                                 262,144
8             Monitors, JPEG   256                                16,777,216
12            raw data         4,096                              68,719,476,736
14            raw data         16,384                             4,398,046,511,104
16            raw data, TIFF   65,536                             281,474,976,710,656
Every extra bit of depth doubles the number of brightness levels that can be represented per colour. This corresponds to an extra EV of dynamic range.
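The figures in the table can be checked with a few lines of Python:

```python
# Brightness levels per colour for each pixel depth, and the total
# number of representable colours (the per-colour count cubed).
for depth in (6, 8, 12, 14, 16):
    per_colour = 2 ** depth       # levels for one of R, G, B
    total = per_colour ** 3       # all RGB combinations
    print(f"{depth:2d} bits: {per_colour:6,d} per colour, {total:,} in total")
```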
There are a number of standardized RGB formats, but in photographic use only two are widespread:
TIFF (Tagged Image File Format) is the
oldest, and it's very flexible. It can normally represent up to 16 bits, or a maximum
of 65,536 gradations per colour. It's useful if you're doing lots of modifications to
your image and saving them in between.
TIFF images are typically a little larger than the storage requirements above suggest, but they can be compressed, which can reduce the file size by up to 90%. The standard compression techniques for TIFF are lossless, meaning that all the information is preserved, and the original full-sized image can be recovered if desired.
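The idea of lossless compression can be demonstrated with Python's standard zlib module, which implements the same DEFLATE algorithm used by one of TIFF's standard compression options: whatever the saving, the decompressed data is bit-for-bit identical to the original.

```python
import zlib

# Fake "image" data: highly repetitive, like a clear sky or a plain wall,
# which is exactly the kind of content lossless compression exploits.
pixels = bytes([10, 200, 30]) * 100_000       # 300,000 bytes of RGB data

packed = zlib.compress(pixels, level=9)       # DEFLATE compression
restored = zlib.decompress(packed)

print(len(pixels), len(packed))               # big saving on repetitive data
assert restored == pixels                     # lossless: identical bytes back
```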
JPEG (Joint Photographic Experts Group)
is by far the most popular, and every digital camera produces JPEGs. Nearly all photos
on the web are JPEGs.
JPEGs can only represent 8 bits (256 gradations) per colour. In addition, JPEGs use what's called lossy compression: to make the files smaller, they discard information, and the harder the compression, the fuzzier the image becomes. I'll get back to this below. JPEGs from my camera can be between 300 kB and 11 MB in size, depending on quality. Here's an example of an image at high and low quality:
The first image has a size of 1.5 MB, and the second one is 176 kB.
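The essence of lossy compression can be sketched without any image library: quantization discards information that can never be recovered. (Real JPEG quantizes frequency coefficients rather than raw pixel values, so this is only an illustration.)

```python
def quantize(values, step):
    # Discard fine brightness detail: floor each value to a multiple of `step`.
    # The original values cannot be reconstructed from the result.
    return [step * (v // step) for v in values]

original = [12, 13, 14, 200, 201, 202]
coarse = quantize(original, 16)        # aggressive quantization, like low JPEG quality
print(coarse)                          # [0, 0, 0, 192, 192, 192]
```

The six slightly different brightnesses collapse into two, and no decompressor can tell them apart again.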
Apart from JPEG and TIFF, a number of other formats are in common use, including
PNG
and GIF. Both can be lossless and much
smaller than TIFF.
Real-life hardware
As shown above, cameras, monitors and photo printers all have different resolutions. A
typical modern camera will have a resolution of 4800×3200 pixels and a pixel depth of 12 or
14 bits. But a normal monitor has only 1920×1080 pixels, often less, and a pixel depth of 6
or 8 bits. Even the aspect
ratio (the ratio of the sides) is different: 3:2 or 4:3 for cameras, 16:9 for monitors,
so a photo can't exactly fill a monitor screen.
In addition, neither cameras nor monitors have square pixels that can represent all colours
uniformly. Modern camera sensors don't do RGB: each pixel represents a single colour, and
monitor “pixels” are really stripes of the individual colours next to each other. Here's an
example of a camera sensor and a photo of a monitor:
So taking photos and displaying them requires continual changes in format. We have at
least:
Take the image from the sensor and convert it to RGB. This requires some clever
decisions, since the original pixels are all a single colour depending on the location.
This process is
called demosaicing. Depending on
the desired image size, you may also need to reduce the size of the image.
Take the RGB image, resize it to the resolution of the monitor, and rearrange the pixels so that they match the positions of each colour on the monitor.
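Step 1 can be sketched in miniature. The layout assumed here is the common RGGB Bayer pattern; a real demosaicing algorithm interpolates far more cleverly, but the crudest possible version just combines each 2×2 tile of single-colour sensor sites into one RGB pixel:

```python
# A tiny 4x4 sensor readout. Each site records ONE colour, laid out
# in the common RGGB Bayer pattern:
#   R G R G
#   G B G B
#   R G R G
#   G B G B
sensor = [
    [10, 60, 12, 62],
    [58,  5, 59,  6],
    [11, 61, 13, 63],
    [57,  4, 60,  7],
]

def demosaic_rggb(raw):
    """Crude demosaic: collapse each 2x2 Bayer tile into one RGB pixel,
    averaging the two green sites. Output is half-size in each direction."""
    out = []
    for y in range(0, len(raw), 2):
        row = []
        for x in range(0, len(raw[0]), 2):
            r = raw[y][x]
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2
            b = raw[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

print(demosaic_rggb(sensor))
```

Real converters keep the full resolution by interpolating the two missing colours at every site from the neighbouring sites, which is where the clever decisions come in.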
Normally the camera takes care of step 1, and the display software (web browser, for
example) takes care of step 2. And everybody's happy. You can go home now.
What's wrong with this picture?
As long as you're happy with the way the picture comes out of the camera, you can really go
home. But if you want to improve the images in just about any way, there are more things to
think about:
Exposure compensation
What if the image is underexposed? The typical check for exposure is
the histogram, a display of the spread
of brightness in the image. Here's a typical (boring) photo and its histogram:
The histogram shows how much of the image is at each brightness level. There's a fair
amount of darkness (under the covers), not much in the middle, and a fair amount of
brightness (the sky). The right edge of the histogram doesn't drop to the bottom, which
means that parts of the image (the sky) are overexposed. There's a lot more to say about
histograms, but some other time.
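In code, a histogram is nothing more than counting pixels per brightness band. A toy version with 8 bands instead of the usual 256, using made-up pixel values:

```python
# Count how many pixels fall into each of 8 brightness bands.
brightnesses = [5, 8, 12, 30, 31, 200, 210, 220, 250, 255]   # made-up 8-bit values

bins = [0] * 8
for v in brightnesses:
    bins[min(v * 8 // 256, 7)] += 1     # map 0..255 onto band 0..7

print(bins)     # lots of dark pixels, nothing in the middle, bright sky at the right
```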
Here's a second photo, taken from the same position at the same time, but at 1/500 s instead of 1/60 s, a difference of 3 EV. No wonder it's so dark, and that's what the histogram shows:
Most of the image is the area at the extreme left. Then there's nothing for a while, then
the sky. If it's the sky you want, then this is your image, as you can see by looking at
it. But if you want the foreground, it's as good as useless.
OK, our clever software allows us to compensate for underexposure. Here's the best I can
get, along with the histogram:
The best you can say is that it's better. But look at the histogram! Why is it so jagged?
At 3 stops underexposure, all the pixel values are between 0 and 31 instead of 0 and 255.
We're running out of numbers. And despite everything, the overall histogram is still well to the left (dark). The bright parts (the sky) are lost altogether: the sky is now completely white.
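The jagged histogram follows directly from the arithmetic: a 3-stop push multiplies every 8-bit value by 8, so only every eighth output level can occur.

```python
# 3 stops underexposed in 8 bits: the pixel values live in 0..31.
underexposed = list(range(32))

# "Exposure compensation": multiply by 2**3 = 8 and clip to 255.
pushed = [min(v * 8, 255) for v in underexposed]

print(len(set(pushed)))    # only 32 distinct levels out of 256:
                           # 7 empty levels after each occupied one,
                           # which is exactly the jagged histogram.

# A 12-bit raw file has 4096 levels, so its bottom 3 stops contain
# 512 distinct values -- plenty to fill all 256 output levels smoothly.
```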
Enter raw images
But the camera sensor has 12 bits, maybe even 14. Where did they go? They got lost when
the image got converted to JPEG. They didn't have to: we could have kept the original.
And finally: that is the raw image, the one that hasn't been processed more than
absolutely necessary. To start with it looks the same as the JPEG image, but let's see what
happens if we try the same exposure compensation:
It's not perfect, but it's a whole lot better than what you can achieve with JPEG. Not only
are the shadows better; you can now see differences in the sky too. Arguably it's even
better than the properly exposed image:
You'll frequently see the word “raw” written in capitals as “RAW”. This is a misunderstanding based on the assumption that it's an acronym. As we've seen, names like JPEG and TIFF are acronyms, but “raw” is an English word you all know, and the Oxford English Dictionary includes this definition:
Uncooked; unprocessed, unrefined.
Raw images are ones that haven't been (completely) processed yet. So the correct spelling
is “raw”. Note also that just about every manufacturer has its own raw format, since it's
intimately coupled to the way the camera is built.
Is it worth it?
Of course, you don't go around underexposing things like that all the time. But raw images
improve other processing too, though it's not as spectacular. Let's look at the pros and
cons:
Contra raw images
Raw images take up more space than JPEGs. On my camera, the best quality JPEG uses
about 11 MB, while a raw image uses about 17 MB.
Raw images always need to be processed before you can use them. You only need to
process JPEG images if you don't like the way they come out of the camera.
Pro raw images
We've already seen a big advantage of raw images above. In general, any processing involves
some quality loss, even with raw images. But it's much more with JPEGs because of the
limited quality in the first place. Let's look at some of the things you might want to do,
and how they compare with JPEG.
White balance adjustment
Raw images don't have a specific white balance, so one of the things that happens when creating JPEGs is that an assumption is made about the white balance. Modern cameras are quite good at this, but not perfect. If you have to change it, the change happens twice: first when converting from raw to JPEG, then again when correcting the JPEG. If you start with a raw image, you only have to do it once.
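At heart, white balance adjustment is a per-channel gain applied to the image data; raw converters apply it to the full-depth sensor values before anything is baked into 8 bits. A sketch with illustrative (made-up) gain values:

```python
def apply_white_balance(pixel, gains):
    # Scale each channel by its gain; raw converters do this on the
    # full-depth sensor data, not on an already-converted 8-bit image.
    return tuple(round(c * g) for c, g in zip(pixel, gains))

# A grey wall photographed under warm indoor light comes out orange...
warm_grey = (140, 120, 90)

# ...so trim red and boost blue to bring it back to neutral (illustrative gains).
gains = (0.86, 1.0, 1.33)
print(apply_white_balance(warm_grey, gains))    # (120, 120, 120)
```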
Finer white balance tuning
I'm not quite sure why, but the software I use (DxO Optics “Pro”)
offers more tuning for raw images:
Apart from the adjustment in the green/magenta direction, there are also various preset
settings at the top.
Cropping. Even if you frame things perfectly every time, how many of your subjects have the same aspect ratio as your camera? Of course, too much cropping is bad, because it reduces the number of effective pixels. Here's an example from a few weeks back:
Clearly the second image is much sharper. Why? Look at the exposure details: the first
was cropped by about 8 times in each direction, giving a result that was only 0.24 MP
in size. The second wasn't cropped, so it had a size of 16 MP.
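The megapixel arithmetic behind that crop is simple (sensor dimensions here are illustrative for a 16 MP camera):

```python
width, height = 4608, 3456                 # a typical 16 MP sensor
print(width * height / 1e6)                # ~15.9 MP uncropped

crop = 8                                   # cropped by about 8x in each direction
remaining = (width // crop) * (height // crop)
print(remaining / 1e6)                     # ~0.25 MP left, roughly the 0.24 MP above
```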
Compensation for lens distortion. Most modern lenses have distortion (straight lines become curved), and it's usually worse at wide fields of view. Here's an example:
Chromatic aberration (or
CA) is a lens imperfection where the colours don't line up properly. Most lenses suffer
from a little CA, particularly in the corners. It's one of the most difficult things to
fix. Here's an example from some tests I did in October 2015:
The first is an uncorrected JPEG, and the second has been processed with DxO Optics “Pro”.
Clearly there's a big difference. We'll look at this example again below.
Image gradation. This is related to exposure compensation, but there's a subtle
difference: the changes in brightness aren't constant. Here's an example that we've
seen before:
Noise is inaccuracy in the image caused by random fluctuations in brightness. It's most obvious at high ISOs. Here's an example of a visitor in my office:
This was taken at 6400 ISO, and it's just a small section of the original photo (the
cable is the data cable for my camera, to give an idea of scale). The first image shows
the image as taken. The second shows what DxO made of it with standard settings, and
the third with extreme noise reduction.
To summarize: nearly all of these modifications can be done with JPEG images too. But the
results are better if you use the raw images.
How to set your camera
For best results, set your camera like this:
Raw images. Most makers call this Image Quality.
Nikon offers a “Compressed” option for raw images (“NEF images are compressed using a
non-reversible algorithm”). That's lossy. Don't use it: apart from loss of quality,
you might find that some software can't handle it.
Nikon also offers a choice of 12 or 14 bit pixels in raw images. I'm not sure why. In
principle you should choose 14 bits, but check that it's compatible with your software.
Canon also offers smaller raw images, which doesn't make sense to me. Again, you might
run into trouble with software compatibility.
Most cameras offer an option to take both raw and JPEG images, which you
might prefer if you don't want to process every single image.
If you select raw and JPEG, select the highest quality and full resolution JPEGs.
That's what you paid for.
Set the ISO sensitivity to the default for your camera, usually 100 ISO.
Which software?
Lots of software can now handle raw images. I've taken a look at some and used even fewer.
Each requires getting used to its own way of doing things, which makes it difficult to
compare things. At the meeting, it turned out to be a choice between three packages:
Photoshop with Adobe Camera Raw. The obvious choice for Paul, since he already has it.
It seems that it's a learning experience. His first result looked much worse than the
corresponding JPEG (low output quality; see the JPEG conversion photo above).
Lightroom. As I said at the beginning, I've tried it and I hate it (see this review). Paul said that
everybody uses it, but later also that everybody hates it. It's clearly more than a raw
converter, but it's also very expensive, though it's difficult to find out how much; the
only price they show is $57.99 per month, which is clearly not right. On
the other hand, you can find it on eBay for as little as $22, which also sounds suspect.
DxO Optics
“Pro”, which is what I use and what I used for the demonstrations. It has the
advantage of the biggest library of lens distortion parameters, and it's much cheaper.
The standard price is $US 129 or $US 199, but they frequently have half-price sales. It
also has the great advantage that if you run into trouble with it, you can ask me.
So what to do? I suggested to Carol that she try both Lightroom and DxO (both available for
a 30 day free trial) and make up her own mind. I'll be interested to see how she decides.
Here's the overview I prepared before the meeting:
Canon supplies Digital Photo Professional for its cameras. Like all manufacturer-supplied
software, it's free, but it only works with their cameras. Canon wants you to tell it what model you have (don't worry; it's the same software for all cameras) and to give the serial number. And then you have to download the complete software suite.
I've tried this briefly. It seems to do the work, and it compensates for distortion,
though I haven't compared how well it does it.
Nikon offers Nikon ViewNX-i, which is at least easier to install. Despite the name, it can be
used for a certain amount of photo editing. I haven't looked at it much.
Olympus offers “Viewer” 3. Like Nikon, it allows editing and raw conversion. Like
Canon, it wants a valid serial number. I've found it pretty bare-bones, and the
geometry correction isn't spectacular. I'll show an example below.
UFraw is a free program. It
supports pretty much all raw formats, but the results aren't spectacular. It's also
difficult to use.
Adobe Camera Raw comes with Photoshop. It handles all raw formats, and it does lens
correction for some of them. It's not very well represented for my camera and lenses,
so I haven't investigated it much.
RawTherapee is another free program that
handles all formats. It looks very good, but getting lens parameters is a bit
touch-and-go: you have to get the profiles from Adobe Camera Raw. It's the only one I
know that gives you the choice of different methods for converting from raw to RGB, and
the results look good. If I had more time, I'd look at it in much more detail.
DxO Optics
“Pro” is the package I use. It has lens profiles for most lenses, and it
automatically loads them when needed. I've found the results to be some of the best,
but it is horribly slow.
I tried Capture One some time ago. It works, but I couldn't find any good reason to use
it.
Here are a couple of examples from a photo we've already seen. They're a tiny section, about 45×60 pixels, so this is considerably enlarged. Here's the original size:
The image converted by Olympus' own converter. Note that it hasn't done a very good job with the CA: like the original image, it has bands of CA on either side of the downpipe.
The image converted with DxO Optics “Pro”.
The image converted with UFraw. It
seems to have overcompensated for CA.