Tuesday, April 7, 2015

Odyssey over

About 10 months ago I began thinking about how to calibrate my film, much as one might a printer or a display. Yesterday that journey reached its destination. I now have a process that generates a digital file that fairly accurately represents the incident light at exposure, and backs out the characteristics of the particular emulsion used to capture it.
Kodak TMX 100 for 7 stops either side of middle grey
After working on the problem on and off for a while I have now managed to think through how to use this information to 'linearize' the scanned response for any given film and any exposure.

Initially, my approach had been to measure the scanned output for each 'zone' and create a series of correction profiles for any given exposure. If, for example, an image had tones ranging from 4 stops under to 2 stops over middle grey, I'd create a profile for that specific exposure range - in other words, for every combination of black at one end and white at the other, I'd produce a profile for that particular exposure on that film.

Clearly this was unwieldy. To start with, it quickly filled up the scanner software with a huge number of profiles for just one film. Second, it took a long time to generate all the profiles. Each had to be scanned, the peaks found and put into a spreadsheet from where the individual profiles were built. Third, it was limited to incremental zones (exposure stops) and didn't cater for exposure ranges that lay between stops. This wasn't so much of a problem in the shadows but was critical in the highlights, where densities are very close together. So I spent several months wondering if there was a better way. In the process I wrote a program for detecting peaks in a test image, which helped a great deal, but it was still cumbersome.

Ideally I wanted simply to take the 'master' profile for the film and generate a correction curve for the portion of the emulsion that was used in any given exposure. That would avoid having to choose between two profiles, neither of which exactly matched the exposure I was scanning. The first step was to fit a 5th-order polynomial to the film's response curve. The fit was excellent (R-sq = 0.9997) and allowed me to predict the emulsion density for any given level of exposure.
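For illustration, a fit like this is only a few lines of NumPy. The (exposure, density) values below are invented placeholders standing in for the measured step-wedge data from the film test; only the method is the point here:

```python
import numpy as np

# Hypothetical step-wedge measurements: exposure in stops relative to
# middle grey, density as measured from the emulsion. Placeholder values.
exposure = np.array([-7, -6, -5, -4, -3, -2, -1, 0,
                      1, 2, 3, 4, 5, 6, 7], dtype=float)
density = np.array([0.08, 0.10, 0.14, 0.22, 0.35, 0.52, 0.72, 0.95,
                    1.18, 1.40, 1.60, 1.77, 1.91, 2.02, 2.10])

# Fit a 5th-order polynomial: density as a function of exposure.
coeffs = np.polyfit(exposure, density, 5)
predict_density = np.poly1d(coeffs)

# Goodness of fit (R-squared), analogous to the 0.9997 reported above.
residuals = density - predict_density(exposure)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((density - density.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```

Once fitted, `predict_density` gives the expected density at any exposure, not just at whole-stop increments - which is what removes the "between stops" limitation of the old profile-per-range approach.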

However, what made the end-to-end generation of a correction profile difficult was the logarithmic mapping the scanner used to turn the densities into output levels. The shape of that mapping depended on its breadth and, importantly, on how close the image's input range was set to the white end of the scale.

Realizing the mapping was a log function was a breakthrough, and allowed me to define a function that accurately captured Epson's mapping. Armed with this function I could now accurately predict, for any given range of exposure in the image, where the scanner would put a given zone on an 8-bit intensity scale (0-255). Knowing what the output would be, I could compare it to what it should be: even increments for each zone.
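As a sketch, a density-to-level mapping of this kind might look as follows. The formula below is a generic logarithmic mapping between the black and white points, standing in for the actual Epson function I fitted - it illustrates the idea, not the scanner's exact behaviour:

```python
import math

def scan_level(density, d_black, d_white):
    """Predict the 8-bit level a scanner would assign to a given density,
    assuming a logarithmic mapping between the black point (level 0)
    and the white point (level 255). Placeholder form, not Epson's."""
    fraction = math.log10(density / d_black) / math.log10(d_white / d_black)
    return round(255 * fraction)
```

With a function like this, the predicted level for any zone follows directly from its density and the input range the scan is set to - which is exactly the comparison needed between "what the output would be" and "what it should be".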

Once a film has been tested and measured (once only) the master profile function and the log mapping function are used together to generate a new correction profile - on demand - for each new negative to be scanned.

The process works like this: an image is previewed on the scanner. The density values of the darkest and brightest points are noted, defining the tonal range of the emulsion that has been used. Using the log mapping and the film's master profile, the scan levels the scanner would generate are calculated for 15 equal steps from the darkest to the lightest value. These are then used to create the correction curve, which is written to the Windows registry. The scanner software is restarted (it only reads from the registry at start-up) and the newly generated correction profile can then be applied to the image.
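The preview-to-profile step could be sketched like this. The log mapping inside is the same placeholder used earlier (a stand-in for the fitted Epson function), and the density endpoints are made up; the registry write is omitted:

```python
import math

def correction_curve(d_dark, d_light, steps=15):
    """Build a per-negative correction curve: for equal density steps
    between the darkest and lightest preview points, pair the level the
    scanner WOULD produce (via an assumed log mapping) with the level
    it SHOULD produce (even increments from 0 to 255)."""
    def scan_level(d):
        # Placeholder log mapping, not Epson's actual function.
        return round(255 * math.log10(d / d_dark) / math.log10(d_light / d_dark))

    curve = []
    for i in range(steps):
        d = d_dark + (d_light - d_dark) * i / (steps - 1)
        actual = scan_level(d)                 # where the scanner puts it
        target = round(255 * i / (steps - 1))  # where it should sit
        curve.append((actual, target))
    return curve
```

The resulting (actual, target) pairs are the points of the correction curve; writing them into the scanner software's registry entry is machine-specific and left out here.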

This is one example taken from the underexposed end (-6 to +2). The top strip shows the scan without correction.


The next shows the same strip with the correction applied.


This experiment was done at the low end of the tone curve, where the effect is most noticeable. Middle grey without the correction is at 102, which is about a zone below where it should be, at 128. After applying the correction it appears at 126, almost exactly in the right position.

Some might suggest that photography isn't a science and this is way too clinical. And that's probably quite right; after applying this approach to a portrait recently, I preferred the version that I'd done without the aid of all this manipulation.

So why do it? Partly it was the challenge of seeing whether I could indeed generate an exact tonal representation of the scene by correcting for the characteristics of the film. And it may be completely pointless without taking into consideration the different ways the eye registers brightness at different light levels; and most people don't see the world around them in black and white anyway, so a black and white photograph is already a distortion, the creation of something one doesn't normally see. To me, a photograph is as much about the affective response as it is about the exact replication of the original scene.
      
Nevertheless, in the process I've learnt a bit about the properties of TMX 100 (which is a really wonderful film - I really hope Kodak Alaris doesn't stop making it). It's also a more reliable (or at least constant) starting point for any manipulation done by eye. I know that if I over- or under-expose, I won't have to deal with the way the film has altered the tonal balance differently in the high registers from the low ones. Even if I seldom use it, at least I now know that if I want to, I can faithfully reproduce the light levels of the original scene in a digital file.

It's the odd-shaped spanner sitting in the tool box, the one you bought for a particular job; you may never use it again but it's nice to know it's there.  
