How much data is really captured in a digital image?

A few weeks back, I dusted off my Nikon 4000 ED film scanner and scanned some film for some competition entries. I was pretty impressed with the results (once I’d worked out the best scan settings to use) but confused by the file sizes.

According to Digital Photography Review, my Nikon D70 has 6.0 million effective pixels from a total of 6.3 million sensor sites. At the largest setting, each image is 3008×2000 pixels (around 6,016,000 bytes or 5.7MB). There isn't an exact match between pixel count and file size (my raw files vary slightly in size but are each around 5.3MB), but as a rule of thumb each pixel accounts for around 1 byte of uncompressed image data.
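The rule of thumb above can be sketched in a few lines of Python (the dimensions are the D70's as given; the one-byte-per-pixel assumption is the approximation, not an exact spec):

```python
# Rough sketch: one byte of uncompressed data per sensor pixel
# (the D70's Bayer sensor records one colour value per site).
width, height = 3008, 2000          # D70 image dimensions in pixels
pixels = width * height             # 6,016,000 pixels
size_bytes = pixels * 1             # assume ~1 byte per pixel
size_mb = size_bytes / (1024 ** 2)  # convert to megabytes

print(f"{pixels:,} pixels -> {size_mb:.1f}MB")  # 6,016,000 pixels -> 5.7MB
```

That 5.7MB figure sits close enough to the ~5.3MB raw files to make one byte per pixel a workable estimate.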

[Screenshot: Nikon Scan showing a file size of 65.3MB for a 3946×5782 image]

With the scanner, though, things were different: the scan size for a 35mm frame was 5959×3946 pixels (around 23,514,214 bytes or 22.4MB), but the scan sizes reported by my scanning software were 67.3MB for an 8-bit scan and 134.5MB for a 14-bit scan. I could see that a 14-bit scan would actually use 16 bits (2 bytes) per pixel, but why were the file sizes three times what they would be for a digital camera sensor?
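The discrepancy is easy to see in numbers. A quick sketch, using the scan dimensions and the 67.3MB figure Nikon Scan reported, shows the factor of three that needed explaining:

```python
# Sketch of the puzzle: one byte per pixel predicts ~22.4MB,
# but Nikon Scan reported 67.3MB for an 8-bit scan.
width, height = 5959, 3946
predicted = width * height           # 23,514,214 bytes at 1 byte/pixel
reported = 67.3 * 1024 ** 2          # the 8-bit size Nikon Scan showed

print(f"predicted: {predicted / 1024 ** 2:.1f}MB")  # predicted: 22.4MB
print(f"ratio: {reported / predicted:.1f}x")        # ratio: 3.0x
```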

After a lengthy discussion with Nikon’s European Customer Support team, I found the answer. The Bayer mask on the digital camera limits each pixel to one colour (red, green or blue), with software interpolating the other values where required. The scanner, by contrast, captures three colour values (red, green and blue) for each pixel: instead of measuring the light falling on masked photosensor sites, it shines red, green and blue light through the film in turn and measures the resulting value for each colour.

On that basis, 5959 × 3946 pixels × 3 channels = 70,542,642 bytes, or 67.3MB in 8-bit mode (twice that in 14-bit mode), and the scanning software's values suddenly make sense.
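The same arithmetic can be checked for both scan modes in a few lines (assuming, as above, that the 14-bit scan is stored as 2 bytes per channel):

```python
# Sketch of the scanner file-size arithmetic: three full colour
# channels per pixel, at 1 byte (8-bit) or 2 bytes (14-bit,
# stored as 16-bit) per channel.
width, height = 5959, 3946
channels = 3  # red, green and blue captured for every pixel

for mode, bytes_per_channel in (("8-bit", 1), ("14-bit", 2)):
    size = width * height * channels * bytes_per_channel
    print(f"{mode}: {size:,} bytes = {size / (1024 ** 2):.1f}MB")
    # 8-bit:  70,542,642 bytes = 67.3MB
    # 14-bit: 141,085,284 bytes = 134.5MB
```

Both results match the sizes Nikon Scan reported, which is a reassuring sanity check on the explanation.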
