
ISO and downsampling

Tedphoto
Apprentice
I have a question: it used to be that lower-MP cameras had better high-ISO performance because of larger photosites on the sensor. Now I am reading that this is no longer true, because an image downsampled from a higher-MP camera can take advantage of the larger number of photosites to produce less noise. I’m wondering if this only applies to an image that is processed as a jpg in the camera? Is there a way this would also work when editing a raw image in LR or Canon’s software?

I know little about this, but substantially more about editing, especially in PS, and this is important: "It would be better to downsample from say a 16-bit Photoshop or some other 16-bit per component file (something with uncompressed or lossless compressed data)."

EB
EOS 1DX and 1D Mk IV and fewer lenses than before!

In general, you are correct about the benefits of more data, e.g. a 16-bit TIFF.

JPEG can do lossless compression. So far as I can remember from more than 25 years ago, it has always been part of the standard. I guess, but do not know, that the quality "10" compression in DPP must either be lossless or involve very little loss. (I am no longer a member of ACM or IEEE and do not now have access to all of the standards documents, but some information is available for free at https://en.wikipedia.org/wiki/JPEG .)

Downsampling of any sensor data should be done after noise mitigation, after any scaling or curve fitting, and, for color images, after white balance correction or gamma adjustment. All of those steps lose detail and should be done with what is as close as possible to the raw sensor data. A 16-bit TIFF would make sense here. I usually apply a small amount of unsharp mask before resizing an image smaller and again before displaying the image, but this is controversial and should be questioned.

Lanczos resampling is usually the algorithm used by graphicsmagick when resizing and is available in gimp: https://en.wikipedia.org/wiki/Lanczos_resampling , but I do not know what photoshop might use.
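To make the order of operations concrete, here is a minimal Python/Pillow sketch of the workflow described above (not what DPP, gimp, or graphicsmagick actually do internally). The file names are made up, and it sticks to an 8-bit RGB export for simplicity, even though a 16-bit TIFF would preserve more.

    from PIL import Image, ImageFilter

    # Hypothetical export from the raw developer, already white balanced and
    # gamma adjusted (8-bit here only to keep the sketch simple).
    img = Image.open("developed_from_dpp.tif").convert("RGB")

    # Noise mitigation first, on the full-resolution data.
    img = img.filter(ImageFilter.MedianFilter(size=3))

    # A small amount of unsharp mask before resizing smaller.
    img = img.filter(ImageFilter.UnsharpMask(radius=1, percent=50, threshold=2))

    # Downsample with Lanczos resampling, then sharpen lightly again before display.
    w, h = img.size
    small = img.resize((w // 2, h // 2), Image.LANCZOS)
    small = small.filter(ImageFilter.UnsharpMask(radius=1, percent=60, threshold=2))
    small.save("downsampled.jpg", quality=95)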

Downsampling throws away data. There is no need for more data that would only be thrown away. A median filter, trimmed mean, or gaussian blur before downsampling will remove some of the noise, but also remove some detail. Since the image is to be downsampled, there will be less room for detail, so losing detail along with noise can improve the look of the resulting lower-resolution image.
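One way to combine the noise removal and the downsample into a single step is a trimmed mean over each block of pixels. This is only a rough numpy/scipy sketch of that idea, not what any particular editor does.

    import numpy as np
    from scipy.stats import trim_mean

    def trimmed_mean_downsample(img, k=4, trim=0.25):
        """Shrink an image by k in each direction: every k-by-k block of pixels
        becomes its trimmed mean, so the most extreme (noisiest) samples in each
        block are discarded rather than averaged in."""
        h = img.shape[0] - img.shape[0] % k     # crop so the blocks divide evenly
        w = img.shape[1] - img.shape[1] % k
        blocks = img[:h, :w].reshape(h // k, k, w // k, k, -1)
        blocks = blocks.transpose(0, 2, 4, 1, 3).reshape(h // k, w // k, -1, k * k)
        return trim_mean(blocks, trim, axis=-1)  # float result; rescale or round as needed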

Unless one intends to keep high dynamic range, it does not help the down sampling algorithm to have more dynamic range or more colors than one will need when one is done. 

The JPEG exported from DPP at quality "10" is close to lossless, but the manual does not say it is lossless, and it is much larger than the JPEG produced by the camera even if no edit changes were made. 

I have not used photoshop in decades. I have in the past read and written much image manipulation source code, but not recently. What I know might be out of date. Not everyone will be fascinated by resampling algorithms 🙂 



@ebiggs1 wrote:

I know little about this, but substantially more about editing, especially in PS, and this is important: "It would be better to downsample from say a 16-bit Photoshop or some other 16-bit per component file (something with uncompressed or lossless compressed data)."


 

---
https://www.rsok.com/~jrm/

"Down sample throws away data."

 

So does just saving as a jpg, quality 10 or not, and it happens every time you save it.

EB
EOS 1DX and 1D Mk IV and fewer lenses than before!

Plus the camera also deleted data if the original photo was saved as a jpg.

EB
EOS 1DX and 1D Mk IV and fewer lenses than before!

You are correct that going from 14 bits per photosite on the sensor to any RGB bitmap loses some data. Raw development always discards data, but the raw data is not pretty to look at; some sort of interpolation must happen during raw development. If the RGB bitmap being produced is 16 bits per color channel, then less data is lost than if it is 8 bits per color channel, assuming that the scene being captured has high dynamic range or a large number of colors, and assuming that the photosites on the sensor were able to capture that much noise-free data. Only what was captured can be lost: if a large number of individual colors or a large dynamic range was not captured in the raw data, then it cannot be lost.
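Here is a toy numeric illustration of the bit-depth point, ignoring demosaicing, gamma, and noise: every distinct 14-bit value survives a mapping into a 16-bit container, while an 8-bit container must merge many of them. The numbers are simulated, not from any real raw file.

    import numpy as np

    rng = np.random.default_rng(0)
    raw14 = rng.integers(0, 2**14, size=1_000_000)   # pretend 14-bit sensor values

    # Scale into 16-bit and 8-bit output ranges using integer arithmetic.
    as16 = (raw14 * (2**16 - 1) // (2**14 - 1)).astype(np.uint16)
    as8 = (raw14 * (2**8 - 1) // (2**14 - 1)).astype(np.uint8)

    print(np.unique(raw14).size)   # about 16384 distinct captured values
    print(np.unique(as16).size)    # the same count: nothing merges on the way to 16 bits
    print(np.unique(as8).size)     # at most 256: many distinct captured values collapse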

You are incorrect that the process of JPEG compression always loses data. The JPEG standard has always included an option for lossless compression, but not all software has implemented it. It depends upon the software used and the options specified. I have no idea what photoshop does. I can easily discover what gimp or graphicsmagick does because I have the source code and can look at archived developer comments.

Since the original subject was downsampling to reduce noise, it is a given that data will be discarded. The point is to discard as much noise (measurement error) as possible while keeping enough good data to make an attractive image. Thus, the order in which the steps are taken is important, as is an idea of how the final version of the image will be used. The noise reduction methods that work with downsampling include the median filter, trimmed mean, and gaussian blur. For film, the only one of those methods available in the old days was gaussian blur, followed by unsharp mask and then printing to a smaller piece of paper, but this accomplished something similar.
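A small numpy sketch of why downsampling itself trades detail for noise: averaging each 4-by-4 block of a featureless, noisy patch cuts the noise standard deviation by roughly the square root of the 16 samples averaged. The noise level is invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    # A flat grey patch plus simulated sensor noise with a standard deviation of 25.
    flat = 1000.0 + rng.normal(0.0, 25.0, size=(2000, 3000))

    # Downsample 4x in each direction by plain block averaging.
    small = flat.reshape(500, 4, 750, 4).mean(axis=(1, 3))

    print(flat.std())    # roughly 25, the original noise level
    print(small.std())   # roughly 6.25, i.e. 25 / sqrt(16), since 16 samples were averaged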

Here is a simplified explanation of downsampling an image, with some mention of the various algorithms: https://www.cambridgeincolour.com/tutorials/image-resize-for-web.htm

If instead, one wished to produce a high dynamic range image, then one should never go from 16 bits per color channel to 8 bits per color channel. 

It appears to me that quality "10" in DPP maps to the JPEG lossless compression, or to such a high quality setting that there is no noticeable loss of information, in contrast to the in-camera JPG file, which always discards data even at the highest quality setting. DPP will use all of the settings that produced the in-camera JPG and save a higher quality version of the same image if one does nothing in DPP but open the CR[23] file and save the JPG file at level 10 (I assume this is because more time and a more powerful CPU are available on the computer than in the camera).

Also, I found information on the bicubic interpolation method that I remembered: 
https://en.wikipedia.org/wiki/Mitchell%E2%80%93Netravali_filters 
The chart there shows the tradeoffs, but they are more important for enlarging than for downsampling.

<disclaimer>I am old and I have forgotten much. One would be wise to research this for oneself instead of trusting my memory</disclaimer>


@ebiggs1 wrote:

Plus the camera also deleted data if the original photo was saved as a jpg.


 

---
https://www.rsok.com/~jrm/

It is true you have me at an intellectual disadvantage about downsampling, since I have only rarely done it in PS.

 

I do know how jpgs work.

"You are incorrect that the process of JPEG compression always loses data."

 

Every time a jpg image is saved, a compression algorithm is run to reduce the file size. This means that some data is lost every time you make a change to the photo and save it. The point is, you don't know what data is discarded. You hope it is simply unneeded, closely matching colors. That is what your tolerance settings do.
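The generational loss is easy to measure for yourself. This is just a hypothetical Python/Pillow sketch (the starting file name is made up): it round-trips an image through JPEG compression repeatedly and reports how far the pixels drift from the first generation.

    import io
    import numpy as np
    from PIL import Image

    def resave(im, quality=90):
        """Round-trip an image through JPEG compression once, in memory."""
        buf = io.BytesIO()
        im.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    gen1 = resave(Image.open("original.tif").convert("RGB"))
    gen = gen1
    for _ in range(10):
        gen = resave(gen)             # each pass is one more save generation

    drift = np.abs(np.asarray(gen, dtype=np.int16) - np.asarray(gen1, dtype=np.int16))
    print(drift.mean(), drift.max())  # usually nonzero: each save can discard a little more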

 

Although I doubt I will ever get as deep into downsampling as you have, I do thank you for the info in case I ever need it.

 

I went from knowing very little to knowing a very little more! 🙂

EB
EOS 1DX and 1D Mk IV and fewer lenses than before!

"I used graphicsmagick on a Debian Linux computer to do the downsample after converting the raw CR3 file to JPEG using DPP on an iMac."

 

Didn't catch this earlier... It would be better to downsample from say a 16-bit Photoshop or some other 16-bit per component file (something with uncompressed or lossless compressed data).  By first going to JPEG, you'd end up with just 8-bits per component as well as dealing with lossy compression.   The final output (also assuming JPEG) would then be second-generation, so more info lost.
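For anyone working outside Photoshop, here is one hedged sketch of that order using OpenCV in Python (the file names are hypothetical, and this is not DPP's or Photoshop's actual pipeline): read the 16-bit TIFF unchanged, resize while it is still 16 bits per channel, and only drop to 8 bits for the single, final JPEG.

    import cv2
    import numpy as np

    # Hypothetical 16-bit TIFF exported from the raw developer.
    img16 = cv2.imread("developed_16bit.tif", cv2.IMREAD_UNCHANGED)   # dtype uint16

    # Resize while still 16 bits per channel (area averaging, which OpenCV
    # suggests for shrinking).
    h, w = img16.shape[:2]
    small16 = cv2.resize(img16, (w // 2, h // 2), interpolation=cv2.INTER_AREA)

    # Only now reduce to 8 bits per channel and write the one, final, JPEG generation.
    small8 = (small16 // 257).astype(np.uint8)      # map 0..65535 onto 0..255
    cv2.imwrite("final.jpg", small8, [cv2.IMWRITE_JPEG_QUALITY, 95])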

--
Ricky

Camera: EOS 5D IV, EF 50mm f/1.2L, EF 135mm f/2L
Lighting: Profoto Lights & Modifiers

Good observation.

Graphicsmagick has a variety of resampling algorithms built in, but I always use the default as good enough. When one is downsampling, one is throwing away data, so there is less need to start with more data. There will be fewer pixels afterwards, so fewer distinct colors are possible. The DPP 4 quality 10 JPEG has very little compression and works fine for me if I intend to resize to a smaller image. I think that the JPEG is good enough for this purpose.

If I am creating a mosaic, or changing colors or brightness, or compositing in another program, then I export a 16 bit TIFF from DPP instead of a JPEG. If I were to do further editing in gimp or photoshop or hugin, I would use a 16 bit TIFF.

To improve the noise reduction, one might use a trimmed mean instead of a median filter, but the median filter is built into a lot of software. I do not know whether DPP uses a trimmed mean, a median filter, or gaussian blur. I guess not a gaussian blur, because that would be undone by a Richardson/Lucy deconvolution in the "Digital Lens Optimizer" or in correcting diffraction blur. My preference when I have written code to deal with noisy sensor measurements is a running trimmed mean. All of these will remove detail from the image as well as removing noise. After resizing to something smaller, before printing or display, an unsharp mask will make it look better.
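For what it is worth, here is a hypothetical numpy/scipy sketch of a running trimmed mean over a 1-D series of noisy measurements; it is only meant to illustrate the idea, not anything DPP does.

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view
    from scipy.stats import trim_mean

    def running_trimmed_mean(samples, window=9, trim=0.2):
        """For each window of consecutive samples, drop a proportion `trim` of the
        values at each end and average the rest; the output is window - 1 shorter."""
        wins = sliding_window_view(np.asarray(samples, dtype=float), window)
        return trim_mean(wins, trim, axis=-1)

    noisy = 100.0 + np.random.default_rng(2).normal(0.0, 5.0, size=200)
    noisy[50] = 500.0                        # one wild outlier, like a hot pixel
    smooth = running_trimmed_mean(noisy)
    print(noisy.std(), smooth.std())         # the outlier and much of the noise are gone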

This is possibly superstition instead of science from what I did decades ago, but I usually try to resize smaller using an integer ratio, often one that can be represented exactly in binary. The "37.5%" in my example is 3/8, and I also commonly use 25%, 30%, 32%, 40%, 50%, 60%, 75%, 80%.
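A tiny illustration of the exact-binary-ratio point: 3/8 (37.5%) has an exact floating point representation, and the new pixel dimensions can be computed with integers alone. The source dimensions below are made up.

    from fractions import Fraction

    print(Fraction(0.375))                 # 3/8: the float 0.375 is exact in binary

    w, h = 6000, 4000                      # hypothetical source dimensions
    new_size = (w * 3 // 8, h * 3 // 8)    # (2250, 1500), computed with integers only
    print(new_size)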

<disclaimer>It has been decades since I read IEEE Computer Graphics or ACM Siggraph and much of what I "know" is likely out of date.</disclaimer>


@rs-eos wrote:

"I used graphicsmagick on a Debian Linux computer to do the downsample after converting the raw CR3 file to JPEG using DPP on an iMac."

 

Didn't catch this earlier... It would be better to downsample from say a 16-bit Photoshop or some other 16-bit per component file (something with uncompressed or lossless compressed data).  By first going to JPEG, you'd end up with just 8-bits per component as well as dealing with lossy compression.   The final output (also assuming JPEG) would then be second-generation, so more info lost.


 

---
https://www.rsok.com/~jrm/