After a comment on our de-noising explanation, I finally got around to something I had been planning for a long time: attempting to remove line artefacts in high-ISO images.
Each line of the sensor has its own analogue amplifier, and the amount of amplification is determined by the ISO setting. The amplification may, however, vary slightly from line to line, creating line artefacts across the image.
Canon and a few other manufacturers cover a part of the sensor outside the image area, so it doesn’t receive any light. It has therefore been assumed that you could use the information from this black area to estimate the amplification applied to each line. This has been accepted by the community for some time, so I decided to research it and find the best method for compensating for the issue.
For the test, I took an image of a fairly uniformly lit surface: a defocused shot of a blue sky. The aim wasn’t to get a completely uniform surface, just to get it uniform enough that we can expect adjacent lines to have the same light values.
I then modified RawSpeed to print out comma-separated values for one colour component (green) on every second line, both in the black area and 1000 pixels into the image. For each line I have 68 data points in both the black and the image area.
After importing the data into a spreadsheet, I began analysing it. To estimate the black and light level of each line, I used the median. The median should be better than a plain average, since it ignores values that are far out of range, such as hot and dead pixels.
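To illustrate why the median is the better estimator here, consider a minimal sketch (the sample values are made up, not from my test image): a single hot pixel and a single dead pixel badly skew the mean of a line, while the median is unaffected.

```python
import statistics

# Hypothetical raw samples from one sensor line: values cluster around ~512,
# with one hot pixel (4095) and one dead pixel (0) mixed in.
line = [512, 514, 511, 513, 4095, 512, 0, 515]

mean_level = statistics.mean(line)      # pulled far off by the outliers
median_level = statistics.median(line)  # effectively ignores them

print(mean_level)    # 896.5 -- nowhere near the true level
print(median_level)  # 512.5 -- a sane per-line estimate
```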
I tried the two most obvious methods: first, the most commonly accepted one, simply subtracting the median black value for each line; and second, estimating an amplification factor from the black area and using that to scale the image data.
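The two methods can be sketched roughly as follows. This is not my actual spreadsheet or RawSpeed code; the function names, the global black level, and the exact form of the gain estimate in the second method are my assumptions for illustration.

```python
import statistics

GLOBAL_BLACK = 128.0  # assumed sensor-wide black level (hypothetical)

def compensate_subtract(image_line, black_line):
    """Method 1: subtract this line's median black value from the image data."""
    line_black = statistics.median(black_line)
    return [v - line_black for v in image_line]

def compensate_normalize(image_line, black_line):
    """Method 2 (one plausible formulation): treat the line's black level as a
    proxy for its amplification, and scale the image data so that all lines
    are normalized to the same global black level."""
    line_black = statistics.median(black_line)
    gain = GLOBAL_BLACK / line_black  # assumes line_black > 0
    return [v * gain for v in image_line]
```

For example, with `black_line = [130, 129, 131, 130, 129]` the per-line black median is 130, so method 1 subtracts 130 from every image value, while method 2 multiplies each value by 128/130.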
For this image, we can expect adjacent lines to have approximately the same image data, since there shouldn’t be any significant change from one line to the next. Right from the start something seemed wrong: the data didn’t support the theory, and there didn’t seem to be any clear correlation between the black data and the image data. Here is what I found using the two methods above:
Average absolute difference between adjacent lines with no per-line compensation: 67.9 (Raw value)
Average absolute difference between adjacent lines with per-line compensation (subtract median black): 76.0 (Raw value)
Average absolute difference between adjacent lines with per-line compensation (normalize value based on black amplification): 107.2 (Raw value)
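The numbers above are averages of the absolute difference between the estimated levels of adjacent (same-colour) lines; a smaller value means less visible banding. A minimal sketch of that metric, with made-up per-line values:

```python
def adjacent_line_diff(line_levels):
    """Average absolute difference between the levels of adjacent lines.

    `line_levels` would be the per-line medians (taken on every second
    line, since same-colour rows alternate in the Bayer pattern).
    """
    diffs = [abs(a - b) for a, b in zip(line_levels, line_levels[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical per-line medians showing alternating bright/dark banding:
print(adjacent_line_diff([1000, 1068, 1002, 1065]))
```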
So using the black-area information actually makes the image worse, leaving the line artefacts more visible than they were before. I have checked the code dumping the data several times, and I cannot find any defects. This is pretty discouraging, so it would seem that doing this kind of artefact removal automatically is out of the question.
So for now I have decided not to pursue this method any further, unless I find evidence to the contrary.
Does anyone have test data, or know of cameras where this kind of artefact removal actually helps?