
Removing Shadows from Images

Posted: 09.11.2004, 14:07
by Slalomsk8er
I'm working on a texture for Basel (Sol/Earth/Europa/Swiss/Basel), made out of orthophotos, and have a problem with shadows.

I think this is a problem not only I have. Planet textures in general have unwanted shadows, which becomes a problem if you use a normal map.

I found a way to do it via a Google search, but I do not understand the science in it. So if you understand the formulas, please post some pseudo code. I will then use it to make a Python script, which I will post back to this forum.

This is the link: http://www.comp.nus.edu.sg/~xiaoyg/shadowless.pdf (PDF)

Thanks, Dominik

Posted: 09.11.2004, 18:15
by maxim
Interesting paper.
Unfortunately it is obviously not possible for you to use this method to remove shadows in planetary image maps.

The first central point is that the proposed method definitely needs a camera calibration (or parameters derived from such a calibration) to generate the illumination-invariant image. The calibration is done either by obtaining the camera's sensor parameters, or by taking a set of images of a fixed scene under varying lighting conditions. With a planetary image map, neither of these options is available to you, simply because you don't have access to, and don't know anything about, the camera system(s) that were used.

Second, this method is suited to video image processing only, as it adds a certain amount of artifacts to the image and can only address prominent shadows. Small shadow patches aren't processed, as you can see in the example pics. Planetary image maps, on the other hand, contain mostly small and very small shadow patches.

----------------------------------------------------------------

I tried to invent a handmade method for this shadow problem some time ago, but cancelled further efforts after a while, because I found it too time-consuming for the spare time I have.

The procedure is roughly as follows, and it might be necessary to apply it region by region, depending on the image map:

As shadows are usually the darker parts of the image, you have to find a luminosity threshold which masks only the shadow parts of the image and does not mask the lit parts. Once you've found a good value, you can generate a separate mask layer out of the shadow parts.

You then add luminosity and saturation to the mask layer until adjacent parts that show the same object, but lit, have identical color values. You may also need to use gamma correction values that differ for the R, G and B channels.

If the result looks evenly colored and flat, you can remerge all layers and judge the result by generating a bumpmap-lit version of it and comparing that to the pre-lit original.

The whole process may take some time and require some experience or training.
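
A rough Python sketch of the thresholding and adjustment steps could look like the following (PIL plus numpy; the threshold, gain and gamma values are pure guesses that would need tuning per image or region, and saturation would be adjusted similarly, e.g. via an HSV conversion):

import numpy as np
from PIL import Image

# load the region as float RGB
img = np.asarray(Image.open("region.png").convert("RGB"), dtype=np.float64)

lum = img.mean(axis=2)         # crude luminosity estimate
shadow = lum < 60.0            # threshold that should mask only the shadow parts (guess)

out = img.copy()
gain = 1.8                     # brighten the masked shadow pixels (guess)
gamma = (0.95, 1.0, 1.05)      # per-channel gamma correction (guess)
for c in range(3):
    chan = out[..., c]
    chan[shadow] = 255.0 * np.clip(chan[shadow] * gain / 255.0, 0.0, 1.0) ** gamma[c]

Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save("region_adjusted.png")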

maxim

Posted: 10.11.2004, 01:17
by Guest
I think your two points are valid, but IMHO they don't render this method useless.

Point one: The Calibration
If you are making your own textures, the source is mostly known. So you can search the NASA sites for the camera specs (in my case I can call the government agency to get the specs of the camera in the plane). Plus, I think you could just guess the specs and check on a preview of the map how well it works (if it ever gets a GUI, some dropdown lists could make the guessing easier).

Point two: The Artifacts
I would say we actually have to test it on orthophotos and planet maps; the examples don't say much about the needs we have to address.
Don't think too lowly of the potential this method has. At first glance, IMHO the artifacts can be suppressed to a great extent. Some ways would be:

Define colors which are never shadowed; for example, if no water is present, ignore all really bluish pixels (not of great use for us).
Or better: don't touch any pixels brighter than a given value, and only treat patches, not gradients (gradients can be part of a shadow). See the sketch below.
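
A minimal numpy sketch of such an exclusion mask (the numeric limits are invented placeholders, not tested values):

import numpy as np
from PIL import Image

img = np.asarray(Image.open("orthophoto.png").convert("RGB"), dtype=np.float64)
r, g, b = img[..., 0], img[..., 1], img[..., 2]

too_bright = img.mean(axis=2) > 150.0       # bright pixels are assumed to be lit
bluish = (b > 1.3 * r) & (b > 1.3 * g)      # e.g. water, never treated as shadow
shadow_candidates = ~(too_bright | bluish)  # only these pixels would be processed at all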

There is one point that can make it easier for us than for others: the direction of the camera is known. Maybe there is a possibility to use this; I don't have the imagination to think it through right now (2:08 am).

Dominik

Posted: 10.11.2004, 14:58
by maxim
Dominik,

you can of course try to get some results from the described method. I'm only saying that I'm not sure that it's worth the effort - if you go on, you can surely make a new paper out of it.

If you look at the trees in the example pics, you'll see that the leaves keep their shadows, and that only the shadows on the lawn that are not too deep inside the trees are removed. That 'leaf-shadow' situation is what you will have if you work on planetary maps: small patches that edge-detection algorithms can only process and separate poorly.

See, the given method was developed especially for real-time processing of moving pictures, with object recognition in mind. You can try to develop your own method out of the given one, or you can invest in a further search for a better method that was designed for still pictures and has no limit on processing time. I think you'll make the fastest progress by doing the latter.

maxim

re

Posted: 11.11.2004, 21:52
by John Van Vliet
So far the best method I have found is to airbrush the shadows and highlights out.

Posted: 13.11.2004, 18:48
by Slalomsk8er
John Van Vliet wrote: So far the best method I have found is to airbrush the shadows and highlights out.
I am the coder type, though, and would rather not do it manually.
I have the equipment to do it by hand (GIMP and a WACOM tablet), but I will be doing it a lot (have done it in the past) and it is a boring, repetitive task, so let's code it and let the computer do the boring stuff.

Yes man, maxim's and Slalomsk8er's thesis and proof of removing shadows from orthophotos and planet maps; we may get an honorary doctorate in Computing Science ;-) (I could use one).

maxim wrote: If you look at the trees in the example pics, you'll see that the leaves keep their shadows, and that only the shadows on the lawn that are not too deep inside the trees are removed. That 'leaf-shadow' situation is what you will have if you work on planetary maps: small patches that edge-detection algorithms can only process and separate poorly.
Look at the grey-scale illuminant-invariant images.
Edge detection is evil; I think we can do better by comparing the grey-scale illuminant-invariant image pixels to the original ones directly (possibly the calibration can be done this way too).

maxim wrote: See, the given method was developed especially for real-time processing of moving pictures, with object recognition in mind. You can try to develop your own method out of the given one, or you can invest in a further search for a better method that was designed for still pictures and has no limit on processing time. I think you'll make the fastest progress by doing the latter.

I see, you are right about this, so I will make a modification of it as soon as I understand the cryptic instructions (formulas) for the invariant image calculation in the paper (please help if you can).

Thanks, Dominik

Posted: 16.11.2004, 11:22
by maxim
Slalomsk8er wrote: I see, you are right about this, so I will make a modification of it as soon as I understand the cryptic instructions (formulas) for the invariant image calculation in the paper (please help if you can).
(6), (7) and (8) are the central formulas here. But I think they won't help you without reading [1] first; I couldn't get access to that paper, and I think you will have to pay for it. The formulas explain the basics, but they don't explain how exactly to get the c1 and c2 constants from the calibration data. And there is only a vague explanation of how the mentioned 2D-vector projection is done.
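
For what it's worth, my rough reading of the invariant-image idea is: form log band-ratio chromaticities per pixel and project them onto a single calibration-dependent direction. Whether that direction is exactly what the paper's c1 and c2 encode, I can't say without [1]; the sketch below only illustrates that reading, with an angle theta standing in for the unknown calibration:

import numpy as np
from PIL import Image

def invariant_image(rgb, theta):
    """rgb: float array (H, W, 3) scaled to 0..1; theta: calibration angle in radians."""
    eps = 1e-6                                                     # avoid log(0)
    chi1 = np.log(rgb[..., 0] + eps) - np.log(rgb[..., 1] + eps)   # log(R/G)
    chi2 = np.log(rgb[..., 2] + eps) - np.log(rgb[..., 1] + eps)   # log(B/G)
    inv = chi1 * np.cos(theta) + chi2 * np.sin(theta)              # project onto one direction
    inv = (inv - inv.min()) / (inv.max() - inv.min() + eps)        # rescale for viewing
    return (inv * 255.0).astype(np.uint8)

rgb = np.asarray(Image.open("orthophoto.png").convert("RGB"), dtype=np.float64) / 255.0
Image.fromarray(invariant_image(rgb, theta=0.7)).save("invariant.png")  # theta=0.7 is a placeholder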

Slalomsk8er wrote: Edge detection is evil; I think we can do better by comparing the grey-scale illuminant-invariant image pixels to the original ones directly (possibly the calibration can be done this way too).

Edge detection is crucial here. And I think you won't get along with a simple high-pass filter. The text mentions the SUSAN edge detector and points to two papers, referred to as [14] and [15], for more information.

And surely you would have no problem getting the calibration by comparing the illuminant-invariant image with the original. The problem is that you need the calibration parameters beforehand in order to generate the illuminant-invariant image at all.

And the paper also points out that the method isn't suited to weak edges - but your planetary maps will be full of weak edges.

maxim

Posted: 17.11.2004, 17:07
by Slalomsk8er
maxim wrote: (6), (7) and (8) are the central formulas here.
Can you write down what you understand of them (in German it might be easier to understand)?

More info:
http://www.cs.sfu.ca/~mark/ftp/Cic10/cic10.pdf
http://www.mi.tj.chiba-u.jp/~tamura/paper/(2004)CGIV.pdf


maxim wrote: Edge detection is crucial here. And I think you won't get along with a simple high-pass filter. The text mentions the SUSAN edge detector and points to two papers, referred to as [14] and [15], for more information.

What I would do is use the grayscale illuminant-invariant image as the base and add the chromaticity from the original, and voilà: no shadows, and color. (OK, the colors in the former shadow areas will be a bit different, because of the other light source; I think I can live with that ;-) )
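
A possible PIL sketch of that recombination (file names are placeholders; "invariant.png" would be the grey-scale invariant image from the step above): take hue and saturation from the original and use the invariant image as the brightness channel.

from PIL import Image

orig = Image.open("orthophoto.png").convert("HSV")   # original colors
inv = Image.open("invariant.png").convert("L")       # grey-scale illuminant-invariant image

h, s, _ = orig.split()                               # keep the original hue and saturation
shadowless = Image.merge("HSV", (h, s, inv))         # invariant image as the value channel
shadowless.convert("RGB").save("shadowless.png")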
maxim wrote: And the paper also points out that the method isn't suited to weak edges - but your planetary maps will be full of weak edges.


This is why I think edge detection is evil; there is no other way to get a good result than to do it on a per-pixel basis (no filtering, no mapping, just doing it for every pixel of the image).

Thanks, Dominik