Normal/Bump maps have several problems for Celestia's typical usage:
1) Normal/Bump maps have limited dynamic range.
2) Objects are usually being observed from such large distances that the true surface topography is insufficient to yield a noticeable shadow effect except when the sun lies nearly tangential to the surface.
3) Aesthetically, most Celestia users, including even the most unabashed purists in this forum, prefer exaggerated surface relief when observing objects from large distances. In fact, to my knowledge no true normal maps are currently being distributed for the Earth. This is one of the few easily solved purity issues not currently being addressed within Celestia.
4) At low altitudes, the true normal map is sufficient to produce 'realistic' shadowing effects, so exaggerated relief is unnecessary.
Solution:
Add a set of keys or a slider for adjusting the 'scale' of the topographic map, with the lowest (and default) setting being the true normal value. Given one normal map, it is not difficult to create another normal map with a different relief.
The only significant problem is that this would require a master copy of the normal map to remain in RAM, though this wouldn't impact video RAM. The master must be preserved because truncation errors would otherwise accumulate and ruin the working copy.
A bonus of having scalable topography is that the master normal map can be 'normalized' by the texture artist (or by software, when starting from a bump map) to provide maximum dynamic range for the surface; something that otherwise cannot currently be done without extreme scaling that most users would not find aesthetically pleasing. Of course, each normal map distribution would require a carefully calculated scaling factor in the .ssc file to account for the upscaling done by the texture artist to maximize the dynamic range of the master normal map.
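For concreteness, here is a minimal C++ sketch of the rescale-from-master step, assuming an uncompressed RGB8 normal map with components mapped from [-1, 1] to [0, 255]; the function and parameter names are illustrative, not Celestia's actual API:

```cpp
#include <cmath>
#include <cstdint>

// Rescale relief by 'scale', reading from the preserved master map and
// writing into the working copy so truncation errors never accumulate.
void rescaleNormalMap(const uint8_t* master, uint8_t* working,
                      int width, int height, float scale)
{
    for (int i = 0; i < width * height; i++)
    {
        // Decode the master normal back to [-1, 1]
        float x = master[i * 3 + 0] / 127.5f - 1.0f;
        float y = master[i * 3 + 1] / 127.5f - 1.0f;
        float z = master[i * 3 + 2] / 127.5f - 1.0f;

        // Exaggerate the surface slope, then renormalize
        x *= scale;
        y *= scale;
        float len = std::sqrt(x * x + y * y + z * z);
        x /= len; y /= len; z /= len;

        // Re-encode into the working copy
        working[i * 3 + 0] = (uint8_t)((x + 1.0f) * 127.5f);
        working[i * 3 + 1] = (uint8_t)((y + 1.0f) * 127.5f);
        working[i * 3 + 2] = (uint8_t)((z + 1.0f) * 127.5f);
    }
}
```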
Thoughts?
Walton
Feature Request: Increase/Decrease topography keys/slider
I think a topography scale would be a good feature to have, but I see a couple of problems with actually manipulating the texture data. Maxim mentioned one: rescaling the normal map takes some time. A bigger problem is that compressed normal maps would have to be decompressed, scaled, then recompressed. Keeping the original around would defeat the purpose of compression (not entirely, though: you'd burn lots of system memory, but you'd still save video memory).
On hardware that supports pixel shaders, you can multiply the normal map value by the scaling factor (actually, its inverse) and then renormalize. With this scheme, you need only adjust a single constant in order to change the relief. Rebuilding textures isn't required. The drawback is that it will only work on GeForce FX/Radeon 9500+ hardware. I think it's the only sane solution, however.
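A minimal sketch of what that per-pixel scaling might look like (GLSL shown inside a C++ string for readability; hardware of this era would actually use Cg or ARB_fragment_program, and the uniform name 'reliefScale' is an assumption):

```cpp
// Fragment-shader snippet embedded as a C++ string; names are illustrative.
const char* reliefShaderSrc = R"(
    uniform float reliefScale;     // the single constant to adjust
    uniform sampler2D normalMap;

    vec3 fetchScaledNormal(vec2 uv)
    {
        vec3 n = texture2D(normalMap, uv).xyz * 2.0 - 1.0;
        // Dividing z by reliefScale is equivalent (up to normalization)
        // to multiplying the x/y slope by reliefScale.
        n.z /= reliefScale;
        return normalize(n);
    }
)";
```

Because the texture data never changes, this works the same whether the normal map is compressed or not.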
--Chris
Hi maxim,
The reason going from a bump map to a normal map takes so long is all the floating-point operations required to renormalize the vector field. This includes a square root operation, which might account for the majority of the clock cycles.
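To illustrate where the time goes, here is a hedged C++ sketch of a straightforward bump-to-normal conversion by central differences; the encoding and names are assumptions, not Celestia's actual routine:

```cpp
#include <cmath>
#include <cstdint>

// Build a normal map from an 8-bit height (bump) map. The per-pixel
// sqrt in the normalization is the expensive step referred to above.
void bumpToNormal(const uint8_t* height, uint8_t* normal,
                  int w, int h, float relief)
{
    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            // Slope from neighboring height samples (edges clamped)
            int xl = x > 0 ? x - 1 : 0,  xr = x < w - 1 ? x + 1 : x;
            int yu = y > 0 ? y - 1 : 0,  yd = y < h - 1 ? y + 1 : y;
            float dx = (height[y * w + xr] - height[y * w + xl]) * relief / 255.0f;
            float dy = (height[yd * w + x] - height[yu * w + x]) * relief / 255.0f;

            // Renormalize: one sqrt plus divides for every pixel
            float inv = 1.0f / std::sqrt(dx * dx + dy * dy + 1.0f);
            uint8_t* p = normal + (y * w + x) * 3;
            p[0] = (uint8_t)((-dx * inv + 1.0f) * 127.5f);
            p[1] = (uint8_t)((-dy * inv + 1.0f) * 127.5f);
            p[2] = (uint8_t)((inv + 1.0f) * 127.5f);
        }
    }
}
```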
If the normal map for the object affected is a virtual texture and if we are willing to accept an approximate scaling transformation, then I think it could be done fairly quickly on the fly.
The first assumption is tricky because there are still a lot of large (e.g., 8k) textures in distribution. I don't see a 2k texture causing any problems, but an 8k or 16k texture would. Therefore it might be best not to dynamically scale anything 4k or larger by default (but with a preferences option to do so for those willing to wait it out).
The second assumption isn't really that tricky. I'd be happy to describe an algorithm that would scale a master normal map to a decent approximation with three or fewer integer multiplications or divisions per pixel. Of course, such an approximation would deviate from being a unit-normal vector map, but hardly enough to matter. If we limited ourselves to scaling by factors of 2, this turns into bit shifts, which are even faster. There would be a very simple case structure based on the z byte only. So, for example, a single 512x512 tile could be rescaled with fewer than a million bit shifts and another half million jump-ifs. As near as I can tell, most of the time will be spent getting the data in and out of the cache and registers. Of course, in full-screen mode you might have 10-12 tiles that need to be modified simultaneously, so it would be noticeable, but it might not take more than half a second on a half-gigahertz processor. Maybe I'll do some benchmarking over the weekend to test the feasibility of this algorithm.
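As a hedged illustration of the factor-of-2 case, here is one way it might look in C++. This sketch shifts the signed x/y bytes and clamps, skipping exact renormalization entirely (a cruder variant than the z-byte case structure described above); every name in it is illustrative:

```cpp
#include <cstdint>

// Approximate relief doubling on an RGB8 normal map using only integer
// shifts, accepting slightly non-unit normals. The clamping branches
// play the role of the "jump-ifs" mentioned above.
void doubleReliefApprox(uint8_t* map, int pixelCount)
{
    for (int i = 0; i < pixelCount; i++)
    {
        uint8_t* p = map + i * 3;
        for (int c = 0; c < 2; c++)     // x and y components only
        {
            int offset = (int)p[c] - 128;   // signed slope around 0
            offset <<= 1;                   // scale by 2: one bit shift
            if (offset > 127)  offset = 127;    // clamp (jump-if)
            if (offset < -128) offset = -128;
            p[c] = (uint8_t)(offset + 128);
        }
        // z (p[2]) is left untouched in this approximation.
    }
}
```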
cheers,
Walton
DXTs might be scalable while compressed.
Chris,
I forgot about compression. Ughhh.
It may be possible to scale DXTs without actually decompressing them by simply scaling the palette vectors (the per-block endpoint colors). Since DXT decompression interpolates with respect to these vectors, the results might be agreeable. This would require quite a bit of testing, though, and may introduce some additional artifacts that I haven't considered; but DXT already has so many artifacts with respect to normal maps that I hardly see the additional noise being an issue. Similarly, anything paletted could be scaled in compressed form. Obviously, .jpg, .png, and .gif would in general not be scalable in compressed form.
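Along those lines, here is a minimal C++ sketch of what scaling the endpoints alone might look like, assuming DXT1 blocks (two little-endian RGB565 endpoint colors plus 2-bit indices per 4x4 block) and a red/green slope encoding; everything here is illustrative and untested:

```cpp
#include <cstdint>

// Double relief directly on DXT1-compressed data by rescaling only the
// two endpoint colors of each block; the 2-bit interpolation indices
// are untouched, which is the approximation suggested above.
void doubleReliefDXT1(uint8_t* data, int blockCount)
{
    for (int b = 0; b < blockCount; b++)
    {
        uint8_t* block = data + b * 8;      // 8 bytes per DXT1 block
        for (int e = 0; e < 2; e++)         // two endpoint colors
        {
            uint8_t* p = block + e * 2;
            uint16_t c = (uint16_t)(p[0] | (p[1] << 8)); // little-endian

            int r  = (c >> 11) & 0x1F;      // x component, 5 bits
            int g  = (c >> 5)  & 0x3F;      // y component, 6 bits
            int bl =  c        & 0x1F;      // z component, 5 bits

            // Double the signed x/y offsets around midpoint, then clamp
            int x = ((r - 16) << 1) + 16;
            int y = ((g - 32) << 1) + 32;
            x = x < 0 ? 0 : (x > 31 ? 31 : x);
            y = y < 0 ? 0 : (y > 63 ? 63 : y);

            c = (uint16_t)((x << 11) | (y << 5) | bl);
            p[0] = (uint8_t)(c & 0xFF);
            p[1] = (uint8_t)(c >> 8);
        }
        // Interpolated palette entries scale automatically, since they
        // are derived from the endpoints; the index bits stay valid.
    }
}
```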
Walton
Re: DXTs might be scalable while compressed.
Walton,
What do you think about my idea of doing it in the pixel shader? It solves the compressed textures problem, the storage problem, and the processing time problem, at the cost of not working on all hardware. What sort of graphics card do you have?
--Chris
Hi Chris,
Sorry to drop the main point of your comment. The pixel shader approach seems plainly superior for those users with the necessary graphics card. But it just underscores the fact that my graphics card (GeForce 2 MX/MX400) is going the way of the vacuum tube. It's probably high time to upgrade; this would be the second generation of graphics card I've purchased solely for use with Celestia.
So that said, if a clumsy version of scaling could be added for the broader audience, I believe it would be valued.
Walton