Learning to do virtual textures
Posted: 15.03.2004, 11:45
by rthorvald
To understand how VTs work, I had to make my own (with great help from Bob Hegwood's Dummy Guide). That worked well, except that I don't have any good tools for it: I spent more than half an hour manually cutting up a 4k image in Photoshop. Obviously, that is impractical for the next level up.
I have a few questions:
- Are there any OS X tools for making tiles? Anything will do, but a Photoshop plugin would be optimal.
- I am not sure I really understand how Celestia uses the tiles yet: if I make JPEG tiles, is there any value in compressing them further? If so, how do I weigh the memory freed up against the loss in quality?
- Will _all_ tiles in, for example, level2 have to be present for _any_ tiles to work in level3? How does this work? (A sketch of the layout follows this post.)
- rthorvald
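For readers following along, the layout these questions are about is defined by a small .ctx file plus per-level tile directories. A minimal sketch, with the directory name and values as examples only (see selden's link below for the authoritative description):

Code:
    # Tiles live in level0, level1, ... subdirectories of ImageDirectory,
    # named tx_<column>_<row>.jpg; with BaseSplit 0, level N holds
    # 2^(N+1) x 2^N tiles.
    VirtualTexture
    {
        ImageDirectory "earth-vt"
        BaseSplit 0
        TileSize 512
        TileType "jpg"
    }

Each level is looked up on its own, which is how partial high-resolution patches (like the Holland map discussed later in this thread) are possible at all.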
Posted: 15.03.2004, 12:19
by selden
Please read
http://www.lns.cornell.edu/~seb/celestia/textures.html
The tools mentioned at the end of the VT section should work fine under MacOS X.
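The manual cutting rthorvald describes can also be scripted. A minimal sketch using the Python Imaging Library (PIL/Pillow); the file name, level number, tile size and JPEG quality are all assumptions, not anyone's actual setup:

Code:
    # Cut one virtual-texture level out of a single source map.
    from PIL import Image
    import os

    SOURCE = "earth-4k.png"   # hypothetical source map, e.g. 4096x2048
    LEVEL = 2                 # VT level to generate
    TILE = 512                # tile edge in pixels

    cols, rows = 2 ** (LEVEL + 1), 2 ** LEVEL   # level N = 2^(N+1) x 2^N tiles
    img = Image.open(SOURCE).resize((cols * TILE, rows * TILE), Image.LANCZOS)

    os.makedirs(f"level{LEVEL}", exist_ok=True)
    for y in range(rows):
        for x in range(cols):
            box = (x * TILE, y * TILE, (x + 1) * TILE, (y + 1) * TILE)
            img.crop(box).save(f"level{LEVEL}/tx_{x}_{y}.jpg", quality=90)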
Posted: 16.03.2004, 10:12
by rthorvald
Thank you for a very informative resource!
One question left, though: I still don't know about JPEG compression: if, when and why...
Does anyone know more about that?
- rthorvald
Posted: 16.03.2004, 10:45
by Buzz
If I remember well, the JPEG compression level will only affect disk space, not graphics memory while running Celestia, as the files are decompressed during use.
Posted: 16.03.2004, 12:35
by selden
Buzz is right. That's why people usually use the DDS format if their cards can handle it. It has a more limited color palette, but the file is loaded directly into the card's memory and used without being expanded.
For the best color rendition, you need to use the PNG format. It provides 24-bit color and doesn't have the compression artifacts seen with JPEG. Unfortunately, its files are larger and have to be decompressed, so it's slower to load, and it uses a lot of graphics memory: a 4K PNG image uses 32MB of graphics memory, just as a JPEG image does.
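Selden's figure is easy to reproduce: uncompressed texture memory is simply width x height x bytes per pixel, no matter how small the JPEG or PNG was on disk. A quick check in Python, using sizes from this thread:

Code:
    def texture_mb(width, height, bytes_per_pixel=4):
        return width * height * bytes_per_pixel / 2**20

    print(texture_mb(4096, 2048))   # 32.0 MB -- the "4K" map above
    print(texture_mb(1024, 512))    # 2.0 MB  -- the example a few posts below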
Posted: 16.03.2004, 21:03
by Don. Edwards
selden,
I think you have that backwards. PNG textures don't have to get decompressed; they have to be compressed. They are normally too big for your video card's memory, so they have to be compressed on the fly before they are loaded into graphics memory. I believe this is the same for JPEGs as well. That's why Celestia has the text switch "Compress Textures True". This is also why using a larger JPEG or PNG causes Celestia to slow down as it loads: it is compressing the graphics files before they can get into memory and be drawn on your screen. That is the main reason we use .dds textures: they are pre-compressed, so they can go directly into your video card's memory for a faster load time. I for one don't really notice the lack of color range in a .dds texture versus a PNG. Of course PNG is the better format, but until video cards commonly have a gig of VRAM and run in the gigabyte speed ranges, .dds is the only way to go. It is a lot more portable too, meaning easier to transfer over the web.
Don. Edwards
Posted: 16.03.2004, 21:32
by selden
Don,
It seems to me that you've reversed the meanings of the words "compressed" and "decompressed."
For example, one starts in a paint program with an image that's (say) 1024x512 pixels, with 3 channels of color (red, green, and blue) and 1 channel of alpha transparency information. That's 1024x512x4 bytes = 2 megabytes (or even more, if one is processing 12- or 16-bit color channels).
One now writes that picture out to a file in PNG format. If the image is (to pick an extreme) a picture consisting of just a single shade of color -- a polar bear in a snowstorm at the north pole, or a black cat in a coal bin at midnight -- then the output file will be tiny, less than 16KB. Reducing 2MB to 16KB is compression.
When that image file has to be drawn by Celestia, the 16KB PNG file has to be expanded (decompressed) back up to 2MB and then loaded into the graphics card -- which has to have room for all 2MB. The graphics engine has to have a record of the color and transparency of each individual pixel.
My understanding is that when one specifies "CompressTexture true" to Celestia, it keeps the image in main memory in its compressed format, but then expands it on-the-fly as necessary when it has to be loaded into the graphics card. Celestia uses less main memory, but runs more slowly.
Does this make sense?
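Selden's polar-bear example can be demonstrated directly. PNG's compression is the same lossless deflate scheme that the zlib library exposes, so a flat single-color buffer packs down to almost nothing on disk, yet must be restored to full size before a graphics card can use it. A small sketch:

Code:
    import zlib

    raw = bytes(2 * 2**20)              # 2MB of one flat "color"
    packed = zlib.compress(raw)         # tiny on disk
    restored = zlib.decompress(packed)  # full 2MB again before upload

    print(len(packed))                  # a few KB
    print(len(restored) == len(raw))    # True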
Posted: 16.03.2004, 22:29
by Don. Edwards
Selden,
If we go back to the earliest posts on the subject of .dds versus JPEG or PNG, I think you will find that Celestia does compress the textures on the fly and always has. Chris would be the best to ask, but if you compress an image in Photoshop, say from a TIFF that weighs in at 100MB, and save it in PNG format at 35MB, that is it: the texture is 35MB from now on. If this weren't the case, a 64MB video card couldn't load that texture, because according to your statement it is then decompressed back to a 100MB file. This is totally backward from the whole way a GeForce-series card works with Celestia. The video card compresses that 35MB texture on the fly into a DXT-compliant compressed texture. That is the whole reason NVidia adopted and licensed S3's texture compression technology. Another example: an average 4k texture in PNG format weighs in at 24MB. Now add a 4k PNG specmap at 1MB, a 4k normalmap at 4MB, and a 4k cloudmap at 20MB. That is a total of 73MB; if that were loaded into the memory of a 64MB video card, it would crash. These textures are compressed on the fly, before they get into the video card's memory, using the S3 DXT compression format. That is how it is done, and that is why using .dds textures is faster: the video card does not have to compress the graphics because they are already pre-compressed. By your way of thinking, our video cards would already need over 512MB of VRAM just to load a 16k PNG and its associated textures.
It is true that the PNG format is a compressed format, but once the compression has been done, it doesn't get undone. I just checked this in Photoshop: I loaded a large TIFF file and saved it as a PNG, which took the file down to 24MB. If you reopen that file in Photoshop, it doesn't change back to 85MB. But if you do this with a 24MB JPEG, it does decompress to a larger size.
So, to clarify:
Any graphics file being loaded into your video card's memory is being compressed on the fly using the S3 DXT compression format. The only exception is a true .dds file, as it is pre-compressed to the S3 DXT format to decrease load times and to use your video card's memory more efficiently. I am sure there are still good resources around the web where you can read and verify this. I am simply trying to impress on you that your video card is not decompressing the JPEG or PNG file but is in reality compressing it down, using the technology that NVidia and ATI licensed from S3.
Also, by using the line switch "Compress Texture True" you are telling Celestia to force the use of DXT compression on that texture. This was useful in the early days of Celestia, or for a video card with a limited amount of VRAM. It doesn't really get used much anymore; the use of .dds textures has nullified the need for the most part.
Don. Edwards
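Whichever account of the load path turns out to be right, the S3TC arithmetic both posters rely on is straightforward: DXT1 packs each 4x4 pixel block into 8 bytes, DXT3/DXT5 into 16 bytes. A sketch of the sizes in question:

Code:
    def dxt_mb(width, height, block_bytes):
        return (width // 4) * (height // 4) * block_bytes / 2**20

    print(dxt_mb(4096, 2048, 8))    # 4.0 MB as DXT1 (vs 32 MB raw RGBA)
    print(dxt_mb(4096, 2048, 16))   # 8.0 MB as DXT5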
Posted: 16.03.2004, 22:41
by selden
Don,
Thanks for the clarification.
The problem is that I've had reports from people with smaller, older cards that they *cannot* use the larger textures. These smaller cards are the ones that predate the DDS standard. From what you wrote, this suggests to me that while those of us with cards made after DDS came into use can use lots of big textures, people with older cards are shafted twice over: they don't have much memory, and what they have isn't used efficiently.
Is that right?
Posted: 16.03.2004, 22:58
by Don. Edwards
Selden,
You hit the nail on the head. Any NVidia video card that predates the original GeForce cards can't handle the S3 DXT compression spec.
Of course, those old S3 Savage cards should be able to do it at some level, but their 3D video quality was never very good. I don't know when or with what generation of ATI cards the S3 DXT spec came into use, but I guess we could find out if we really wanted to. I suspect it came in with the Radeon 8500 series cards, but I believe it was limited to use with DirectX only. S3 originally wrote the spec for DirectX 8, but as far as I can tell we NVidia users got lucky, because NVidia was able to work it into OpenGL as well. Of course, the list of video card chipset makers, and the cards they made, that do not support the S3 DXT spec is a long one.
I have no idea how the Matrox Parhelia works, so that is one I can't answer. I used to be a real loyal Matrox user, but as we know they fell way behind the competition, and even with the release of the Parhelia they never made it back.
It might be a good idea to try to compile a list of video cards that work with Celestia and what features they can and can't support. Again, the list would be very long, but I really think it would be a great asset to the Celestia community as a whole. Well, at least I think so.
Don. Edwards
Posted: 16.03.2004, 23:37
by maxim
If the textures are always DXT-compressed before being loaded into video memory, why do people complain about the quality loss of DDS compared with PNG and JPEG? Is the hardware compression algorithm producing better results than the software algorithm? Shouldn't the chain TIFF->DDS be a better way than TIFF->JPG->DXT?
I'm puzzled
maxim
Posted: 17.03.2004, 06:59
by Don. Edwards
Hey Maxim,
I know, it is a confusing mess, isn't it? I am sure we can find this info out if we want to.
Here is how I think things are working. Small textures, say 2k or lower, may not get much in the way of compression, simply because they are smaller. But as texture size increases (and it does so exponentially, by the way), the video card starts compressing textures as it sees fit. A good example was when we started to play with 4k cloudmaps. If your card couldn't handle the 4k cloudmap, it was resized on the fly to 2k. Of course, if you were using a 4k .dds cloudmap, the card could just throw out the 4k mipmap and use the 2k mipmap instead. But a 4k .png cloudmap would have to be resized on the fly.
I also think this is why many of us are really not seeing a big difference in image quality between, say, a 16k .jpg and a 16k .dds file. If both were created and saved right, with the highest quality settings, there will not be much of a difference, except of course for each format's own telltale traits in the image, such as minor distortions or blockiness. I read somewhere here in the forum that a normalmap made using the DXT1 format would show more blockiness than one using the DXT3 or DXT5 format. I have tried both, and I can only see minor differences, at smaller texture sizes. The larger the texture gets, the less this matters.
Now for the big question. If our cards are indeed compressing our .png and .jpg textures, is the output better than what we get from the software conversion to .dds textures? I guess it depends on how the card is doing the compression. If it is built into the hardware, then it very well could give a better result than the likes of NVDXT.exe. But if it is done in software at the driver level, then I think we may need to go back and do some serious research. Because if the compression is being done at the driver level, there is simply no need to use .png or high-quality .jpg files any longer. This would allow a massive shift to using the .dds format exclusively, except for those with low-end cards. I think this may be an issue to take up with Chris: he wrote the program, so he should be able to tell us what it is doing in the background while it is loading textures into the video card's VRAM.
Hmmm, sounds like it is time to get Chris into this conversation.
Don. Edwards
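Don's mipmap remark can be made concrete. A .dds file may carry its whole mipmap chain, so a card that rejects the 4k level can simply use the 2k one, while a PNG holds only the top level and has to be resized on load. A sketch of the chain for a 4k map:

Code:
    def mip_chain(width, height):
        sizes = [(width, height)]
        while width > 1 or height > 1:
            width, height = max(1, width // 2), max(1, height // 2)
            sizes.append((width, height))
        return sizes

    print(mip_chain(4096, 2048))
    # [(4096, 2048), (2048, 1024), (1024, 512), ... (2, 1), (1, 1)]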
Posted: 17.03.2004, 18:04
by maxim
Hi Don,
I've noticed that you started the other thread to clarify the question. Let's see what others will say.
I recognize things are not very straightforward, but on some points I have a slightly different viewpoint. As for hardware/software tradeoffs, I've learned that hardware stands for (very) high processing speed, while software stands for currency and improved algorithms. So software should always produce (in a time-independent view) better results than hardware. Of course, this is of no help if time is the crucial point. On the other hand, the NVIDIA tool offers so many different parameter settings for texture compression that improper use may cause much worse effects than leaving things to the built-in procedures.
As for a compression threshold (starting at 2k or whatever), I can't see the advantage of such behavior. As video RAM is always a precious resource, it would make sense to always compress every texture that comes in. Well, of course I don't know, so it's just my opinion.
maxim
Posted: 18.03.2004, 10:35
by Redfish
Hey, I wanted to add a high-res Holland map, but I couldn't get the tiles right. Any tips on this? I found the tiles with the Excel file calculator, but a tile boundary slices through the Netherlands, so that makes it hard.
What is your preferred program, etc.?
Posted: 19.03.2004, 22:48
by jim
Redfish wrote: Hey, I wanted to add a high-res Holland map, but I couldn't get the tiles right. Any tips on this? I found the tiles with the Excel file calculator, but a tile boundary slices through the Netherlands, so that makes it hard.
What is your preferred program, etc.?
Hi Redfish,
In such a case I use the level5 tiles of the Blue Marble VT and resize them to the needed level. Then I use the deformation tool in Photoshop to fit the new texture.
A bit short, but I hope it helps.
Jens
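Jens's first step can be scripted as well. A sketch with PIL; the tile names and indices are hypothetical, and it assumes 512-pixel tiles two levels apart (so one level5 tile covers 4x4 level7 tiles):

Code:
    from PIL import Image

    TILE = 512
    tile5 = Image.open("level5/tx_33_10.png")                # coarse source tile
    big = tile5.resize((TILE * 4, TILE * 4), Image.LANCZOS)  # level5 -> level7 scale
    # level7 indices start at (33*4, 10*4); save the top-left piece
    # as a base layer to hand-edit over in Photoshop:
    big.crop((0, 0, TILE, TILE)).save("tx_132_40_base.png")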
Posted: 20.03.2004, 00:06
by maxim
What is the Photoshop deformation tool?
maxim
Posted: 20.03.2004, 19:57
by jim
maxim wrote:What is the photoshop deformation tool?
Now I can only say where you find this command in the German Photoshop 6: "Bearbeiten" -> "Transformieren" -> "Verzerren" (Edit -> Transform -> Distort).
But it works only if you select the image or a part of the image.
Here is a shot:
Bye Jens
Posted: 20.03.2004, 20:02
by maxim
OK, yes, I knew about it before.
It's only because I'm desperately searching for a nonlinear (curved or freeform) deformation tool. I thought I had overlooked something.
maxim
Posted: 20.03.2004, 20:22
by selden
Maxim,
Have you investigated "morphing"?
You specify locations on the initial image which are to be migrated to locations that you specify in the final image. Other areas of the image are distorted proportionately. There are quite a few programs available to do this.
Posted: 20.03.2004, 23:32
by maxim
Yes, proportionately, but linearly. My distortions are unfortunately nonlinear. What I need is something like a rubber plane/area that I can take by the edges and bend and stretch.
Well, I will find something eventually.
maxim
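One possible lead for maxim's search, for what it's worth: PIL's mesh transform does piecewise-quadrilateral warping, which can approximate a freeform "rubber sheet" if the mesh is fine enough. Every destination box is filled from an arbitrary source quadrilateral; all file names and coordinates below are made up:

Code:
    from PIL import Image

    im = Image.open("holland_patch.png")   # hypothetical input
    w, h = im.size

    # one (bbox, quad) pair per patch; quad lists the NW, SW, SE, NE
    # source corners that feed each destination rectangle
    mesh = [
        ((0, 0, w // 2, h), (0, 0, 0, h, w // 2 + 20, h, w // 2 - 10, 0)),
        ((w // 2, 0, w, h), (w // 2 - 10, 0, w // 2 + 20, h, w, h, w, 0)),
    ]
    im.transform((w, h), Image.MESH, mesh, Image.BILINEAR).save("warped.png")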