High resolution Celestia

Discussion forum for Celestia developers; topics may only be started by members of the developers group, but anyone can post replies.
Topic author
chris
Site Admin
Posts: 4211
Joined: 28.01.2002
With us: 22 years 9 months
Location: Seattle, Washington, USA

High resolution Celestia

Post #1by chris » 24.08.2006, 18:53

A proposal for a new high resolution Celestia package . . .

I'm assuming a system with at least 128 MB of video memory and support for DXT compressed textures (which every graphics card with that much memory will have).

Memory footprints of DXT1 textures (2:1 equirectangular) with full mipmap chains:
8k - 21.3 MB
4k - 5.33 MB
2k - 1.33 MB
1k - 0.33 MB
(Uncompressed textures will be eight times the size of the compressed versions.)
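
These figures can be reproduced with a small sketch (not from the original post). It assumes 2:1 equirectangular maps, DXT1's 8 bytes per 4x4 pixel block (0.5 bytes/pixel), and a full mip chain, which adds roughly one third to the base level:

```python
# Estimate GPU memory for a DXT1 texture with a full mipmap chain.
# Assumption: a 2:1 equirectangular map (width = 2 * height), with
# DXT1 packing each 4x4 pixel block into 8 bytes.

def dxt1_footprint_mb(width_px):
    """Return the mipmapped DXT1 footprint in MB for a 2:1 map."""
    w, h = width_px, width_px // 2
    total = 0
    while w >= 4 and h >= 4:               # DXT1 works on 4x4 blocks
        total += (w // 4) * (h // 4) * 8   # 8 bytes per block
        w //= 2
        h //= 2
    return total / (1024 * 1024)

for width in (8192, 4096, 2048, 1024):
    print(f"{width // 1024}k: {dxt1_footprint_mb(width):.2f} MB")
```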

At the moment, normal maps have to be stored uncompressed in order to get reasonable quality. I have some ideas for ways to get 2:1 and 4:1 compression and retain good quality, but they will require OpenGL 2.0 support. Perhaps the hi res Celestia should assume this capability, at least for normal maps?

Here are the sizes I propose for each texture in the package:
Mercury - 4k
Venus - 2k (or is there better data available?)
Earth - 8k base, 8k specular, 2k clouds (too many ATI cards limited to 2k textures)
Moon - 4k base, 2k normal (4k if we can get better data)
Mars - 4k base, 4k normal, clouds?
Phobos - 2k
Deimos - 2k
Jupiter - 4k
Io, Europa, Ganymede, and Callisto - 4k
Saturn - 1k (because there's not a lot of detail)
Mimas
Enceladus
Tethys
Dione
Rhea
Titan
Hyperion
Iapetus
Phoebe
Uranus - 1k (or 512; there's not much detail on Uranus)
Miranda, Ariel, Umbriel, Titania, Oberon - 1k
Neptune - 1k
Triton - 2k (possibly 4k?)
Pluto and Charon - whatever is in the standard package

Small bodies:
Eros - higher res mesh (360) and 2k normal map
best available maps for all asteroids that have been imaged by spacecraft

Space missions:
Hubble Space Telescope
Cassini
Galileo
New Horizons
Voyagers 1 + 2
Pioneer 10 + 11
others? Deep Impact, NEAR, Hayabusa, Mars Express, ... ?

This is just a starting point; other people know much more than me about what data is available for different solar system bodies. Please contribute your suggestions.

--Chris

rra
Posts: 171
Joined: 17.07.2004
With us: 20 years 4 months
Location: The Netherlands

Post #2by rra » 24.08.2006, 19:22

sounds great Chris,

any idea how big the total package will be ?

René

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 7 months
Location: Hamburg, Germany

Post #3by t00fri » 24.08.2006, 19:57

Just noticed:

Chris,

Why do you propose to separate the spec and base textures for Earth? Putting the spec into the alpha channel (using DXT3) is a more economical alternative for the graphics card.

Bye Fridger

PS: doing such a package WELL is plenty of work. So what about assuming OpenGL 2.0 with the forthcoming Windows Vista Desktop? I read that MS is reconsidering the limitation to OpenGL 1.4 BUT is this for SURE??

Topic author
chris
Site Admin
Posts: 4211
Joined: 28.01.2002
With us: 22 years 9 months
Location: Seattle, Washington, USA

Post #4by chris » 24.08.2006, 20:10

t00fri wrote:Just noticed:

Chris,

Why do you propose to separate the spec and base textures for Earth? Putting the spec into the alpha channel (using DXT3) is a more economical alternative for the graphics card.

With compressed textures, the space used is the same: DXT1 compression is 8:1 vs 4:1 for DXT3. So separate DXT1 textures for specular and alpha require only as much space as a single DXT3. However, for textures that aren't compressed you are correct that placing the specular mask in the alpha channel uses less memory than separate textures.
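
Chris's size argument can be sketched numerically (assumed rates, not from the post: DXT1 stores a 4x4 pixel block in 8 bytes, DXT3 in 16 bytes, and uncompressed RGBA is 4 bytes/pixel):

```python
# Compare the memory cost of separate vs combined specular textures,
# compressed and uncompressed.

PIXELS = 8192 * 4096                  # one 8k equirectangular map

dxt1 = PIXELS // 2                    # DXT1: 0.5 bytes/pixel
dxt3 = PIXELS                         # DXT3: 1 byte/pixel

# Compressed: separate DXT1 base + DXT1 specular mask costs the same
# as one DXT3 base with the mask in the alpha channel.
separate_compressed = 2 * dxt1
combined_compressed = dxt3

# Uncompressed: an RGBA base carrying the mask in alpha beats an RGBA
# base plus a separate 8-bit mask texture.
combined_rgba = PIXELS * 4
separate_rgba = PIXELS * 4 + PIXELS * 1

print(separate_compressed == combined_compressed)  # → True
print(combined_rgba < separate_rgba)               # → True
```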

That said, I'm fine with a combined specular / diffuse map. Whoever wants to assume the task of maintaining this package can make the decision.

PS: doing such a package WELL is plenty of work. So what about assuming OpenGL 2.0 with the forthcoming Windows Vista Desktop? I read that MS is reconsidering the limitation to OpenGL 1.4 BUT is this for SURE??


MS has capitulated to the demands of users and software and hardware vendors. OpenGL 2.0 will work just fine on Vista.

--Chris

ElChristou
Developer
Posts: 3776
Joined: 04.02.2005
With us: 19 years 9 months

Post #5by ElChristou » 24.08.2006, 20:44

Any limit for the size of models for space missions? Cmod or 3DS?

Cham M
Posts: 4324
Joined: 14.01.2004
Age: 60
With us: 20 years 10 months
Location: Montreal

Post #6by Cham » 24.08.2006, 20:47

I agree with that High Res Celestia proposal. However, I would include more space missions and many more probe models, plus many more comets and asteroids (trans-Neptunians, Trojans, NEOs, etc.).
"Well! I've often seen a cat without a grin", thought Alice; "but a grin without a cat! It's the most curious thing I ever saw in all my life!"

Topic author
chris
Site Admin
Posts: 4211
Joined: 28.01.2002
With us: 22 years 9 months
Location: Seattle, Washington, USA

Post #7by chris » 24.08.2006, 20:48

ElChristou wrote:Any limit for the size of models for space missions? Cmod or 3DS?


Cmod definitely. I don't have a hard size limit in mind . . . but offhand, I'd say that 50,000 triangles or less is reasonable. Is that too limiting?

--Chris

Cham M
Posts: 4324
Joined: 14.01.2004
Age: 60
With us: 20 years 10 months
Location: Montreal

Post #8by Cham » 24.08.2006, 20:52

I think it's preferable to measure the size of models by the CMOD file size on disk, not in triangles.
"Well! I've often seen a cat without a grin", thought Alice; "but a grin without a cat! It's the most curious thing I ever saw in all my life!"

Starshipwright
Posts: 78
Joined: 08.08.2006
With us: 18 years 3 months

Post #9by Starshipwright » 24.08.2006, 22:29

Except that it is the number of triangles, I believe, that determines the computer resources needed to process the object. Can someone verify this?

selden
Developer
Posts: 10192
Joined: 04.09.2002
With us: 22 years 2 months
Location: NY, USA

Post #10by selden » 24.08.2006, 23:37

I think the appropriate model size would depend on how many objects one wants to display in a frame and the performance of the graphics adaptor for which the high resolution package is intended. I suspect a goal of an average of 50K vertices per model is reasonable.

If the target graphics processor can process about 30x10^7 vertices/second (a number often quoted for an Nvidia 6600), then that's about 30x10^5 vertices/frame (assuming about 100fps). If each object has 5x10^4 vertices (as Chris suggests), that would be about 6x10^1 = 60 objects/frame at 100fps, which sounds generous.
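
Selden's budget, spelled out (all figures are the post's assumptions, not measurements):

```python
# Back-of-envelope model budget: how many 50K-vertex models fit in a
# frame at a given vertex rate and frame rate.

vertex_rate = 30e7       # vertices/second, the number quoted for a 6600
fps = 100                # assumed frame rate
verts_per_model = 5e4    # Chris's suggested per-model budget

verts_per_frame = vertex_rate / fps            # 3.0e6 vertices/frame
models_per_frame = verts_per_frame / verts_per_model

print(models_per_frame)  # → 60.0
```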

Unfortunately, my impression is that the overhead time for an individual model in Celestia is substantial, so that one gets much better performance for a small number of models than for a large number of models with the same total number of vertices.

The system I'm using right now has a 5200, which is sometimes claimed to have about 1/3 the 6600's performance in vertices/second (12.5x10^7). The numbers above suggest it should be able to run full speed with 20 50K models. However, it's much slower. It displays 12 models, each with about 1x10^4 vertices, at only about 4 fps with the Earth behind them, but with Stars, Galaxies and Nebulae disabled.
(i.e. my 3D maps of Philmont)

System:
512MB 2.4GHz P4, WinXP Pro SP2
128MB FX5200, ForceWare v91.31
Celestia from CVS
Selden

ElChristou
Developer
Posts: 3776
Joined: 04.02.2005
With us: 19 years 9 months

Post #11by ElChristou » 25.08.2006, 13:52

chris wrote:...but offhand, I'd say that 50,000 triangles or less is reasonable. Is that too limiting?...


Well indeed, sounds a bit limiting :oops:

Let's take my Vostok as an example. It's not really high-res (you can still see hard edges), but it has lots of detail to give a good idea of what this capsule was. It's a 6.5 MB 3ds, an 8.8 MB cmod (binary), and has 332,107 polygons(!), but it still runs almost fine (:wink:) on my poor config (32 MB)... (the zip is 5.5 MB)

LB7, Pioneer and Voyager (which are the most documented models I've done so far) all exceed 50,000 polygons...

If a new line of models must be created, I suppose we could go to around 100,000 polygons for single models, but even then optimization becomes a must...

Cham M
Posts: 4324
Joined: 14.01.2004
Age: 60
With us: 20 years 10 months
Location: Montreal

Post #12by Cham » 25.08.2006, 14:13

As an example, my Hubble Space Telescope model (in CVS now) has about 20,000 triangles. The 3ds file is 468 KB while the CMOD version is 676 KB.
"Well! I've often seen a cat without a grin", thought Alice; "but a grin without a cat! It's the most curious thing I ever saw in all my life!"

Topic author
chris
Site Admin
Posts: 4211
Joined: 28.01.2002
With us: 22 years 9 months
Location: Seattle, Washington, USA

Post #13by chris » 25.08.2006, 16:55

selden wrote:I think the appropriate model size would depend on how many objects one wants to display in a frame and the performance of the graphics adaptor for which the high resolution package is intended. I suspect a goal of an average of 50K vertices per model is reasonable.

If the target graphics processor can process about 30x10^7 vertices/second (a number often quoted for an Nvidia 6600), then that's about 30x10^5 vertices/frame (assuming about 100fps). If each object has 5x10^4 vertices (as Chris suggests), that would be about 6x10^1 = 60 objects/frame at 100fps, which sounds generous.

Unfortunately, my impression is that the overhead time for an individual model in Celestia is substantial, so that one gets much better performance for a small number of models than for a large number of models with the same total number of vertices.

For any 3D graphics engine it's true that using fewer but larger batches will result in better performance. In the Celestia engine, there's a greater per-object processing overhead than in other systems. I've got some ideas for reducing this, but there's a lot more to be done than in a gaming engine: coordinates have to be reduced to camera-centered single precision, tests for eclipse shadows must be performed, light directions calculated, and quite a bit more.

The system I'm using right now has a 5200, which is sometimes claimed to have about 1/3 the 6600's performance in vertices/second (12.5x10^7). The numbers above suggest it should be able to run full speed with 20 50K models. However, it's much slower. It displays 12 models, each with about 1x10^4 vertices, at only about 4 fps with the Earth behind them, but with Stars, Galaxies and Nebulae disabled.
(i.e. my 3D maps of Philmont)


Here are the theoretical vertex rates for a few cards:
5200 - 62.5M (2VS, 250MHz)
5900 - 160.0M (3VS, 400MHz)
6600 - 225M (3VS, 300MHz)
6600 GT - 375M (3VS, 500MHz)
7600 GT - 700M (5VS, 560MHz)
7900 GTX - 1300M (8VS, 650MHz)

These rates are only achieved in synthetic benchmarks. For one thing, they only apply to unlit and untextured vertices, i.e. a vertex program that does nothing but a 4x4 matrix multiplication. Lighting, effects like atmospheric scattering, and processing texture coordinates require much more complex vertex shaders that will result in vertex throughput well below the theoretical rate. Also, vertex rate does not equal triangle rate. In the worst case, three vertices must be processed per triangle; however, mesh optimization and the hardware vertex cache keep this worst case from occurring often.
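
The vertex-reuse point can be illustrated with a regular grid mesh (a sketch, not Celestia code): drawn as an optimized indexed triangle list, the post-transform vertex cache lets each shared vertex be transformed roughly once, so far fewer than 3 vertices are processed per triangle.

```python
# Vertex-to-triangle ratio for a regular grid of vertices (e.g. one of
# Selden's terrain tiles), best case vs the 3-per-triangle worst case.

w = h = 128                          # vertices per side
vertices = w * h                     # 16,384 unique vertices
triangles = 2 * (w - 1) * (h - 1)    # 32,258 triangles (2 per grid cell)

worst_case = 3.0                     # no reuse: 3 transforms per triangle
best_case = vertices / triangles     # perfect reuse: ~0.51 per triangle

print(vertices, triangles, round(best_case, 2))
```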

That said, the performance you're reporting seems quite low. I'll have to take a look at the Philmont models to see if they contain a lot of extraneous materials that could be optimized away (though I doubt it.) I'm also going to profile Celestia while looking at the Philmont models to see where the CPU cycles are being burned.

--Chris

selden
Developer
Posts: 10192
Joined: 04.09.2002
With us: 22 years 2 months
Location: NY, USA

Post #14by selden » 25.08.2006, 17:27

Chris,

Thanks for investigating!

To set the viewpoint and render options to do the timing, I used the second Cel: URL in the HTML page that accompanies the Addon. That URL is titled "Philmont: East is up, camps are white spots". The screen resolution is 1600x1200.

p.s. The Addon includes an SSC which defines ~120 camp positions using models (not Location declarations). The model being used for each of them is a single CMOD point. Deleting all of those SSC declarations except for the one titled "Base Camp", which is used by the URL, made no significant change in the fps rate.

p.p.s.

Oops.

My memory of the total number of vertices was wrong. The models created by my Fortran program were 128x128 vertices, which is where I got my number of ~10K vertices. Those models were then massaged by cmodfix, however. The resulting models each contain 32K vertices!
Selden

Topic author
chris
Site Admin
Posts: 4211
Joined: 28.01.2002
With us: 22 years 9 months
Location: Seattle, Washington, USA

Post #15by chris » 25.08.2006, 22:30

selden wrote:Chris,

Thanks for investigating!

To set the viewpoint and render options to do the timing, I used the second Cel: URL in the HTML page that accompanies the Addon. That URL is titled "Philmont: East is up, camps are white spots". The screen resolution is 1600x1200.

You might be somewhat pixel-rate limited at that resolution on a 5200--not as bad as 4 fps, but less than 60 fps.

p.s. The Addon includes an SSC which defines ~120 camp positions using models (not Location declarations). The model being used for each of them is a single CMOD point. Deleting all of those SSC declarations except for the one titled "Base Camp", which is used by the URL, made no significant change in the fps rate.

I'll omit the camp positions while trying to understand why the Philmont terrain models are being rendered so slowly.

My memory of the total number of vertices was wrong. The models created by my Fortran program were 128x128 vertices, which is where I got my number of ~10K vertices. Those models were then massaged by cmodfix, however. The resulting models each contain 32K vertices!


128x128 gives 16k vertices, though that still doesn't account for the 4 fps. 12 models with 32k vertices should be ok too, though I can't figure out why cmodfix would double your vertex count. Do you recall what smoothing angle you used? Vertices do get duplicated at hard edges, but I don't think your terrain models had any. Also, what render path were you using? Was there any difference in performance among the different paths?

--Chris

selden
Developer
Posts: 10192
Joined: 04.09.2002
With us: 22 years 2 months
Location: NY, USA

Post #16by selden » 26.08.2006, 11:22

Chris,


*sigh* another Oops.

I'm at home now, so I don't have access to the system with the 5200. However, I discovered that I had
AntialiasingSamples 4
enabled in celestia.cfg. I suspect that's the case at work, too.

With it disabled, the framerate at home increases to 30fps.

With it enabled, the frame rate on my home system is a comparably slow 7.5-10fps.

In both cases, all render paths are the same speed. Use of the 2K medres DDS surface textures or 1K lores Jpegs doesn't seem to make any difference, either, whether both sets of textures or only one is available (I tested each with only the one folder in /extras/.)

Both V1.4.1 final and CVS versions of Celestia generated the same frame rates.

I used the Windows binary version of cmodfix available on Shatters. This sequence of commands was used to regenerate the cmod models from the cruder versions output by my Fortran program:

Code:

/cygdrive/c/cvs/Celestia/cmodfix --weld --normals --smooth 60 --uniquify tmp.cmod tmp2.cmod
/cygdrive/c/cvs/Celestia/cmodfix -o tmp2.cmod tmp3.cmod
/cygdrive/c/cvs/Celestia/cmodfix -b tmp3.cmod tmp4.cmod



Config:
1GB, 3.4GHz P4-550, WinXP Pro
128MB GF6600GT 16xPCI-e, ForceWare v84.21
1600x1200, 60Hz
Celestia v1.4.1 final & from CVS
Selden

ElChristou
Developer
Posts: 3776
Joined: 04.02.2005
With us: 19 years 9 months

Post #17by ElChristou » 27.08.2006, 15:00

Guys, apart from the FPS problem, if we are talking here about a HIGH RES package, we must keep in mind that the geometry of certain models cannot be defined within a certain polygon limit; again, the Vostok without the cables and instruments doesn't make any sense... I'm pretty sure that if I removed all this stuff, the model would be less than 30k polygons, but then it could no longer be called high-res...

IMHO, a high-res model must give the right "feeling" of the craft by showing some important points like the articulation system, the instruments, the way the external structure is built (like for LB7), etc... That's a bit difficult with 50k...

selden
Developer
Posts: 10192
Joined: 04.09.2002
With us: 22 years 2 months
Location: NY, USA

Post #18by selden » 27.08.2006, 16:55

I think the best thing to do is to submit catalogs, models and textures for inclusion. Keep them as small as convenient, with Chris' suggested sizes as goals. If they have to go over, well, that's the way it is.

Then someone will get the thankless task of being the editor: deciding just which ones are included in "HighRes for Celestia -- volume 1" and making sure the resulting download (install file?) works correctly.

Then it gets published on some high-performance server somewhere. Since they're supposed to be "official" presumably that should be SourceForge.

And then volume 2 is created, etc.
Selden

Topic author
chris
Site Admin
Posts: 4211
Joined: 28.01.2002
With us: 22 years 9 months
Location: Seattle, Washington, USA

Post #19by chris » 27.08.2006, 17:04

selden wrote:I think the best thing to do is to submit catalogs, models and textures for inclusion. Keep them as small as convenient, with Chris' suggested sizes as goals. If they have to go over, well, that's the way it is.

Then someone will get the thankless task of being the editor: deciding just which ones are included in "HighRes for Celestia -- volume 1" and making sure the resulting download (install file?) works correctly.

Then it gets published on some high-performance server somewhere. Since they're supposed to be "official" presumably that should be SourceForge.

And then volume 2 is created, etc.


My feelings exactly . . . If we have to use more polygons for a particular model to make it look decent, then so be it. I meant the figure of 50k to be a guideline, not a hard and fast limit.

Which spacecraft do we think need to be included? I mentioned a few obvious ones: Cassini, Galileo, Voyager 1 and 2, Pioneers 10 and 11. What else? Apollo 11 might be neat if we could get trajectories for it.

--Chris

Cham M
Posts: 4324
Joined: 14.01.2004
Age: 60
With us: 20 years 10 months
Location: Montreal

Post #20by Cham » 27.08.2006, 17:15

chris wrote:Which spacecraft do we think need to be included? I mentioned a few obvious ones: Cassini, Galileo, Voyager 1 and 2, Pioneers 10 and 11. What else? Apollo 11 might be neat if we could get trajectories for it.


Personally, I would include ALL the space missions (mostly probes) we already have available for Celestia. Most can be found on the Motherlode (?) or from JACK:

- Apollo
- DeepImpact
- Genesis
- Giotto
- Helios
- Ice
- magellan
- Mariner
- MarsExpress
- Near
- Planet-b
- Sakigake
. . .
etc
. . .

It's very interesting to actually see, in 3D, the extension of humanity into our solar system. So all the available probes should be included in some "HighRes" Celestia.
"Well! I've often seen a cat without a grin", thought Alice; "but a grin without a cat! It's the most curious thing I ever saw in all my life!"

