selden wrote: Of course, I can't get it to fail here.
My guess, from my very vague understanding of such things, is that it might be related to the arithmetic used by the hardware when doing clipping. The card in my new computer at work (a Quadro FX 550, a low-end professional card, apparently based on the same chipset design as Nvidia's GeForce 7000 consumer series) uses 32-bit floating point arithmetic. Apparently most older consumer cards use either integers or 16-bit floating point.
I fear Chris is going to have to review some of the algorithms currently used by Celestia in order to make them work better on chipsets with limited arithmetic units. That's going to be hard, especially since I have the impression he doesn't actually have any of those kinds of cards.
Every graphics chip that does transform and lighting (every single Radeon or GeForce, and any other consumer graphics chip manufactured today) uses 32-bit floating point arithmetic for vertex processing and clipping. The precision of the pixel processing may be lower on older hardware, but that's almost certainly unrelated to these orbit problems.
Different graphics hardware makers do use very different techniques for clipping, and I think that's what's causing the problems. I'll have to do some work so that Celestia doesn't push so hard against the precision limits of clipping. The constant fight with numerical precision limitations is a familiar story in Celestia development: drawing the solar system from an arbitrary point of view is a nightmare case for numerical precision, and it's exacerbated by the fact that there's more than one set of precision limitations to contend with.
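To give a feel for how tight 32-bit floats are at solar-system scales, here's a quick standalone sketch (not Celestia code, and the metre-based unit choice is just an assumption for illustration) that computes the spacing between adjacent 32-bit float values near Earth's orbital radius:

```python
import struct

def float32_ulp(x: float) -> float:
    """Spacing between adjacent 32-bit float values near x."""
    # Round x to the nearest representable 32-bit float.
    f = struct.unpack('f', struct.pack('f', x))[0]
    # Reinterpret its bits as an integer and step to the next representable value.
    bits = struct.unpack('I', struct.pack('f', f))[0]
    nxt = struct.unpack('f', struct.pack('I', bits + 1))[0]
    return nxt - f

earth_orbit_m = 1.496e11  # ~1 AU in metres (illustrative unit choice)
print(float32_ulp(earth_orbit_m))  # about 16384 m, i.e. ~16 km between representable positions
```

In other words, a single-precision coordinate a full astronomical unit from the origin can only land on points roughly 16 km apart, which is why vertex positions, depth values, and clip coordinates all need careful handling when the camera can be anywhere in the solar system.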
It'll take some more work, but I've got a lot of confidence in the new approaches that 1.5.0 takes to depth sorting and orbit rendering (these are related, by the way, as orbits are sort of the pathological case for depth sorting.)
--Chris