Hi
I hope somebody will answer my (from your point of view) silly question, but I'm running into trouble in my own little satellite simulation with the floating-point precision of the geometry data.
I need to render the position of a satellite orbiting the Earth, which in turn orbits the Sun. The Sun sits at my zero vector. With 32-bit floating point I can't position the satellite, say, a few kilometers above the Earth's position 149,597,870 km from the Sun, because a float is only 32 bits on my machine, which gives me only about 6–7 significant decimal digits of precision.
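To illustrate the problem, here is a minimal sketch using NumPy's float32 to stand in for a 3D engine's single-precision vectors (the 7 km offset is a made-up example value):

```python
import numpy as np

# One astronomical unit in km, stored as a 32-bit float
# (standing in for the engine's single-precision geometry).
earth_x = np.float32(149_597_870.0)

# Try to place a satellite 7 km beyond Earth along the same axis.
sat_x = earth_x + np.float32(7.0)

# At this magnitude, adjacent float32 values are 16 km apart,
# so the 7 km offset rounds away entirely.
print(np.spacing(earth_x))   # 16.0 — the float32 step size at 1 AU
print(sat_x - earth_x)       # 0.0 — the satellite lands on top of Earth
```

So any detail smaller than the local float32 step size simply vanishes when positions are expressed in Sun-centered single-precision coordinates.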
The problem is that my 3D engine does not offer 64-bit precision for geometry data. By the way, does DirectX or OpenGL offer double-precision vectors?
Apparently Celestia manages this quite well.
So, please tell me, how?
Regards
Ayman
32-bit or 64-bit geometry data?
Celestia uses a mixture of 32-bit, 64-bit, and 128-bit precision, depending on what it's calculating. You'll have to look at the code for details.
The 3D engine should be used to display the results for a particular viewpoint. That's done after the astronomical calculations have been made and can be much lower precision.
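In other words: keep the orbital mechanics in double precision, re-center everything on the viewpoint, and only then hand small camera-relative offsets to the engine in single precision. A minimal sketch of that idea, with NumPy standing in for the engine's math and made-up example coordinates:

```python
import numpy as np

# Positions in km, Sun at the origin, computed in double precision.
earth = np.array([149_597_870.0, 0.0, 0.0], dtype=np.float64)
sat = earth + np.array([7.0, 0.0, 0.0])  # satellite 7 km beyond Earth

camera = earth  # viewpoint near Earth

# Subtract the camera position FIRST, in double precision, then cast
# the small camera-relative offset down to float32 for the renderer.
sat_rel = (sat - camera).astype(np.float32)
print(sat_rel)   # [7. 0. 0.] — the 7 km offset survives

# Casting to float32 BEFORE subtracting destroys the offset:
bad = sat.astype(np.float32) - camera.astype(np.float32)
print(bad)       # [0. 0. 0.] — satellite and camera collapse together
```

The order of operations is the whole trick: the subtraction happens while the numbers are still exact, and the renderer only ever sees small values that single precision handles comfortably.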
Selden
You can pass double-precision vectors to OpenGL (and presumably also to DirectX). However, even if you do pass double-precision data to OpenGL, that precision will probably not survive all the way through the pipeline to the display.
What is likely to happen (and I've seen various bits of code that do this) is that the OpenGL / DirectX implementation will simply convert the input data to single precision before passing it on.
I doubt you'd be able to find any consumer-level hardware that works at anything above single precision (it's only recently that the whole graphics pipeline reached single precision). You might be able to find some software renderers that work in double precision, but that's probably not what you want.
Implementing double precision costs a lot of hardware and speed, so you're not likely to see it soon (especially as there is little demand for it compared to fast single precision for games, etc.).
For what you're doing, it really depends on how you're specifying the satellite's position and how that position gets converted into the final position on the screen. Are you doing this in Celestia, or in something else?
Couldn't you just specify it as a distance above the Earth's surface (which would be a much smaller number)?