
Limit of magnification

Posted: 17.02.2008, 10:03
by lidocorc
A wonderful feature of Celestia is showing the motion of binary stars, or the motion of nearby stars due to parallax. To observe these motions from a position within the solar system, the screen needs to be set to very high magnification (i.e. an extremely narrow field of view). Is it true that Celestia's minimum FOV is limited to 3.6 arcseconds (= a magnification of 37419x on a 17" screen)? Or is that a constraint of my machine?
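(Presumably that magnification figure is measured against a reference field of view of roughly 37.4°, since 37.4° × 3600 / 3.6 ≈ 37400, close to the quoted value; the exact reference FOV would depend on the assumed screen size and viewing distance.)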

At the highest magnification, the binary star components wobble along their orbits. Is this due to truncation errors in the floating point computations? If so, I would understand why allowing even higher magnifications would make no sense.

By the way: is the kernel code for the computation of star positions implemented with single or with double precision floating point variables? I ask even though it's well known that the FPUs of Intel and AMD processors convert every floating point value to long double before adding, multiplying and so on.
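To illustrate what I mean, here is a little standalone test (my own sketch, nothing to do with Celestia's actual code) of how coarse single precision is at stellar distances:

#include <cmath>
#include <cstdio>

int main()
{
    const double lightYearKm = 9.46e12;  // one light-year in kilometers

    float  f = static_cast<float>(lightYearKm);
    double d = lightYearKm;

    // Spacing between adjacent representable values at this magnitude:
    std::printf("float resolution at 1 ly:  %g km\n",
                std::nextafterf(f, INFINITY) - f);
    std::printf("double resolution at 1 ly: %g km\n",
                std::nextafter(d, INFINITY) - d);
}

With IEEE 754 arithmetic this prints about 1e+06 km for float and about 0.002 km for double. In other words, at single precision a position one light-year from the origin can only be resolved to about a million kilometers, which is just the kind of jitter I'm describing.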

Re: Limit of magnification

Posted: 18.02.2008, 17:16
by chris
lidocorc wrote:A wonderful feature of Celestia is showing the motion of binary stars, or the motion of nearby stars due to parallax. To observe these motions from a position within the solar system, the screen needs to be set to very high magnification (i.e. an extremely narrow field of view). Is it true that Celestia's minimum FOV is limited to 3.6 arcseconds (= a magnification of 37419x on a 17" screen)? Or is that a constraint of my machine?

No, it's not specific to your machine; the same constraint applies to anyone running Celestia.

At the highest magnification, the binary star components wobble along their orbits. Is this due to truncation errors in the floating point computations? If so, I would understand why allowing even higher magnifications would make no sense.

By the way: is the kernel code for the computation of star positions implemented with single or with double precision floating point variables? I ask even though it's well known that the FPUs of Intel and AMD processors convert every floating point value to long double before adding, multiplying and so on.


The problem is that to draw anything in 3D quickly, you need to use the graphics processor. GPUs are generally restricted to single precision arithmetic, although we're starting to see some double precision support now. There's been a lot of work done in Celestia to get around the limitations of single precision arithmetic, but as you've discovered, the final transformation to camera coordinates still occurs on the graphics processor at single precision. The only workaround that I've thought of is to switch to doing all vertex transformations on the CPU at double precision when the magnification is extremely high. It would be a lot of work.
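Roughly, the idea would look like this (a hypothetical sketch, not the actual Celestia renderer): subtract the camera position from each object position in double precision on the CPU, so that only small camera-relative offsets ever get truncated to the floats the GPU works with.

#include <cstdio>

struct Vec3d { double x, y, z; };
struct Vec3f { float  x, y, z; };

// Subtract the camera position in double precision first, so only a
// small camera-relative offset is truncated to single precision.
Vec3f toCameraRelative(const Vec3d& p, const Vec3d& cam)
{
    return Vec3f{ static_cast<float>(p.x - cam.x),
                  static_cast<float>(p.y - cam.y),
                  static_cast<float>(p.z - cam.z) };
}

int main()
{
    Vec3d camera{ 4.067e13, 0.0, 0.0 };          // ~4.3 ly out, in km
    Vec3d star  { camera.x + 1.0e5, 0.0, 0.0 };  // 100,000 km from camera

    // Naive: truncate to float first, then subtract. The offset is far
    // below float resolution at this magnitude and vanishes entirely.
    float naive = static_cast<float>(star.x) - static_cast<float>(camera.x);
    std::printf("float-first:  %g km\n", naive);

    // Subtracting in double precision first preserves the offset.
    std::printf("double-first: %g km\n", toCameraRelative(star, camera).x);
}

The subtraction itself is cheap; the expensive part is routing every vertex transformation through a path like this instead of the GPU's single precision pipeline.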

--Chris

Posted: 18.02.2008, 22:02
by lidocorc
chris wrote:There's been a lot of work done in Celestia to get around the limitations of single precision arithmetic


All that work is greatly appreciated!

lidocorc