About Negative Parallax.

General physics and astronomy discussions not directly related to Celestia
Topic author
Spaceman Spiff
Posts: 420
Joined: 21.02.2002
With us: 22 years 9 months
Location: Darmstadt, Germany.

About Negative Parallax.

Post #1by Spaceman Spiff » 14.08.2007, 20:46

In this topic: Full-time on Celestia! (http://www.celestiaproject.net/forum/viewtopic.php?t=11207&postdays=0&postorder=asc&start=75), someone raised the matter of negative parallaxes. I've put this post here because it's not to do with the original matter of that topic.

This is an important matter in using statistics for scientific analysis in astronomy, biology and even physics. Reporting a negative parallax in a star catalogue is correct, even though it obviously cannot be physically true. It is not correct to, er, correct the parallax to zero or to a positive value just because it was measured as negative. I'll explain why in a moment.

The reason a parallax can turn up negative is simple. Errors can push a star's measured position off in any direction. Over the six months during which we measure the parallax, we expect the star's position to shift from A to B. If the true parallax is about the same size as a typical error, though, the first error may push the position reading to roughly where B is, and the second may push it to roughly where A is. The star then appears to move from B to A instead of from A to B: the parallax comes out the wrong way round, i.e. negative. We can't know what the individual errors were, so we can't subtract them.

Now, you can't eliminate these 'spurious' measurements from a statistical analysis because such filtering will bias any results.
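To make that concrete, here's a minimal Monte Carlo sketch (a toy model with invented numbers: a true parallax of 2 mas and a Gaussian measurement error of the same size, nothing taken from a real catalogue). It shows both points: a fair fraction of the measured parallaxes comes out negative, and 'correcting' the negatives to zero biases the average parallax upwards.

Code: Select all
import random

random.seed(1)

# Toy model (invented numbers): true parallax of 2 mas, Gaussian error of 2 mas.
TRUE_PARALLAX_MAS = 2.0
SIGMA_MAS = 2.0
N = 100_000

measured = [random.gauss(TRUE_PARALLAX_MAS, SIGMA_MAS) for _ in range(N)]

negative_fraction = sum(p < 0 for p in measured) / N
mean_raw = sum(measured) / N
# The 'wrong' fix: force every negative parallax up to zero.
mean_clipped = sum(max(p, 0.0) for p in measured) / N

print(f"measured negative:    {negative_fraction:.1%}")         # roughly 16%
print(f"mean, raw sample:     {mean_raw:.2f} mas (unbiased)")
print(f"mean, clipped sample: {mean_clipped:.2f} mas (biased high)")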

It might be better explained with this curious example. A physics department's nuclear group proudly displays a graph showing the build-up of heavy metal inside workers over their working life, the concentrations being found by radio-isotope measurements of factory workers who volunteered. The duration starts at 0 years at the beginning of work, and the concentration is in parts per million (ppm). The rise is linear. So far, so good. Then I noticed that the first data point was for a concentration of -5 ppm. Yup, minus!
"How?" I asked the head of department.
"Ah well, you see, you have to measure control volunteers who don't work in the factory and subtract their concentrations from the subjects'. Some controls have more cadmium in them naturally than some factory workers on their first day. The difference gives a negative concentration. You can't correct it to zero, because that would bias the slope of the line and give the wrong result."
"Ah!"

It's also illustrated by the infamous Nature paper by Jacques Benveniste 'proving' homeopathy: the negative differences in white blood cell counts between control and subject were corrected to zero, on the grounds that the treated subject cannot react less than the untreated control.
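All of these stories are the same trap. A toy regression (invented slope, scatter and time span, not data from any of the actual studies) shows how forcing the negative control-subtracted concentrations up to zero drags the fitted trend away from the true one:

Code: Select all
import random

random.seed(2)

TRUE_SLOPE = 1.0    # ppm of heavy metal gained per year of work (invented)
NOISE_PPM = 10.0    # scatter left after subtracting the controls (invented)

# 50 volunteers sampled at each whole year of service from 0 to 30.
years = [y for y in range(31) for _ in range(50)]
conc = [TRUE_SLOPE * y + random.gauss(0.0, NOISE_PPM) for y in years]
conc_clipped = [max(c, 0.0) for c in conc]   # negatives 'corrected' to zero

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

print(f"slope from raw data:         {ols_slope(years, conc):.2f} ppm/yr")          # ~1.0
print(f"slope from 'corrected' data: {ols_slope(years, conc_clipped):.2f} ppm/yr")  # pulled low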

Spiff.

Fenerit M
Posts: 1880
Joined: 26.03.2007
Age: 17
With us: 17 years 8 months
Location: Thyrrenian sea

Post #2by Fenerit » 14.08.2007, 21:29

Thanks, Spaceman Spiff, for clarifying the concepts involved here. My question is: what happens if, every time an astronomer looks again at the stars with negative parallax, he finds them still with negative parallax, and so on? What could we learn about those stars beyond what we simply don't know? On the other hand, when and how do these parallaxes get measured properly, with a real physical status, so that one can say: "well, now we know more about those stars, and in particular the parallax is no longer negative"?
Never at rest.
Massimo

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 8 months
Location: Hamburg, Germany

Post #3by t00fri » 14.08.2007, 21:33

Spiff,

sure, I agree. You are describing a rather common effect in experimental measurements: suppose you measure a quantity that, according to its /theoretical/ interpretation, is supposed to be sin(something), and hence theoretically should be <= 1.

Yet, after performing many measurements of this quantity, you will find a certain fraction with values > 1. Clearly you must NOT eliminate those values > 1 from your measured sample, to avoid bias.

The resulting measurement will eventually be published as

sin(something) = 0.86 +- 0.18, say

i.e. ranging between 0.68 and 1.04, thus exceeding 1, within one sigma (68% probability).
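To put numbers on it, here is a little toy simulation (the true value of 0.95 and the 0.10 per-measurement error are invented, not from any real experiment): a sizeable fraction of the measurements lands above 1, and cutting those away drags the average below the true value.

Code: Select all
import random

random.seed(3)

TRUE_VALUE = 0.95   # a quantity that theoretically must be <= 1 (invented)
SIGMA = 0.10        # per-measurement statistical error (invented)
N = 50_000

values = [random.gauss(TRUE_VALUE, SIGMA) for _ in range(N)]

mean_all = sum(values) / N
physical_only = [v for v in values if v <= 1.0]   # the biased 'cleanup'
mean_cut = sum(physical_only) / len(physical_only)

print(f"measurements > 1:        {sum(v > 1 for v in values) / N:.1%}")
print(f"mean keeping everything: {mean_all:.3f}  (close to 0.95)")
print(f"mean after cutting > 1:  {mean_cut:.3f}  (biased low)")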

The main tricky issue is to sort out whether the values > 1 are systematic or statistical in nature. Since the treatment of systematic uncertainties differs substantially from that of statistical errors....

Bye Fridger

Fenerit M
Posts: 1880
Joined: 26.03.2007
Age: 17
With us: 17 years 8 months
Location: Thyrrenian sea

Post #4by Fenerit » 15.08.2007, 00:42

t00fri wrote:
The main tricky issue is to sort out whether the values > 1 are systematic or statistical in nature. Since the treatment of systematic uncertainties differs substantially from that of statistical errors....

Bye Fridger


I do not know whether this is the right place, or whether it belongs more in the Purgatory, but if all this sounds delirious it's because I'm not a professional astronomer, and I need at least some links to learn more about how the star catalogues are built.

Say I look at 100,000 stars and of these 10,000 are good and 90,000 have negative parallax. Then I look at 1,000,000 stars and of these 100,000 are good and 900,000 have negative parallax. In total I have 110,000 good stars and 990,000 bad ones.
Suppose that the 1,000,000 include the previous 100,000: I have increased the number of good stars as well as the bad ones, up to the point where there are no longer enough stars left to keep increasing the good ones. And yet the bad ones remain. That is, these stars are in a peculiar quantum-like state of being/not being. Being, because they keep all the other physical attributes, e.g. the spectral class; not being, because we do not know where they are, not in space understood as an "out there", but only within the theory developed to find them, with its precision and its predictive power. Is this a matter of instrumental/systematic error? How do astronomers collapse this particular "star wave function"?
Never at rest.
Massimo

Topic author
Spaceman Spiff
Posts: 420
Joined: 21.02.2002
With us: 22 years 9 months
Location: Darmstadt, Germany.

Post #5by Spaceman Spiff » 15.08.2007, 07:20

t00fri wrote:Spiff,

sure I agree.

I knew you would ;). Note that I addressed that matter generally, to everyone: it's for the benefit of anyone who might pick up coding for Celestia, so that they don't make such unwarranted 'corrections'. Actually, another example I can think of is that some comets have an eccentricity measured as slightly greater than one, though it's likely they can't all (or perhaps even any of them) be interstellar.

t00fri wrote:Since the treatment of systematic uncertainties differs substantially from that of statistical errors....

Here I am of course talking only of those statistical errors, the random kind, the ones that can't be corrected for.

Fenerit wrote:I look at, to say, 100.000 stars and of these 10.000 are good and 90.000 have the negative parallax.

It's more like this: the bad stars usually have small but positive parallaxes, less than five times the mean error, and they're more like 10% of the catalogue (?). The ones with negative parallaxes would be quite rare.

Notice that parallaxes which are about the same size as the typical (random) errors, whether positive or negative, would be disregarded as having an "insignificant" parallax; that is, the star has been found to be too far away for its distance to be measured reliably. In fact, as said, we have a cut-off in our catalogue requiring that the random error be 20% or less of the parallax. In science, a cut-off limit of 'statistical significance' is applied to divide reliable measurements from unreliable ones: the limit is not zero parallax, to cut off negative parallaxes, but usually some small multiple of the estimated mean error, to cut off 'small' parallaxes. Statistical significance is a special kind of thinking that scientists use routinely, but it is not familiar to the general public.
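As a sketch of such a cut (the function name and the catalogue entries below are made up; only the 20%-of-the-parallax threshold comes from the paragraph above): the raw parallax, negative or not, stays in the catalogue, and the cut only decides whether a distance is quoted.

Code: Select all
def distance_if_significant(parallax_mas, error_mas, max_rel_error=0.2):
    """Return the distance in parsecs if the parallax passes the significance
    cut (error at most 20% of the parallax), otherwise None. The raw parallax
    itself is never altered; the cut only decides whether to trust a distance."""
    if parallax_mas <= 0.0:
        return None                    # negative or zero: no usable distance
    if error_mas / parallax_mas > max_rel_error:
        return None                    # 'insignificant' parallax: too far away
    return 1000.0 / parallax_mas       # distance [pc] = 1 / parallax [arcsec]

# Invented (parallax, error) pairs in milliarcseconds.
toy_catalogue = [
    (742.0, 1.0),   # nearby, well measured        -> distance quoted
    (10.0, 1.5),    # error is 15% of the parallax -> just passes the cut
    (2.0, 1.5),     # error is 75% of the parallax -> too uncertain
    (-1.2, 1.5),    # measured negative: kept as data, but no distance quoted
]

for plx, err in toy_catalogue:
    print(f"{plx:7.1f} +- {err:.1f} mas -> {distance_if_significant(plx, err)}")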

Fenerit wrote:On other hand, when and how these parallax to be measured right, that is, with real physical status to say: "well, now we know more of that stars and precisely that now the parallax is not negative".

Of course, the desire of scientists to have more reliable data is what drives the making of more accurate instruments, such as ESA's Gaia (http://sci.esa.int/science-e/www/area/index.cfm?fareaid=26) to supersede Hipparcos (I mentioned this). When Gaia is finished, almost all the 'bad' stars of Hipparcos will be 'good', but Gaia will provide a new band of bad stars further out, stars which Hipparcos couldn't even see. "The more you know, the more you don't know."

Then some new satellite will be launched, the band of uncertainty will be pushed out further than the size of our Milky Way galaxy, and all the galaxy's stars will be located. Yet stars in the Andromeda galaxy, some 20 times further away, may only be measured with 'poor' reliability. Then an even better satellite will be launched to measure those... and so on.

Fenerit wrote:That is, these stars are in a special case of quantistic being/not being.


Heh, this isn't a quantum mechanical issue, y'know... :). It's a purely classical matter of statistical measurement uncertainty, of the signal-to-noise ratio kind. It arises even before you get to any quantum problem involving Heisenberg's Uncertainty Principle, which is quite different!

I hope that helps, Fenerit.

Spiff.

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 8 months
Location: Hamburg, Germany

Post #6by t00fri » 15.08.2007, 07:56

Spaceman Spiff wrote:
t00fri wrote:Since the treatment of systematic uncertainties differs substantially from that of statistical errors....

Here I am of course talking only of those statistical errors, the random kind, the ones that can't be corrected for.



Spiff,

as a matter of fact, the "nasty" kind of residual errors, the ones that do NOT decrease with a larger number of measurements, are usually the systematic ones. In complex experiments they can really never be eliminated entirely, and they cause most of the "headaches". They should also NOT be added quadratically to the statistical errors, by the way, i.e. like so

e_total = sqrt( e_statistical^2 + e_systematic^2)

What we do mostly in particle physics is to design experiments such that eventually

statistical errors ~ systematic errors.

For example, an excessive number of negative parallax values would point to an uncorrected systematic error rather than to a statistical fluctuation.
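A toy illustration of that last remark (assumed numbers: a sample of faint stars with true parallaxes between 0.5 and 3 mas, a 1 mas statistical error, and an invented zero-point shift standing in for the uncorrected systematic): statistical errors alone produce only a modest fraction of negative parallaxes, while the uncorrected offset inflates it well beyond that.

Code: Select all
import random

random.seed(4)

SIGMA_MAS = 1.0   # per-star statistical error (invented)
# Invented sample of distant stars with small true parallaxes (0.5-3 mas).
true_parallaxes = [random.uniform(0.5, 3.0) for _ in range(100_000)]

def fraction_negative(zero_point_mas):
    """Fraction of measured parallaxes below zero, given an (assumed)
    uncorrected zero-point shift added to every measurement."""
    measured = [p + random.gauss(0.0, SIGMA_MAS) + zero_point_mas
                for p in true_parallaxes]
    return sum(m < 0 for m in measured) / len(measured)

print(f"statistical errors only:  {fraction_negative(0.0):.1%} negative")   # roughly 8%
print(f"with a -1 mas zero-point: {fraction_negative(-1.0):.1%} negative")  # far larger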

Bye Fridger

Fenerit M
Posts: 1880
Joined: 26.03.2007
Age: 17
With us: 17 years 8 months
Location: Thyrrenian sea

Post #7by Fenerit » 15.08.2007, 15:27

This discussion is very interesting. Thanks, Spiff and Fridger, for your time.


t00fri wrote:
What we do mostly in particle physics is to design experiments such that eventually

statistical errors ~ systematic errors.

For example, an excessive number of negative parallax values would point to an uncorrected systematic error rather than to a statistical fluctuation.

Bye Fridger


If I'm not wrong, the systematic error is the most important one, because it depends on the measuring apparatus, and only better devices can "diminish" it.
Never at rest.
Massimo


Return to “Physics and Astronomy”