travelling at the spead of light

General physics and astronomy discussions not directly related to Celestia
ajtribick
Developer
Posts: 1855
Joined: 11.08.2003
With us: 21 years 3 months

Post #101 by ajtribick » 13.08.2005, 13:06

Spaceman Spiff wrote:Fridger,

you're "totally" ;) on the right track already: that it can be mathematically proven that sinc(0) = 1, that is is not just defined like that.


What you are actually asking is whether it is possible to prove that sin(x)/x = 1 at x = 0. Let's leave sinc(x) out of it, since we have not yet agreed on a definition of it (you claim that you can leave out the explicit definition sinc(0) = 1; I claim that it is necessary).

I still claim that you cannot actually say that sin(0)/0 = 1; you can only find the limits of the function as x tends to zero - there is a difference!

Consider the following function:

f(x) = 0 when x ≠ 0
f(x) = 17 when x = 0

However, limits do not depend on the value of the function at the point to which the limit is being taken, so for this function we have the situation that:

f(0) = 17
limit(x→0, f(x)) = 0

This just goes to show that the limit says nothing about the value of the function. However, because the limits of the function as x tends to zero from above and from below are the same, we can create a continuous function g(x) from f(x) as follows:

g(x) = f(x) when x ≠ 0
g(x) = limit(x→0, f(x)) when x = 0

If instead I had defined f(x) = 0 for x < 0 and f(x) = 1 for x > 0, the value of the limit would be different depending on whether the limit was taken from below (in which case the limit is 0) or from above (in which case the limit is 1) - and still neither limit corresponds to the value of the function at x = 0.

Similarly, sin(x)/x is discontinuous at x=0: at this point the function requires you to divide by zero! Fortunately the function tends to the same limit as x tends to zero from both positive and negative values, which means that we can create a continuous function h(x) as follows:

h(x) = sin(x)/x when x ≠ 0
h(x) = limit(x→0, sin(x)/x) when x = 0
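A quick numerical illustration of the same point (a minimal Python sketch using only the standard library; it is not part of the argument): sin(x)/x can be evaluated arbitrarily close to zero from either side and approaches 1, but the value at x = 0 itself has to be supplied by hand, exactly as in the definition of h(x) above.

Code: Select all

# sin(x)/x for small positive and negative x; the value at x = 0 is NOT defined.
import math

for x in [0.1, 0.01, 0.001, -0.001, -0.01, -0.1]:
    print(f"x = {x:+.3f}   sin(x)/x = {math.sin(x) / x:.12f}")

# math.sin(0.0) / 0.0 would raise ZeroDivisionError; h(0) = 1 must be assigned
# separately, as in the piecewise definition above.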

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 7 months
Location: Hamburg, Germany

Post #102 by t00fri » 13.08.2005, 14:37

chaos syndrome wrote:I still claim that you cannot actually say that sin(0)/0 = 1; you can only find the limits of the function as x tends to zero - there is a difference!

I strongly disagree...

If you define f(x) = sin(x)/x, then you can prove very easily that f(0) = 1.

Since you seem to know some math, the rest of the proof comes of course from the analyticity of f(x). Our function is easily shown to be an /entire/ function in the /complex/ x plane, hence the limit( x->0,f(x)) is direction independent and therefore unique and identical to f(0). QED.

[sin(x) is an entire function since it's a difference of two exponential functions; x in the denominator is an entire function since it's a polynomial. The ratio is /obviously/ an entire function, provided it does not have a pole at x=0 (which it doesn't).]

Incidentally, f(x) is a standard example in every first-semester course, typically called "Math for physicists I" or similar...
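For readers who want to see the Taylor series behind this argument, here is a small sketch using the sympy library (assumed to be installed; it is an illustration only, not part of the proof above). The expansion of sin(x)/x around x = 0 has constant term 1, which is exactly the value the analytic continuation takes there:

Code: Select all

# Series expansion of sin(x)/x around x = 0 (illustration only).
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) / x

print(sp.series(f, x, 0, 8))   # 1 - x**2/6 + x**4/120 - x**6/5040 + O(x**8)
print(sp.limit(f, x, 0))       # 1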

chaos syndrome wrote:
Consider the following function:

f(x) = 0 when x ≠ 0
f(x) = 17 when x = 0

+++++++++++++++
That example is of course NOT analytic around x=0, hence the crucial difference!
+++++++++++++++

I did not emphasize the analyticity of sin(x)/x explicitly before, since I thought this was obvious to those with math knowledge, while possibly confusing to others.

Incidentally, my method a) above used the Taylor series expansion, which already implies analyticity.

Bye Fridger

ajtribick
Developer
Posts: 1855
Joined: 11.08.2003
With us: 21 years 3 months

Post #103 by ajtribick » 13.08.2005, 14:55

OK, I get it now. The thing is, I've had it explained that you can never assume that the limit is the value of the function - presumably to avoid falling into traps with non-analytic functions, and probably also to avoid any debate about whether removable singularities can be removed without explicitly mentioning that they have been removed.

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 7 months
Location: Hamburg, Germany

Post #104 by t00fri » 13.08.2005, 18:10

Some basics on Special Relativity (SR) ("Mini-lecture"):
==================================
1) Introduction:
==========

I think that much of the confusion in this thread arose because people picked
inappropriate quantities for trying to explain things! I suppose the
implicit reason was that they wanted to stay as close as possible to the more
familiar /non-relativistic/ description of the kinematics of a particle with
mass m. This is indeed approximately possible for quite massive particles, but
for the MASSLESS photon, it will easily lead to disaster ;-). A photon is NEVER
non-relativistic! According to SR, there is no coordinate system where the
photon is at rest or even travelling with speed v different from c.

In non-relativistic mechanics of a point particle of mass m0, we are
used to the familiar formulae for the kinetic energy, E_kin =1/2*m0*v^2 and
for the momentum, p =m0*v. Equivalently we may write in the non-relativistic
case

Code: Select all

E_kin = p^2/(2*m0).                                               (1)

by using v= p/m0.
+++++++++++++++++++++++++++++++
NOTE: All these formulae are /incompatible/ with SR!
+++++++++++++++++++++++++++++++

I remind you that both for massive and massless particles, the relation between
energy, momentum and the rest mass m0, required by SR is

Code: Select all

E = sqrt((p*c)^2 + (m0*c^2)^2)                                    (2)


The above /non-relativistic/ formulae may only be recovered /approximately/, when

Code: Select all

p << m0 * c,                                                      (3)

i.e. when the momentum of the massive particle is SMALL on the scale of its
MASS*c. Photons can NEVER satisfy this inequality and thus using the above
formulae for photons is bound to lead to WRONG RESULTS!

If Eq.(3) is satisfied, we may use the Taylor series expansion for SMALL p/(m0*c)
to write Eq.(2) approximately as

Code: Select all

E = m0*c^2*sqrt(1 + (p/(m0*c))^2) = m0*c^2 + p^2/(2*m0) +...      (4)

We note that the rest mass m0 contributes in the famous way to the total energy
as

Code: Select all

E_rest = m0*c^2        (<=> p=0)                                  (5)

Moreover, by comparison with Eq.(1), we recognize the second term in Eq.(4) as the
/non-relativistic/ form of the kinetic energy!
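The expansion in Eq.(4) can also be checked mechanically. The following sympy sketch (an illustration only, using the symbols defined above and assuming sympy is available) expands Eq.(2) for small p and reproduces the rest-energy term plus the non-relativistic kinetic energy:

Code: Select all

# Expand E = sqrt((p*c)**2 + (m0*c**2)**2) for p << m0*c, cf. Eqs.(2)-(4).
import sympy as sp

p, m0, c = sp.symbols('p m0 c', positive=True)
E = sp.sqrt((p*c)**2 + (m0*c**2)**2)

print(sp.series(E, p, 0, 4))   # m0*c**2 + p**2/(2*m0) + O(p**4)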

As I already emphasized yesterday, the /relativistic/ energy-momentum relation
(2) remains valid also for photons, i.e. for m0=0 we get E = +-c*|p|, which defines
the so-called light cone (plot it!). [Take just 2 momentum components
p=(p1,p2) for simplicity besides the energy E and plot E = +-c*sqrt(p1^2+p2^2)
= +-c*|p| in an E vs. p1, p2 3d-plot - see the cone?]

The light cone is the geometrical locus in relativistic phase space
(E/c,p1,p2,p3) [or equivalently (c*t,x1,x2,x3) space-time], where massless photons
are allowed to exist/move!
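For those who want to follow the "plot it!" suggestion, here is a rough numpy/matplotlib sketch (assuming those packages are available, and working in units where c = 1) of the two sheets E = +|p| and E = -|p| over the (p1, p2) plane:

Code: Select all

# Rough plot of the light cone E = +-c*|p| in (E, p1, p2) space, with c = 1.
import numpy as np
import matplotlib.pyplot as plt

p1, p2 = np.meshgrid(np.linspace(-1, 1, 60), np.linspace(-1, 1, 60))
E = np.sqrt(p1**2 + p2**2)                 # c*|p| with c = 1

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(p1, p2, E, alpha=0.6)      # upper sheet: E = +|p|
ax.plot_surface(p1, p2, -E, alpha=0.6)     # lower sheet: E = -|p|
ax.set_xlabel('p1'); ax.set_ylabel('p2'); ax.set_zlabel('E')
plt.show()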

++++++++++++++++++++++++
You see that massless photons carry BOTH energy and momentum!
++++++++++++++++++++++++

Perhaps it provides some refreshing points of view, or even more clarity, if I try below to present my favourite approach to SR. Let me do that next:

2) SR viewed as a symmetry!
==================

In general, a symmetry means that after doing some specific transformations
(here on the 3d coordinates and time), all relevant laws of physics have to
remain unaffected. As we say: observables have to remain /invariant/ with respect to the
symmetry transformations.

The great thing about this symmetry approach to SR is that it is intuitive,
mathematically concise and --best of all-- fits beautifully to many similar
considerations within the very general framework of theoretical physics.
The symmetry approach is valid and most convenient in describing applications
of SR from simple relativistic kinematics up to Quantum Field theory!

So here are some relevant questions to think about before we get down to business:

1) Why is SR a symmetry?
2) What is supposed to stay invariant under these transformations?
3) What are the transformations respecting this invariance?
4) How do we describe them mathematically?


(to be continued)

Bye Fridger

Fightspit
Posts: 510
Joined: 15.05.2005
With us: 19 years 6 months

Post #105 by Fightspit » 17.08.2005, 09:52

I'm waiting for the second part :) .
Motherboard: Intel D975XBX2
Processor: Intel Core2 E6700 @ 3Ghz
Ram: Corsair 2 x 1GB DDR2 PC6400
Video Card: Nvidia GeForce 8800 GTX 768MB GDDR3 384 bits PCI-Express 16x
HDD: Western Digital Raptor 150GB 10000 rpm
OS: Windows Vista Business 32 bits

wcomer
Posts: 179
Joined: 19.06.2003
With us: 21 years 5 months
Location: New York City

Post #106 by wcomer » 17.08.2005, 17:45

You are both right about the value of sinc(0). I really have nothing new to add that you haven't already said. I think this is largely an issue of semantics. Fridger is right, but only if he makes some conventional assumptions, which he has tacitly acknowledged. Chaos is right, because he is assuming absolutely nothing, which he has explicitly stated.

I'll define a new function: f(x) = x/sin(x). We know that f(0)=1 if we make some kind of assumption. Continuity or analyticity at x=0 would be such assumptions. Unless explicitly stated otherwise, by convention f(0) is understood to take the value 1, because f(x) has the same limit at 0 from the left and from the right. We do this even though the function is elsewhere discontinuous. BTW, this function isn't everywhere analytic, failing Fridger's requirement. Likewise, if we added the statement f(0)=1, we wouldn't be doing it to make the function continuous.

On a far more interesting note, I'm curious why the small photon mass would ruin renormalization. I haven't been through QED calculations in a while, so my memory is pretty fuzzy; I may be speaking gibberish here. All of this is just gut intuition. Wouldn't the existence of the photon mass simply constrain the cut-off process to some non-arbitrary integration limit in order to prevent explosion? I guess there would be some different results in any QED calculation, but for a small enough mass these differences wouldn't conflict with current values. Perhaps the issue starts earlier, in the setup of QED, because we have to treat the photon as a relativistic massive particle; but I'd expect that we would just use the original massless setup as an approximation and then have some additional correction terms to the required order. I've no doubt that this has been studied extensively; I'm just curious where in the fine detail things go horribly wrong. Do the required correction terms explode as you go to higher orders? If so, why? Or does the integration limit have to be arbitrary in order to satisfy invariance? Or am I speaking gibberish?

wcomer
Posts: 179
Joined: 19.06.2003
With us: 21 years 5 months
Location: New York City

Post #107 by wcomer » 17.08.2005, 19:10

I just want to throw out a horribly seductive boat/ball push/pull analogy.

As we all know photons are +/-1 spin particles. So you just need to give the ball a lot of top spin or bottom spin when passing it back and forth between the boats to get attraction or repulsion, respectively. Just make sure the ball carries most (ideally all, it's an ACME physics ball) of its momentum as the angular variety and the two forces will be of equivalent magnitude.

WARNING: I take no responsibility for the corruption of any young physics minds due to the misuse of this analogy. Handle with care.

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 7 months
Location: Hamburg, Germany

Post #108 by t00fri » 18.08.2005, 09:41

wcomer wrote:You are both right about the value of sinc(0). I really have nothing new to add that you haven't already said. I think this is largely an issue of semantics. Fridger is right, but only if he makes some conventional assumptions, which he has tacitly acknowledged. Chaos is right, because he is assuming absolutely nothing, which he has explicitly stated.



Walton,

I have to insist that my argument for f(0)=1 was a rigorous /proof/ without ANY further assumption in case of the function f(x)=sin(x)/x! I gave the reasons earlier. Have a look again. We are obviously dealing here with a ratio of two entire functions that is itself analytic everywhere, with the possible exception of a simple pole at x=0. That pole is however absent, as one can easily show. Hence f(x) may be expanded in a Taylor series around x=0.

It is certainly NOT an assumption that sin(x) and x are entire functions in the complex x-plane! Come on...that's really first semester math!

Please remember that the discussion focussed on the limit x->0 of a completely known function sin(x)/x and NOT on such limits in general.

Bye Fridger

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 7 months
Location: Hamburg, Germany

Post #109 by t00fri » 18.08.2005, 09:57

Walton,

to make the issue about non-renormalizability really clear, we would have to jump to a completely different level of discussion. I am not sure whether, from your studies of math, you have the required transparent view and technical background on renormalization theory in general.

We would have to employ Ward-Takahashi Identities etc to argue at the level of arbitrary orders of perturbation theory. You would also need to have a clear view about possibilities of non-perturbative renormalization, for example.

The main point behind the much more technical steps is the loss of /local gauge invariance/ in the case of a small bare mass of the gauge bosons (the same argument applies also to non-abelian Yang Mills theories). Next, one would have to examine (and rule out) the possibility that a non-vanishing photon mass may have arisen via spontaneous symmetry breaking (like that of the W, Z bosons)...


Bye Fridger

wcomer
Posts: 179
Joined: 19.06.2003
With us: 21 years 5 months
Location: New York City

Post #110 by wcomer » 18.08.2005, 19:14

Fridger,

My sole background in QED was to buy a basic graduate text, read it and work through all the problems. Unfortunately, what you describe sounds like something that requires additional reading. So this will just have to remain a mystery to me for the time being.

On sinc(0), I really think this has been argued in circles. The question at hand seems to be: Is sinc(0)=1 an essential part of the function's definition or is it redundant?

I'll quote from Mathworld.

http://mathworld.wolfram.com/SincFunction.html
sinc(x)={1 for x==0; sin(x)/x otherwise}
Note the explicit definition of sinc(0).

http://mathworld.wolfram.com/Singularity.html
Removable singularities are singularities for which it is possible to assign a complex number in such a way that f(z) becomes analytic. For example, the function f(z)==z^2/z has a removable singularity at 0, since f(z)==z everywhere but 0, and f(z) can be set equal to 0 at z==0.
Note the careful use of the phrases "possible to assign" and "can be set to"; clearly implying that this is an act of the mathematician not of the mathematics.

http://mathworld.wolfram.com/RiemannRemovableSingularityTheorem.html
Let f:D(z_0,r)\{z_0}->C be analytic and bounded on a punctured open disk D(z_0,r), then lim_(z->z_0)f(z) exists, and the function defined by g:D(z_0,r)->C

g(z)=={f(z) for z!=z_0; lim_(z'->z_0)f(z') for z==z_0}

is analytic.

In other words, sinc(x) is everywhere analytic because sinc(0) was explicitly assigned the value of 1. Whereas, if we are being rigorous, sin(x)/x is not analytic at x=0 because the derivative remains ill-defined at x=0 (the quotient rule gives 0/0). Or, put differently, sin(x)/x is single-valued everywhere except at x=0, where the expression 0/0 is indeterminate. Hence the need to explicitly pin down its value at x=0, thus creating an entirely new function. In the special case where this value is assigned to be 1, we call this new function sinc(x).

Using L'Hospital, Taylor series and complete/analytic/regular functions arguments is circular. One has to assume that the function is locally regular to invoke these arguments; else the values derived are only limits. Therefore they cannot be used to show regularity.

It is conventional in physics and engineering to assume functions take appropriate values at removable singularities, because the physicality of the problems either ensures that some other assumption takes care of this anyway or prevents a real infinity from existing (and hence integrating across it, for example, has no effect.) Therefore the assumption never causes problems; so that it is easy to forget that it was an assumption all along.

I feel that this discussion has degraded into trying to score debating points over something of no relevance to the forum; so this is my final comment on the matter.

respectfully,
Walton

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 7 months
Location: Hamburg, Germany

Post #111 by t00fri » 18.08.2005, 19:21

Walton,

I have never argued about sinc(x). I was only considering f(x)=sin(x)/x. Nothing else. If Mathworld claims something different from what I said with respect to that function, they must be wrong ;-). Incidentally, a former PhD student of my institute went to work for three months with Wolfram Inc. and wrote such Mathworld articles... Hmm


Bye Fridger

wcomer
Posts: 179
Joined: 19.06.2003
With us: 21 years 5 months
Location: New York City

Post #112 by wcomer » 18.08.2005, 19:37

Well, if your student were wrong... I don't think you should be too hard on yourself ;)

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 7 months
Location: Hamburg, Germany

Post #113 by t00fri » 18.08.2005, 19:53

wcomer wrote:Fridger,

My sole background in QED was to buy a basic graduate text, read it and work through all the problems. Unfortunately, what you describe sounds like something that requires additional reading. So this will just have to remain a mystery to me for the time being.
...

respectfully,
Walton


Walton,

it all depends on the level at which you want to see the proof. In qualitative terms the argument is quite straightforward:

First of all, one restricts the discussion to the question of /perturbative/ renormalizability. Then the crucial point is whether to n-th order of perturbation theory you may absorb all arising ultraviolet infinities in the Feynman loop diagrams into a finite number of local counter terms (with arbitrary subtraction constants). If so, then you may redefine (i.e. renormalize) the parameters of the theory such that everything becomes finite with only a /finite/ number of free parameters (renormalization constants). In order for this to happen one needs a sufficiently good high-energy behaviour of all Feynman diagrams. It is here where the "gauge miracle" happens! Local gauge symmetry is the reason why all but a few "controllable" UV divergences cancel. Without it you need an /infinite/ number of (infinite) subtraction constants, meaning the theory is unrenormalizable.

A physically much more contrived possibility would be to allow for a photon mass that arises via spontaneous symmetry breaking using some kind of modified Higgs mechanism.

Let me just point out that in this case the interaction would remain locally gauge invariant, but the ground state would have a broken gauge symmetry, such that a mass arises via the standard Higgs mechanism (like for the W, Z bosons of the weak interactions). That spontaneously broken variant of a locally gauge invariant theory indeed remains renormalizable!

No idea whether this sort of qualitative talk is of any use to you?

Bye Fridger
Last edited by t00fri on 18.08.2005, 20:12, edited 1 time in total.

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 7 months
Location: Hamburg, Germany

Post #114 by t00fri » 18.08.2005, 20:10

wcomer wrote:Well, if your student were wrong... I don't think you should take it too hard on yourself ;)

Seriously, Walton,

I think that some misunderstanding arose because, in your evaluation & arguments, you probably overlooked that chaos syndrome, and subsequently I, switched from sinc(x) to the function f(x)=sin(x)/x and its value f(0).

chaos syndrome wrote:What you are actually asking is whether it is possible to prove that sin(x)/x=1 at x=0, let's leave sinc(x) out of it,

with my answer, again /clearly/ specializing on f(x)=sin(x)/x:
t00fri wrote:I strongly disagree...

If you define f(x) = sin(x)/x, then you can prove very easily that f(0) = 1.

...
and in my later post to which you again responded with arguments referring to sinc(x) rather than sin(x)/x!

t00fri wrote:I have to insist that my argument for f(0)=1 was a rigorous /proof/ without ANY further assumption in case of the function f(x)=sin(x)/x!

Since I know the subtle differences between sinc(x) and sin(x)/x very well, I was always restating my starting point ....

Given your math background, I would anyway never have believed that you had the /slightest/ doubts about the value of sin(x)/x at x=0!! The issue about sinc(0) is certainly more tricky, but that's NOT what was at stake in the more recent part of the debate.

Bye Fridger

wcomer
Posts: 179
Joined: 19.06.2003
With us: 21 years 5 months
Location: New York City

Post #115 by wcomer » 18.08.2005, 22:27

Fridger,

I hope you didn't take my comment on your student the wrong way; I was trying to be silly, not snarky.

Your qualitative description is quite helpful. Though I'm going to have to go back through some specific calculations in order to really absorb what you have said.

However, I don't see why the inability to renormalize is sufficient to eliminate the existence of a photon mass. Certainly this is an inconvenience, but why are the two mutually exclusive?

As a rough analogy, the Taylor series of f(x)=1/(1+x) explodes for x>=1. I cannot use the Taylor series to calculate f(1), as there is no right answer to that infinite summation. Nevertheless f(x) is still a legitimate function, and likewise f(1) has a well defined value, namely 0.5. And the fact that the Taylor series works just fine for f(.99999) doesn't mean that x cannot take the value of 1.
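A throwaway numerical sketch of that analogy (plain Python, illustration only): the partial sums of the Taylor series of 1/(1+x) about x = 0 never settle down at x = 1, even though f(1) = 0.5 is perfectly well defined.

Code: Select all

# Partial sums of 1 - x + x**2 - x**3 + ... evaluated at x = 1.
def partial_sum(x, n_terms):
    return sum((-x) ** k for k in range(n_terms))

for n in (1, 2, 3, 4, 10, 11):
    print(f"{n:2d} terms: {partial_sum(1.0, n)}")   # alternates between 1 and 0

print("f(1) =", 1.0 / (1.0 + 1.0))                  # 0.5, perfectly finite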

So then, by analogy, the fact that QED fails as a calculational tool with a photon mass isn't alone sufficient to rule out a photon mass, even if QED does work fantastically well for massless photons. Can it be shown that in the massive case the theory becomes not only non-renormalizable but also results in absurdities?

I may have more specific questions next week.

-Walton

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 7 months
Location: Hamburg, Germany

Post #116 by t00fri » 19.08.2005, 11:10

wcomer wrote:Fridger,

I hope you didn't take my comment on your student the wrong way; I was trying to be silly, not snarky.

Your qualitative description is quite helpful. Though I'm going to have to go back through some specific calculations in order to really absorb what you have said.

However, I don't see why the inability to renormalize is sufficient to eliminate the existence of a photon mass. Certainly this is an inconvenience, but why are the two mutually exclusive?

As a rough analogy, the Taylor series of f(x)=1/(1+x) explodes for x>=1. I cannot use the Taylor series to calculate f(1), as there is no right answer to that infinite summation. Nevertheless f(x) is still a legitimate function, and likewise f(1) has a well defined value, namely 0.5. And the fact that the Taylor series works just fine for f(.99999) doesn't mean that x cannot take the value of 1.

So then, by analogy, the fact that QED fails as a calculational tool with a photon mass isn't alone sufficient to rule out a photon mass, even if QED does work fantastically well for massless photons. Can it be shown that in the massive case the theory becomes not only non-renormalizable but also results in absurdities?

I may have more specific questions next week.

-Walton


Walton,

no, the analogy to being outside the convergence radius of a series, say, is not relevant here. Perturbation theory, e.g. in QED, is an expansion in the fine structure constant \alpha, which may be arbitrarily small.

The UV divergences arise /independently/ of the strength \alpha of the QED interactions, in the (intermediate) 4-momentum integrations of /loop/ diagrams, i.e. in the quantum sector of the theory.

The damage is huge if renormalizability is lost: all predictivity is lost /and/ the high-energy behaviour of scattering processes gets intolerably bad (again a manifestation of the missing cancellations due to local gauge invariance).

Since theoretical physicists tend to judge the merits of theories with a somewhat different perspective than e.g. mathematicians, it is considered a great success that /all four/ sectors of interactions - the strong, weak, electromagnetic and even the gravitational one - may equally be formulated as local gauge theories. In all cases the form of the interactions is /completely fixed/ by the principle of local gauge invariance and the requirement of perturbative renormalizability. At the same time the predictions of these formulations are amazingly successful! So for theorists, it is the combination of elegance, conceptual simplicity and amazingly successful predictivity that makes Yang Mills theories so outstanding. In addition, we know that the "low-energy" limit of string theory again results in Yang Mills (i.e. non-abelian) local gauge theories!

All this beauty necessarily requires massless gauge bosons (possibly modulo spontaneous symmetry breaking).

Bye Fridger

Fightspit
Posts: 510
Joined: 15.05.2005
With us: 19 years 6 months

Post #117 by Fightspit » 19.08.2005, 15:57

Demonstration (for wcomer :wink: ):

f(x)=sin(x)/x

However, lim(x->0) sin(x)/x <=> lim(x->0) [sin(x)-sin(0)]/(x-0) = f '(0)

Because sin(0) = 0, and lim(x->α) [f(x)-f(α)]/(x-α) = f '(α) for a function f that is differentiable at α.

And, f '(0) = cos(0) = 1
because, if f(x) = sin(x), f '(x) = cos(x).

That's why lim(x->0) sin(x)/x = 1
8)
edit: I think it is right if t00fri agrees with me :wink: .
Motherboard: Intel D975XBX2
Processor: Intel Core2 E6700 @ 3Ghz
Ram: Corsair 2 x 1GB DDR2 PC6400
Video Card: Nvidia GeForce 8800 GTX 768MB GDDR3 384 bits PCI-Express 16x
HDD: Western Digital Raptor 150GB 10000 rpm
OS: Windows Vista Business 32 bits

wcomer
Posts: 179
Joined: 19.06.2003
With us: 21 years 5 months
Location: New York City

Post #118 by wcomer » 20.08.2005, 19:15

Fightspit,

I appreciate your efforts to educate me on the matter; however, there are several mechanical errors in your presentation, and fixing them would make your argument stronger.

Just the same, your conclusion is correct and it is not in dispute. The question isn't the value of the limit but whether sin(x)/x (or any function) implicitly takes the value of its limits. It is my position that this is a well-understood bit of mathematics and that the rigorous conclusion is that it does not implicitly take the value 1 at x=0. Which is why we have a function called sinc(x) which explicitly gives it this value. If you reread your calculus text you will most likely find that the author is very careful in using L'Hospital's Rule to only assign a value to the limits of a function. It would be unfortunate verbiage if the author said that, given f(x)=sin(x)/x, f(0)=1 when he means lim_(x->0) f(x)=1. I think that this is a statement with which Fridger will agree. The 'dispute' is merely a reflection of my poor ability to raise this technical point.

cheers,
Walton

Spaceman Spiff
Posts: 420
Joined: 21.02.2002
With us: 22 years 8 months
Location: Darmstadt, Germany.

Post #119 by Spaceman Spiff » 01.10.2005, 20:15

At the risk of annoying several Celestians (esp. Cham and D M Falk, I suppose) my reply resurrects this topic to conspicuity.

My raising of the sinc(x) issue was a red herring after all. Stuck on the matter of whether the use of the Lorentz transform in Special Relativity necessitates the 'rest' mass of a photon to be zero, I was trying to make an analogy: if sinc(x) turns out to be 1 when x = 0, then maybe m0/(1-v²/c²)^½ would turn out to be zero, even though 1/(1-v²/c²)^½ is undefined.

On the matter of sinc(x): t00fri is quite right that the function is equal to 1 at x = 0 for quite 'natural' reasons. The claim that sinc(x) has to be defined as 1 at x = 0 probably arises because computer programmers (coders) have to do an IF-THEN check for the special case x = 0 to avoid a division-by-zero error. Since Wolfram produce Mathematica (a popular computer modelling language), it's not surprising that they claim sinc(x=0)=1 is a definition. The sinc function is a natural outcome of Fourier transforms of uniform apertures in optics and radar, and of course the gain pattern of such apertures needs to be a continuous function to be meaningful.

On the matter of 1/(1-v²/c²)^½: I hoped to find through derivation some answer that would help prove the rest mass of a photon has to be zero, but found that I got a recursion. That means that just as the Lorentz transform tends to infinity as v tends to c, so does its derivative, and also its derivative in turn.
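That "recursion" is easy to see with a couple of lines of sympy (a sketch only, assuming sympy is available and working in units where c = 1, so gamma = 1/(1-v²)^½): every derivative of gamma with respect to v still carries a power of (1 - v²) in its denominator, so each one blows up as v tends to c.

Code: Select all

# gamma = 1/sqrt(1 - v**2) in units where c = 1; gamma and its derivatives
# all diverge as v -> 1 (i.e. as v -> c).
import sympy as sp

v = sp.symbols('v', positive=True)
expr = 1 / sp.sqrt(1 - v**2)

for k in range(3):
    print(f"d^{k} gamma / dv^{k} =", sp.simplify(expr))
    print("   limit as v -> 1- :", sp.limit(expr, v, 1, dir='-'))   # oo every time
    expr = sp.diff(expr, v)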

Then I realised that that was what is intended by the Lorentz transform: it tries to put the speed of light beyond any hope of physical experience. Einstein adopted it in his theory: so he already had an answer to his question: 'what is it like to travel on a light wave?', or even the original question of this topic: 'travelling at the spead of light.' (sic)

So the answer is that 'one can't travel at the speed of light, ever'. No matter how fast you think you are going, light always travels at the speed of light faster than you, and you appear to have achieved nothing getting to that speed.

Still, the Lorentz transform and all its derivatives do tend to infinity as v tends to c, so it must be the case that the rest mass is zero, or else the photon would have infinite mass, not a finite mass.

I thought that photons could not have kinetic energy as their rest mass is zero. So I wrote that photons can travel at the speed of light because they have no kinetic energy. But Cham wrote that photons have energy, and that the energy of a photon is kinetic energy, and since then I have come to think that an argument I saw - that the energy of a photon (E = hf) is half kinetic and half potential - is correct. But that was a quantum-mechanical argument.

Spiff.

t00fri
Developer
Posts: 8772
Joined: 29.03.2002
Age: 22
With us: 22 years 7 months
Location: Hamburg, Germany

Post #120 by t00fri » 02.10.2005, 12:01

Hi Spiff,

let me add that the familiar relation from quantum mechanics/physics

Code: Select all

E_photon = h*ν


is intrinsically quite non-trivial!

Already for dimensional reasons, a new FUNDAMENTAL dimensionful constant (Planck's quantum of action h) MUST exist in order to be able to write the photon energy proportional to its frequency ν!

Such a relation (involving h) clearly signals that we have reached a regime here, where classical physics ceases to be valid.
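To get a feeling for the orders of magnitude involved, here is a back-of-the-envelope Python sketch (the constants are quoted from memory, so treat the digits as approximate): a visible-light photon at about 5*10^14 Hz carries only a couple of electron volts.

Code: Select all

# E_photon = h*nu for a visible-light photon (approximate constants).
h  = 6.626e-34   # Planck's constant [J*s]
eV = 1.602e-19   # one electron volt [J]
nu = 5.0e14      # frequency of green-ish visible light [Hz]

E = h * nu
print(f"E = {E:.3e} J = {E / eV:.2f} eV")   # about 3.3e-19 J, i.e. ~2 eV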

Bye Fridger

