Science Forums


Posted
Mostly it was to ignore it, or take a "shut-up-and-calculate" approach. That is essentially the "direction" my thesis advisor gave me some forty years ago.

 

Ah, I see... Well yeah, it seems to come down to that when people really start thinking about the odd features of a model, in terms of what they imply about ontological reality if you take them as literal descriptions of reality. In this case though, I did not even understand what the problem was, because I am too unfamiliar with the subject matter. I have almost zero idea of what "perturbation theory" is, or what the "radius of convergence" of "the perturbation series" is, or what a "coupling constant" is and what it means that it's taken as negative, or what the "Coulomb force" is, etc.... They are just using too many concepts that I'm not familiar with at the present time, so I could not make much sense of it :eek:

 

I'm interested to hear what it means in more detail if you want to comment on it, albeit, if it's not very important right now, I could also just concentrate on getting through the Dirac deduction...

 

Since it relates to something Qfwfq commented about two years ago, I decided to answer his post. I don't think the issue really needs to be addressed by me as I am sure there are many physicists much more qualified to work out those issues. Read my new post to the “What can we know of reality?” thread.

 

Okay, I think I understood the gist of that post. I did not stop to look at the exact details of the matter just yet, but I have no problem believing the circumstance, and I think I could follow it with reasonable effort... But, the algebra of Dirac's equation first :doh:

 

-Anssi

Posted

Alright, so at post #31, I was left with:

 

[math]

\left [\left\{ -ic\hbar \vec{\alpha}_1 \cdot \vec{\nabla}_1 \right\}\vec{\Psi}_1

-i\hbar \sqrt{\frac{1}{2}}

\left (\frac{\partial}{\partial t}\vec{\Psi}_1 \right )\right ] \vec{\Psi}_2

+

(-ic\hbar)

\left\{ \vec{\alpha}_2 \cdot \vec{\nabla}_2 + (\beta_{12} + \beta_{21}) \delta(\vec{x_1}-\vec{x_2}) \right\} \vec{\Psi}_1\vec{\Psi}_2

=

\left\{\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2\right\}\vec{\Psi}_1

[/math]

 

Where the square-bracketed term amounts to 0, since those terms came from the expression of the energy relationship of element #1.

 

So after factoring that out, and some more reordering;

 

[math]

\left\{ -ic\hbar \vec{\alpha}_2 \cdot \vec{\nabla}_2 - ic\hbar (\beta_{12} + \beta_{21}) \delta(\vec{x_1}-\vec{x_2}) \right\} \vec{\Psi}_1\vec{\Psi}_2

=

\frac{i\hbar}{\sqrt{2}}\vec{\Psi}_1 \frac{\partial}{\partial t}\vec{\Psi}_2

[/math]

 

we are exactly at your expression from post #29.... Onwards with that post;

 

If we now multiply through by [imath]\vec{\Psi}_1^\dagger\cdot[/imath] and integrate over [imath]\vec{x}_1[/imath], we get unity as the contribution from every term except the interaction term. The interaction term once again spikes when [imath]\vec{x}_1=\vec{x}_2[/imath] and the result is the amplitude of the probability that our electron is in the position referred to as [imath]\vec{x}_2[/imath]: i.e., it is exactly given by [imath]\vec{\Psi}_1^\dagger(\vec{x}_2,t)\cdot\vec{\Psi}_1(\vec{x}_2,t) = \psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)[/imath] with the function [imath]\psi[/imath] defining the position of the electron as defined above. The resultant equation can thus be rewritten as

[math]\left\{-i\hbar c\vec{\alpha}\cdot \vec{\nabla}_{2xyz\tau} -\left[i\hbar c (\beta_{12}+\beta_{21}) \psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right] \right\}\vec{\Psi}_2=\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2[/math].

 

(If I haven't made any typos.)

 

Well the alpha is missing the subscript "2", but otherwise, yup.

 

Again you are somewhat ahead of the game here. What we want to know here are the differential equations that [imath]\Phi(\vec{x},t)[/imath] and [imath]\vec{A}(\vec{x},t)[/imath] must obey. What I have just deduced is the equation that [imath]\vec{\Psi}_2[/imath] must obey. If you go back and look at the deduction of Dirac's equation, you will see that, in order for that to be Dirac's equation, we need [imath]\Phi(\vec{x},t)[/imath] and [imath]\vec{A}(\vec{x},t)[/imath] to be given by,

[math]\Phi(\vec{x},t)=-i\frac{\hbar c}{e}\left[\gamma_\tau \vec{\Psi}_2^\dagger(\vec{x},t)\cdot \vec{\Psi}_2(\vec{x},t)\right][/math]

 

and

 

[math]\vec{A}(\vec{x},t)=i\frac{\hbar c}{e}\left[\vec{\gamma}\vec{\Psi}_2^\dagger(\vec{x},t)\cdot \vec{\Psi}_2(\vec{x},t)\right][/math].

 

Note that, except for [imath]i\frac{\hbar c}{e}[/imath], which is a simple constant, both [imath]\Phi(\vec{x},t)[/imath] and [imath]\vec{A}(\vec{x},t)[/imath] are nothing more than expectation values of [imath]\vec{\gamma}[/imath]; so what we really need is to convert that equation above (for [imath]\vec{\Psi}_2[/imath]) into an equation for [imath]\vec{\gamma}[/imath]. That is actually a pretty straightforward procedure, very similar to what was done in my deduction of Schrödinger's equation.

 

Starting with

[math]\left\{-i\hbar c\vec{\alpha}\cdot \vec{\nabla}_{2xyz\tau} -\left[i\hbar c (\beta_{12}+\beta_{21}) \psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right] \right\}\vec{\Psi}_2=\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2[/math].

 

We can assert that, so long as we are operating on the correct [imath]\vec{\Psi}_2[/imath] the above (divided through by [imath]i\hbar c[/imath]) can be seen as implying the operator identity

[math]\left\{-\vec{\alpha}\cdot \vec{\nabla}_{2xyz\tau} -\left[(\beta_{12}+\beta_{21}) \psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right] \right\}=\frac{1}{c\sqrt{2}}\frac{\partial}{\partial t}[/math].

 

Right... (except for all the alphas missing the subscript)

 

Thus it is that we can operate on [imath]\vec{\Psi}_2[/imath] with that operator twice and obtain

[math]\left\{-\vec{\alpha}\cdot \vec{\nabla}_{2xyz\tau} -\left[(\beta_{12}+\beta_{21}) \psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right] \right\}\left\{-\vec{\alpha}\cdot \vec{\nabla}_{2xyz\tau} -\left[(\beta_{12}+\beta_{21}) \psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right] \right\}\vec{\Psi}_2[/math]

 

[math]=\frac{1}{c\sqrt{2}}\frac{\partial}{\partial t}\frac{1}{c\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2=\frac{1}{2c^2}\frac{\partial^2}{\partial t^2}\vec{\Psi}_2[/math].

 

Now, if we perform the indicated multiplication ([imath]\{\cdots\}\{\cdots\}[/imath]), noting that, because of the commutation properties of the alpha and beta operators (which appear in every term making up those two factors), the cross terms of those operators all vanish and the direct terms all yield exactly [imath]\frac{1}{2}[/imath], we will get,

[math]\left\{\frac{1}{2} \nabla_{2xyz\tau}^2 +\left[(\frac{1}{2}+\frac{1}{2}) \left(\psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right)^2\right] \right\}\vec{\Psi}_2=\frac{1}{2c^2}\frac{\partial^2}{\partial t^2}\vec{\Psi}_2[/math].

 

Hmmm, I have a question here. In the OP, you note that [imath]\vec{\alpha}_1\cdot\vec{\alpha}_1=2[/imath], since the alpha consists of 4 components, each yielding 1/2.

 

But here, looking at your result, it appears that you've used [imath]\alpha^2 = \frac{1}{2}[/imath], so is the issue simply that [imath]\vec{\alpha_i} \cdot \vec{\alpha_i}[/imath] is very much a different operation than [imath]\alpha_i^2[/imath] :I

 

Because I started doing this with the idea that each component of that alpha essentially yields one half when squared, so I would have said:

 

[imath]

(-\vec{\alpha_2}\cdot \vec{\nabla}_{2xyz\tau})

(-\vec{\alpha_2}\cdot \vec{\nabla}_{2xyz\tau})

=

2\vec{\nabla}_{2xyz\tau}^2

[/imath]

 

Well, let me know which way it is, because I simply don't know right now. Assuming that [imath]\alpha_{2}^2 = \frac{1}{2}[/imath] is indeed correct, then I managed to work my way out to exactly:

 

[math]

\left \{

\frac{1}{2}\vec{\nabla}_{2xyz\tau}^2

+

(\frac{1}{2}+\frac{1}{2}) (\psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t))^2

\right \}

\vec{\Psi}_2

=

\frac{1}{2c^2}\frac{\partial^2}{\partial t^2}\vec{\Psi}_2

[/math]

 

(Well you've left the square brackets in, I guess they don't really mean anything at that point though?)

 

At one point, while working that out, I had these cross terms in there:

 

[math]

(-\vec{\alpha_2}\cdot \vec{\nabla}_{2xyz\tau})

(-\left[(\beta_{12}+\beta_{21}) \psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right])

+

(-\left[(\beta_{12}+\beta_{21}) \psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right])

(-\vec{\alpha_2}\cdot \vec{\nabla}_{2xyz\tau})

[/math]

 

I really had to scratch my head, trying to remember how these anti-commuting things work exactly in this situation, but I supposed in the end that that is exactly the circumstance that we had before, and it amounts to 0 as per [imath](a_ia_j+a_ja_i)=0[/imath] because those operators anti-commute. Right?

 

Finally, multiplying through by two we have a final differential equation [imath]\vec{\Psi}_2[/imath] must obey

[math]\left\{\nabla_{2xyz\tau}^2 +2 \left(\psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right)^2 \right\}\vec{\Psi}_2=\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\vec{\Psi}_2[/math].

 

Yup.

 

If we left multiply this by [imath]\vec{\gamma}\vec{\Psi}_2^\dagger\cdot[/imath] we obtain the equation

[math]\left\{\vec{\gamma}\vec{\Psi}_2^\dagger\cdot\nabla_{2xyz\tau}^2 +\vec{\gamma}\vec{\Psi}_2^\dagger\cdot2 \left(\psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right)^2 \right\}\vec{\Psi}_2=\vec{\gamma}\vec{\Psi}_2^\dagger\cdot\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\vec{\Psi}_2[/math].

 

Which can be rewritten (keeping in mind that none of the differential operators operate on [imath]\vec{\gamma}[/imath]),

[math]\vec{\gamma}\vec{\Psi}_2^\dagger\cdot\nabla_{2xyz\tau}^2\vec{\Psi}_2 +2 \left(\psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right)^2\vec{\gamma}\vec{\Psi}_2^\dagger\cdot\vec{\Psi}_2=\vec{\gamma}\vec{\Psi}_2^\dagger\cdot\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\vec{\Psi}_2[/math].

 

Yup.

 

And now we are back to the OP:

 

At this point, I want to bring up an interesting mathematical relationship,

[math]\frac{\partial^2}{\partial x^2}\left\{\vec{\Psi}^\dagger(\vec{x},t)\cdot\vec{\Psi}(\vec{x},t)\right\}=\frac{\partial}{\partial x}\left\{\left( \frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\vec{\Psi}(\vec{x},t) + \vec{\Psi}^\dagger(\vec{x},t)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)\right\}[/math]

 

[math]=\left\{\left( \frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\vec{\Psi}(\vec{x},t) + 2 \left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)+ \vec{\Psi}^\dagger(\vec{x},t)\cdot\left(\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)\right\}.[/math]

 

Yes, I was able to follow that.
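Just to convince myself, I also checked that expansion numerically; it is nothing but the product (Leibniz) rule applied twice. The functions f and g below are my own arbitrary stand-ins for the matching components of [imath]\vec{\Psi}^\dagger[/imath] and [imath]\vec{\Psi}[/imath] (the dot product is just a sum of such componentwise products):

```python
import math

# (f*g)'' = f''*g + 2*f'*g' + f*g'' -- the Leibniz rule applied twice.
# f and g are arbitrary scalar stand-ins for vector components.

def f(x): return math.exp(2 * x)       # so f' = 2*f and f'' = 4*f
def g(x): return math.sin(3 * x)       # so g' = 3*cos(3x) and g'' = -9*g

def second_diff(h, x, eps=1e-4):
    """Central-difference estimate of h''(x)."""
    return (h(x + eps) - 2 * h(x) + h(x - eps)) / eps**2

x = 0.7
lhs = second_diff(lambda t: f(t) * g(t), x)
rhs = 4 * f(x) * g(x) + 2 * (2 * f(x)) * (3 * math.cos(3 * x)) + f(x) * (-9 * g(x))

print(abs(lhs - rhs) < 1e-4)   # True: both sides agree numerically
```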

 

Adding to the above the subtle mathematical relationship [imath]\vec{\Phi}_1^\dagger \cdot\vec{\Phi}_2=\left(\vec{\Phi}_1 \cdot\vec{\Phi}_2^\dagger\right)^\dagger[/imath], we can assert that

[math]\frac{\partial^2}{\partial x^2}\left\{\vec{\Psi}^\dagger(\vec{x},t)\cdot\vec{\Psi}(\vec{x},t)\right\}=2\left\{\left( \vec{\Psi}^\dagger(\vec{x},t)\cdot\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right) + \left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)\right\}.[/math]

 

Well, I see you are stating there essentially that:

 

[math]

\vec{\Psi}(\vec{x},t)

\cdot

\left( \frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)\right)

+

\vec{\Psi}^\dagger(\vec{x},t)

\cdot

\left(\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)

=

2\left \{ \vec{\Psi}^\dagger(\vec{x},t)

\cdot

\left(\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right) \right \}

[/math]

 

But I do not understand why that is so, while I do understand that [imath]\vec{\Phi}_1^\dagger \cdot\vec{\Phi}_2=\left(\vec{\Phi}_1 \cdot\vec{\Phi}_2^\dagger\right)^\dagger[/imath].

 

I think I'll have a pause here :)

 

-Anssi

Posted
I have almost zero idea of what "perturbation theory" is, or what the "radius of convergence" of "the perturbation series" is, or what a "coupling constant" is and what it means that it's taken as negative, or what the "Coulomb force" is, etc.... They are just using too many concepts that I'm not familiar with at the present time, so I could not make much sense of it :)

 

I'm interested to hear what it means in more detail if you want to comment on it, albeit, if it's not very important right now, I could also just concentrate on getting through the Dirac deduction...

I will just make a few quick comments. A “perturbation” is a disturbance. Almost all relationships in the universe are essentially “many body” problems. Many body problems, even when you have the correct interactions, are particularly hard to solve. They are impossible to solve in general, although some very specific examples (Lagrange points, for example) have been solved for some three body problems (two body problems in three dimensional space can be reduced to a one body problem through conservation of momentum). But, in general, problems involving more than two bodies cannot be solved.

 

What one can do is solve some embedded two body problem under the assumption that no other bodies influence the solution. If another body has only a small influence on that solution, then that influence can be seen as “a perturbation”: i.e., only a small influence which makes only a small change in the original solution. For example, the orbit of the earth can first be calculated as if no other planets exist. Then we calculate the path of the moon as if the sun and the other planets didn't exist. The next step is to calculate the change in the earth's orbit around the sun because of the gravitational pull of the moon as it proceeds through its orbit. We also have to add in the influence of the sun on the moon's orbit. Each time we go through this procedure, the base solutions change a little and we have to go back through the system calculating the consequences of those changes. This series of calculations is called a “perturbation series” and you should see that it essentially has an infinite number of terms.

 

That brings in the issue of “convergence”. Do the changes caused by those perturbations cause greater or smaller perturbations on the next recalculation? If you can prove that every time you recalculate, the new perturbation corrections are smaller than the last set, you know that you will get closer and closer to the correct solution: i.e., the perturbation series converges. The “radius of convergence” has to do with how big the perturbations are as a function of the required correction terms. Clearly, the stronger the interaction, the bigger the correction terms will be. That factor is a direct consequence of the strength of the interaction, which is commonly referred to as the “coupling constant”. If the coupling constant is small, the perturbation series converges more quickly than if the coupling constant is large.

 

I should point out that, even with gravitational forces, given enough time the perturbations will almost always generate unstable consequences. Wait long enough and the feedback between all the bodies in the solar system will probably eventually cause a major change in things somewhere, so a general guarantee of stability isn't usually in the cards: i.e., almost all perturbation attacks will eventually generate extremely large terms. So the real question of convergence is, “how close to the correct solution do we need to be” to obtain a reasonably long term solution via the perturbation attack? Thus it is that “Perturbation Theory” is, all by itself, a fundamental area of modern physics.

 

Essentially, if the radius of convergence is zero, your opening approximation must be correct; otherwise the series will diverge. When the coupling constant is small (gravity), it is quite easy to get close to relatively long term stability, and convergence (over the short term) is quite quick. In the nuclear case, the coupling constant is large and one obtains infinite terms on the first step (the radius of convergence is zero). In a nutshell, one can argue that, if you had the correct solution, all the infinite corrections would cancel out anyway, so one just proceeds by throwing out the infinite terms (some subtle arguments have to be made as to how this disposal is to be handled). Actually the attack seems to work quite well, particularly when it comes to QED. In the nuclear case (my thesis calculations) I suspect they are still getting some questionable results, but maybe not; I have not been involved with the stuff for over forty years.
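To make the convergence issue concrete, here is a toy model of my own (purely illustrative; it has nothing to do with the nuclear calculation itself): solve x = 1 + g·x² by repeated re-substitution, with g playing the role of the coupling constant. The exact answer exists only for g ≤ 1/4, so the series in powers of g has a finite radius of convergence, and the iteration only settles down when the coupling is weak enough.

```python
# Toy "perturbation series": solve x = 1 + g*x**2 by repeated
# re-substitution, starting from the zeroth-order guess x = 1.
# The exact root x = (1 - (1 - 4*g)**0.5) / (2*g) exists only for
# g <= 1/4, so the series in g has a finite radius of convergence.

def perturbative_solution(g, steps=60):
    """Iterate x -> 1 + g*x**2; return None if the corrections blow up."""
    x = 1.0
    for _ in range(steps):
        x = 1.0 + g * x * x
        if abs(x) > 1e12:      # each "correction" bigger than the last
            return None
    return x

# Weak coupling: the corrections shrink each pass and we converge.
print(perturbative_solution(0.1))   # ~1.1270..., matching the exact root

# Strong coupling: the perturbation attack fails outright.
print(perturbative_solution(1.0))   # None
```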

 

But, as you say, we need to get to the good stuff :candle: Dirac's equation. :magic:

...we are exactly at your expression from post #29.... Onwards with that post;

[math]\cdots[/math]

Well the alpha is missing the subscript "2", but otherwise, yup.

Since there is only one alpha in the equation from here on, I think we can drop the subscript: i.e., we don't need the subscript in order to know what we are talking about.
Hmmm, I have a question here. In the OP, you note that [imath]\vec{\alpha}_1\cdot\vec{\alpha}_1=2[/imath], since the alpha consists of 4 components, each yielding 1/2.

 

But here, looking at your result, it appears that you've used [imath]\alpha^2 = \frac{1}{2}[/imath], so is the issue simply that [imath]\vec{\alpha_i} \cdot \vec{\alpha_i}[/imath] is very much a different operation than [imath]\alpha_i^2[/imath] :I

As you have already said somewhere, you are not sufficiently competent to do these things in your head yet, but that is what you are actually trying to do. The urge actually comes on quite quickly. :lol: The problem here is that we are talking about two very different constructs. In the first case, [imath]\vec{\alpha}\cdot\vec{\alpha}[/imath] (note that I have dropped the subscript as the result does not depend upon that subscript), becomes, in detail, as follows

[math]\vec{\alpha}\cdot\vec{\alpha}=\alpha_x\alpha_x+\alpha_y\alpha_y+\alpha_z\alpha_z+\alpha_\tau\alpha_\tau=\frac{1}{2}+\frac{1}{2}+\frac{1}{2}+\frac{1}{2}=2[/math]

 

(the cross terms vanish because the dot product creates no cross terms). In the second expression [imath]\left(\vec{\alpha}\cdot\vec{\nabla}\right)^2[/imath] you need to work out the dot product first and then square it. In detail,

[math]\left(\vec{\alpha}\cdot\vec{\nabla}\right)^2=\left(\alpha_x\nabla_x+\alpha_y\nabla_y+\alpha_z\nabla_z+\alpha_\tau\nabla_\tau\right)^2[/math]

 

When you square that polynomial, all the cross products vanish because the various alpha operators anti-commute with one another but commute with the various components of the differential operator, [imath]\vec{\nabla}[/imath]. Only the direct terms survive; thus the detailed result will be,

[math]\left(\alpha_x\nabla_x\right)^2 +\left(\alpha_y\nabla_y\right)^2 +\left(\alpha_z\nabla_z\right)^2 +\left(\alpha_\tau\nabla_\tau\right)^2= \frac{1}{2}\left(\nabla_x^2 +\nabla_y^2 +\nabla_z^2 +\nabla_\tau^2\right) =\frac{1}{2}\nabla^2[/math]

 

as the square of the alpha operator appears once in each term and may be factored. Again, I have left out the subscript [imath](2xyz\tau)[/imath] because the algebra has nothing to do with the subscript.
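If you want to see that algebra with concrete matrices, here is a small numerical sketch. The particular representation (standard Dirac matrices rescaled by [imath]1/\sqrt{2}[/imath]) is just one illustrative choice; nothing in the deduction depends on which representation you pick, only on the anti-commutation rules themselves.

```python
import numpy as np

# One concrete 4x4 representation of four mutually anti-commuting
# operators whose squares are 1/2: the standard Dirac alpha matrices
# (plus beta, standing in for the tau component), divided by sqrt(2).
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

alphas = [np.kron(sx, s) for s in (sx, sy, sz)]   # alpha_x, alpha_y, alpha_z
alphas.append(np.kron(sz, I2))                    # beta, playing alpha_tau
alphas = [a / np.sqrt(2) for a in alphas]         # now each a_i**2 = 1/2

# vec(alpha) . vec(alpha) = sum of a_i a_i = 4 * (1/2) = 2
dot = sum(a @ a for a in alphas)
print(np.allclose(dot, 2 * np.eye(4)))            # True

# Cross terms vanish: a_i a_j + a_j a_i = 0 for i != j, which is
# exactly why (vec(alpha) . vec(nabla))**2 = (1/2) * nabla**2.
for i in range(4):
    for j in range(4):
        if i != j:
            assert np.allclose(alphas[i] @ alphas[j] + alphas[j] @ alphas[i], 0)
print("all cross terms vanish")
```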

Because I started doing this with the idea that each component of that alpha essentially yields one half when squared, so I would have said:

 

[imath]

(-\vec{\alpha_2}\cdot \vec{\nabla}_{2xyz\tau})

(-\vec{\alpha_2}\cdot \vec{\nabla}_{2xyz\tau})

=

2\vec{\nabla}_{2xyz\tau}^2

[/imath]

As I said, you were trying to do the algebra in your head; the dot product is a specific sum of terms.
(Well you've left the square brackets in, I guess they don't really mean anything at that point though?)
You are correct, they really don't have any algebraic significance at that point. The only reason I left them in is that I wanted to keep the “interaction term” as a conceptually different thing; set it off for attention purposes so to speak.
I really had to scratch my head, trying to remember how these anti-commuting things work exactly in this situation, but I supposed in the end that that is exactly the circumstance that we had before, and it amounts to 0 as per [imath](a_ia_j+a_ja_i)=0[/imath] because those operators anti-commute. Right?
Correct! Your only problem seems to be the failure to realize that you have to be dealing with the individual alpha and beta operators, not with algebraic functions of them (dot products of vectors and such). When you get down to a simple sum of these operators, squaring it will always drop out all the cross terms.
But I do not understand why that is so, while I do understand that [imath]\vec{\Phi}_1^\dagger \cdot\vec{\Phi}_2=\left(\vec{\Phi}_1 \cdot\vec{\Phi}_2^\dagger\right)^\dagger[/imath].
I would have said that I was asserting that

[math] \frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)

= \left(\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)^\dagger.

[/math]

 

Essentially that the differential of the conjugate term is identical to the conjugate of the differential of the original term. The “dagger” operator does nothing except change the sign of the orthogonal component of [imath]\vec{\Psi}[/imath]. It follows that the differential operator yields exactly the same result on [imath]\vec{\Psi}^\dagger[/imath] as on [imath]\vec{\Psi}[/imath] except for the fact that the orthogonal component has the opposite sign. That fact is reversed by application of the dagger operator. It follows that the two terms you are worried about are identical.

 

The big step is setting

[math]\vec{\Psi}^\dagger(\vec{x},t) \cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)=\left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)[/math]

 

This is essentially a statement that the function [imath]\vec{\Psi}[/imath] is an eigenfunction of momentum: i.e., quantum mechanical vacuum polarization (short term violations of conservation of energy) can not take place. This presumption is very much a part of classical electrodynamics; it is the fundamental difference between classical electrodynamics and QED.

 

Have fun -- Dick

Posted
I will just make a few quick comments. A “perturbation” is a disturbance. Almost all relationships in the universe are essentially “many body” problems....

 

Thank you for that explanation, now I have some idea about what the issue was about :)

 

As you have already said somewhere, you are not sufficiently competent to do these things in your head yet, but that is what you are actually trying to do.

 

Yes, I was thinking about it all wrong! :I

Easy mistake to make since I don't really have the familiarity with that stuff; I still feel like I will make that same mistake again as soon as this same circumstance surfaces again... It is kind of easy to understand now after your explanation, I just hope I'll remember it now...

 

I would have said that I was asserting that

[math]\left( \frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)\right)

= \left\{\left(\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)\right\}^\dagger.

[/math]

 

Essentially that the differential of the conjugate term is identical to the conjugate of the differential of the original term. The “dagger” operator does nothing except change the sign of the orthogonal component of [imath]\vec{\Psi}[/imath]. It follows that the differential operator yields exactly the same result on [imath]\vec{\Psi}^\dagger[/imath] as on [imath]\vec{\Psi}[/imath] except for the fact that the orthogonal component has the opposite sign. That fact is reversed by application of the dagger operator. It follows that the two terms you are worried about are identical.

 

Um... I am missing something here... While I understand perfectly that the "differential of the conjugate term is identical to the conjugate of the differential of the original term", I do not understand how it applies to the situation I am looking at.

 

I mean, let's go back to:

 

[math]

\left\{\left( \frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\vec{\Psi}(\vec{x},t) + 2 \left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)+ \vec{\Psi}^\dagger(\vec{x},t)\cdot\left(\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)\right\}

[/math]

 

I was able to follow where that expression came from. Little re-ordering, and we get:

 

[math]

2 \left \{ \left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right) \right \}

+

\left [

\left( \vec{\Psi}(\vec{x},t) \cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)\right) + \left( \vec{\Psi}^\dagger(\vec{x},t)\cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right) \right ]

[/math]

 

So, I understand the curly brackets perfectly, but the terms in the square brackets are giving me trouble.

 

I'm trying to find my way to your expression

 

[math]

2\left\{\left( \vec{\Psi}^\dagger(\vec{x},t)\cdot\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right) + \left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)\right\}

[/math]

 

So basically all I'm missing is why:

 

[math]

\left [

\left( \vec{\Psi}(\vec{x},t) \cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)\right) + \left( \vec{\Psi}^\dagger(\vec{x},t)\cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right) \right ]

=

2\left( \vec{\Psi}^\dagger(\vec{x},t)\cdot\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)

[/math]

 

I guess I'm failing to see the connection of the above to [imath]\vec{\Phi}_1^\dagger \cdot\vec{\Phi}_2=\left(\vec{\Phi}_1 \cdot\vec{\Phi}_2^\dagger\right)^\dagger[/imath] somehow :(

 

The big step is setting

[math]\vec{\Psi}^\dagger(\vec{x},t) \cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)=\left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)[/math]

 

This is essentially a statement that the function [imath]\vec{\Psi}[/imath] is an eigenfunction of momentum: i.e., quantum mechanical vacuum polarization (short term violations of conservation of energy) can not take place. This presumption is very much a part of classical electrodynamics; it is the fundamental difference between classical electrodynamics and QED.

 

Actually I'm failing to understand that step too... In the OP you say "suppose those two terms were the same", I'm not sure what that's about, is there a justification to supposing they are the same, or what does that move mean? I mean, is it analogous to some assumption done in standard physics?

 

(Incidentally, I wasn't finding my way through [imath]\nabla^2\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right)=4\vec{\Psi}^\dagger\cdot\nabla^2\vec{\Psi}[/imath] right now :(

 

-Anssi

Posted
Um... I am missing something here... While I understand perfectly that the "differential of the conjugate term is identical to the conjugate of the differential of the original term", I do not understand how it applies to the situation I am looking at.
First of all, I wrote that by cut and paste, adding the extra parenthesis and dagger afterwards. I never looked at the actual result so I missed the point that there were extra unnecessary parentheses there. I have removed them. Sorry about that.
So, I understand the curly brackets perfectly, but the terms in the square brackets are giving me trouble.

 

[math]

\left [

\left( \vec{\Psi}(\vec{x},t) \cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)\right) + \left( \vec{\Psi}^\dagger(\vec{x},t)\cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right) \right ]

[/math]

Ah, I think I see your problem. The issue is that each of those terms is the conjugate of the other. What is being calculated is the expectation value of the differential operator, which is a scalar in that abstract vector space used to represent [imath]\vec{\Psi}[/imath]: i.e., conjugation has no impact upon the expectation value of such a scalar. In vector operations, [imath]\vec{A}\cdot \vec{B} \equiv \vec{B} \cdot \vec{A}[/imath]; thus the statement is essentially that the expectation value of the differential does not change under conjugation.

[math] \left(\frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot \vec{\Psi}(\vec{x},t)

= \vec{\Psi}(\vec{x},t) \cdot \left(\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)^\dagger = \left(\vec{\Psi}^\dagger(\vec{x},t) \cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)^\dagger[/math]

Actually I'm failing to understand that step too...
That step comes directly from the fact that, if [imath]\vec{\Psi}[/imath] is an eigenfunction of the [imath]\frac{\partial}{\partial x}[/imath] operator, [imath]\frac{\partial}{\partial x}\vec{\Psi}=k\vec{\Psi}[/imath] (that equation is the definition of an eigenvalue). If you operate on both sides of that equation with the [imath]\frac{\partial}{\partial x}[/imath] operator you get [imath]\frac{\partial^2}{\partial x^2}\vec{\Psi}=k\frac{\partial}{\partial x}\vec{\Psi}=k^2\vec{\Psi}[/imath]: i.e., the square of the expectation value of the “differential operator” is equal to the expectation value of the “differential operator squared”.
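A concrete example of that statement (using the familiar plane wave, which is simply an illustrative choice of eigenfunction of the differential operator):

```python
import cmath

# The plane wave psi(x) = exp(i*k*x) satisfies (d/dx) psi = (i*k) psi,
# so applying the operator twice gives (d^2/dx^2) psi = (i*k)**2 psi:
# the square of the operator acts as the square of the eigenvalue.
k = 1.5                                   # an arbitrary eigenvalue parameter
psi = lambda x: cmath.exp(1j * k * x)
x, eps = 0.4, 1e-4

first = (psi(x + eps) - psi(x - eps)) / (2 * eps)              # ~ psi'(x)
second = (psi(x + eps) - 2 * psi(x) + psi(x - eps)) / eps**2   # ~ psi''(x)

print(abs(first - 1j * k * psi(x)) < 1e-6)          # True: eigenvalue i*k
print(abs(second - (1j * k)**2 * psi(x)) < 1e-4)    # True: eigenvalue squared
```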

 

Multiply by [imath]i\hbar c[/imath] and the differential operator becomes the “momentum” operator. In classical physics, the square of the “momentum” is equal to the “momentum squared”. In quantum mechanics, this identity fails (the relationship [imath]\frac{\partial}{\partial x}\vec{\Psi}=k\vec{\Psi}[/imath] is true only if [imath]\vec{\Psi}[/imath] is an eigenfunction) and it is only the macroscopic average which turns out to be the same. It is exactly this failure which, in quantum mechanics, leads to the energy uncertainty usually expressed as [imath]\Delta E\Delta t\geq \frac{\hbar}{2}[/imath]. So essentially, by setting these two terms to be equivalent, I am presuming the uncertainty in energy is an unimportant issue (the classical energy arising from momentum is related to the square of the momentum). (In essence, it is the fact that quantum mechanics reduces to classical mechanics which makes those terms equal; an issue which I am taking here to be common knowledge, already proved by others.)

(Incidentally, I wasn't finding my way through [imath]\nabla^2\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right)=4\vec{\Psi}^\dagger\cdot\nabla^2\vec{\Psi}[/imath] right now :(

Once you get to the fact that each of the four terms yields the same result (which essentially presumes uncertainty in energy is an unimportant factor), you have

[math] \frac{\partial^2}{\partial x^2}\left(\vec{\Psi}^\dagger(\vec{x},t) \cdot \vec{\Psi}(\vec{x},t)\right)=4 \vec{\Psi}^\dagger(\vec{x},t)\cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)[/math]

 

That expression can be applied to all the components of [imath]\vec{\nabla}[/imath], in which case one has [imath]\nabla^2\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right)=4\vec{\Psi}^\dagger\cdot\nabla^2\vec{\Psi}[/imath]. Exactly the same relationship can be applied to the time differential, which yields [imath]\frac{\partial^2}{\partial t^2}\left(\vec{\Psi}^\dagger(\vec{x},t) \cdot \vec{\Psi}(\vec{x},t)\right)=4 \vec{\Psi}^\dagger(\vec{x},t)\cdot \frac{\partial^2}{\partial t^2}\vec{\Psi}(\vec{x},t)[/imath].

In the OP you say "suppose those two terms were the same", I'm not sure what that's about, is there a justification to supposing they are the same, or what does that move mean? I mean, is it analogous to some assumption done in standard physics?
It amounts to presuming quantum effects can be ignored (that is exactly what is required to obtain classical mechanics). It is analogous to presuming quantum uncertainty is an unimportant issue. Since classical electrodynamics (Maxwell's equations) were developed prior to the invention of quantum mechanics, there is no presumption of “quantum uncertainty” in Maxwell's equations.

 

I hope you find that a little clearer. By the way, when I comment about approaching senility, I am not joking. Today I was going to make a comment concerning the issue that one should be careful about what others say and the word to use was on the tip of my tongue but for the life of me I couldn't think of it. It just came to me: “skeptic” was the word I was trying to think of. Those kinds of things happen more and more often as I get older. It's difficult to think about what is happening and what is coming down the road.

 

Have fun -- Dick

Posted
First of all, I wrote that by cut and paste, adding the extra parenthesis and dagger afterwards. I never looked at the actual result so I missed the point that there were extra unnecessary parentheses there. I have removed them. Sorry about that.

 

Ah okay. No worries though, that did not manage to confuse me :)

 

Ah, I think I see your problem. The issue is that each of those terms is the conjugate of the other. What is being calculated is the expectation value of the differential operator, which is a scalar in that abstract vector space used to represent [imath]\vec{\Psi}[/imath]: i.e., conjugation has no impact upon the expectation value of such a scalar. In vector operations, [imath]\vec{A}\cdot \vec{B} \equiv \vec{B} \cdot \vec{A}[/imath]; thus the statement is essentially that the expectation value of the differential does not change under conjugation.

[math] \left(\frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot \vec{\Psi}(\vec{x},t)

= \vec{\Psi}(\vec{x},t) \cdot \left(\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)^\dagger = \left(\vec{\Psi}^\dagger(\vec{x},t) \cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)^\dagger[/math]
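Purely as a sanity check (not part of the original exchange), that conjugation identity can be verified numerically for an arbitrary complex-valued test function, with a central difference standing in for the second derivative. The function f below is a hypothetical choice:

```python
import cmath

def f(x):
    # arbitrary smooth complex-valued test function (hypothetical choice)
    return cmath.exp(1j * x) * (x**2 + 1)

def d2(g, x, h=1e-4):
    # central-difference approximation to the second derivative
    return (g(x + h) - 2 * g(x) + g(x - h)) / h**2

x0 = 0.7
lhs = d2(lambda x: f(x).conjugate(), x0) * f(x0)    # (d²/dx² f†) · f
rhs = (f(x0).conjugate() * d2(f, x0)).conjugate()   # (f† · d²/dx² f)†
assert abs(lhs - rhs) < 1e-6
```

The two sides agree because conjugation commutes with (real) differentiation, which is exactly the point being made about the expectation value.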

 

I was going back and forth with the above explanation and the previous post and finally realized what I was missing... I just could not for the life of me understand where you are pulling out that extra dagger, but now it dawned on me that you are referring to the conjugate we apply when we arrive at the final probability! [imath]P=\vec{\Psi}^\dagger \cdot \vec{\Psi}[/imath]

 

Yes?

 

And in other words, from the "At this point, I want to bring up an interesting mathematical relationship..." comment onwards, the following relationships are referring to [imath]\vec{\Psi}_2[/imath]. It is never explicitly stated (although clear now that I understood my error), and at first I was just thinking of that term [imath]\left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)[/imath] from the previous step (as it sort of looked similar to [imath]\frac{\partial^2}{\partial x^2}\left\{\vec{\Psi}^\dagger(\vec{x},t)\cdot\vec{\Psi}(\vec{x},t)\right\}[/imath] at a quick glance).

 

So yeah, it all makes perfect sense to me suddenly. Double doh!

 

That step comes directly from the fact that, if [imath]\vec{\Psi}[/imath] is an eigenfunction of the [imath]\frac{\partial}{\partial x}[/imath] operator,

 

A bit shaky with these "eigen"-concepts, as we only passed through them briefly before.

I'm looking at:

Eigenvalue, eigenvector and eigenspace - Wikipedia, the free encyclopedia

 

So, I remember still that it had to do with matrix operations, and like they say there, if a matrix acts on a certain vector by changing only its magnitude, that vector is said to be an "eigenvector" of the matrix.

 

I'm looking at the "Eigenfunctions" section of that page, where they talk about the action of differential operators on function spaces. "The eigenvectors are commonly called eigenfunctions" they say.

 

The most simple case is the eigenvalue equation for differentiation of a real valued function by a single real variable. In this case, the eigenvalue equation becomes the linear differential equation

[math]\frac{d}{dx}f(x)=\lambda f(x)[/math].

 

Here [imath]\lambda[/imath] is the eigenvalue associated with the function, f(x). This eigenvalue equation has a solution for all values of [imath]\lambda[/imath]. If [imath]\lambda[/imath] is zero, the solution is

[math]f(x)=A[/math],

 

where A is any constant; if [imath]\lambda[/imath] is non-zero, the solution is the exponential function

[math]f(x)=Ae^{\lambda x}[/math].
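Both of the quoted cases are easy to check numerically; here is a minimal sketch (the sample values of λ and A are arbitrary):

```python
import math

lam, A = 0.8, 2.5                      # arbitrary sample eigenvalue and constant
f = lambda x: A * math.exp(lam * x)    # claimed solution for non-zero eigenvalue

h, x0 = 1e-6, 0.3
deriv = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central-difference d/dx
assert abs(deriv - lam * f(x0)) < 1e-6      # df/dx = λ f(x)

g = lambda x: A                        # claimed solution for λ = 0
deriv0 = (g(x0 + h) - g(x0 - h)) / (2 * h)
assert abs(deriv0) < 1e-12             # dg/dx = 0
```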

 

There is something oddly familar with that... :I

 

So is it saying simply that if the eigenvalue is 0, the differentiation yields 0, i.e. the function is just a constant?

 

And if the eigenvalue is non-zero, the differentiation yields the function back scaled by that constant (the solution being the exponential function, like they say)

 

I guess then when you say "[imath]\vec{\Psi}[/imath] is an eigenfunction of the [imath]\frac{\partial}{\partial x}[/imath] operator", that means that when [imath]\frac{\partial}{\partial x}[/imath] operates on an appropriate [imath]\vec{\Psi}[/imath], it scales the magnitude of [imath]\vec{\Psi}[/imath] (and hence the resulting square root of the probability), but it does not change the direction of [imath]\vec{\Psi}[/imath].

 

I'm trying really hard to get a proper idea of this; it's difficult with such unfamiliarity with these concepts though... :I

 

That step comes directly from the fact that, if [imath]\vec{\Psi}[/imath] is an eigenfunction of the [imath]\frac{\partial}{\partial x}[/imath] operator,

[imath]\frac{\partial}{\partial x}\vec{\Psi}=k\vec{\Psi}[/imath] (that equation is the definition of an eigenvalue). If you operate on both sides of that equation with the [imath]\frac{\partial}{\partial x}[/imath] operator you get [imath]\frac{\partial^2}{\partial x^2}\vec{\Psi}=k\frac{\partial}{\partial x}\vec{\Psi}=k^2\vec{\Psi}[/imath]: i.e., the square of the expectation value of the “differential operator” is equal to the expectation value of the “differential operator squared”.

 

Ahha, and then "k" is simply the expectation value of the differential operator (when operating on appropriate eigenfunction; that [imath]\vec{\Psi}[/imath]), and it's exactly the same thing as what they call "the eigenvalue" at that Wikipedia quote?

 

So, hmmm, when you said, in your earlier post;

 

The big step is setting

[math]\vec{\Psi}^\dagger(\vec{x},t) \cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)=\left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)[/math]

 

This is essentially a statement that the function [imath]\vec{\Psi}[/imath] is an eigenfunction of momentum

 

What you are saying there is that since the momentum operator doesn't change the direction of the resulting "vector", it can either be applied twice to only one of the [imath]\vec{\Psi}[/imath]'s, or applied once to both. The end result must be the same... ...I think. Hmmmm :I

 

i.e., quantum mechanical vacuum polarization (short term violations of conservation of energy) can not take place. This presumption is very much a part of classical electrodynamics; it is the fundamental difference between classical electrodynamics and QED.

Multiply by [imath]i\hbar c[/imath] and the differential operator becomes the “momentum” operator. In classical physics, the square of the “momentum” is equal to the “momentum squared”. In quantum mechanics, this identity fails (the relationship [imath]\frac{\partial}{\partial x}\vec{\Psi}=k\vec{\Psi}[/imath] is true only if [imath]\vec{\Psi}[/imath] is an eigenfunction) and it is only the macroscopic average which turns out to be the same. It is exactly this failure which, in quantum mechanics, leads to the energy uncertainty usually expressed as [imath]\Delta E\Delta t\geq \frac{\hbar}{2}[/imath]. So essentially, by setting these two terms to be equivalent, I am presuming the uncertainty in energy is an unimportant issue (the classical energy arising from momentum is related to the square of the momentum). (In essence, it is the fact that quantum mechanics reduces to classical mechanics which makes those terms equal; an issue which I am taking here to be common knowledge already proved by others.)

 

Right, I think I understand, to an extent, what you are saying in the above quotes.

 

Once you get to the fact that each of the four terms yields the same result (which essentially presumes uncertainty in energy is an unimportant factor) you have

[math] \frac{\partial^2}{\partial x^2}\left(\vec{\Psi}^\dagger(\vec{x},t) \cdot \vec{\Psi}(\vec{x},t)\right)=4 \vec{\Psi}^\dagger(\vec{x},t)\cdot \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)[/math]

 

Ah, so just to do the baby steps;

 

[math]

\left( \vec{\Psi}^\dagger(\vec{x},t)\cdot\frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)

+

\left( \vec{\Psi}(\vec{x},t)\cdot\frac{\partial^2}{\partial x^2}\vec{\Psi}^\dagger(\vec{x},t)\right)

+

\left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)

+

\left(\frac{\partial}{\partial x}\vec{\Psi}^\dagger(\vec{x},t)\right)\cdot\left(\frac{\partial}{\partial x}\vec{\Psi}(\vec{x},t)\right)

[/math]

 

[math]

=

\vec{\Psi}^\dagger(\vec{x},t)\cdot \left( \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)

+

\vec{\Psi}^\dagger(\vec{x},t)\cdot \left( \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)

+

\vec{\Psi}^\dagger(\vec{x},t)\cdot \left( \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)

+

\vec{\Psi}^\dagger(\vec{x},t)\cdot \left( \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)

[/math]

 

[math]

= 4\vec{\Psi}^\dagger(\vec{x},t)\cdot \left( \frac{\partial^2}{\partial x^2}\vec{\Psi}(\vec{x},t)\right)

[/math]

 

Yup.
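That collapse of the four terms into one can also be checked numerically for a concrete eigenfunction of [imath]\frac{\partial}{\partial x}[/imath]. A real exponential keeps the sketch simple, since then Ψ† = Ψ carries the same eigenvalue k; the sample value of k is arbitrary:

```python
import math

k = 0.6
psi = lambda x: math.exp(k * x)   # eigenfunction: dψ/dx = k ψ, and ψ† = ψ here

h, x0 = 1e-4, 0.5
d2 = lambda g, x: (g(x + h) - 2 * g(x) + g(x - h)) / h**2

lhs = d2(lambda x: psi(x) * psi(x), x0)   # d²/dx² (Ψ†·Ψ)
rhs = 4 * psi(x0) * d2(psi, x0)           # 4 Ψ†·(d²/dx² Ψ)
assert abs(lhs - rhs) / abs(rhs) < 1e-5
```

Both sides come out to 4k²e^{2kx}, which is exactly the factor-of-four relation in the quoted equation.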

 

That expression can be applied to all the components of [imath]\vec{\nabla}[/imath], in which case one has [imath]\nabla^2\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right)=4\vec{\Psi}^\dagger\cdot\nabla^2\vec{\Psi}[/imath]. Exactly the same relationship can be applied to the time differential which yields [imath]\frac{\partial^2}{\partial t^2}\vec{\Psi}^\dagger(\vec{x},t) \cdot \vec{\Psi}(\vec{x},t)=4 \vec{\Psi}^\dagger(\vec{x},t)\cdot \frac{\partial^2}{\partial t^2}\vec{\Psi}(\vec{x},t)[/imath].

It amounts to presuming quantum effects can be ignored (that is exactly what is required to obtain classical mechanics). It is analogous to presuming quantum uncertainty is an unimportant issue. Since classical electrodynamics (Maxwell's equations) were developed prior to the invention of quantum mechanics, there is no presumption of “quantum uncertainty” in Maxwell's equations.

 

Yup.

 

I hope you find that a little clearer.

 

Well had to do some head scratching (in a hurry), but it did the trick so we're on our way once again :)

 

By the way, when I comment about approaching senility, I am not joking. Today I was going to make a comment concerning the issue that one should be careful about what others say and the word to use was on the tip of my tongue but for the life of me I couldn't think of it. It just came to me: “skeptic” was the word I was trying to think of. Those kinds of things happen more and more often as I get older. It's difficult to think about what is happening and what is coming down the road.

 

Yeah, that doesn't sound very nice :I I hope you can keep producing explanations to me about all this stuff as I really need them in order to walk through the thing...

 

I'll continue from here as soon as possible.

 

-Anssi

Posted

Just a short step this time, as I had almost no free time over the weekend :(

 

Anyway, we had arrived at the expression:

 

[math]

\vec{\gamma}\vec{\Psi}_2^\dagger\cdot\nabla_{2xyz\tau}^2\vec{\Psi}_2 +2 \left(\psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right)^2\vec{\gamma}\vec{\Psi}_2^\dagger\cdot\vec{\Psi}_2=\vec{\gamma}\vec{\Psi}_2^\dagger\cdot\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\vec{\Psi}_2

[/math]

 

And made an approximation that quantum effects can be ignored, yielding;

 

[math]

\nabla^2\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right)=4\vec{\Psi}^\dagger\cdot\nabla^2\vec{\Psi}

[/math]

 

&

 

[math]

\frac{\partial^2}{\partial t^2}\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right)=4\vec{\Psi}^\dagger\cdot\frac{\partial^2}{\partial t^2}\vec{\Psi}

[/math]

 

Meanwhile, using these relationships and the fact that [imath]\vec{\gamma}[/imath] commutes with both differential operators, we can write the equation the expectation values of [imath]\vec{\gamma}[/imath] must obey as follows:

[math]\frac{1}{4}\nabla^2\left(\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \vec{\Psi}_2\right) +\left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right)=\frac{1}{4c^2} \frac{\partial^2}{\partial t^2}\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right)[/math].

 

Ah I think I get it, from those approximations we get;

 

[math]

\frac{1}{4} \nabla^2\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right) = \vec{\Psi}^\dagger\cdot\nabla^2\vec{\Psi}

[/math]

 

And substituting that into the equation we are working on (and doing the same thing for the time derivative):

 

[math]

\vec{\gamma} \frac{1}{4} \nabla^2\left(\vec{\Psi}_2^\dagger \cdot \vec{\Psi}_2\right)

+2 \left(\psi^\dagger(\vec{x}_2,t)\psi(\vec{x}_2,t)\right)^2\vec{\gamma}\vec{\Psi}_2^\dagger\cdot\vec{\Psi}_2

=\vec{\gamma}\frac{1}{4c^2} \frac{\partial^2}{\partial t^2}\left(\vec{\Psi}_2^\dagger\cdot\vec{\Psi}_2\right)

[/math]

 

A little reordering and we are exactly at your result:

 

[math]

\frac{1}{4} \nabla^2\left(\vec{\Psi}_2^\dagger \vec{\gamma} \cdot \vec{\Psi}_2\right)

+ \left( 2 \psi^\dagger(\vec{x}_2,t) \psi(\vec{x}_2,t) \right)^2 \left(\vec{\Psi}_2^\dagger \vec{\gamma} \cdot \vec{\Psi}_2 \right)

=\frac{1}{4c^2} \frac{\partial^2}{\partial t^2}\left(\vec{\Psi}_2^\dagger \vec{\gamma} \cdot \vec{\Psi}_2\right)

[/math]

 

Or, if [imath]\nabla^2[/imath] is interpreted to be the standard three dimensional version (no partial with respect to tau) we should put in the tau term explicitly and obtain

[math]\frac{1}{4}\nabla^2\left(\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \vec{\Psi}_2\right) +\frac{1}{4}\frac{m_2^2c^2}{\hbar^2}\left(\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \vec{\Psi}_2\right)[/math]

 

[math]+\left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right) =\frac{1}{4c^2} \frac{\partial^2}{\partial t^2}\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right)[/math].

 

Okay, so, given the definition:

 

[math]

m = -i\frac{\hbar}{c}\frac{\partial}{\partial \tau}

[/math]

 

then

 

[math]

\frac{\partial^2}{\partial \tau^2}

=

\frac{m_2^2c^2}{-i^2 \hbar^2}

=

\frac{m_2^2c^2}{\hbar^2}

[/math]

 

So yes that would get me to exactly what you wrote down.

 

I'll continue from here later...

 

-Anssi

Posted

Okay, finally had some time to look at this and immediately ran into problems...

 

We have already concluded that the electromagnetic potentials [imath]\Phi(\vec{x},t)[/imath] and [imath]\vec{A}(\vec{x},t)[/imath] have to be proportional to the respective expectation values of [imath]\vec{\gamma}[/imath] where the proportional constant is [imath]-i\sqrt{2}\frac{\hbar c}{e}[/imath] for [imath]\Phi[/imath] and 1/c times that value for [imath]\vec{A}[/imath].

 

You must be talking about:

 

[math]

\Phi(\vec{x},t)=-i\frac{\hbar c}{e}\left[\gamma_\tau \vec{\Psi}_2^\dagger(\vec{x},t)\cdot \vec{\Psi}_2(\vec{x},t)\right]

[/math]

 

[math]

\vec{A}(\vec{x},t)=i\frac{\hbar c}{e}\left[\vec{\gamma}\vec{\Psi}_2^\dagger(\vec{x},t)\cdot \vec{\Psi}_2(\vec{x},t)\right]

[/math]

 

And after staring at this for a while, I think you may have written the quoted paragraph back when those definitions still had errors in them? Or else I'm just completely missing something :P

 

If it's the former, then the proportional constants you are referring to are actually:

 

For [imath]\Phi[/imath]

 

[math]

-i\frac{\hbar c}{e}

[/math]

 

And for [imath]\vec{A}[/imath]

 

[math]

i\frac{\hbar c}{e}

[/math]

 

Yes?

 

Looking at the next move you make, I guess the point was simply that whatever the constant is, it appears in every term and can simply be factored out?

 

It follows directly that these electromagnetic potentials must obey equations with the following structure

[math]\nabla^2\Phi +\frac{m_2^2c^2}{\hbar^2}\Phi-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\Phi= -4\pi\rho [/math]

 

where [imath]\rho[/imath] is defined to be

[math]\rho=\frac{1}{\pi}\Phi\sum_i\left(2\psi_i^\dagger(\vec{x},t)\psi_i(\vec{x},t)\right)^2[/math]

 

Okay, just putting down the explicit steps I'm tracing, for later reference and also for the benefit of all the lurkers out there. (Sorry about the great volume of math, it's all really just to make it super simple)

 

Starting from where we left off in the last post, multiplying through by 4 and doing a little reordering:

 

[math]

\nabla^2\left(\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \vec{\Psi}_2\right) +\frac{m_2^2c^2}{\hbar^2}\left(\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \vec{\Psi}_2\right)

-\frac{1}{c^2} \frac{\partial^2}{\partial t^2}\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right)

=

-4\left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right)

[/math]

 

Substituting [imath]\Phi[/imath]:

 

[math]

\nabla^2\Phi

+\frac{m_2^2c^2}{\hbar^2}\Phi

-\frac{1}{c^2} \frac{\partial^2}{\partial t^2}\Phi

=

-4 \left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2 \Phi

[/math]

 

Substituting [imath]\rho[/imath]:

 

[math]

\nabla^2\Phi

+\frac{m_2^2c^2}{\hbar^2}\Phi

-\frac{1}{c^2} \frac{\partial^2}{\partial t^2}\Phi

=

-4\pi\rho

[/math]

 

Yup, looks valid to me.
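The factoring in those last substitutions is trivial but easy to mistrust at speed; a short numeric check with arbitrary sample values for Φ, A and the localized term (2ψ†ψ) confirms both of them (note the factor c in J is what the thread uses at this point; it is revised later in the discussion):

```python
import math

Phi, A, s, c = 1.7, 0.4, 0.9, 2.99792458e8   # arbitrary sample values; c in m/s
rho = (1.0 / math.pi) * Phi * s**2           # ρ as defined in the quoted text
J   = (c / math.pi) * A * s**2               # J as defined at this point

# right-hand sides before and after the substitution must agree
assert math.isclose(-4 * s**2 * Phi, -4 * math.pi * rho)
assert math.isclose(-4 * s**2 * A, -(4 * math.pi / c) * J)
```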

 

and

[math]\nabla^2\vec{A}+\frac{m_2^2c^2}{\hbar^2}\vec{A} -\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\vec{A}= -\frac{4\pi}{c}\vec{J}[/math]

 

where [imath]\vec{J}[/imath] is defined to be

[math]\vec{J}=\frac{c}{\pi}\vec{A}\sum_i\left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2[/math].

 

Almost exactly the same steps get that result, with trivial differences, and it looks valid to me.

 

If [imath]m_2 =0 [/imath] (that is, we presume the boson is a photon), the above equations are exactly Maxwell's equations expressed in the microscopic Lorenz Gauge.

 

That wiki page went completely over my head...

 

The only questions here are the relationship between “e” and [imath]\left( 2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2[/imath] and the roll of the electromagnetic potentials in [imath]\rho[/imath] and [imath]\vec{J}[/imath].

 

Just to be absolutely sure, do you mean "...and the role of the electromagnetic potentials..."?

 

Both are fundamentally dependent upon the shape of [imath]\psi(\vec{x},t)[/imath]. We have already made the approximation that [imath]\psi^\dagger(\vec{x},t)\psi(\vec{x},t) \approx a\delta(\vec{x}-\vec{v}t)[/imath] which means that the term is extremely localized (essentially a point), so that both of these factors end up being little more than mechanisms available to set the strength of the interaction: i.e., the specific value of “e”, the electric charge. Dynamically speaking, it is exactly the experimental circumstance defined by Dirac's equation together with Maxwell's equations.

 

So by all that you mean their strength (values) is essentially just adjusted to whatever is necessary for these definitions to hold true with the experimental results?

 

One last subtle difficulty seems to exist with my presentation. Both [imath]\Phi[/imath] and [imath]\vec{A}[/imath] are expectation values of [imath]\vec{\gamma}[/imath] explicitly multiplied by [imath]i=\sqrt{-1}[/imath] and some additional “real” numbers. This seems strange in view of the fact that [imath]\Phi[/imath] and [imath]\vec{A}[/imath] are, by electromagnetic theory, real values.

 

My mathematical competence is too weak to even see the strangeness in that... Can you elaborate? :I

 

Also, I've been concentrating on walking through the algebra, but it is high time to ask more closely what [imath]\Phi[/imath] and [imath]\vec{A}[/imath] are conventionally thought to be. In the OP, I think you refer to both as "electromagnetic potential", but what is their difference?

 

In view of that fact, please note the following calculation of the amplitude of [imath]\vec{\gamma}[/imath].

[math]A_\gamma = \sqrt{\vec{\gamma}\cdot\vec{\gamma}}=\left\{\left(\sum_{i=1}^4\alpha_i\beta\hat{x}_i\right)\cdot\left( \sum_{i=1}^4\alpha_i\beta\hat{x}_i\right)\right\}^\frac{1}{2}=\left\{\sum_{i=1}^4\alpha_i\beta\alpha_i\beta\right\}^\frac{1}{2}[/math]

 

[math]=\left\{-\sum_{i=1}^4\alpha_i\alpha_i\beta\beta\right\}^\frac{1}{2}=\left\{-\sum_{i=1}^4\frac{1}{2}\frac{1}{2}\right\}^\frac{1}{2}=\sqrt{-1}=i[/math]

 

It follows that both charge and current densities in this mental model are, in fact, real.

 

Well, there are a few things there that elude me.

 

I guess by [imath]A_\gamma[/imath] you mean "the amplitude of [imath]\gamma[/imath]", but I do not know what the amplitude of [imath]\gamma[/imath] means.

 

I do not know how you get to [imath]\sqrt{\vec{\gamma}\cdot\vec{\gamma}}[/imath].

 

I do not know how you get [imath]\vec{\gamma} = \sum_{i=1}^4\alpha_i\beta\hat{x}_i[/imath].

 

I'm not sure what [imath]\beta[/imath] is referring to.

 

Not sure what is going on in the 4th step in there either.

 

Or where the negative comes from in the 5th step.

 

I'm guessing the 6th step is supposed to be [math]\left\{-4\frac{1}{2}\frac{1}{2}\right\}^\frac{1}{2}[/math]?
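For what it's worth, the quoted chain can be checked numerically by taking concrete matrices with the stated algebra: five mutually anticommuting 4×4 matrices, each squaring to one half. A sketch built from the standard Dirac matrices follows (the rescaling by 1/√2 matches the α²=β²=1/2 convention used in this thread; the choice of representation is mine, not the thread's):

```python
import numpy as np

# Pauli matrices and 2x2 blocks
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# standard Dirac matrices (Dirac representation) plus gamma5
g0 = np.block([[I2, Z2], [Z2, -I2]])
gk = [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]
g5 = np.block([[Z2, I2], [I2, Z2]])

# five mutually anticommuting matrices that each square to +1,
# rescaled so each squares to 1/2, playing the roles of the alpha_i and beta
basis = [g0, 1j * gk[0], 1j * gk[1], 1j * gk[2], g5]
mats = [m / np.sqrt(2) for m in basis]
alphas, beta = mats[:4], mats[4]

for i, a in enumerate(mats):
    assert np.allclose(a @ a, 0.5 * np.eye(4))      # alpha² = beta² = 1/2
    for b in mats[i + 1:]:
        assert np.allclose(a @ b + b @ a, 0)        # mutual anticommutation

S = sum(a @ beta @ a @ beta for a in alphas)   # sum of alpha_i beta alpha_i beta
assert np.allclose(S, -np.eye(4))              # gamma·gamma = -1, so A_gamma = i
```

Each term αᵢβαᵢβ = -αᵢ²β² = -1/4, and four of them sum to -1, which is where the √-1 = i comes from.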

 

There are a few issues which deserve a little attention. First, the issue of the substitution [imath]\nabla^2\left(\vec{\Psi}^\dagger \cdot \vec{\Psi}\right)=4\vec{\Psi}^\dagger\cdot\nabla^2\vec{\Psi}[/imath] essentially says that the resultant equations (and that would be Maxwell's equations) are only valid so long as the energy of the virtual photon is small enough such that quantum fluctuations can be ignored. That explains the old classical electron radius problem from a slightly different perspective. Maxwell's equations are an approximation to the actual situation. The correct solution requires one include additional elements (those quantum fluctuations which arise when the energy exceeds a certain threshold). It could very well be that the energy must be below the energy necessary to create a fluctuation equal to the energy of the electron: the field solutions must include photon-photon interactions before the "classical electron radius" is reached. It is also possible that inclusion of the fluctuations could lead to massive boson creation and another solution. The problem with that fact is that I have not discovered a way to approximate a solution to the required many body problem.

 

Okay... Some of the implications of that may be lost on me as I'm not familiar enough with those concepts.

 

A second issue concerns the existence of magnetic monopoles. In this development of Maxwell's equations, the symmetry between the electric and magnetic fields does not exist and likewise, “magnetic monopoles” do not exist.

 

...as above, I'm not sure what that implies as I'm not familiar enough with electromagnetism.

 

My position (directly opposed to Qfwfq) is that quantization of fields is not the proper approach to understanding reality. It is the quantum elements themselves which are conceptually fundamental, not the supposed fields he (and others) want to quantize. The fields are the consequence of the information to be explained, not the fundamental information itself.

 

Yeah, certainly... Just as a benefit for lurkers, I would be inclined to think that, regardless of what the ontological reality is like, to analytically describe reality is to define it into discrete pieces, so that one can assign "an expected behaviour" to anything at all. And when you do, and follow those resulting definitions absolutely analytically, it may lead to definitions and consequences that do not follow from the position that reality is continuous... ...even if you still want to see it as continuous (and you are certainly entitled to that belief as long as you don't take it as a provable fact)

 

Actually, in view of this presentation, it is not very smart to talk about reality as continuous or discrete at all, as both ideas arise from epistemological standpoints anyway. (It is exactly like arguing about what the surface of an atom would feel like if you were to touch it :D )

 

From the above we may conclude that Maxwell's equations are an approximation to the fundamental equation and thus, the entire field of Classical Electrodynamics may be deduced from my fundamental constraint which is, in fact, nothing more than "any explanation of the universe must be internally self consistent". It is apparent that, once one defines "charge" and "current", Maxwell's equations are also true by definition.

 

Yes. Via their particular definitions, which are in fact aligned with the symmetry requirements that existed well before any observation of electromagnetism, and via the approximations made regarding the feedback from the rest of the universe. I am certain I am missing some implications here due to not being familiar enough with conventional physics, but it is not very hard for me to take the actual result as valid at this point.

 

There are still your comments about quantum electrodynamics in the OP that I would like to understand better. I am completely unfamiliar with spherical harmonics, and the math you lay down is not immediately obvious to me... But I can look at it with more concentration a little bit later.

 

In the meantime:

 

The existence of Dirac particles does tell us something about the universe: it tells us that certain specific patterns of data exist in our universe. Just as the astrologer points to specific events which occurred together with certain astrological signs, the actual information content is that the events occurred and that the signs were there. That can not be interpreted as a defense that the astrologers world view is correct! Both presentations are nothing more than mechanisms for cataloging information. The apparent advantage of the classical scientific position is that no cases exist which violate his "catalog" or so he tells you. When it comes to actual fact, both the scientist and the astrologer have their apologies for failure ready (mostly that you don't understand the situation or there are exigent circumstances). The astrologer says that there was a unique particular combination of signs the impact of which was not taken into account while the scientist says some new theory (another set of signs?) was not taken into account.

 

What is significant is that the existence of Dirac particles may add to our knowledge of the universe but it adds nothing to our understanding of the universe. This is an important point. The reader should realize that the object of all basic scientific research is to discover the rules which differentiate between all possible universes and the one we actually find ourselves in. Since my fundamental equation must be satisfied by all possible universes, only constraints not specifically required by that equation tell us anything about our universe.

 

Well, that is an interesting point, although I must say it is very difficult for me to follow the tangling up of all the definitions and how they play out together. I mean, I can't follow everything through in my head well enough to readily convince myself whether, with Dirac's equation, we are referring to a specific circumstance arising from the content of the data, or whether we have rather seen a specific tangling up of particular definitions that were tied to the symmetry requirements (and all the approximations made to get here)...

 

-Anssi

Posted

Hi Anssi,

 

When I read your post yesterday (I've been pretty busy lately) I was quite impressed and commented to my wife that your ability to pick up on things is rather astonishing particularly in view of the fact that your knowledge of physics is almost non-existent. She replied that perhaps the fact that you are not thoroughly indoctrinated in the dogma of modern physics is actually of benefit. She could be right; that really goes directly to my “baggage” comments I have made to almost everyone else.

 

At any rate, every comment you have made in the latest post is dead on the mark.

Okay, finally had some time to look at this and immediately ran into problems...
Yes; you immediately ran into the problems I created. I have edited the OP to reflect your observations including explicitly pointing out that the term “can be simply factored out”.
That wiki page went completely over my head...
The only reason I referenced that page is because of the fact that it refers to the “Lorenz gauge” representation of Maxwell's equation. Anyone familiar with electrodynamics would realize that my result is essentially identical to the standard representation. If you look down in that page, you will find the expressions

[math]\square \vec{A}=\left[\frac{1}{c^2}\frac{\partial^2}{\partial t^2}-\nabla^2 \right ]\vec{A}=\mu_0 \vec{J}[/math]

 

[math]\square \phi=\left[\frac{1}{c^2}\frac{\partial^2}{\partial t^2}-\nabla^2 \right ]\phi=-\frac{1}{\epsilon_0 }\rho [/math]

 

along with [imath]c=\frac{1}{\sqrt{\mu_0 \epsilon_0}}[/imath], which implies that the ratio between the standard defined coefficients of [imath]\rho[/imath] and [imath]\vec{J}[/imath], [imath]\frac{1}{\mu_0\epsilon_0}[/imath], is [imath]c^2[/imath]. Well, I have caught another error: my definition of [imath]\vec{J}[/imath] is incorrect! The factor should be [imath]c^2[/imath] and not c. Sorry about that; it is merely a definition error and has nothing to do with the algebra. I have made this correction and fixed the “roll” to “role” typo you pointed out. You, of course, know you are dealing with a senile old man here!
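As a quick numeric aside, the relation between c, μ₀ and ε₀ invoked here checks out with the usual textbook SI values (the constants below are standard, not from the thread):

```python
import math

mu0  = 4 * math.pi * 1e-7     # vacuum permeability, N/A^2 (pre-2019 defined value)
eps0 = 8.8541878128e-12       # vacuum permittivity, F/m
c    = 299792458.0            # speed of light, m/s

assert math.isclose(1 / math.sqrt(mu0 * eps0), c, rel_tol=1e-6)
assert math.isclose((1 / eps0) / mu0, c**2, rel_tol=1e-6)   # ratio of the ρ and J coefficients
```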

 

That page also points out the connection between the “electromagnetic potentials” and the forces created by these fields. “Potential” energy is commonly pictured as either a hill (positive potential energy) or a hole (negative potential energy) where the “forces” created by these potentials are related directly to the steepness of the slope of the “potential hill/hole”. The slope (in one dimension) is given by the derivative of the height (that function which defines the height). The three dimensional equivalent is the gradient, mathematically given by [imath]\vec{\nabla}[/imath]. Thus it is that the electric and magnetic fields (the forces derived from the electrodynamic potentials) are given by the gradient of the potentials.

 

A little further down on that page, you will find the expressions

[math] \vec{E}=-\vec{\nabla}{\phi}-\frac{\partial }{\partial t}\vec{A}[/math]

 

[math] \vec{B}= \vec{\nabla} \times \vec{A}[/math]

 

(Note that the vector on [imath]\nabla[/imath] is often omitted in common printing; the "bold" version is usually taken to indicate a vector.) The term “gauge” has to do with the fact that these potentials may be shifted by a constant: i.e., one can always change the electromagnetic potentials in any way one wishes so long as the electric and magnetic fields remain the same (i.e., there will be no change in the physical measurements). “Gauge” theory (and it is not a trivial issue) was first encountered as a fundamental symmetry in Maxwell's equations. Later, “gauge theory” developed into a whole field unto itself. If you really want to understand it, you will have to embark on a serious study of the subject. I googled the thing and didn't find any good overall presentations.
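That gauge freedom can be made concrete with a small numerical sketch: under the standard gauge transformation φ → φ − ∂χ/∂t, A → A + ∇χ (slightly more general than a constant shift), the electric field is unchanged. The one-dimensional potentials below are arbitrary sample choices, and the check for B = ∇×A is analogous in three dimensions:

```python
import math

# arbitrary smooth sample potentials and gauge function (hypothetical choices)
phi = lambda x, t: math.sin(x) * math.cos(t)
A   = lambda x, t: math.cos(2 * x + t)
chi = lambda x, t: math.exp(0.1 * x) * math.sin(t)

h = 1e-4
dx = lambda f, x, t: (f(x + h, t) - f(x - h, t)) / (2 * h)  # spatial derivative
dt = lambda f, x, t: (f(x, t + h) - f(x, t - h)) / (2 * h)  # time derivative

# one-dimensional version of E = -grad(phi) - dA/dt
E = lambda p, a, x, t: -dx(p, x, t) - dt(a, x, t)

# gauge-transformed potentials: phi -> phi - dchi/dt, A -> A + dchi/dx
phi_g = lambda x, t: phi(x, t) - dt(chi, x, t)
A_g   = lambda x, t: A(x, t) + dx(chi, x, t)

x0, t0 = 0.4, 1.1
assert abs(E(phi, A, x0, t0) - E(phi_g, A_g, x0, t0)) < 1e-6
```

The mixed partials of χ cancel between the two terms, which is exactly why the physical field doesn't care which gauge you pick.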

So by all that you mean their strength (values) is essentially just adjusted to whatever is necessary for these definitions to hold true with the experimental results?
Not exactly. Maxwell's equations essentially consider the electron to be a point source of the electromagnetic fields: i.e., the charges themselves have no dependence on the fields they create. Notice that in the equations above, neither [imath]\vec{A}[/imath] nor [imath]\phi[/imath] appears in the standard physics definitions of [imath]\vec{J}[/imath] or [imath]\rho[/imath] whereas, in my attack, they also are proportional to those fields. My point is that the standard classical physics definitions essentially allow the fields to go to infinity at the source. Thus it is that the dependence upon the strength of these fields at the source is really immaterial anyway since we are essentially dealing with something similar to a Dirac delta function. Don't worry about it; there is a lot of room for discussion here and we really need some seriously educated minds to unravel the possibilities. The issue is certainly beyond my abilities. What is important here is that it does not contradict the physics involved.
My mathematical competence is too weak to even see the strangeness in that... Can you elaborate? :I
If you look at any standard presentation of Maxwell's electrodynamics, you will find that the electromagnetic potentials, [imath]\vec{A}[/imath] and [imath]\phi[/imath] together with the standard definitions of [imath]\vec{J}[/imath] and [imath]\rho[/imath] are all “real” variables (there is no [imath]\sqrt{-1}[/imath] there). My equation seems to insert an imaginary factor. We multiplied through by an i, [imath]\sqrt{-1}[/imath], which became part of the definition of momentum and energy. It remained as a factor in the interaction term; multiplying the expectation value of gamma. What I show is that the expectation value of gamma yields a second factor [imath]\sqrt{-1}[/imath] which results in real values for all terms.
Well there are few things there that elude me.
Let me go through them one by one.
I guess by [imath]A_\gamma[/imath] you mean "the amplitude of [imath]\gamma[/imath]", but I do not know what the amplitude of [imath]\gamma[/imath] means.
Gamma is a vector operator. Vectors have both direction and “length”; the length is commonly called amplitude. The length of a vector is normally found via the square root of the sum of the squares of its components. Since the dot product (by definition) of a vector with itself is the sum of the squares of its components, the length (or amplitude) of the vector must be the square root of the dot product of that vector with itself. Thus it is that the amplitude of the vector operator gamma is exactly [imath]\sqrt{\vec{\gamma}\cdot\vec{\gamma}}[/imath].
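A minimal numeric illustration of that definition (the components are arbitrary sample values, not the gamma components themselves):

```python
import math

# Amplitude (length) of a vector as the square root of its dot product
# with itself, for an ordinary 4-component vector.
v = [3.0, 0.0, 4.0, 0.0]  # sample components (illustrative)

dot = sum(a * a for a in v)   # v . v = sum of squared components
amplitude = math.sqrt(dot)

print(amplitude)  # -> 5.0
```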
I do not know how you get [imath]\vec{\gamma} = \sum_{i=1}^4\alpha_i\beta\hat{x}_i[/imath].
The factors [imath]\hat{x}_i[/imath] are unit vectors in the directions of the various coordinates : i.e., [imath]\hat{x}_1=\hat{x}[/imath], [imath]\hat{x}_2=\hat{y}[/imath], [imath]\hat{x}_3=\hat{z}[/imath] and [imath]\hat{x}_4=\hat{\tau}[/imath]; unit vectors in the four different directions in our [imath](x,y,z,\tau)[/imath] space. Our vector [imath]\vec{\gamma}[/imath] was originally defined to be [imath]\vec{\gamma}=\vec{\alpha}\beta[/imath] where beta was defined to be [imath]\beta=\beta_{12}+\beta_{21}[/imath]. Ah, there is a factor of “2” I have omitted [imath] <\beta^2> = 1[/imath] not 1/2 ; sorry about that. It changes the actual magnitude of the amplitude of gamma but not the imaginary/real character of that amplitude.

 

At any rate, [imath]\vec{\alpha}=\alpha_x \hat{x}+\alpha_y \hat{y}+\alpha_z \hat{z}+\alpha_\tau \hat{\tau}\equiv\sum_{i=1}^4\alpha_i \hat{x}_i[/imath]. Multiply that by beta and you have exactly what you have above.

I'm not sure what [imath]\beta[/imath] is referring to.
Sorry about that. I hope what I have spelled out makes that clearer.
Not sure what is going on in the 4th step in there either.
I am writing out the actual “dot” product. That is achieved by noting that the result is entirely dependent upon the fact that [imath]\hat{x}_i \cdot \hat{x}_j[/imath] is “zero” if i is not the same as j and “unity” when i=j . Thus it is that, under this multiplication, only four terms survive: one for i=1 (the x component), one for i=2 (the y component), one for i=3 (the z component) and one for i=4 (the tau component). So we started with a product of two terms (each of which was a sum of four terms) and ended with a sum of four terms.
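That collapse of the double sum can be seen numerically with ordinary unit vectors (this only illustrates the [imath]\hat{x}_i \cdot \hat{x}_j[/imath] bookkeeping, not the operator algebra):

```python
import numpy as np

# Unit vectors along the four coordinate directions of (x, y, z, tau).
basis = np.eye(4)  # row i is the i-th unit vector

# Their dot products: 1 when i == j, 0 otherwise (the Kronecker delta),
# so of the 4 x 4 = 16 terms in the expanded product only 4 survive.
dots = basis @ basis.T
print(dots)  # -> 4x4 identity matrix

surviving = int(np.count_nonzero(dots))
print(surviving)  # -> 4
```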
Or where the negative comes from in the 5th step.
That arises because alpha and beta operators anti-commute: i.e., [imath]\alpha \beta=-\beta\alpha[/imath] or, equivalently [imath]\alpha (\beta_{12} + \beta_{21})=-(\beta_{12} + \beta_{21})\alpha[/imath], I can leave out the subscripts since which alpha and which beta I use makes no difference; they all anti-commute so the sign changes when I change the order of those two inside terms.
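The sign flip under reordering can be illustrated with the Pauli matrices, which anticommute in the same way (they are only a convenient stand-in here, not the actual alpha and beta operators of this thread):

```python
import numpy as np

# Two anticommuting matrices: the Pauli matrices sigma_x and sigma_y.
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])

# Anticommutation: sx @ sy == -(sy @ sx), so swapping the order of two
# such factors inside a product flips the overall sign.
print(np.allclose(sx @ sy, -(sy @ sx)))  # -> True
```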
I'm guessing the 6th step is supposed to be [math]\left\{-4\frac{1}{2}\frac{1}{2}\right\}^\frac{1}{2}[/math]?
Yes, except for the definition of beta. Since the actual gamma was built from [imath]\beta=\beta_{12}+\beta_{21}[/imath], that factor should have been [math]\left\{-4\frac{1}{2}(1)\right\}^\frac{1}{2}[/math] which becomes [imath]\sqrt{-2}=i\sqrt{2}[/imath]. In other words, it still generates that factor of “i” which makes the term [imath]i<\vec{\gamma}>[/imath] (my definition of the potentials) a real value. (I don't think I will bother to fix that beta thing as it really has no bearing on the result.)
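The arithmetic of that corrected step is easy to check numerically:

```python
import cmath
import math

# The factor discussed above: {-4 * (1/2) * 1}^(1/2) = sqrt(-2) = i*sqrt(2).
val = cmath.sqrt(-4 * 0.5 * 1)

print(val.real)  # -> 0.0 (purely imaginary)
print(math.isclose(val.imag, math.sqrt(2)))  # -> True
```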
Okay... Some of the implications of that may be lost on me as I'm not familiar enough with those concepts.
I wouldn't worry about it if I were you. All I am talking about are possibilities worth looking at if one accepts my paradigm. I haven't worked out everything and there are some serious questions to think about.
And when you do and follow those resulting definitions absolutely analytically, it may lead to definitions and consequences that do not follow from the position that reality is continuous... ...even if you still want to see it as continuous (and you are certainly entitled to that belief as long as you don't take it as a provable fact)
And that is exactly the essence of Zeno's paradox: you cannot prove that an object passed through every point in a line describing its path! Motion is clearly an assumption built into your world view. It is not only impossible to prove true but, in fact, does indeed lead to conclusions which clearly violate our experiments: Bell's inequalities and “entanglement” in particular.
There is still your comments about quantum electrodynamics in the OP that I would like to understand better. I am completely unfamiliar with spherical harmonics, and the math you lay down is not immediately obvious for me... But I can look at it with more concentration little bit later.
Again, I wouldn't really worry about it if I were you. What is essentially being discussed are the solutions as seen from a spherical coordinate system as opposed to a rectilinear coordinate system: i.e., it is no more than analysis via more advanced mathematics. In a spherical coordinate system, when you write [imath]\vec{\Psi}[/imath] as a function of [imath](r,\phi,\theta)[/imath], those same wave-like solutions which cropped up in the rectilinear system yield important consequences (embedded in those functions called “spherical harmonics”). If we change theta by 360 degrees, we must arrive back at exactly the same point in space we started from. It follows that the probability of finding an entity there must be exactly the same. Analysis of such specific solutions yields the set of functions referred to as spherical harmonics. Due to spherical symmetry of most problems, only the radial portion becomes really significant.
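The single-valuedness requirement described here is what forces the azimuthal quantum number in the spherical harmonics to be an integer; a minimal sketch using only the azimuthal factor [imath]e^{im\phi}[/imath] (the sample angle is arbitrary):

```python
import cmath
import math

# Spherical harmonics carry an azimuthal factor exp(i*m*phi).  Requiring
# the wave function to return to the same value after a full 360-degree
# turn forces m to be an integer; a half-integer m would not close up.
def azimuthal(m, phi):
    return cmath.exp(1j * m * phi)

phi = 0.7  # any sample angle (illustrative)
print(abs(azimuthal(2, phi + 2 * math.pi) - azimuthal(2, phi)) < 1e-12)     # -> True
print(abs(azimuthal(0.5, phi + 2 * math.pi) - azimuthal(0.5, phi)) < 1e-12)  # -> False
```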

 

I only brought up the issue because I wanted to point out that one gets exactly the same form of radial potentials that the standard model yields for massive exchange forces. Maxwell's equations with massive mediating exchange elements is central to unifying the first three forces and I have some serious comments to make on that problem.
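The radial potential for a massive exchange boson has the Yukawa form [imath]e^{-\mu r}/r[/imath], which reduces to the Coulomb [imath]1/r[/imath] form as the mediating mass goes to zero; a quick numerical sketch (units and the value of [imath]\mu[/imath] below are arbitrary):

```python
import math

def yukawa(r, mu):
    """Radial potential for a massive exchange boson: exp(-mu*r)/r.
    mu is proportional to the mediating mass; mu = 0 gives Coulomb 1/r."""
    return math.exp(-mu * r) / r

r = 2.0
print(yukawa(r, 0.0))                    # -> 0.5, the Coulomb value 1/r
print(yukawa(r, 1.0) < yukawa(r, 0.0))   # -> True: massive case falls off faster
```

The exponential cutoff is why a massive mediator produces a short-range force while the massless photon produces the long-range Coulomb force.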

I mean I can't follow everything through in my head well enough to readily convince myself whether, with Dirac's equation, we are referring to a specific circumstance arising from the content of the data, or whether we have rather seen a specific tangling up of particular definitions that were tied to the symmetry requirements (and all the approximations made to get here)...
As far as I am concerned, the last sentence in your quote is the most important, ”Since my fundamental equation must be satisfied by all possible universes, only constraints not specifically required by that equation tell us anything about our universe”. Every approximation I have made corresponds exactly to the presumptions and assumptions made in the common everyday use and analysis of Dirac's equation and Maxwell's equations. Thus it follows that, when we use Dirac's equation together with Maxwell's equation we are essentially looking at an approximate solution to my equation. The only real question which exists is, “exactly how are those physical constants normally used in physics to be determined?” That is a deep issue we could go into but it's not trivial and I would like to let it go until later.

 

Finally,

Also, I've been concentrating on walking through the algebra, but it is high time to ask more closely what are [imath]\Phi[/imath] and [imath]\vec{A}[/imath] conventionally thought to be? In the OP, I think you refer to both as "electromagnetic potential", but what is their difference?
In Classical physics, [imath]\Phi[/imath] is the static potential due to the existence of an electron (or the charge residing in some place in space): the electric field (a vector force field) is given by the gradient of that potential (which is a scalar potential defined in a three dimensional space). The vector potential, [imath]\vec{A}[/imath] is a “vector potential” due to the motion of an electron (or the essential motion of charge: i.e., the current): the magnetic field (another vector force field) is given by the curl (the cross product with the gradient operator) of that potential (which is a vector potential defined in a three dimensional space).

 

In my case, which is a four dimensional Euclidean space, the potential, which is the expectation value of [imath]\vec{\gamma}[/imath] is a four dimensional vector potential. The common vector potential, [imath]\vec{A}[/imath] is the (x,y,z) space component of that potential and [imath]\Phi[/imath] is the tau component of that potential. What is significant here is that the magnetic fields and the electric fields arise by rather different means; the two fields are not symmetric at all. In a sense, we can say we have electric monopoles (electric charges) for the same reason we have massive particles and there is no need to search for "magnetic monopoles" ("magnetic charges").

 

It answers the question, "Why does the magnetic charge always seem to be zero?"

 

Have fun -- Dick

  • 2 weeks later...
Posted

The only reason I referenced that page is because of the fact that it refers to the “Lorenz gauge” representation of Maxwell's equation. Anyone familiar with electrodynamics would realize that my result is essentially identical to the standard representation. If you look down in that page, you will find the expressions

[math]\square \vec{A}=\left[\frac{1}{c}\frac{\partial^2}{\partial t^2}-\nabla^2 \right ]\vec{A}=\mu_0 \vec{J}[/math]

 

[math]\square \phi=\left[\frac{1}{c}\frac{\partial^2}{\partial t^2}-\nabla^2 \right ]\phi=-\frac{1}{\epsilon_0 }\rho [/math]

 

Yes (except you forgot to put in the square for the C's)

 

If [imath]m_2 =0 [/imath] (that is, we presume the boson is a photon), the above equations are exactly Maxwell's equations expressed in the microscopic Lorenz Gauge.

 

So we had:

 

[math]

\nabla^2\vec{A}+\frac{m_2^2c^2}{\hbar^2}\vec{A} -\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\vec{A}= -\frac{4\pi}{c^2}\vec{J}

[/math]

 

and

 

[math]

\nabla^2\Phi +\frac{m_2^2c^2}{\hbar^2}\Phi-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\Phi= -4\pi\rho

[/math]

 

So with [imath]\vec{A}[/imath], we factor that term with m right out:

 

[math]

\nabla^2\vec{A}-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\vec{A}= -\frac{4\pi}{c^2}\vec{J}

[/math]

 

And little re-ordering:

 

[math]

\left [ \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 \right ] \vec{A}= \frac{4\pi}{c^2}\vec{J}

[/math]

 

The right side is still a mystery to me, as I do not know what they mean by [imath]\mu_0[/imath] in that wikipage...

 

I looked at what that square, "D'Alembert operator", stands for, and it appears to be a differential operator over t,x,y and z in minkowski space... So in that expression at the wiki page, the time derivative is just separated into a term of its own. Hmmm, not sure about the role of that factor [imath]\frac{1}{c^2}[/imath], I guess it has got something to do with relativistic transformation somehow?
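One way to see the role of the [imath]\frac{1}{c^2}[/imath]: it fixes the propagation speed of the solutions at c, since any profile f(x - ct) moving at that speed is annihilated by the operator. A symbolic check in one spatial dimension:

```python
import sympy as sp

t, x, c = sp.symbols('t x c', positive=True)

# Any profile moving at speed c, f(x - c*t), satisfies the wave equation;
# the 1/c^2 in the d'Alembert operator is what sets that speed.
f = sp.Function('f')
u = f(x - c * t)

box_u = sp.diff(u, t, 2) / c**2 - sp.diff(u, x, 2)
print(sp.simplify(box_u))  # -> 0
```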

 

With [imath]\Phi[/imath] I walked through to expression:

[math]

\left [ \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 \right ] \Phi= 4\pi\rho

[/math]

 

And again don't know what is happening on the right side; I do not know what they mean by [imath]\epsilon_0[/imath]

 

along with [imath]c=\frac{1}{\sqrt{\mu_0 \epsilon_0}}[/imath] which implies that the ratio between the coefficients multiplying the standard defined [imath]\vec{J}[/imath] and [imath]\rho[/imath], [imath]\frac{1}{\mu_0 \epsilon_0}[/imath], is [imath]c^2[/imath].

 

So;

 

[math] c^2=\frac{1}{\mu_0 \epsilon_0} [/math]

 

[math] - c^2 \mu_0 =- \frac{1}{\epsilon_0} [/math]

 

[math] - \frac{1}{\epsilon_0} \rho = - c^2 \mu_0 \rho [/math]

 

And the expression we got for [imath]\vec{J}[/imath] was [imath]\mu_0 \vec{J}[/imath], the only difference there is [imath]-c^2[/imath]. Is that what you were pointing out? I'm really just guessing again... :I
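The relation [imath]c^2 = \frac{1}{\mu_0 \epsilon_0}[/imath] underlying this arithmetic can be checked directly with the SI values of the constants (the numbers below are the standard CODATA values, not anything from the thread):

```python
import math

# SI values: vacuum permeability, permittivity, and light speed.
mu_0 = 4e-7 * math.pi          # H/m (the classical exact definition)
epsilon_0 = 8.8541878128e-12   # F/m
c = 299_792_458.0              # m/s (exact)

# c = 1/sqrt(mu_0 * epsilon_0), i.e. c^2 * mu_0 * epsilon_0 = 1, so the
# ratio (1/epsilon_0) / mu_0 between the two source-term coefficients is c^2.
print(abs(c**2 * mu_0 * epsilon_0 - 1) < 1e-6)        # -> True
print(abs((1 / epsilon_0) / mu_0 / c**2 - 1) < 1e-6)  # -> True
```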

 

Well, I have caught another error: my definition of [imath]\vec{J}[/imath] is incorrect! The factor should be [imath]c^2[/imath] and not c. Sorry about that; it is merely a definition error and has nothing to do with the algebra.

 

Yup.

 

That page also points out the connection between the “electromagnetic potentials” and the forces created by these fields. “Potential” energy is commonly pictured as either a hill (positive potential energy) or a hole (negative potential energy) where the “forces” created by these potentials are related directly to the steepness of the slope of the “potential hill/hole”. The slope (in one dimension) is given by the derivative of the height (that function which defines the height). The three dimensional equivalent is the gradient, mathematically given by [imath]\vec{\nabla}[/imath]. Thus it is that the electric and magnetic fields (the forces derived from the electrodynamic potentials) are given by the gradient of the potentials.

 

A little further down on that page, you will find the expressions

[math] \vec{E}=-\vec{\nabla}{\phi}-\frac{\partial }{\partial t}\vec{A}[/math]

 

[math] \vec{B}= \vec{\nabla} \times \vec{A}[/math]

 

(Note that the vector on [imath]\nabla[/imath] is often omitted in common printing; the "bold" version is usually taken to indicate a vector.) The term “gauge” has to do with the fact that these potentials may be shifted by a constant: i.e., one can always change the electromagnetic potentials in any way one wishes so long as the electric and magnetic fields remain the same (i.e., there will be no change in the physical measurements). “Gauge” theory (and it is not a trivial issue) was first encountered as a fundamental symmetry in Maxwell's equations. Later, a whole field was created and “Gauge theory” developed as a field unto itself. If you really want to understand it, you will have to embark on a serious study of the subject. I googled the thing and didn't find any good overall presentations.

 

Yup okay, I think I get some idea of what it's about from your explanation already. At least, I can understand that if the expectations have to do with "differences" in electromagnetic potentials over spatial regions, then a shift over that whole "landscape" is immaterial. At the same time, I'm sure there are many subtleties to that issue that I'd have to study carefully to understand properly... :I Especially as it appears the Lorenz gauge operates on Minkowski space, and they seem to imply on the wikipage that the non-relativistic form cannot be expressed in that manner.

 

...I think it is interesting here to note that it is also recognized in the conventional view that these expressions arise from symmetries in some form... By this time it seems obvious to me that ultimately they must arise from symmetries arising from unknown aspects of the data-to-be-explained... I remember you commented once that you see the consequences of your discovery all over the place and I think I'm starting to see what you meant by that.

 

Not exactly. Maxwell's equations essentially consider the electron to be a point source of the electromagnetic fields: i.e., the charges themselves have no dependence on the fields they create. Notice that in the equations above, neither [imath]\vec{A}[/imath] nor [imath]\phi[/imath] appears in the standard physics definitions of [imath]\vec{J}[/imath] or [imath]\rho[/imath] whereas, in my attack, they also are proportional to those fields. My point is that the standard classical physics definitions essentially allow the fields to go to infinity at the source. Thus it is that the dependence upon the strength of these fields at the source is really immaterial anyway since we are essentially dealing with something similar to a Dirac delta function. Don't worry about it; there is a lot of room for discussion here and we really need some seriously educated minds to unravel the possibilities. The issue is certainly beyond my abilities. What is important here is that it does not contradict the physics involved.

 

Ahha... Hmmm, a little Wikipedia adventure suggests that [imath]\vec{J}[/imath] stands for "current density", and [imath]\rho[/imath] stands for charge density...

 

If you look at any standard presentation of Maxwell's electrodynamics, you will find that the electromagnetic potentials, [imath]\vec{A}[/imath] and [imath]\phi[/imath] together with the standard definitions of [imath]\vec{J}[/imath] and [imath]\rho[/imath] are all “real” variables (there is no [imath]\sqrt{-1}[/imath] there).

 

I couldn't find good definitions for them, the wikipedia page about Maxwell's equation has got a lot of information in it and I'm getting a bit lost in there :(

 

My equation seems to insert an imaginary factor. We multiplied through by an i, [imath]\sqrt{-1}[/imath], which became part of the definition of momentum and energy. It remained as a factor in the interaction term; multiplying the expectation value of gamma. What I show is that the expectation value of gamma yields a second factor [imath]\sqrt{-1}[/imath] which results in real values for all terms.

 

Ahha....

 

Let me go through them one by one.

Gamma is a vector operator. Vectors have both direction and “length”; the length is commonly called amplitude. The length of a vector is normally found via the square root of the sum of the squares of its components. Since the dot product (by definition) of a vector with itself is the sum of the squares of its components, the length (or amplitude) of the vector must be the square root of the dot product of that vector with itself. Thus it is that the amplitude of the vector operator gamma is exactly [imath]\sqrt{\vec{\gamma}\cdot\vec{\gamma}}[/imath].

 

Right okay, that was one of the things that eluded me.

 

When I asked "I do not know what the amplitude of [imath]\gamma[/imath] means" I meant to ask, what does it represent?

 

The factors [imath]\hat{x}_i[/imath] are unit vectors in the directions of the various coordinates : i.e., [imath]\hat{x}_1=\hat{x}[/imath], [imath]\hat{x}_2=\hat{y}[/imath], [imath]\hat{x}_3=\hat{z}[/imath] and [imath]\hat{x}_4=\hat{\tau}[/imath]; unit vectors in the four different directions in our [imath](x,y,z,\tau)[/imath] space. Our vector [imath]\vec{\gamma}[/imath] was originally defined to be [imath]\vec{\gamma}=\vec{\alpha}\beta[/imath] where beta was defined to be [imath]\beta=\beta_{12}+\beta_{21}[/imath]. Ah, there is a factor of “2” I have omitted [imath] <\beta^2> = 1[/imath] not 1/2 ; sorry about that. It changes the actual magnitude of the amplitude of gamma but not the imaginary/real character of that amplitude.

 

At any rate, [imath]\vec{\alpha}=\alpha_x \hat{x}+\alpha_y \hat{y}+\alpha_z \hat{z}+\alpha_\tau \hat{\tau}\equiv\sum_{i=1}^4\alpha_i \hat{x}_i[/imath]. Multiply that by beta and you have exactly what you have above.

 

Right.

 

Sorry about that. I hope what I have spelled out makes that clearer.

 

Yeah, I'm wondering if there should at least be a mention of that little error related to beta in the OP too. I figure you don't want to change the math because it just makes it more messy without changing the point at all, but someone might be wondering...

 

I am writing out the actual “dot” product. That is achieved by noting that the result is entirely dependent upon the fact that [imath]\hat{x}_i \cdot \hat{x}_j[/imath] is “zero” if i is not the same as j and “unity” when i=j . Thus it is that, under this multiplication, only four terms survive: one for i=1 (the x component), one for i=2 (the y component), one for i=3 (the z component) and one for i=4 (the tau component). So we started with a product of two terms (each of which was a sum of four terms) and ended with a sum of four terms.

 

Ah, I was able to follow through that step now.

 

That arises because alpha and beta operators anti-commute: i.e., [imath]\alpha \beta=-\beta\alpha[/imath] or, equivalently [imath]\alpha (\beta_{12} + \beta_{21})=-(\beta_{12} + \beta_{21})\alpha[/imath], I can leave out the subscripts since which alpha and which beta I use makes no difference; they all anti-commute so the sign changes when I change the order of those two inside terms.

 

Ah yes... I'm sure you can see how unfamiliar I am with this stuff still, seems like you need to refresh my memory about these things every time they appear :D

 

Yes, except for the definition of beta. Since the actual gamma was built from [imath]\beta=\beta_{12}+\beta_{21}[/imath], that factor should have been [math]\left\{-4\frac{1}{2}(1)\right\}^\frac{1}{2}[/math] which becomes [imath]\sqrt{-2}=i\sqrt{2}[/imath]. In other words, it still generates that factor of “i” which makes the term [imath]i<\vec{\gamma}>[/imath] (my definition of the potentials) a real value. (I don't think I will bother to fix that beta thing as it really has no bearing on the result.)

 

Okay but maybe you should take out the sum sign as I guess it makes no sense in that step anymore...?

 

I wouldn't worry about it if I were you. All I am talking about are possibilities worth looking at if one accepts my paradigm. I haven't worked out everything and there are some serious questions to think about.

And that is exactly the essence of Zeno's paradox: you cannot prove that an object passed through every point in a line describing its path! Motion is clearly an assumption built into your world view. It is not only impossible to prove true but, in fact, does indeed lead to conclusions which clearly violate our experiments: Bell's inequalities and “entanglement” in particular.

 

Indeed. I'd think the experimental violation of Bell's inequalities should motivate some people to look at this analysis... Hmm....

 

Again, I wouldn't really worry about it if I were you. What is essentially being discussed are the solutions as seen from a spherical coordinate system as opposed to a rectilinear coordinate system: i.e., it is no more than analysis via more advanced mathematics. In a spherical coordinate system, when you write [imath]\vec{\Psi}[/imath] as a function of [imath](r,\phi,\theta)[/imath], those same wave-like solutions which cropped up in the rectilinear system yield important consequences (embedded in those functions called “spherical harmonics”). If we change theta by 360 degrees, we must arrive back at exactly the same point in space we started from. It follows that the probability of finding an entity there must be exactly the same. Analysis of such specific solutions yields the set of functions referred to as spherical harmonics. Due to spherical symmetry of most problems, only the radial portion becomes really significant.

 

I only brought up the issue because I wanted to point out that one gets exactly the same form of radial potentials that the standard model yields for massive exchange forces. Maxwell's equations with massive mediating exchange elements is central to unifying the first three forces and I have some serious comments to make on that problem.

 

Oh... Hmm, that's interesting.. Well I would like to hear those comments.

 

As far as I am concerned, the last sentence in your quote is the most important, ”Since my fundamental equation must be satisfied by all possible universes, only constraints not specifically required by that equation tell us anything about our universe”. Every approximation I have made corresponds exactly to the presumptions and assumptions made in the common everyday use and analysis of Dirac's equation and Maxwell's equations. Thus it follows that, when we use Dirac's equation together with Maxwell's equation we are essentially looking at an approximate solution to my equation. The only real question which exists is, “exactly how are those physical constants normally used in physics to be determined?” That is a deep issue we could go into but it's not trivial and I would like to let it go until later.

 

Okay

 

Finally,

In Classical physics, [imath]\Phi[/imath] is the static potential due to the existence of an electron (or the charge residing in some place in space): the electric field (a vector force field) is given by the gradient of that potential (which is a scalar potential defined in a three dimensional space). The vector potential, [imath]\vec{A}[/imath] is a “vector potential” due to the motion of an electron (or the essential motion of charge: i.e., the current): the magnetic field (another vector force field) is given by the curl (the cross product with the gradient operator) of that potential (which is a vector potential defined in a three dimensional space).

 

In my case, which is a four dimensional Euclidean space, the potential, which is the expectation value of [imath]\vec{\gamma}[/imath] is a four dimensional vector potential. The common vector potential, [imath]\vec{A}[/imath] is the (x,y,z) space component of that potential and [imath]\Phi[/imath] is the tau component of that potential. What is significant here is that the magnetic fields and the electric fields arise by rather different means; the two fields are not symmetric at all. In a sense, we can say we have electric monopoles (electric charges) for the same reason we have massive particles and there is no need to search for "magnetic monopoles" ("magnetic charges").

 

It answers the question, "Why does the magnetic charge always seem to be zero?"

 

Ah okay, well I've never been involved with electromagnetism enough to come to wonder about that, but once again that sounds like an issue that some people might want to understand... :I

 

EDIT: Should we move to general relativity? I'll probably still be replying to this thread about a few things at some point though, but probably mostly about the conventional view of Dirac's equation.

 

-Anssi

Posted

Hi Anssi, Sorry I have been so slow to answer your last post but I have a home maintenance project going on here which I hope to get done before December when we head off to Denver. As a consequence I have very little time to spare (particularly because I am an old man and don't work near as fast as I did a few years ago). At the same time you bring up issues which are not at all trivial though one might put forth some quite trivial answers. I want to think about a few things before I make a serious post. Meanwhile, some quick answers:

Yes (except you forgot to put in the square for the C's)
Thank you; I have fixed it!
So with [imath]\vec{A}[/imath], we factor that term with m right out:
The term “factor” is not really appropriate here. Terms are generally “factored” when they are multiplying every term in an expression and the expression (without the term) can be seen as a unit unto itself (placed in parentheses which are multiplied by the factor): i.e., factors have to do with multiplication. What we are doing here is simply setting [imath]m_2=0[/imath], quite a different thing from factoring.
The right side is still a mystery to me, as I do not know what they mean by [imath]\mu_0[/imath] in that wikipage...
This is where I stopped to think. I could give a quick reply but what is actually being raised here is the definitions of physical variables themselves. That issue has to do with “metaphysics” and is essentially the reason I have been posting to the “philosophy of science” subject area. Once upon a time philosophy (and metaphysics) was respected as a bona fide area of serious scientific examination. In fact it was once held as the queen of the sciences; but it has since fallen into scientific disrespect. Essentially, there have been no advances in serious scientific examination in the field since the ancient Greeks. The area is simply no longer taken seriously by the scientific community and hasn't been for hundreds of years.

 

Today, “physics” is essentially held as the queen of the sciences. This is almost entirely due to the almost unbelievable success physics has had in solving the problems it has attacked. Today, physics is just not a simple subject and I think certain aspects of that fact need to be brought to the front. People have been thinking about the issue of coming up with a decent explanation of “reality” (essentially what we know) for thousands of years and they have come up with some very complex chains of reasoning which one could spend a lifetime learning to analyze in detail. The important thing is to understand what they are doing. The real issue is the path by which they managed to achieve those profound insights.

 

No one on earth has the time to actually analyze that path in complete detail as it is far too complex and contains more information than any one man can be expected to know (I have heard it said that Leonardo da Vinci was the last man on earth to know everything, meaning, of course, that he knew everything which was known at the time). What I am getting at here is that knowledge and understanding are vastly different things.

 

The scientific knowledge now held by mankind is today so vast that we must take the accuracy of their conclusions at their word. This is not actually as great a step of faith as it might seem, since it is quite reasonable that every aspect of the accuracy of those steps is under almost continuous examination. In this world today whole careers can be and are built out of issues totally devoted to what are, in reality, mere minor aspects of that complete path. The important thing in what we call “scientific conclusions” is logical internal consistency. These many different fields of scientific expertise each police their own areas of research. They may make errors but, insofar as internal consistency goes, one can be quite confident that the assertions made by the hard sciences are quite well backed up.

 

So I am not talking about “knowing” the logical path from ignorance to present day knowledge but rather about “understanding” that path. First and foremost in understanding that path is realizing that mathematics is the language of the exact sciences. As I have commented in my essay on “rational thought”, logic is a very difficult procedure to carry much past a few dozen steps and we can carry our logical conclusions beyond those few dozen steps only through the use of mathematics. The language of “logical internal consistency” is the field of mathematics; however, even knowing all the mathematics required to examine the details of that entire logical path to scientific knowledge is beyond the abilities of a single person.

 

The reason I bring all this up at this point is that classical electrodynamics is an excellent example of what I am talking about and it is worthwhile to look a little closer at that small area of modern physics in order to gain an insight into what I mean when I say “understanding the path”.

 

First of all, classical electrodynamics, though it is but a small part of modern physics, is a deep and profound subject covering so much ground that professional engineers can provide themselves complete careers via specialization in specific areas of that field alone. It has been over forty years since I studied the subject and, since I have not worked at all in the field, my memory of the stuff is actually kind of vague. We can talk about stationary fields, circuit theory, inductance calculations, eddy currents, induction heating, radiation calculation... a whole slew of consequences of Maxwell's equations. My concerns are not the issues which can be developed from his equations but rather the path by which they were reached. Originally Maxwell took three “laws” generated respectively by Coulomb, Ampere, and Faraday. Today, these laws are expressed by mathematical expressions which may or may not have been developed by those people. What they did was to discover the mathematical relationships which governed the physical phenomena they spent their lives examining. Coulomb spent much time examining the forces between electrically charged objects.

 

Coulomb's law: [math]\vec{\nabla} \cdot\vec{D}=4\pi \rho[/math]

 

where [imath]\vec{D}[/imath] is the effective electric field due to the charge distribution defined by [imath]\rho[/imath]. The electric field is defined to be the source of the force on an electric charge: i.e., [imath]\vec{f}=q\vec{E}[/imath] where “q” is the charge. [imath]\vec{D}=\epsilon \vec{E}[/imath] is the result of the polarization of the media in which the field exists ([imath]\epsilon[/imath] is called the dielectric constant; take a gander at this). As I said, engineers can provide themselves with complete careers by specialization in very specific areas.

 

Essentially Coulomb came to the conclusion that the above was an accurate representation of the results.

 

Ampere spent most of his time examining the forces between magnetic phenomena.

 

Ampere's law: [math]\vec{\nabla} \times\vec{H}=\frac{4\pi}{c}\vec{J}[/math]

 

where [imath]\vec{H}[/imath] is the effective magnetic field due to a current distribution defined by [imath]\vec{J}[/imath] ([imath]\vec{J}(\vec{x})[/imath] describes the current in a wire at each point in the wire). A magnetic field results in a force on moving charges (other currents or other magnets). The definition of the magnetic field resolves down to

[math]\vec{f}=\frac{1}{c}\int\vec{J}(\vec{x})\times\vec{B}(\vec{x})dV[/math]

 

where [imath]\vec{J}[/imath] is the current distribution upon which the magnetic field [imath]\vec{B}[/imath] acts. ([imath]\vec{B}[/imath] is the magnetic field created by some other current distribution: the other current which causes [imath]\vec{B}[/imath].) [imath]\vec{B}=\mu\vec{H}[/imath] has to do with the “magnetic permeability” of the media in which the magnetic field exists (quite analogous to the [imath]\epsilon[/imath] in static electricity).

 

Essentially Ampere came to the conclusion that the above was an accurate representation of the results.

 

Faraday's law: [math]\vec{\nabla}\times \vec{E}+\frac{1}{c}\frac{\partial \vec{B}}{\partial t}=0[/math]

 

together with the constraint [imath]\vec{\nabla}\cdot\vec{B}=0[/imath], where the definitions of the variables are the same as the definitions in Coulomb's and Ampere's laws. Essentially there are two statements here: the first is the fact that a changing magnetic field produces an electric field (the existence of electromagnetic induction: i.e., both transformers and electric generators built of moving magnets can exist). Again I state that engineers can provide themselves complete careers via specialization in solutions to specific parts of electromagnetic phenomena. The second statement, [imath]\vec{\nabla}\cdot\vec{B}=0[/imath], essentially says that there is no such thing as “a magnetic charge” (often referred to as “magnetic monopoles”): i.e., the field lines of a magnetic field are closed loops and do not emanate from a point. If the divergence ([imath]\vec{\nabla}\cdot [/imath]) of the force field (that would be the field of force vectors) vanishes, the field can be represented by lines which loop back on themselves. If the divergence does not vanish everywhere, the field can be represented by lines emanating from the points in space where the divergence has a nonzero value (the points where the charges exist).

 

Now these three people accomplished what they did by examining the actual consequences of real physical phenomena and deduced the mathematical relationships consistent with their observations. That approach to scientific examination is quite often put forth as the real work of physics; however, there is another path which is just as important as that one. This other path was followed by Maxwell and essentially involved no examination of physical phenomena at all.

 

What Maxwell discovered was an internal inconsistency within those three laws. The problem was with Ampere's law. Ampere's work was concerned with steady-state problems: i.e., it presumed [imath]\vec{\nabla}\cdot\vec{J}=0[/imath] (currents could flow in different directions, but the current distribution could not actually change with time). In order to make things internally consistent one needed a continuity equation for charge and current: [imath]\vec{\nabla}\cdot\vec{J}+\frac{\partial \rho}{\partial t}=0[/imath] (charge distributions could change with time).

 

Using the same definitions elaborated above, Maxwell came up with the four equations commonly called “Maxwell's equations”.

[math]\vec{\nabla}\cdot\vec{D} = 4\pi\rho \;\;\;\;\;\;\; \vec{\nabla}\times\vec{H}=\frac{4\pi}{c}\vec{J}+\frac{1}{c}\frac{\partial \vec{D}}{\partial t}[/math]

 

[math]\vec{\nabla}\cdot\vec{B} =0\;\;\;\; \;\;\;\;\;\;\; \vec{\nabla}\times\vec{E}+\frac{1}{c}\frac{\partial \vec{B}}{\partial t}=0[/math]
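To spell out the internal-consistency step (my own filling-in, not part of the original argument): taking the divergence of the corrected Ampere law and using the identity [imath]\vec{\nabla}\cdot(\vec{\nabla}\times\vec{H})=0[/imath] together with Coulomb's law gives

[math]0=\frac{4\pi}{c}\vec{\nabla}\cdot\vec{J}+\frac{1}{c}\frac{\partial}{\partial t}\left(\vec{\nabla}\cdot\vec{D}\right)=\frac{4\pi}{c}\left(\vec{\nabla}\cdot\vec{J}+\frac{\partial \rho}{\partial t}\right)[/math]

which is exactly the continuity equation; without Maxwell's added [imath]\frac{1}{c}\frac{\partial \vec{D}}{\partial t}[/imath] term, Ampere's law would instead force [imath]\vec{\nabla}\cdot\vec{J}=0[/imath] at all times.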

 

I originally started down this path because I had been rather cavalier in my explanation of “gauge symmetry” which is, in fact, quite a subtle and significant thing. It turns out that Maxwell's four equations can be reduced to two equations through the use of “potentials”. Since [imath]\vec{\nabla}\cdot \vec{B}=0[/imath], it is possible to prove there exists a vector potential [imath]\vec{A}[/imath] such that [imath]\vec{B}=\vec{\nabla}\times \vec{A}[/imath] where [imath]\vec{\nabla}\times[/imath] is often called “the curl”. (Just don't worry about the proofs as, if we go into that, we will waste way too much time.) That relationship can be substituted into the last Maxwell equation, yielding:

[math]\vec{\nabla}\times \left(\vec{E}+\frac{1}{c}\frac{\partial \vec{A}}{\partial t}\right)=0[/math]

 

which brings in another proved relationship: if the curl of a field vanishes, the field can be written as the gradient of a scalar function, in this case, a scalar potential usually called [imath]\Phi[/imath]. Having expressed Maxwell's equations in terms of these “potentials”, it turns out that there is a certain arbitrariness in the definition of those potentials (essentially, using the fact that the curl of a gradient vanishes, one can add different functions to those potentials which will have no effect on the final answers but will nevertheless create major alterations in the structure of the equations themselves). That arbitrariness can be exploited in many ways to transform Maxwell's equations into various forms convenient for different purposes. The various transformations which can be achieved eventually led to another mathematical field called “gauge transformations”. Go read a little of that reference and you will understand why I talk about it with such broad vague strokes.
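That arbitrariness can be made concrete with a small symbolic sketch (my own illustration with sympy, not anyone's derivation from this thread; the particular scalar and potential are chosen purely for illustration): the curl of a gradient vanishes identically, so adding the gradient of any scalar [imath]\lambda[/imath] to [imath]\vec{A}[/imath] leaves [imath]\vec{B}=\vec{\nabla}\times\vec{A}[/imath] unchanged.

```python
# Sketch: the identity behind the gauge freedom, checked symbolically.
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return (sp.diff(f, x), sp.diff(f, y), sp.diff(f, z))

def curl(F):
    Fx, Fy, Fz = F
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

lam = sp.sin(x * y) + z**3 * sp.exp(x)        # arbitrary illustrative scalar
A = (x * y * z, sp.cos(x) * z, sp.exp(y))     # arbitrary illustrative potential

# curl(grad lambda) = 0 identically
assert all(sp.simplify(comp) == 0 for comp in curl(grad(lam)))

# B is unchanged by the gauge transformation A -> A + grad(lambda)
A_new = tuple(a + g for a, g in zip(A, grad(lam)))
assert all(sp.simplify(b1 - b2) == 0 for b1, b2 in zip(curl(A), curl(A_new)))
```

Any smooth scalar works in place of the example [imath]\lambda[/imath]; that is exactly the “different functions which have no effect on the final answers”.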

 

Physics is a vast jungle of logically consistent transformations useful in specific situations (different ways of looking at specific problems). Engineering is the career field which makes use of those various specific circumstances and, as I said, careers can be built on very minor aspects of that vast jungle. My interest is in the fundamental path which leads to those insights. I have discovered a rather simple unique path which leads directly to the fundamental underlying principles upon which perhaps all of those deep and profound discoveries are based. Once I deduce that Newton's physics, Schrödinger's equation, Dirac's equation and Maxwell's equations are all approximations to my equation, the rest of the jungle of physics which follows from those relationships also follows directly from my work. The details of that path are not really significant as they are almost all essentially mathematical transformations which simplify specific problems. What I am presenting is a fundamental paradigm from which all of modern physics flows.

 

Most of this essay was there to express the reason for my lack of interest in the details of physics in general. There are two issues at work here. First, I wanted to make it clear that teaching you physics is not a reasonable goal; if you really wanted to learn physics, your best bet would be to get yourself enrolled in a good graduate school and even then you would have to be satisfied with the prospect of declaring a specialty. Second, there are people far more qualified than myself who are capable of doing a far better job than I at showing how to use the constraints embedded in my equation to get to any specific place in that vast jungle of scientific knowledge. My interest is in the philosophic consequences implied by the paradigm standing behind that equation. All I am really interested in here at this moment is convincing you that Dirac's equation and Maxwell's equations can be directly deduced from the assumption that my equation is a universal representation of internal consistency. In other words, most of your questions are, to a great extent, essentially beside the point.

 

Nevertheless, I will do my best to clarify your difficulties.

I looked at what that square, "D'Alembert operator", stands for, and it appears to be a differential operator over t,x,y and z in minkowski space... So in that expression at the wiki page, the time derivative is just separated into a term of its own. Hmmm, not sure about the role of that factor [imath]\frac{1}{c^2}[/imath], I guess it has got something to do with relativistic transformation somehow?
That factor is there because the common units used to define t and the units used to define x are defined differently. In order to add two things together they must have the same character. I am sure you have heard the old adage, “you can't add apples to oranges”. The “D'Alembert operator” is a sum of terms and all those terms must have the same units; the [imath]c^2[/imath] takes care of that issue.
[math]

\left [ \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 \right ] \Phi= 4\pi\rho

[/math]

 

And again don't know what is happening on the right side; I do not know what they mean by [imath]\epsilon_0[/imath]

It is really of little consequence. I have above presented Maxwell's equations as deduced from the earlier three electromagnetic laws. In that deduction, I gave a link to information on the definitions of both [imath]\mu[/imath] and [imath]\epsilon[/imath]. What they both essentially do is reproduce the general effects of the many-body interactions (occurring because of the media whose properties are being defined by [imath]\mu[/imath] and [imath]\epsilon[/imath]) being ignored in the representation. It turns out that there exists a wave solution to Maxwell's equations (a free propagating wave which exists far from any actual charges or currents) which was totally unexpected prior to Maxwell. When one solves those equations for the wave solution, the velocity of the wave turns out to be given by the inverse of [imath]\sqrt{\mu\epsilon}[/imath], where [imath]\mu[/imath] and [imath]\epsilon[/imath] are qualities of the media within which the electric and magnetic fields exist. They can be directly measured with static experiments via the earlier three electromagnetic laws. The zero subscript merely indicates that these are the values of these constants in the absence of any physical media: i.e., in a vacuum.
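To make that last point concrete, here is a quick numerical sketch (my own; the SI values of the vacuum constants are standard textbook numbers, not taken from this thread):

```python
# Sketch: the wave speed 1/sqrt(mu0 * eps0) predicted by Maxwell's equations
# comes out numerically equal to the measured speed of light.
import math

mu0 = 4e-7 * math.pi        # vacuum permeability (pre-2019 SI defined value), T*m/A
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m

v = 1.0 / math.sqrt(mu0 * eps0)

# Agrees with c = 299,792,458 m/s to well within a km/s
assert abs(v - 2.99792458e8) < 1.0e3
```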

 

It was the fact that the measured speed of light was so close to that computed wave speed which indicated that light was indeed electromagnetic radiation.

 

But back to your post.

I'm really just guessing again... :I
I think you guess quite well!
Yup okay, I think I get some idea of what it's about from your explanation already. At least, I can understand that if the expectations have to do with "differences" in electromagnetic potentials over spatial regions, then a shift over that whole "landscape" is immaterial. At the same time, I'm sure there are many subtleties to that issue that I'd have to study carefully to understand properly... :I Especially as it appears the Lorenz gauge operates on Minkowski space, and they seem to imply on the wikipage that the non-relativistic form cannot be expressed in that manner.
This is one of the comments that led me to compose that horrific essay above. As I said, I was rather loose in my reference to gauge transformations. It is not at all as trivial as just adding a constant. These “gauge transformations” are rather complex mathematical re-expressions of the potentials via a transformation which does not alter the resultant electromagnetic fields. Fundamentally it is little more than a complex mathematical trick which makes the equations easier to solve in certain special circumstances.

 

With regard to the relativistic issues, Maxwell's equations, since they imply a fixed velocity independent of the observer's motion, are the very foundation of special relativity. As such, all of the possible gauge transformations equally conform to special relativity. As I said, the real issue of gauge transformation is to make solving the equations easier. And again, whole fields are devoted to those various different gauges. The Lorentz gauge is a very general representation convenient for calculations of general problems involving known time-dependent charges and currents. A great many simpler problems (where various things can be ignored) can be made easy via a proper gauge transformation. Essentially it is all just various mathematical relationships which are quite valuable to engineers committed to designing electromagnetic devices.

By this time seems obvious to me that ultimately they must arise from symmetries arising from unknown aspects of the data-to-be-explained...
I think life is a little more subtle than that. Yes symmetries arise from unknown or unknowable aspects but they can also arise from physical aspects of reality which can be ignored; not exactly the same thing.

 

With regard to the symmetries behind gauge transformations, it is interesting to note that my representation (my fundamental equation) actually possesses no “potentials”. The Dirac delta functions impose boundary conditions, quite a different thing. All energies arise through momentum effects (including so-called rest mass energy). The identification with “potential energy” came about when I made approximations necessary to reproduce the common modern physics expressions; essentially when I ignored the “rest of the universe” and replaced the effect of its inclusion with integrals over those portions. At that point, the energy of the rest of the universe, as a function of the actual position of the particle of interest, appears to be quite equivalent to the modern concept of potential energy.

 

The fact that, in my deduction, the native appearance of Maxwell's equations is in the Lorentz gauge makes that gauge appear to be the most fundamental representation. In that paradigm, the other gauges appear to be simple mathematical transformations leading to easy solutions in the special cases I spoke of. In the same vein, Einstein's space-time representation of reality can likewise be seen as a mathematical transformation (a transformation to what is called a covariant formulation) which makes it easy to prove that a given law (when expressed in a covariant form) is the same in all inertial frames: i.e., it is a mathematical transformation which simplifies the problem of finding solutions in specific cases, in many respects quite analogous to gauge transformations.

Ahha... Hmmm, a little Wikipedia adventure suggests that [imath]\vec{J}[/imath] stands for "current density", and [imath]\rho[/imath] stands for charge density...
Yes, it was Ben Franklin who invented the idea of mobile currents of charges. Later it was discovered that charges were quantized and existed in multiples of “e”, the charge on an electron. So charge density is proportional to electron density. And, when electrons are moving, we can talk about “current density”. These ideas (charge density and current density) arose long before it was realized that charges were quantized.
I couldn't find good definitions for them, the wikipedia page about Maxwell's equation has got a lot of information in it and I'm getting a bit lost in there :(
All I was referring to was the fact that the definition of charge density was essentially the number of electrons per cubic meter and current density amounted to the flow rate of these same charges; thus both charge density and current density were real numbers by convention (there is no [imath]\sqrt{-1}[/imath] in the definitions).
When I asked "I do not know what the amplitude of [imath]\gamma[/imath] means" I meant to ask, what does it represent?
It was introduced in order to allow me to factor out [imath]\vec{\alpha}[/imath] from both the momentum term and the interaction term (the sum over Dirac delta functions): i.e., when I factor out [imath]\vec{\alpha}[/imath] from [imath]\vec{\gamma}[/imath] I get [imath]\beta_{12}+\beta_{21}[/imath], exactly the original coefficient in my equation. I wanted that factor out there to make it look like Dirac's equation. Clearly, after I did that, I proved that the four components of [imath]\vec{\gamma}[/imath] were indeed exactly the four components of the standard electromagnetic potentials: the tau component being the scalar Coulomb potential and the three x,y and z components being the common vector potential in the Lorentz gauge of Maxwell's equations.
Yeah, I'm wondering if there should at least be a mention of that little error related to beta in the OP too. I figure you don't want to change the math because it just makes it more messy without changing the point at all, but someone might be wondering...
Yeah, I guess I need to go fix that.
Okay but maybe you should take out the sum sign as I guess it makes no sense in that step anymore...?
I don't understand which sum sign you are referring to.
Oh... Hmm, that's interesting.. Well I would like to hear those comments.
Down the road a ways. I would like to cover general relativity first.
EDIT: Should we move to general relativity? I'll probably still be replying to this thread about few things at some point though, but probably mostly about the conventional view of Dirac's equation.
Yes, I think we should; however, because of the things I am trying to get done before December, I would like to lay off things on the forum for a while. I will work something up when I find some spare time. Right now I am waiting for some paint to dry and my wife said I should take Sunday off. Sorry for this great rant.

 

Thanks for all your attention -- Dick

Posted

I have managed to follow this thread up to the point

 

[math]\frac{1}{4}\nabla^2\left(\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \vec{\Psi}_2\right) +\left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right)=\frac{1}{4c^2} \frac{\partial^2}{\partial t^2}\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right)[/math].

 

Or, if [imath]\nabla^2[/imath] is interpreted to be the standard three dimensional version (no partial with respect to tau) we should put in the tau term explicitly and obtain

[math]\frac{1}{4}\nabla^2\left(\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \vec{\Psi}_2\right) +\frac{1}{4}\frac{m_2^2c^2}{\hbar^2}\left(\vec{\Psi}_2^\dagger \vec{\gamma}\cdot \vec{\Psi}_2\right)[/math]

 

[math]+\left(2\psi^\dagger(\vec{x},t)\psi(\vec{x},t)\right)^2\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right) =\frac{1}{4c^2} \frac{\partial^2}{\partial t^2}\left(\vec{\Psi}_2^\dagger\vec{\gamma}\cdot\vec{\Psi}_2\right)[/math].

 

But at this point I can’t understand why the second term in the second equation is positive. AnssiH suggested that

 

Okay, so, given the definition:

 

[math]

m = -i\frac{\hbar}{c}\frac{\partial}{\partial \tau}

[/math]

 

then

 

[math]

\frac{\partial^2}{\partial \tau^2}

=

\frac{m_2^2c^2}{-i^2 \hbar^2}

=

\frac{m_2^2c^2}{\hbar^2}

[/math]

 

So yes that would get me to exactly what you wrote down.

 

Which is then substituted into the equation. That makes sense, except: shouldn’t it be

[math]\frac{\partial^2}{\partial \tau^2} = \frac{m_2^2c^2}{(-i)^2 \hbar^2} =

-\frac{m_2^2c^2}{\hbar^2}[/math]

 

which would make the second term negative, or is there some other reason that AnssiH wrote it out like he did that I am overlooking?
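A quick symbolic check (my own sketch with sympy; the plane-wave eigenfunction is just an illustrative choice) does seem to give the negative sign:

```python
# Sketch: act with d^2/dtau^2 on an eigenfunction of the mass operator
# m = -i(hbar/c) d/dtau and read off the sign of the resulting term.
import sympy as sp

tau = sp.symbols('tau', real=True)
m, c, hbar = sp.symbols('m c hbar', positive=True)

# psi is an eigenfunction of -i(hbar/c) d/dtau with eigenvalue m
psi = sp.exp(sp.I * m * c * tau / hbar)
assert sp.simplify(-sp.I * (hbar / c) * sp.diff(psi, tau) - m * psi) == 0

# Acting twice with d/dtau gives MINUS m^2 c^2 / hbar^2, not plus
ratio = sp.simplify(sp.diff(psi, tau, 2) / psi)
assert sp.simplify(ratio + m**2 * c**2 / hbar**2) == 0
```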

 

I have some other questions but from Doctordick’s last post I gather that everyone is quite busy at this time, so I don’t want to be too distracting. Besides, I want to look the opening post over some more before asking questions, and I don’t think I’m finished with the topic “An “analytical-metaphysical” take on Special Relativity!” either.

Posted

But back to Qfwfq's avatar; I could be wrong but I presume the [imath]\partial[/imath] with the slash through it is a shorthand notation for the differential operator in Minkowski geometry; the partial with respect to space being momentum and the partial with respect to time being energy (both of which are multiplied by [imath]\sqrt{-1}[/imath], the common definition of “i”). The [imath]\Psi[/imath] clearly refers to the “wave function” defined by the equation and “m” is the mass of the quantum entity associated with the “field” being defined by the function [imath]\Psi[/imath]. In essence it is a shorthand notation for what is commonly called the [????] equation. I am definitely going senile here as the name of the equation is on the tip of my tongue but totally eluding me....

 

Sorry, this was just too funny not to chime in.

 

You guys are supposedly chatting about the Dirac equation and you, DD, call yourself a physicist who once did QED calculations??

 

The equation in Q's avatar is none other than the Dirac equation.

 

The [math]\partial\!\!\!/[/math] is Feynman slash notation (introduced for QED) that represents the contraction of the partial derivative with the Dirac matrices: [math]\gamma^{\mu}\partial_{\mu}[/math].

 

In this case [math]\psi[/math] is a 4-component (Dirac) spinor.
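For anyone following along, the contraction above involves four 4×4 matrices. A small numerical sketch (mine, using the standard textbook Dirac representation, not anything specific to this thread) verifies their defining relation [imath]\{\gamma^\mu,\gamma^\nu\}=2\eta^{\mu\nu}I[/imath]:

```python
# Sketch: build the Dirac gamma matrices and check the Clifford algebra.
import numpy as np

I2 = np.eye(2, dtype=complex)
zero2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Standard Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, s_i], [-s_i, 0]]
gamma = [np.block([[I2, zero2], [zero2, -I2]])]
gamma += [np.block([[zero2, s], [-s, zero2]]) for s in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} I_4
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
```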

Posted

A quick comment;

 

Which is then substituted into the equation. That makes sense, except: shouldn’t it be

[math]\frac{\partial^2}{\partial \tau^2} = \frac{m_2^2c^2}{(-i)^2 \hbar^2} =

-\frac{m_2^2c^2}{\hbar^2}[/math]

 

which would make the second term negative, or is there some other reason that AnssiH wrote it out like he did that I am overlooking?

 

No, I totally overlooked that myself and I must say I do not know the answer. I'm always trying to be extra careful but that's my lack of experience showing right there :P

 

and I don’t think I’m finished with the topic “An “analytical-metaphysical” take on Special Relativity!” either.

 

I've been meaning to give you a reply but haven't had good time to do it yet... But I will at some point :)

 

Jay-qu, we are chatting about the relationship between Dirac's equation and some symmetry statements (expressed as an equation called the "fundamental equation" in the OP).

 

All interest and help is welcome :) (There are probably more typos and errors in the algebra, quite a few have been fixed by now though)

 

Actually I think DD's last post is a pretty good statement of the purpose of this examination.

 

And DD, I'll reply to the previous post at a better time too... But let it be said that I do understand what you mean by the field of physics being huge, and I know exactly what you mean by your lack of interest in all the details/all the different ways of expressing the same things. My interest in the conventional picture comes from the feeling that it is very hard to discuss these matters with people unless I get a good idea of the conventional view. Just getting into the same terminology should be valuable. For one, I do not think I'd have a good idea of what is going on with the analysis of special relativity (and its significance) if I hadn't understood relativity first.

 

-Anssi

Posted
But at this point I can’t understand why the second term in the second equation is positive.
Neither can I. I apologize for my sloppy algebra. Worse than that, the wrong sign propagates all the way down to my comparison to the Yukawa nuclear potential. At that point the error should have been obvious to me. The potential, as I had it represented,

[math]\Phi(r,\theta,\phi)=\frac{\rho}{r}\;e^{\frac{mc}{\hbar}r}[/math]

 

would be extremely strong at large distances, not at small distances! (The exponent should carry a minus sign.) :doh:
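A two-line numerical look makes the point (my own sketch; the constants are set to 1 purely for illustration):

```python
# Sketch: compare the two signs of the exponent in the Yukawa-type potential
# phi(r) = (rho/r) * exp(sign * (mc/hbar) * r), with rho = mc/hbar = 1.
import math

def phi(r, sign):
    return (1.0 / r) * math.exp(sign * r)

# With the positive exponent (as written above) the potential GROWS at large r
assert phi(10.0, +1) > phi(0.1, +1)
# With the negative exponent it is short-ranged: strong only at small r
assert phi(0.1, -1) > phi(10.0, -1)
```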

 

All I can really say is that my stuff certainly does need proof reading. :(

I have some other questions but from Doctordick’s last post I gather that everyone is quite busy at this time, so I don’t want to be too distracting. Besides, I want to look the opening post over some more before asking questions, and I don’t think I’m finished with the topic “An “analytical-metaphysical” take on Special Relativity!” either.
Thanks, maybe Anssi can be of assistance.
The equation in Q's avatar is none other than the Dirac equation.

 

The [math]\partial\!\!\!/[/math] is Feynman slash notation (introduced for QED) that represents the contraction of the partial derivative with the Dirac matricies [math]\gamma^{\mu}\partial_{\mu}[/math].

 

In this case [math]\psi[/math] is a 4-component (Dirac) spinor.

Thanks for the information. I am sorry but that particular notation was not prevalent forty years ago (at least not where I was educated). I googled “Feynman slash notation” and found no information on the actual date it was introduced by Feynman so I googled “Feynman publications” and suspect it was probably introduced in 1961 with his “Quantum Electrodynamics; a lecture note and reprint volume”, which would leave very little time for the notation to become standard prior to my thesis work. Plus that, I have always generally hated that kind of notation and would much rather work with the standard long expressions, as most of my work was programming the number crunching and, back in those days, we didn't have “object oriented” programming (I even did a lot in machine language to conserve space and time): i.e., one had to work out the exact mathematical expressions in detail. That kind of notation is usually used in hand-waving arguments, not in detailed calculations.

 

As I have mentioned elsewhere, I left the physics community when I received my Ph.D. as I was quite disappointed with the fundamental justifications of their arguments (I saw a lot of it as being "chock full of bullshit"). I really haven't had any interactions with them since, so it is not unreasonable at all that I was unaware of exactly what was meant by those slashes. In essence, though I may have seen such things, I wouldn't have paid much attention to such things anyway. :shrug:

And DD, I'll reply to the previous post at a better time too... But let it be said that I do understand what you mean by the field of physics being huge, and I know exactly what you mean by your lack of interest in all the details/all the different ways of expressing the same things. My interest in the conventional picture comes from the feeling that it is very hard to discuss these matters with people unless I get a good idea of the conventional view. Just getting into the same terminology should be valuable. For one, I do not think I'd have a good idea of what is going on with the analysis of special relativity (and its significance) if I hadn't understood relativity first.
Oh, I agree with you completely on that but, as Jay-qu has pointed out, the “conventional view” is not a permanent thing and I am certainly not up to date on the latest professional jargon (or notation if you will). I basically think physics took an erroneous left turn back in the early 1900's and kind of dismissed most of their “declarations of truth” after Einstein!

 

I will do what I can to explain what they are talking about but we may have to ask some of these younger people what they mean when it comes to notation. I am afraid a lot of them just don't know what they mean, with or without fancy notation. I am just an opinionated old man. :lightsaber2:

 

Have fun -- Dick

  • 1 month later...
Posted

I suspect that people are still pretty busy but I’m going to post some of my questions for when someone gets a chance to look at them.

 

(Note that [imath]\vec{\alpha}[/imath] is still a four dimensional operator but that [imath]\vec{p}[/imath] is being taken as three dimensional: i.e., [imath]\alpha_\tau \cdot \vec{p}=0[/imath]). That comment is just to maintain consistency with Dirac's definition of [imath]\vec{p}[/imath]: a three dimensional momentum vector. It is interesting to note that if the second entity (the supposed massless element) is identified with the conventional concept of a free photon, its energy is given by c times its momentum. If the vector [imath]\vec{p}_2[/imath] is taken to be a four dimensional vector momentum in my [imath]x,y,z,\tau[/imath] space we can write the energy relationship for element 2 as

[math]\left\{\frac{i\hbar}{\sqrt{2}}\frac{\partial}{\partial t}\vec{\Psi}_2\right\}\vec{\Psi}_1 =c\vec{\alpha}_2\cdot\vec{p}_2\vec{\Psi}_2\vec{\Psi}_1=\sqrt{\frac{1}{2}}|cp_2|\vec{\Psi}_2\vec{\Psi}_1[/math]

 

using the fact that the value of [imath]\vec{\alpha}_2\cdot\vec{p}_2=\sqrt{\frac{1}{2}}|\vec{p}_2|[/imath] (which can be deduced from the fact that the dot product is the component of alpha in the direction of the momentum times the magnitude of that momentum). This identification drops out the energy term, the mass term, and the momentum term associated with the second element, independent of that element's mass, and brings the fundamental equation into the form

 

I have to wonder if this is a requirement of any massless element or if only a photon will have this property. In other words: can we derive this directly from the idea of it being a massless element (in which case a photon is the only kind of massless element), or is it part of the definition of a photon, so that there may be other massless elements that don’t behave like a photon because their energy operator is different? Or might this have something to do with considering the element to be a boson?

 

Also, I’m not sure I understand why it drops out the mass term. Is this just due to it being a massless element, so that the mass term is zero, or is it that the mass term is just a momentum term in the [math]\tau[/math] direction and so just an extension of the standard definition?

 

If one now defines a new operator [imath]\vec{\gamma}=\vec{\alpha}_1\beta[/imath], since [imath]\vec{\alpha}_1\cdot\vec{\alpha}_1=2[/imath] (it consists of four components each being 1/2) it should be clear that [imath]\vec{\alpha}_1\cdot\vec{\gamma}=2\beta[/imath]. Making that substitution in the above we have

 

At this point I am a little puzzled by just what your [math]\vec{\alpha}[/math] and original [math]\beta_{ij}[/math] operators are. The original definition of [math]\vec{\alpha}[/math] and [math]\beta_{ij}[/math] required that they anticommute with each other and that the square of [math]\alpha[/math] or [math]\beta[/math] is [math]\frac{1}{2}[/math] (I assume that by this you mean [math]\frac{1}{2}[/math] times the unit operator in whatever space they are in; that is, their square is not a scalar).

 

With these definitions I’m not sure how the multiplication works with [math]\vec{\alpha}[/math], as it looks like you use it like a vector with a dot product, not like a matrix, which is how the [math]\beta[/math] operator seems to work. But if it is a vector, I can’t see how they can anticommute, or how multiplication can be defined in some useful way so that they will anticommute. So it seems to make more sense to consider them to be matrices, in which case I can’t understand the idea of a dot product unless there is an inner product that can be defined to work with a matrix. Although I know that it is not too hard to define different abstract spaces for operators, which looks like what you have done. That is, you have not really said that they are in a matrix space; it is just a convenient space to consider them part of.
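One concrete way to see how a single object can act both like a vector (under a dot product with [imath]\vec{\nabla}[/imath] or [imath]\vec{p}[/imath]) and like a matrix (acting on [imath]\vec{\Psi}[/imath]) is to represent it as a list of matrices, one per component. The sketch below is only an illustration, not anything taken from the thread: it uses standard 4x4 Dirac-style matrices rescaled by [imath]1/\sqrt{2}[/imath] so that each component squares to 1/2, then checks numerically that the five operators mutually anticommute and that the four-component "alpha dot alpha" equals 2 times the identity.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def off(s):
    # block anti-diagonal matrix [[0, s], [s, 0]]
    return np.block([[Z2, s], [s, Z2]])

scale = 1 / np.sqrt(2)  # rescale so each operator squares to 1/2

beta_u = np.block([[I2, Z2], [Z2, -I2]])  # unscaled Dirac beta
g5 = off(I2)                              # gamma_5 = [[0, I], [I, 0]]

# Four mutually anticommuting "alpha" components (x, y, z, tau);
# alpha_tau is taken as i * beta * gamma_5, which is Hermitian,
# squares to the identity, and anticommutes with the other four.
alpha = [scale * off(sx), scale * off(sy), scale * off(sz),
         scale * 1j * beta_u @ g5]
beta = scale * beta_u

ops = alpha + [beta]
for i in range(len(ops)):
    for j in range(len(ops)):
        anti = ops[i] @ ops[j] + ops[j] @ ops[i]
        # {O_i, O_j} = delta_ij * identity (each square is 1/2 * identity)
        expect = np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(anti, expect), (i, j)

# "alpha . alpha" = sum of the squares of the four components
dot = sum(a @ a for a in alpha)
print(np.allclose(dot, 2 * np.eye(4)))  # True
```

The dot product here is just a component-wise sum of matrix products, which is how a "vector of matrices" supports both operations at once; whether the operators in the thread live in a literal matrix space or only an abstract one, this representation satisfies the stated algebra.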

 

I’m also not quite sure what [math]\vec{\gamma}=\vec{\alpha}_1\beta[/math] is, but it looks most easily defined as a matrix. And I still don’t know how you are using both a dot product and matrix multiplication with the same element, but I suspect that it is more of an abstract space than a matrix space again.

 

This brings up one more point that might be interesting: is a basis used in the fundamental equation? I have not tried to count the number of operators to see if they would form a basis, and I’m not sure how many would be needed since I don’t know what they are, but it looks like they could. Either way, will this say something about possible solutions when we consider elements to have no effect on the equation of interest, or to be valid or invalid? We would no longer have a basis of [math]\vec{\alpha}[/math] and [math]\beta[/math] operators from which to form the right side of the fundamental equation.

 

Also, at this point, is the number of dimensions that we are considering still going to have no effect on the final equation? I ask this because it seems that the value of [math]\vec{\alpha}_1\cdot\vec{\alpha}_1=2[/math] depends on the number of dimensions in which the [math]\vec{\alpha}_1[/math] operator is placed. Or can any effect that this might have just be moved into one of the arbitrary constants in the equation, and so have no effect on the final result?
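For what it's worth, the dimension dependence of that value is easy to make explicit: if each of the n components squares to 1/2, then

[math]\vec{\alpha}_1\cdot\vec{\alpha}_1=\sum_{i=1}^{n}\alpha_i^2=\frac{n}{2}[/math]

which equals 2 only for n = 4, so the factor multiplying [imath]\beta[/imath] in [imath]\vec{\alpha}_1\cdot\vec{\gamma}[/imath] does carry the dimensionality; whether it can be absorbed into one of the arbitrary constants is a separate question.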

 

However, the photon (which does bear a close resemblance to what I have put forth as a fundamental element) is quite often described as being the consequence of quantizing the electromagnetic field. Anyone who has followed the derivation of my fundamental equation knows that, under that derivation, two lone fermions cannot interact: i.e., the Dirac delta function vanishes via the Pauli exclusion principle. Since the electron (the particle element number one is to represent) is a fermion, the second element has to be a boson. If that is the case, the second element in the above deduction must obey Bose-Einstein statistics: i.e., an unlimited number of particles may occupy the same state at the same time.

 

I’m not entirely sure that I understand what you are saying, but it sounds like this means that if we considered just two electrons there would be no interaction term between them (they are both fermions); the only way for them to interact is to consider the effect that a boson has on them. On the other hand, two bosons will interact without a fermion: that is, two photons could interact with each other (they are both bosons). Also, a boson and a fermion can interact in the same way that two fermions would. The Dirac equation is a special case of this, where a photon and an electron interact with each other. If this is the case, wouldn’t the case of multiple electrons and only one photon also be a very simple case to consider? Or would this be an almost trivial addition to the Dirac equation, as it could only add terms that affect [math]\vec{\Psi}_2^\dagger(x,t)\cdot\vec{\Psi}_2(x,t)[/math], so the Dirac equation would in fact be unchanged?

 

I’m going to stop there for now and continue when someone gets a chance to answer it.

Posted
I have to wonder if this is a requirement of a massless element or if only a photon will have this property. In other words, can we derive this directly from the idea of it being a massless element, so that a photon is the only massless element? Or is it part of the definition of a photon, in which case there may be other massless elements that don’t behave like a photon because their energy operator is different? Or might this have something to do with considering the element to be a boson?
Massless is “without mass”: i.e., the mass term is zero. When I was a student, the photon was the only known massless boson and the neutrino was a massless fermion. Now it seems that they consider the neutrino to have a very small mass (personally I wonder if that could be a subtle consequence of Pauli exclusion). Of course the supposed graviton was assumed to also be a massless boson but I am not aware of a usable graviton theory outside my work.
With these definitions I’m not sure how the multiplication works with [math]\vec{\alpha}[/math], as it looks like you use it like a vector with a dot product, not like a matrix, which is how the [math]\beta[/math] operator seems to work. But if it is a vector, I can’t see how they can anticommute, or how multiplication can be defined in some useful way so that they will anticommute. So it seems to make more sense to consider them to be matrices, in which case I can’t understand the idea of a dot product unless there is an inner product that can be defined to work with a matrix. Although I know that it is not too hard to define different abstract spaces for operators, which looks like what you have done. That is, you have not really said that they are in a matrix space; it is just a convenient space to consider them part of.
The alpha and beta operators are defined as attached to specific numerical reference labels (those variables referred to as x). The alpha operators are directly attached to a specific reference whereas the beta operators are attached to a specific pair of references. If those variables are taken to be independent (i.e., seen as different directions in a geometry) then, since the n reference variables are seen as a vector (in an n dimensional orthogonal space; four dimensional in my presentation), the attached alpha and beta operators are also collected in terms of that vector representation.
I’m also not quite sure as to what [math]\vec{\gamma}=\vec{\alpha}_1\beta[/math] is but it looks most easily defined as a matrix. And I still don’t know how you are using both a dot product and matrix multiplication with the same element but suspect that it is more of an abstract space then a matrix space again.
The alpha and beta operators are vector entities regarding its association with the reference labels x but can be seen as a matrix with regard to the vector nature of the solution [imath]\vec{\Psi}[/imath] which is defined in an abstract space having nothing to do with the vector space defined in association with the reference labels.
I’m not entirely sure that I understand what you are saying, but it sounds like this means that if we considered just two electrons there would be no interaction term between them (they are both fermions); the only way for them to interact is to consider the effect that a boson has on them. On the other hand, two bosons will interact without a fermion.
In my picture, an electron is not a fundamental entity. It is rather a description of a phenomenon involving interactions between fermions brought about by bosons. Without the existence of photons, electrons could not exist.

 

Have fun -- Dick
