
Posted

Hi Anssi, I think you are beginning to get it. And I don't help when I screw things up (which I seem to do on a regular basis). When I read your post I immediately noticed a rather grave error in the post of mine you quote. With regard to the g(x), I had written

[math]g(x)= \sum_{i \neq 2}^n \beta_{ij}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_j)\vec{\Psi}_r dV_r[/math],

 

when I should have written

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math].

 

First, the sum is over the set #2, not “i not equal 2”; second, there is no “j” in there; and third, the original expression (when we summed over i and j) had [imath]\vec{x}_1[/imath] in both [imath]\delta(\vec{x}_i-\vec{x}_1)[/imath] and [imath]\delta(\vec{x}_1-\vec{x}_j)[/imath], which creates two identical terms. Sorry about that; I have made a correction in the post you quoted and would appreciate it if you would correct your quote of it. Thanks!
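
To spell that out (assuming, as the identical-terms remark implies, that [imath]\beta_{ij}[/imath] and the delta function are symmetric under exchange of their two indices), the terms of the original double sum which involve [imath]\vec{x}_1[/imath] are

[math]\sum_{j \neq 1}\beta_{1j}\delta(\vec{x}_1-\vec{x}_j)+\sum_{i \neq 1}\beta_{i1}\delta(\vec{x}_i-\vec{x}_1)=2\sum_{i \neq 1}\beta_{i1}\delta(\vec{x}_i-\vec{x}_1),[/math]

which is exactly where the leading factor of 2 comes from.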

 

Other than not noticing that error of mine, the only thing I found fault with was the order of your expansion. I would have gone from

[math]\left\{ \vec{\alpha}\cdot \vec{\nabla} + g(\vec{x}) \right\}^2 \vec{\Phi} = \left\{ K\frac{\partial}{\partial t} \right\}^2 \vec{\Phi}[/math]

 

to the expression

[math]\left\{ \alpha_x \frac{\partial}{\partial x}+\alpha_\tau \frac{\partial}{\partial \tau} + g(\vec{x}) \right\}^2 \vec{\Phi} = \left\{ K\frac{\partial}{\partial t} \right\}^2 \vec{\Phi}[/math]

 

or maybe just (in my head) to

{the sum of a bunch of things multiplied by anti-commuting operators which square to 1/2[math]\}^2 \vec{\Phi} = \left\{ K\frac{\partial}{\partial t} \right\}^2 \vec{\Phi}[/math]

 

remembering that the first two were [math]\frac{\partial}{\partial x}[/math] and [math]\frac{\partial}{\partial \tau}[/math]: i.e., noting that all cross terms vanish because of the anti-commuting operators which happen to square to 1/2. Why pull off the first two terms to be handled differently? Yeah, I know, you are trying to be careful. The only thing you really have to remember here is that you get only the direct terms: two partials and then the collection of terms in the sum making up g(x); all you have to do is square them all and factor out the 1/2 from the square of the operators. Separating out those two differential cross terms as a problem just makes you think you have a problem.
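
For any lurkers who want to see that cancellation concretely, here is a minimal numerical sketch (my own toy illustration, not part of the derivation). Scaled Pauli matrices stand in for two anti-commuting operators which square to 1/2; with ordinary commuting coefficients, the square of their sum contains no cross terms:

[code]
import numpy as np

# Scaled Pauli matrices: two anti-commuting operators which square to 1/2.
A = np.array([[0., 1.], [1., 0.]]) / np.sqrt(2)   # stands in for alpha_x
B = np.array([[1., 0.], [0., -1.]]) / np.sqrt(2)  # stands in for alpha_tau

assert np.allclose(A @ A, 0.5 * np.eye(2))           # A^2 = 1/2
assert np.allclose(B @ B, 0.5 * np.eye(2))           # B^2 = 1/2
assert np.allclose(A @ B + B @ A, np.zeros((2, 2)))  # A and B anti-commute

# With ordinary commuting coefficients a and b, the cross terms cancel:
# (aA + bB)^2 = a^2 A^2 + b^2 B^2 + ab(AB + BA) = (a^2 + b^2)/2 * I
a, b = 3.0, 4.0
S = a * A + b * B
print(np.allclose(S @ S, 0.5 * (a**2 + b**2) * np.eye(2)))  # True
[/code]

Any pair of anti-commuting matrices squaring to 1/2 would do; the Pauli matrices are just the smallest convenient choice.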

(Note to lurking observers: the resultant of [imath]g(\vec{x})^2[/imath] was defined in the first post to be [imath]\frac{1}{2}G(\vec{x})[/imath], as opposed to just [imath]G(\vec{x})[/imath], as it is in the example in the previous post.)
The important fact is that it is “some function of x”; but you are right, I should be more careful.
And the cross terms that were giving [me] trouble:
Just shouldn't have been giving you any trouble.
I think I got it right now. Let me know if I'm still making a mistake somewhere. I'll try to continue from here soon.
No, I wouldn't call what you did a mistake, just a devious path adding to confusion.

 

And you are welcome to any help I can provide.

 

Have fun -- Dick


Posted
Hi Anssi, I think you are beginning to get it. And I don't help when I screw things up (which I seem to do on a regular basis). When I read your post I immediately noticed a rather grave error in the post of mine you quote. With regard to the g(x), I had written

[math]g(x)= \sum_{i \neq 2}^n \beta_{ij}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_j)\vec{\Psi}_r dV_r[/math],

 

when I should have written

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math].

 

First, the sum is over the set #2, not “i not equal 2”; second, there is no “j” in there; and third, the original expression (when we summed over i and j) had [imath]\vec{x}_1[/imath] in both [imath]\delta(\vec{x}_i-\vec{x}_1)[/imath] and [imath]\delta(\vec{x}_1-\vec{x}_j)[/imath], which creates two identical terms. Sorry about that; I have made a correction in the post you quoted and would appreciate it if you would correct your quote of it. Thanks!

 

I fixed that in the quotes, and I started going back to refresh my memory on the exact details of how that [imath]g(\vec{x}_1)[/imath] was obtained, and I must say I'm not sure about it anymore.

 

Taking one step backwards, it was essentially the integral:

 

[math]\int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=1}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r[/math]

 

Actually, the exact quote was:

 

...If one further defines the integral within the curly braces to be [imath]g(\vec{x}_1)[/imath], [imath]\vec{x}_1[/imath] being the only variable not integrated over...

 

And I'm still not entirely sure what it means that there is a variable not integrated over. Does that mean

 

[math]g(\vec{x}_1) = \int \vec{\Psi}_r^\dagger\cdot \left[ \sum_{i=2}^n \vec{\alpha}_i \cdot \vec{\nabla}_i +f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)\right] \vec{\Psi}_r dV_r[/math]

 

i.e. one variable is left out of that integration? That's what I tried to ask about before, but I guess I did not really understand the answer, being that I'm still all confused :D

 

Anyhow, looking at where the function [imath]f_0[/imath] came from, it included alphas too as far as I can see (looking at the equation you wrote in post #26):

http://hypography.com/forums/philosophy-of-science/15451-deriving-schr-dingers-equation-my-fundamental-3.html#post234940

 

So I'm not really sure how it got crunched into simply:

 

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math].

 

One more thing, while thinking about this, I spotted something else in the OP that looks like an error but I'm not sure.

 

When you left multiply by [imath]\vec{\Psi}_2^\dagger[/imath] and integrate over the set #2, the resulting equation you have written down has got just "i" written down underneath one of the sums (the fourth one). Looking at the previous equation, it seems to me like it should say #2 there? If that is a typo, it has also trickled its way to few other posts down the line.

 

Other than not noticing that error of mine, the only thing I found fault with was the order of your expansion. I would have gone from

[math]\left\{ \vec{\alpha}\cdot \vec{\nabla} + g(\vec{x}) \right\}^2 \vec{\Phi} = \left\{ K\frac{\partial}{\partial t} \right\}^2 \vec{\Phi}[/math]

 

to the expression

[math]\left\{ \alpha_x \frac{\partial}{\partial x}+\alpha_\tau \frac{\partial}{\partial \tau} + g(\vec{x}) \right\}^2 \vec{\Phi} = \left\{ K\frac{\partial}{\partial t} \right\}^2 \vec{\Phi}[/math]

 

Hmm, yes, that seems like a fair bit simpler route to take :D Hehe... I was somewhat fixated on the route I was attempting before.

 

-Anssi

Posted

Hi Anssi,

 

I took such a long time to answer your post because you seem to have a few things confused and I am not sure of exactly what goes on in your head. It seems that your current problems relate to how arguments vanish via integration and where and why the functions f, f0, g and G come from. I think all these issues are actually far simpler than you imagine. Math "is" logic and mathematicians make things as simple as possible (if something is not simple they extend the notation only as far as is necessary and no farther). Let us start with differentiation. Differentiation is defined by the following:

[math]\frac{dy}{dx}=\lim_{\Delta x \rightarrow 0}\frac{y(x+\Delta x)-y(x)}{\Delta x}[/math].

 

A little common sense can usually uncover what the differential of a function is, though some functions can be somewhat elusive. The opposite of a differential is an integral. If we have already deduced that g(x) is the differential of f(x) then f(x) is the integral of g(x). There is one serious complication of that statement: since the differential of a constant is zero, the integral of g(x) could be f(x) plus any constant: i.e., the integral is not unique. It is essentially this complication which leads to two very different definitions of an integral: the indefinite integral and the definite integral.
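
As a concrete check of that limit definition, here is a minimal numerical sketch (my own illustration, not part of the post above): shrinking [imath]\Delta x[/imath] drives the difference quotient toward the known derivative.

[code]
def difference_quotient(y, x, dx):
    """The quotient whose limit (dx -> 0) defines dy/dx."""
    return (y(x + dx) - y(x)) / dx

y = lambda x: x**3      # y(x) = x^3, so dy/dx = 3x^2
x = 2.0                 # the exact derivative here is 12
for dx in (1e-1, 1e-3, 1e-6):
    print(dx, difference_quotient(y, x, dx))
# The printed values approach 12 as dx shrinks.
[/code]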

 

The indefinite integral is that f(x) and one must always remember that the "C" exists (it is the existence of that constant that makes the integral “indefinite”). The indefinite integral is a function of the argument being integrated over whereas the definite integral is essentially an evaluation of that integral. The definite integral is defined to be the difference between the indefinite integral evaluated at two different values of x called the “upper limit” and the “lower limit” (the point being that the constant creates no complications as C-C=0). Thus it is that the definite integral is a function of those limits and not a function of the argument being integrated over.

[math]\int_a^bg(x)dx = f(b)-f(a)[/math]

 

The integral can be defined as

[math]\int_a^b g(x)dx = \lim_{\Delta x \rightarrow 0}\sum_{i=1}^n g(x_i)\Delta x [/math]

 

where [imath]\Delta x = (x_{i+1}-x_i)[/imath], [imath] x_1 = a[/imath] and [imath] x_n = b[/imath]. I think I told you once that the integral sign, [imath]\int[/imath], started life as an “S” which stood for “sum”. If you look at the sum above, you should be able to see that the integral of g(x) is the sum of a bunch of vertical slices between zero and the plot of the function g(x): i.e., the definite integral of g(x) can be seen as the area under the plot of g(x) between a and b.
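
To make that “sum of vertical slices” picture concrete, here is a small sketch (my own, with g(x) = 2x chosen arbitrarily) comparing the sum against f(b) - f(a) for the antiderivative f(x) = x^2:

[code]
import numpy as np

def riemann_sum(g, a, b, n):
    """Sum n vertical slices of width dx under the plot of g."""
    x = np.linspace(a, b, n, endpoint=False)  # left edge of each slice
    dx = (b - a) / n
    return np.sum(g(x) * dx)

g = lambda x: 2 * x     # g(x) = 2x, with antiderivative f(x) = x^2
a, b = 1.0, 3.0
print(riemann_sum(g, a, b, 100000))  # ~ 8.0
print(b**2 - a**2)                   # f(b) - f(a) = 8.0 exactly
[/code]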

 

What is important here is that the definite integral of g(x) is no longer a function of x; it is instead a function of exactly what values of x were chosen to be those limits.

 

In every case of interest to us, in my derivation, we are talking about a “definite” integral and the “limits” on the integration are "all possible values": i.e., we are, in general, talking about a probability and our limits will generally be minus infinity to plus infinity. As a result, the answer to the “operation called integration” in our case is a number, not a function; however, life gets a little more complex when we are talking about functions of more than one variable. If we have more than one variable, we have to specify what variable is being integrated over. In that case (where we integrate over one variable), our answer depends upon those other variables.

[math]\int_a^bg(x,y,z)dx = f(b,y,z)-f(a,y,z)[/math]

 

In other words, the answer isn't a number; it is instead, a function of those other arguments (remember, a function is nothing more than something which yields a specific answer given a specific argument). Every time we integrate over one of those variables, the variable being integrated over disappears from the final result (we are essentially summing over all possible values for that argument so that argument is no longer a specific number and ceases changing the answer: i.e., the answer is no longer a function of that argument).
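
That disappearance of the integrated variable can also be checked symbolically; in this sketch (my own, with an arbitrary g(x, y, z)) the result of integrating over x still depends on y, z and the limits, but not on x:

[code]
import sympy as sp

x, y, z, a, b = sp.symbols('x y z a b')
g = x * y + z * x**2    # an arbitrary g(x, y, z)

# Integrate over x only: the result still depends on y, z (and on
# the limits a, b), but the variable x itself has disappeared.
result = sp.integrate(g, (x, a, b))
print(result)           # y*(b**2 - a**2)/2 + z*(b**3 - a**3)/3, up to form
print(result.has(x))    # False: x has been integrated out
[/code]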

 

So let us go back to the first post of this thread. If you reread that post you should understand that my purpose is to reduce the number of arguments in my fundamental equation. I do this by integrating over those arguments in a careful and orderly way so as to keep close track of exactly what I get. Of course, I can not know what I will get as I do not know what the functions I am integrating over actually look like; however, it turns out that there are some important things about the result that I can know. My first step was to divide all the arguments into two sets (set #1 and set #2). I put forth the idea that it was possible to define [imath]\vec{\Psi}_1[/imath] and [imath]\vec{\Psi}_2[/imath] such that

[math]\vec{\Psi}(\vec{x}_1,\vec{x}_2,\cdots, t)=\vec{\Psi}_1(\vec{x}_1,\vec{x}_2,\cdots,\vec{x}_n, t)\vec{\Psi}_2(\vec{x}_1,\vec{x}_2,\cdots, t).[/math]

 

where [imath]\vec{\Psi}_1^\dagger \cdot \vec{\Psi}_1[/imath] yielded the probability of set #1 and [imath]\vec{\Psi}_2^\dagger \cdot \vec{\Psi}_2[/imath] yielded the probability of set #2 (given set #1). That “given set #1” is a very important aspect of this factorization of [imath]\vec{\Psi}[/imath]. You should understand that the probability of some specific set #2 can very easily depend upon what was chosen as set #1. So, though [imath]\vec{\Psi}_1[/imath] can be expressed as a function of the arguments called set #1, [imath]\vec{\Psi}_2[/imath] can not be expressed as a function of the arguments called set #2. As I have defined them, [imath]\vec{\Psi}_2[/imath] is a function of both sets of arguments.
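
A plain-probability analogy (an aside of my own, not part of the derivation) may make that factorization feel familiar: it is the same decomposition used for any joint probability,

[math]P(\#1,\#2)=P(\#1)\,P(\#2|\#1),[/math]

where the conditional factor [imath]P(\#2|\#1)[/imath] necessarily depends on the set #1 arguments as well, exactly as [imath]\vec{\Psi}_2[/imath] does.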

 

It follows from this fact that, when I left multiply by [imath]\vec{\Psi}_2^\dagger[/imath] and integrate over all the arguments of set #2, I am left with something which is a function of those arguments which constitute set #1. I have no idea as to what the result of that integration is because I don't really know what [imath]\vec{\Psi}_2[/imath] is; but I do know two things about it. First the result will be a function of the arguments from set #1 (that is, it most certainly can be written as [imath]f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)[/imath]) and second, it will be a sum over many different integrals (one from every term in my various sums). I then point out that every one of those terms has a different anti-commuting operator as a factor except for one term. The one term which does not arises from the integration of the term on the right hand side of the equation (the one with the time derivative) which has no such operator.

 

This whole thing is simply written as [imath]f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)[/imath] because I have no idea as to its actual structure (outside the fact that it is a sum of functions times anti-commuting operators). It is because I know that squaring the thing will drop out all cross terms that I am so interested in removing that lone term which has no anti-commuting operator attached to it. I remove it through the mechanism of presuming my expectations for the rest of the universe (outside the single argument of interest to me) are not dependent upon time. This is equivalent to presuming that time dependence of the rest of the universe has no impact upon the probability of that lone variable (essentially that the time dependence of the rest of the universe has no impact on the explanation of the experiment being described by the equation being derived). This leads to a specific solution for that time dependence of both set #2 and set “r”: i.e., the effect is to generate a simple constant term

[math]K\left\{\int \vec{\Psi}_2^\dagger \frac{\partial}{\partial t}\vec{\Psi}_2dV_2\right\}\vec{\Psi}_1=iKS_2\vec{\Psi}_1[/math]

 

for set #2 and a similar term for set “r”.

 

The final step in reducing the number of arguments to “one” is to again separate my arguments into two sets. (You should understand that only arguments from set #1 still remain in that equation as we have integrated over all arguments from set #2.) This time I divide them into [imath]\vec{x}_1[/imath] and all the remaining arguments. Once again, the integral over the time derivative generates a term which does not contain an anti-commuting operator. Both this term and the one above are moved back to the right of my equations. That is how f is changed to [imath]f_0(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)[/imath]: that single term without an anti-commuting operator (now moved to the right side) is totally removed by redefining [imath]\vec{\Psi}_0 = e^{-iK(S_2+S_r)t}\vec{\Phi}[/imath] (the time derivative of [imath]\vec{\Psi}_0[/imath] will then exactly remove that term).

 

Again we find the requirement that an integral must be done involving that f0 (a sum of terms which all have a single anti-commuting operator). Once again, I have no idea as to what that structure might be; however, I do know that it is a function of one single argument [imath](\vec{x}_1)[/imath] as all other arguments have been integrated out and I know that it must also be a sum of terms which all have a single anti-commuting operator. It certainly is not f0 so I call the function [imath]g(\vec{x})[/imath] (there is no need to include a subscript because [imath]\vec{x}_1[/imath] is the only argument left). This leaves me with the expression

[math]\left\{\vec{\alpha}\cdot \vec{\nabla} + g(\vec{x})\right\}\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi}, [/math]

 

which I view as an operator operating on [imath]\vec{\Phi}[/imath]. That operator is then squared (and we have already discussed how and why the cross terms vanish). I then define the result of [imath]g(\vec{x})g(\vec{x})[/imath] to be [imath]\frac{1}{2}G(\vec{x})[/imath] (note that at this point all anti-commuting operators have vanished). Essentially, the entire impact of “the entire universe” on the probability of any value of x ends up embedded in that function G(x) (note that [imath]G(\vec{x})[/imath] cannot depend upon [imath]\tau[/imath] because tau is a total figment of our imagination).
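
Spelled out under the rules just stated (each operator squares to 1/2, every pair anti-commutes, so all cross terms cancel), that squaring reads

[math]\left\{\alpha_x \frac{\partial}{\partial x}+\alpha_\tau \frac{\partial}{\partial \tau}+g(\vec{x})\right\}^2\vec{\Phi} = \frac{1}{2}\left\{\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial \tau^2}+G(\vec{x})\right\}\vec{\Phi} = \left\{K\frac{\partial}{\partial t}\right\}^2\vec{\Phi},[/math]

with [imath]g(\vec{x})^2 = \frac{1}{2}G(\vec{x})[/imath] by definition.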

 

If that all makes sense to you we can cover the three substitutions

[math]m=\frac{q\hbar}{c}[/math] , [math]c=\frac{1}{K\sqrt{2}}[/math] and [math]V(x)= -\frac{\hbar c}{2q}G(x)[/math]

 

which morph my fundamental equation into exactly Schrödinger's equation. It is important for you to understand why there are no free parameters in that representation. It is an entirely tautological result of our definitions; definitions which actually impose no constraints on the result other than the specific necessity that the motion of interest be non-relativistic.

 

Have fun -- Dick

Posted
Hi Anssi,

 

I took such a long time to answer your post because you seem to have a few things confused and I am not sure of exactly what goes on in your head. It seems that your current problems relate to how arguments vanish via integration and where and why the functions f, f0, g and G come from.

 

I'm writing this sentence after I've already written the reply to the rest of the post, and indeed looks like you guessed it right!

 

Differentiation is defined by the following:

[math]\frac{dy}{dx}=\lim_{\Delta x \rightarrow 0}\frac{y(x+\Delta x)-y(x)}{\Delta x}[/math].

 

A little common sense can usually uncover what the differential of a function is, though some functions can be somewhat elusive. The opposite of a differential is an integral. If we have already deduced that g(x) is the differential of f(x) then f(x) is the integral of g(x).

 

Hmmm, yes, I think I can see that relationship when I just think about how such functions would plot graphically, if one was the differential of the other one.

 

There is one serious complication of that statement: since the differential of a constant is zero, the integral of g(x) could be f(x) plus any constant: i.e., the integral is not unique.

 

And that too.

 

It is essentially this complication which leads to two very different definitions of an integral: the indefinite integral and the definite integral.

 

The indefinite integral is that f(x) and one must always remember that the "C" exists (it is the existence of that constant that makes the integral “indefinite”). The indefinite integral is a function of the argument being integrated over whereas the definite integral is essentially an evaluation of that integral. The definite integral is defined to be the difference between the indefinite integral evaluated at two different values of x called the “upper limit” and the “lower limit” (the point being that the constant creates no complications as C-C=0). Thus it is that the definite integral is a function of those limits and not a function of the argument being integrated over.

 

Right, okay, I think I understand what you are saying there.

 

What is important here is that the definite integral of g(x) is no longer a function of x; it is instead a function of exactly what values of x were chosen to be those limits.

 

In every case of interest to us, in my derivation, we are talking about a “definite” integral and the “limits” on the integration are "all possible values": i.e., we are, in general, talking about a probability and our limits will generally be minus infinity to plus infinity. As a result, the answer to the “operation called integration” in our case is a number, not a function; however, life gets a little more complex when we are talking about functions of more than one variable. If we have more than one variable, we have to specify what variable is being integrated over. In that case (where we integrate over one variable), our answer depends upon those other variables.

[math]\int_a^bg(x,y,z)dx = f(b,y,z)-f(a,y,z)[/math]

 

In other words, the answer isn't a number; it is instead, a function of those other arguments (remember, a function is nothing more than something which yields a specific answer given a specific argument). Every time we integrate over one of those variables, the variable being integrated over disappears from the final result (we are essentially summing over all possible values for that argument so that argument is no longer a specific number and ceases changing the answer: i.e., the answer is no longer a function of that argument).

 

Ah, this is something I had not picked up before, at all. Not understanding this is what led to my misconception ->

 

So let us go back to the first post of this thread.

...

It follows from this fact that, when I left multiply by [imath]\vec{\Psi}_2^\dagger[/imath] and integrate over all the arguments of set #2, I am left with something which is a function of those arguments which constitute set #1. I have no idea as to what the result of that integration is because I don't really know what [imath]\vec{\Psi}_2[/imath] is; but I do know two things about it. First the result will be a function of the arguments from set #1 (that is, it most certainly can be written as [imath]f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)[/imath])

 

I had a total misconception about this. I mean, I knew [imath]\vec{\Psi}_2[/imath] was a function of sets #1 and #2, but I had interpreted the meaning of [imath]f(\vec{x}_1,\vec{x}_2, \cdots,\vec{x}_n,t)[/imath] completely wrong.

 

I was not looking at the input arguments as the indices of set #1, but instead viewed each input argument as a term that included an integral multiplied by an anti-commuting operator. I.e. I thought that the function f was simply summing all its input arguments together.

 

Blah... Well, it's yet another example of how far our misconceptions can carry us. I was able to interpret almost everything you said one way or another, and it took this long for us to really understand we were not talking the same language, and where the error was :D

 

Anyhow, I understand why you are pointing out that the f is essentially a sum of integrals with anti-commuting operators attached to each one.

 

Also...

 

Again we find the requirement that an integral must be done involving that f0 (a sum of terms which all have a single anti-commuting operator). Once again, I have no idea as to what that structure might be; however, I do know that it is a function of one single argument [imath](\vec{x}_1)[/imath] as all other arguments have been integrated out

 

...suddenly I am able to make sense of that. Indeed, the integration is over the set "r", and [imath]\vec{x}_1[/imath] is not part of "r", and with your explanation in the beginning of your post, I can understand just how [imath]g[/imath] is a function of [imath]\vec{x}_1[/imath]; the variable not being integrated over.

 

Essentially, the entire impact of “the entire universe” on the probability of any value of x ends up embedded in that function G(x) (note that [imath]G(\vec{x})[/imath] cannot depend upon [imath]\tau[/imath] because tau is a total figment of our imagination).

 

If that all makes sense to you we can cover the three substitutions

[math]m=\frac{q\hbar}{c}[/math] , [math]c=\frac{1}{K\sqrt{2}}[/math] and [math]V(x)= -\frac{\hbar c}{2q}G(x)[/math]

 

which morph my fundamental equation into exactly Schrödinger's equation.

 

Well, I think everything makes sense to me now, except for the exact details of how you manage to express [imath]g(x)[/imath] this simple:

 

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math],

 

I'm getting the impression that those details are not important right now though as long as I understand that [imath]g(x)[/imath] is a sum of integrals, each with an anti-commuting element attached to it, and how squaring drops out the cross terms.

 

Apart from that little detail, I think we can move onwards. Sorry you had to cover so much ground to find out where my error was, but good thing that you did.

 

It is important for you to understand why there are no free parameters in that representation. It is an entirely tautological result of our definitions; definitions which actually impose no constraints on the result other than the specific necessity that the motion of interest be non-relativistic.

 

Yup. I'm eager to move forward, but I know I should be careful with all the details here, it's easy for me to get lost down a wrong path :I

 

-Anssi

 

ps. I noticed something in the OP that looked like a typo.

 

Notice once again that [imath]\int \vec{\Psi}_r^\dagger \cdot\vec{\Psi}_rdV_2 [/imath] equals unity by definition of normalization.

 

Shouldn't that say [imath]\int \vec{\Psi}_r^\dagger \cdot \vec{\Psi}_r dV_r [/imath]?

Posted

I would strongly advise anyone wishing to understand this post read my answer to Anssi's answer first.

 

Hi Anssi,

 

Yes, I apparently picked up on the problem. I am very happy to see the clarity in your response.

Well, I think everything makes sense to me now, except for the exact details of how you manage to express [imath]g(x)[/imath] this simple:

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math],

 

I'm getting the impression that those details are not important right now though as long as I understand that [imath]g(x)[/imath] is a sum of integrals, each with an anti-commuting element attached to it, and how squaring drops out the cross terms.

Yes, you are correct, the central issue is how squaring drops out the cross terms. However, there is another somewhat mundane issue which is worth knowing and one I think you can pick up on easily. The Dirac delta function is infinite when the argument is zero and zero otherwise. The integral over the Dirac delta function is zero if that zero argument is not in the range integrated over and, if it is in the range being integrated over, the only value of the rest of the functions being integrated over which enters the answer is the value of that function when the argument of the Dirac delta function vanishes: i.e., [imath]\int_{-\infty}^{+\infty}\delta (x-y) f(y) dy = f(x)[/imath] (this fact is part of the definition of the Dirac delta function). Knowing the Dirac delta function is infinite at that point really leaves the definition of the result of integration open; however, the standard definition is that the result is exactly equal to the value of the function being integrated over. This means that

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r=2\sum_{i=\#2}^n \beta_{i1}\int\vec{\Psi}_r^\dagger (\vec{x}_i = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_i=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math].

 

In other words, the impact is exactly proportional to the expectation that the ith elementary element is at the point [imath]\vec{x}_1[/imath] (or, at [imath]\vec{x}[/imath] since we have, by this point, dropped the index on [imath]\vec{x}[/imath]). Please let me know if any aspect of that confuses you.
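
If that sifting rule seems too convenient, it can be sanity-checked numerically; this sketch (my own, approximating the delta function by a narrow normalized Gaussian) shows the integral picking out the value of the function at the spike:

[code]
import numpy as np

def delta_approx(u, eps=1e-3):
    """Narrow normalized Gaussian: approaches the Dirac delta as eps -> 0."""
    return np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = lambda y: np.cos(y)   # any smooth test function
x = 0.7                   # the point where the delta function spikes

y, dy = np.linspace(-10, 10, 2_000_001, retstep=True)
integral = np.sum(delta_approx(y - x) * f(y)) * dy
print(integral)           # ~ 0.76484...
print(f(x))               # cos(0.7) = 0.76484...
[/code]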

ps. I noticed something in the OP that looked like a typo.
It is, and I have corrected it.
Sorry you had to cover so much ground to find out where my error was, but good thing that you did.
With someone capable of understanding, I have no problems with extended explanation. What I hate are people who are so bound up in the “proper presentation” that they cannot even comprehend another version. People should be open to think things out.
Yup. I'm eager to move forward, but I know I should be careful with all the details here, it's easy for me to get lost down a wrong path :I
First, it should be clear to you that multiplying through by [imath]-\hbar c[/imath] has utterly no impact upon the result: i.e., [imath]\hbar[/imath] can have any value desired (essentially, we have not yet defined how our measures are scaled and multiplying the entire equation by a constant can have utterly no impact on the solutions). Secondly, defining [imath]m=\frac{q\hbar}{c}[/imath] defines mass to be directly related to the quantization of momentum in the tau direction (mass is thus defined in terms of a solution to the fundamental equation and once we have a solution, there is no freedom with regard to this component available to us; it is defined by that solution).

 

The definition of c via [imath]c=\frac{1}{K\sqrt{2}}[/imath] merely relates c to the velocity implied by K. If you read my presentation An “analytical-metaphysical” take on Special Relativity!, you will discover that all velocities depend upon the definition of “time” and time is a totally free parameter yielding no impact upon physical analysis: i.e., time is not a measurable variable; it is, instead, no more than a parameter definable within the frame of reference of a specific entity.

 

Finally, [imath]V(x)= -\frac{\hbar c}{2q}G(x)[/imath] yields the potential as a direct consequence of integration over the entire rest of the universe: i.e., once you have an explanation of the universe, that explanation yields a single definition of the potential demanded by a specific entity. The real issue here is that it is the context of one's experiment which sets the actual potential to be used.

 

In the end, what we have is nothing more or less than a tautological mechanism for reproducing the information we are trying to explain; exact science is a memory device and no more. Reality can be absolutely anything and it will be explained by exactly the same physics (at least at this point in my presentation). That is why I call this analytical metaphysics: it is an exact analysis of what lies behind physics. Everything found by science must be representable by an interpretation of my equation otherwise the explanation is flawed.

 

What is interesting is the converse; what I have deduced is a fundamental constraint upon a flaw-free explanation. What would tell us something about the universe would be a solution to my equation which could not be found to represent some real experiment. What is astounding to me is that, to date, I have found no solution which is not directly applicable to known scientific experiments: i.e., if one takes the position that the job of a research scientist is to search out the rules which separate the "true" universe from all possible universes, then no classical experiment can provide any guidance on the subject whatsoever (classical mechanics is itself a tautology).

 

Let me know if that all makes sense to you.

 

Have fun -- Dick

Posted

Just a quick reply for now...

 

Yes, you are correct, the central issue is how squaring drops out the cross terms. However, there is another somewhat mundane issue which is worth knowing and one I think you can pick up on easily. The Dirac delta function is infinite when the argument is zero and zero otherwise. The integral over the Dirac delta function is zero if that zero argument is not in the range integrated over and, if it is in the range being integrated over, the only value of the rest of the functions being integrated over which enters the answer is the value of that function when the argument of the Dirac delta function vanishes: i.e., [imath]\int_{-\infty}^{+\infty}\delta (x-y) f(y) dy = f(x)[/imath] (this fact is part of the definition of the Dirac delta function).

 

Right, that seems pretty straightforward.

 

Knowing the Dirac delta function is infinite at that point really leaves the definition of the result of integration open; however, the standard definition is that the result is exactly equal to the value of the function being integrated over.

 

Good thing you mentioned that, just as I was starting to wonder :D

 

This means that

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r=2\sum_{i=\#2}^n \beta_{i1}\vec{\Psi}_r (x_i = x_1)\cdot \vec{\Psi}_r (x_i=x_1)dV_(r \neq x_i)[/math].

 

In other words, the impact is exactly proportional to the expectation that the ith elementary element is at the point x1 (or, at x since we have, by this point, dropped the index on x). Please let me know if any aspect of that confuses you.

 

Yes, I'm a bit confused over the choice of words "...exactly proportional to the expectation..."

 

The way I now understood the math you laid down is that in that sum of integrals, every integral would be multiplied by 0, except the one integral where [imath]i = 1[/imath], which would become multiplied by 1. And there wouldn't be any integrals getting multiplied by anything else. Correct?

 

Also, I have to mention I can't figure out what [imath]dV_{r \neq x_i}[/imath] is supposed to tell me. :P

 

First, it should be clear to you that multiplying through by [imath]-\hbar c[/imath] has utterly no impact upon the result: i.e., [imath]\hbar[/imath] can have any value desired....

 

Yes, but there are still algebraic steps in the OP that I have not covered, before you get to the part where you multiply through by [imath]\hbar[/imath]... Should I rather make sure I understand those first? (At first glance... I think I still need to use my head to crack them, and I think I'll have questions)

 

Btw, just as a side-note, it is somewhat interesting in this context to remember how Planck's constant appeared in physics. Kind of as a desperate move, to come up with a valid mathematical solution to a physics problem, without even having any idea as to what it might mean or why reality would behave that way :I

 

-Anssi

Posted

To Anssi and anyone else reading this; there are times where I can be really stupid. Perhaps it is senility sneaking up on me. Sometimes I go off on a tangent which seems to me will make things clear and then, in the process, leave out issues absolutely critical to that perspective, thus sowing total confusion. That is what has happened here and I sincerely apologize to everyone for the sloppiness of my last post. It was only after editing the typos that I noticed my omission; as I said, I begin to suspect senility. Let me try and straighten this thing out.

 

When I first made the separation, [imath]\vec{\Psi}=\vec{\Psi}_1\vec{\Psi}_2[/imath], my central concern was with the issue of asymmetry under exchange of arguments (plus the need to show the procedure by which [imath]\vec{\Psi}[/imath] could be factored).

...my first step is to divide the problem into two sets of variables: set number one will be the set referring to our “valid” ontological elements (together with the associated tau indices) and set number two will refer to all the remaining arguments.

...

Furthermore, the tau axis was introduced for the sole purpose of assuring that two identical indices associated with valid ontological elements existing in the same B(t) ( now being represented by an [imath](x,\tau)_t[/imath] point in the [imath]x,\tau[/imath] plane) would not be represented by the same point. We came to the conclusion that this could only be guaranteed in the continuous limit by requiring [imath]\vec{\Psi}_1[/imath] to be asymmetric with regard to exchange of arguments. If that is indeed the case (as it must be) then the second term in the above equation will vanish identically as [imath]\vec{x}_i[/imath] can never equal [imath]\vec{x}_j[/imath] for any i and j both chosen from set #1.

The most important fact brought forth in that separation was that the Dirac delta function ends up yielding no contribution between elements taken from set #1. A side issue I don't think I mentioned was the fact that set #2 (the hypothesized elements required by the explanation) can include elements both symmetric and antisymmetric with regard to exchange. This has to be true as there must be no way to differentiate between “valid” elements and “hypothesized” (if there were a way, one could prove what elements were valid and the explanation would be flawed).

 

I then stepped off into my separation [imath]\vec{\Psi}_1=\vec{\Psi}_0\vec{\Psi}_r[/imath] where my concern was to reduce the thing to a differential equation in one variable. What I did not make clear was that these kinds of separations can be made for practically any reason of interest to you; the only critical issue being that one be consistent with the forms those specific separations yield. I can go further into that if you are interested but, for the moment, just accept the fact that the exact form of the separation is an open issue.

 

I could have separated [imath]\vec{\Psi}[/imath] into two different sets where one was entirely made up of elements symmetric under exchange and the other consisted of elements antisymmetric under exchange; but, before I could really justify such a thing, I would first have to show that these different possibilities had to exist (that is why I did the separation between “valid” and “hypothesized” elements first). Secondly, I separated [imath]\vec{\Psi}_1[/imath] into a function of a single element and all remaining elements because of the arguments I wanted to make with regard to the approximations central to reducing the thing to Schrödinger's equation.

 

What I am getting at here is that, in order to clear up your problem with the cross terms dropping out, (without telling you what I was doing) I changed my perspective to the single separation [imath]\vec{\Psi}=\vec{\Psi}_0\vec{\Psi}_r[/imath] (where there was no initial separation into set #1 and set #2). At the same time, I used the fact that the Dirac delta function vanishes for all elements which are asymmetric under exchange to reduce the sum to a sum over set #2 (even though I hadn't actually made the separation, what I meant by set #2 was still, in my mind, a defined set). So once again I apologize for the sloppiness of my presentation. I hope I have been a little better here.

 

That being done, let us go to your current post.

Yes, I'm a bit confused over the choice of words "...exactly proportional to the expectation..."
It's worse than that; I got sloppy and was thinking about [imath]\vec{\Psi}=\vec{\Psi}_0\vec{\Psi}_r[/imath] when I wrote that down; doing the whole thing in one fell swoop. I also noticed some typos in the expression you quoted (which I have corrected). I should have written down

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r=2\sum_{i=\#2}^n \beta_{i1}\int\vec{\Psi}_r^\dagger (\vec{x}_i = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_i=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math].

 

I had omitted the dagger and integration sign in the second expression. Again I apologize for being sloppy. The function g(x) should be defined to be the result of integrating out all variables outside [imath]\vec{x}_1[/imath]. You can think of this case as beginning from [imath]\vec{\Psi}=\vec{\Psi}_0\vec{\Psi}_r[/imath]; thus the step through “f” is avoided and the integral is directly over the Dirac delta function. (Where I have reduced the sum to a sum over set #2 for the reason given above: i.e., all other terms vanish. In fact, some of the terms from set #2 may also vanish as they may also be asymmetric under exchange.)

The way I now understood the math you laid down is that in that sum of integrals, every integral would be multiplied by 0, except the one integral where [imath]i = 1[/imath], which would become multiplied by 1.
Not quite right; I think you are thinking of the thing without the integral (my typo). If you look at the above, you need to understand that there are three different places where the index “i” appears. First, since we are summing over all elements remaining in the universe (after singling out the variable of interest, [imath]\vec{x}_1[/imath]), the sum over i and j, where i is not equal to j, of [imath]\beta_{i j}\delta(\vec{x}_i-\vec{x}_j)[/imath] is replaced by [imath]2\beta_{1i}\delta(\vec{x}_i-\vec{x}_1)[/imath] (the two is there because the Dirac delta function for (i,1) is identical to the one for (1,i): i.e., we originally were summing over both i and j and the two terms yielded by i=1 and j=1 are identical).
And there wouldn't be any integrals getting multiplied by anything else. Correct?
There still exist a whole slew of terms where neither i nor j are equal to one and I will get back to them. For the time being I will concern myself with the limited set I have displayed above.

 

Secondly, we are integrating over every [imath]\vec{x}_i[/imath] where i is not equal to one. That is what I am symbolizing when I write [imath]dV_{r \neq \vec{x}_i}[/imath]. Somewhere else I have commented that I use the notation “dVr” to represent the abstract volume of the integration space normally represented by [imath]dx_2d\tau_2dx_3d\tau_3\cdots dx_nd\tau_n \cdots[/imath] (remember, I am not integrating over [imath]\vec{x}_1[/imath]). The subscript “[imath]r \neq \vec{x}_i[/imath]” (and I have just discovered another typo) means that the set being integrated over in this case also excludes the specific argument [imath]\vec{x}_i[/imath] as I have actually done that integral via the rule of integration over the Dirac delta function. The integration over all the other arguments of [imath]\vec{\Psi}_r^\dagger \cdot\vec{\Psi}_r[/imath] is simply a probability. It is specifically the sum over the probability of all the possibilities for the rest of the universe consistent with element one being represented at the point [imath]\vec{x}_1[/imath] and the ith element being represented by exactly the same point (the two elements must be symmetric with respect to exchange).

 

We know that this probability has to be zero for all valid elements (due to the valid elements being antisymmetric under exchange) thus the ith element can not be antisymmetric with respect to exchange with the element being represented by [imath]\vec{x}_1[/imath]. This would probably be a good time to introduce a definition of Fermions and Bosons (some convenient names which physicists use to describe this symmetric/antisymmetric thing). “Fermions” is a name given to any collection of identical elements which are antisymmetric with respect to exchange and “Bosons” is a name given to any collection of identical elements which are symmetric with respect to exchange. Two very specific things come from this identification: first, under my analysis, all valid elements have to be Fermions. Hypothesized elements may be either Fermions or Bosons; however, under the Dirac delta function interaction, interactions only occur with Bose (or symmetric) exchange. Please note that my picture is not identical to the ordinary perspective on Fermions and Bosons. There are some subtle differences here; in my presentation Fermions and Bosons can still be identical particles (if you want to call the "dust motes" we talked about particles). The status Fermion/Boson requires two particles and the "exchange" is defined between those elements, thus it is not a quality of the element but rather, a quality of the pairing expressed in [imath]\vec{\Psi}[/imath].

 

At any rate, except for “Pauli exclusion”, in my picture, all forces (and a force can be defined as something which changes momentum, which has already been defined) between elements arise through Boson exchange, a rather interesting occurrence.

 

But before I close, let us get back to that slew of terms where neither i nor j are equal to one. A little thought should convince you that these terms have to do with interactions between other elements in the universe (not the element under examination). The Dirac delta function in those cases will yield a non-zero value only when two of those elements happen to be in the same place (interactions not involving the one we are interested in). The integrations in that case, in the final analysis, yield some number times the proper beta operator times [imath]\vec{\Psi}_1(\vec{x}_1, t)[/imath]. When we square the thing, that beta operator ensures that the cross terms vanish and, in the final analysis, it generates a number which is the sum over all possibilities (adjusted by their internal interactions) for the rest of the universe given that the element of interest is at the position of interest. Now that the anti-commuting operators have been removed (by the squaring), the term can be moved to the right side of the equation and seen as an adjustment to the energy. This factor can be removed via exactly the same procedures used to remove that “(S2+Sr)” in the original presentation.

Btw, just as a side-note, it is somewhat interesting in this context to remember how Planck's constant appeared in physics. Kind of as a desperate move, to come up with a valid mathematical solution to a physics problem, without even having any idea as to what it might mean or why reality would behave that way :I
Yeah, that is an interesting issue and actually a question which led me to think [imath]\hbar[/imath] was circularly defined when I was a graduate student. That is another long story I have posted about somewhere. When I was a graduate student I read Gamow's Mr. Tompkins series, and his story “Mr. Tompkins goes to quantum land!”, where [imath]\hbar[/imath] was a large number, was just plain wrong. I tried for a long time to discover what the world would look like if [imath]\hbar[/imath] were large and convinced myself it would look no different. The constant [imath]\hbar[/imath] is embedded in almost every fundamental calculation and I was finally convinced that it was really no more than the consequence of over defining the measures used by scientists: i.e., their system of units is circularly defined.

 

Have fun -- Dick

Posted

When I first made the separation, [imath]\vec{\Psi}=\vec{\Psi}_1\vec{\Psi}_2[/imath], my central concern was with the issue of asymmetry under exchange of arguments (plus the need to show the procedure by which [imath]\vec{\Psi}[/imath] could be factored).

 

Hmmm, I think there's a chance of confusion here too, with the choice of word "asymmetric". I almost got confused and went back to;

 

http://hypography.com/forums/philosophy-of-science/11733-what-can-we-know-of-reality-18.html#post204186

 

After refreshing my memory on that, I think when you say "asymmetric", you don't mean "not symmetric", but always "anti-symmetric", as in, an exchange of 2 arguments will cause a change in sign in [imath]\vec{\Psi}[/imath] (as opposed to an arbitrary change). Correct?

 

Another related thought that popped to my mind, the set of "symmetric under exchange", I suppose it means "any element of that set can be swapped with any other element of that set".

 

But I'm wondering, is it possible that some [imath]\vec{\Psi}[/imath] allows three different sorts of elements in regards of exchange symmetry.

- Set A which is always (exchange) symmetric within itself

- Set B which is always anti-symmetric within itself

- Set C which is symmetric in exchange against elements from A, but anti-symmetric against elements from B

 

Hmmm, I could be missing something here, don't know if this is logically valid...

 

The most important fact brought forth in that separation was that the Dirac delta function ends up yielding no contribution between elements taken from set #1. A side issue I don't think I mentioned was the fact that set #2 (the hypothesized elements required by the explanation) can include elements both symmetric and antisymmetric with regard to exchange. This has to be true as there must be no way to differentiate between “valid” elements and “hypothesized” (if there were a way, one could prove what elements were valid and the explanation would be flawed).

 

Well, this has been mentioned but still probably a good idea to hammer these things over because it's hard to keep all the relationships in order in my head :I

 

One thing that confuses me still; since the set of "valid" elements can only exhibit anti-symmetric behaviour under exchange, wouldn't that mean that any elements that exhibit symmetric behaviour are known to belong to the set of "hypothesized" elements?

 

Or was it rather the case that the "valid" elements must exhibit anti-symmetric behaviour with [imath]\vec{\Psi}_1[/imath] but not necessarily with [imath]\vec{\Psi}[/imath]? *scratching head*

 

What I am getting at here is that, in order to clear up your problem with the cross terms dropping out, (without telling you what I was doing) I changed my perspective to the single separation [imath]\vec{\Psi}=\vec{\Psi}_0\vec{\Psi}_r[/imath] (where there was no initial separation into set #1 and set #2). At the same time, I used the fact that the Dirac delta function vanishes for all elements which are asymmetric under exchange to reduce the sum to a sum over set #2 (even though I hadn't actually made the separation, what I meant by set #2 was still, in my mind, a defined set). So once again I apologize for the sloppiness of my presentation. I hope I have been a little better here.

 

Heh, okay, I guess that explains a little bit why I wasn't able to figure out the details of how you ended up with that expression of [imath]g(x)[/imath]... Especially [imath]\beta_{i1}[/imath] was quite puzzling to me; I just took it on faith that it did indeed come from some logical place, and thought those details were not important to figure out now.

 

In any case, does this mean that if you take the route of the OP, and first make the separation [imath]\vec{\Psi}=\vec{\Psi}_1\vec{\Psi}_2[/imath], and only then [imath]\vec{\Psi}_1=\vec{\Psi}_0\vec{\Psi}_r[/imath], then [imath]g(x)[/imath] will also look different from what you laid down in the previous posts?

 

The last point of contact that I have to explicitly laid-out math, and what I've been looking at when trying to figure out what [imath]g(x)[/imath] looks like, is here: http://hypography.com/forums/philosophy-of-science/15451-deriving-schr-dingers-equation-my-fundamental-3.html#post234940

 

I.e:

[math]\sum_{\#1} \vec{\alpha}_i \cdot \vec{\nabla}_i \vec{\Psi}_1 + \left\{2 \sum_{i=\#1 j=\#2}\beta_{ij}\int \vec{\Psi}_2^\dagger \cdot \delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 +\sum_i \vec{\alpha}_i\cdot \int\vec{\Psi}_2^\dagger \cdot \vec{\nabla}_i \vec{\Psi}_2dV_2 \right. +[/math]

[math] \left.\sum_{i \neq j (\#2)}\beta_{ij}\int \vec{\Psi}_2^\dagger \cdot \delta(\vec{x}_i -\vec{x}_j)\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1 = K\frac{\partial}{\partial t}\vec{\Psi}_1+K \left\{\int \vec{\Psi}_2^\dagger \cdot \frac{\partial}{\partial t}\vec{\Psi}_2 dV_2 \right\}\vec{\Psi}_1[/math]

 

Well, there are 3 different sums of integrals over set #2:

 

- One where each is multiplied by a beta element, where "i=#1 j=#2"

- One where each is multiplied by an alpha element, where i is taken over both sets (I think)

- One where each is multiplied by a beta element, where both i & j are taken from set #2 (but are never chosen to be the same)

 

Are those all present in [imath]g(x)[/imath] also?

 

That being done, let us go to your current post.

It's worse than that; I got sloppy and was thinking about [imath]\vec{\Psi}=\vec{\Psi}_0\vec{\Psi}_r[/imath] when I wrote that down; doing the whole thing in one fell swoop. I also noticed some typos in the expression you quoted (which I have corrected). I should have written down

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r=2\sum_{i=\#2}^n \beta_{i1}\int\vec{\Psi}_r^\dagger (\vec{x}_i = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_i=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math].

 

I had omitted the dagger and integration sign in the second expression.

 

Never noticed the missing dagger, but the missing integration sign caused some confusion :D I guess I should have suspected a typo since dV was there.

 

Again I apologize for being sloppy. The function g(x) should be defined to be the result of integrating out all variables outside [imath]\vec{x}_1[/imath]. You can think of this case as beginning from [imath]\vec{\Psi}=\vec{\Psi}_0\vec{\Psi}_r[/imath]; thus the step through “f” is avoided and the integral is directly over the Dirac delta function. (Where I have reduced the sum to a sum over set #2 for the reason given above: i.e., all other terms vanish. In fact, some of the terms from set #2 may also vanish as they may also be asymmetric under exchange.)

 

Okay, but just to be sure, is this just an example, or can the original presentation validly just take g(x) in this form?

 

Not quite right; I think you are thinking of the thing without the integral (my typo). If you look at the above, you need to understand that there are three different places where the index “i” appears. First, since we are summing over all elements remaining in the universe (after singling out the variable of interest, [imath]\vec{x}_1[/imath]), the sum over i and j, where i is not equal to j, of [imath]\beta_{i j}\delta(\vec{x}_i-\vec{x}_j)[/imath] is replaced by [imath]2\beta_{1i}\delta(\vec{x}_i-\vec{x}_1)[/imath] (the two is there because the Dirac delta function for (i,1) is identical to the one for (1,i): i.e., we originally were summing over both i and j and the two terms yielded by i=1 and j=1 are identical).

 

Well, thinking of the case [imath]\vec{\Psi}=\vec{\Psi}_0\vec{\Psi}_r[/imath] as analogous to [imath]\vec{\Psi}=\vec{\Psi}_1\vec{\Psi}_2[/imath], and looking at the equation I pasted from post #26, then [imath]2\beta_{1i}\delta(\vec{x}_i-\vec{x}_1)[/imath] sounds like an expected result. Except that, I have no idea what would happen to all those other integrals (that are laid down in that long equation I pasted from post #26)...

 

Anyway, just looking at this again...

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r=2\sum_{i=\#2}^n \beta_{i1}\int\vec{\Psi}_r^\dagger (\vec{x}_i = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_i=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math]

 

...it looks to me like [imath](\vec{x}_i=\vec{x}_1)[/imath] is saying that the only integral in the sum of integrals that will yield a result from [imath]\vec{\Psi}_r[/imath] is the case where i = 1. But that must be a mis-interpretation because that single element of interest [imath]x_1[/imath] is not part of set #2 nor set "r", so that circumstance would never appear... Right?

 

There still exist a whole slew of terms where neither i nor j are equal to one and I will get back to them. For the time being I will concern myself with the limited set I have displayed above.

 

And now I'm confused since the "j" was just replaced by "1" :O This must refer to some earlier step that wasn't explicitly laid down? And I'm just, not sure what's going on :I

 

I think I understand your explanation as to how you jumped from one issue to a slightly different issue (skipping the separation to sets #1 and #2), but now I'm not sure, what exactly I should be trying to step through next... Not sure where do we stand exactly, can I continue with the route taken by the OP? :D

 

I think I should let you reply at this point... (The parts of your post that I'm not quoting, I've read through a few times carefully, and if they are already answering my questions I apologize for not being able to pick that up... I wasn't able to concentrate very well while writing this either, but decided to post this avalanche of questions anyway because you can probably provide some help :P )

 

Just to clarify, where I'm at with the OP itself is "At this point we must turn to analysis of the impact of our axis, a pure creation of our own imagination....", but being a little bit confused over what the [imath]g(x)[/imath] looks like exactly... ...if it's even important at this point; I get it's a sum of integrals with anti-commuting elements attached to them and the cross terms vanish etc.

 

-Anssi

  • 3 weeks later...
Posted
Hmmm, I think there's a chance of confusion here too, with the choice of word "asymmetric". I almost got confused and went back to;
I was once again sloppy; what I was saying would have been much clearer if I had used the word “antisymmetric”, which is a very specific type of asymmetry. Sorry about that.
After refreshing my memory on that, I think when you say "asymmetric", you don't mean "not symmetric", but always "anti-symmetric", as in, an exchange of 2 arguments will cause a change in sign in [imath]\vec{\Psi}[/imath] (as opposed to an arbitrary change). Correct?
Absolutely!
But I'm wondering, is it possible that some [imath]\vec{\Psi}[/imath] allows three different sorts of elements in regard to exchange symmetry?

- Set A which is always (exchange) symmetric within itself

- Set B which is always anti-symmetric within itself

- Set C which is symmetric in exchange against elements from A, but anti-symmetric against elements from B

 

Hmmm, I could be missing something here, don't know if this is logically valid...

In standard physics, when physicists talk about exchange symmetry (or anti-symmetry), they are always speaking of cases A and B above. They are always talking about identical particles: i.e., identical in the sense that no physical experiment exists which can differentiate between the original case and the exchanged case. Cases A and B thus divide the universe into two specific types of particles: bosons and fermions. However, as I have commented elsewhere, the type of particle one has (electron, proton, quark, etc.) is actually a statement of the surrounding phenomena associated with the identification (specific aspects of the environment being examined). It is the only way physicists can talk about these events without describing the specific environment which is used to specify that element; simultaneously specifying that environment makes the problem an n-body problem, a problem beyond their mathematics (essentially, a cover up of the true problem).

 

Essentially, the assumption is made that these entities are ontologically different. If one takes the position that we are talking about undefined ontological elements whose behavior is a function of their specific environment (essentially the position I have presented) cases A and B are certainly possible and the issue of case C becomes a reasonable question. I am an old man fast approaching senility and can't assure you my logic is dependable, but I do think there are logical problems with set C. Suppose we have three elements (a, b, c) where exchange between a and b is symmetric ([imath]\vec{\Psi}(a,b,c)=\vec{\Psi}(b,a,c)[/imath]) and exchange between a and c is antisymmetric ([imath]\vec{\Psi}(a,b,c)=-\vec{\Psi}(c,b,a)[/imath]); we can then ask about exchange between b and c (what is the value of [imath]\vec{\Psi}(a,c,b)[/imath]).

 

The important issue here is that we are speaking of the functional structure of [imath]\vec{\Psi}(a,b,c)[/imath] and not the actual labels a, b and c. Thus [imath]\vec{\Psi}(a,c,b)[/imath] can be obtained via three exchanges: first with the second (which is symmetric and yields [b,a,c]), first with the third (which is antisymmetric and yields -[c,a,b]) and then first with the second again (which is again symmetric and thus yields a final result -[a,c,b]). This implies exchange of b and c is an antisymmetric exchange. However, suppose we do the exchange in a different order. Exchange the first with the third (which is antisymmetric and yields -[c,b,a]) then the first with the second (which is symmetric and yields -[b,c,a]) and then first with the third again (which is again antisymmetric and thus yields +[a,c,b]). This implies exchange of b and c is a symmetric exchange. This leads to the conclusion that C is an illogical case.
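(For any lurking observers who want to check that sign bookkeeping mechanically: below is a minimal Python sketch. The slot-based sign assignments are nothing but the case-C supposition above; the code is mine, not part of the presentation.)

[code]
# A minimal sketch of the sign bookkeeping in the triple-exchange argument.
# Case-C supposition: Psi is symmetric under exchange of its 1st and 2nd
# argument slots (+1) and antisymmetric under exchange of its 1st and 3rd (-1).
# Composing those primitive swaps in two different orders reaches the same
# arrangement (a, c, b) with opposite accumulated signs.

SWAP_SIGN = {(0, 1): +1, (0, 2): -1}  # sign attached to each primitive slot swap

def apply_swaps(sequence):
    labels, sign = ['a', 'b', 'c'], +1
    for i, j in sequence:
        labels[i], labels[j] = labels[j], labels[i]
        sign *= SWAP_SIGN[(i, j)]
    return tuple(labels), sign

print(apply_swaps([(0, 1), (0, 2), (0, 1)]))  # (('a', 'c', 'b'), -1)
print(apply_swaps([(0, 2), (0, 1), (0, 2)]))  # (('a', 'c', 'b'), 1)
[/code]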

 

Now I say that such a conclusion is not final because it suggests there are two ways of performing this multiple exchange. It is entirely possible that one can speak of two distinct triplet exchanges: one which can be called a bose exchange and one which can be called a fermi exchange. One important aspect of that possibility is that exchange between bosons and fermions can exist, and it seems to me that the existence of boson-fermion exchange is central to the concept of exchange forces.

 

Though it seems to me that such an exchange is central to the idea of exchange forces between fermions, I have never in my whole life heard any professional physicist even bring up the issue. For the time being, I will just hold it as a phenomenon worth thinking about; it is entirely possible that my thoughts on the subject are confused.

One thing that confuses me still; since the set of "valid" elements can only exhibit anti-symmetric behaviour under exchange, wouldn't that mean that any elements that exhibit symmetric behaviour are known to belong to the set of "hypothesized" elements?
That is a true statement. Under my construction and thus inherent in my deduction, “valid” elements must be fermions (must be case B from your example above) but “hypothesized” elements can be either bosons or fermions (can be either case A or case B).
In any case, does this mean that if you take the route of the OP, and first make the separation [imath]\vec{\Psi}=\vec{\Psi}_1\vec{\Psi}_2[/imath], and only then [imath]\vec{\Psi}_1=\vec{\Psi}_0\vec{\Psi}_r[/imath], then [imath]g(x)[/imath] will also look different from what you laid down in the previous posts?
Not really, the result must be essentially the same. The difference lies in the representation of the integrals. The first route (which eliminates a whole slew of arguments) produces that intermediate function “f” which needs to be integrated over the arguments referred to as “r”. The second route (going directly to the “one”, “r” separation) leaves in all those arguments eliminated initially in the first route. This makes g(x) a simple sum over probability distributions for all those arguments; essentially replacing the complexity introduced by “f” with a great many additional terms developed from those probability distributions. They are just two different ways of looking at the same thing.
The last point of contact that I have to explicitly laid down math, and what I've been looking at when trying to figure out what [imath]g(x)[/imath] looks like,
I think here you might be going off on a wild goose chase. What I have laid out for you is the mechanism by which g(x) is obtained from the known form of your expectations for the rest of the universe (the environment which exists and influences the behavior of the ontological element referred to by the label [imath]\vec{x}_1[/imath]). All factors of g(x) arise from integrations over Dirac delta functions or integrations over other arguments also mediated via Dirac delta functions. The function g(x) provides a mathematical mechanism for time dependent changes in the partials with respect to the components of [imath]\vec{x}_1[/imath]. These changes are time dependent because the momentum changes as [imath]\vec{x}_1[/imath] changes, and a change in [imath]\vec{x}_1[/imath] (a change in what we know) defines a change in time.

 

What you need to understand is that the partials with respect to the components of [imath]\vec{x}_1[/imath] have been defined to be “momentum” and that the standard classical definition of “force” is “that which changes momentum”. So g(x) is the function which yields a mathematical representation of the forces on the ontological element referred to by the label [imath]\vec{x}_1[/imath] which exist as a consequence of the rest of the universe. Fundamentally, g(x) is exactly what is generally called a “potential”. Since g(x) arises from the Dirac delta function (either directly or through feedback mechanisms also dependent upon Dirac delta functions), the forces only arise through the behavior of hypothesized elements obeying symmetric exchange statistics: i.e., bosons. At this point, you are deep into modern physics mathematics and entire graduate courses are devoted to the mathematical form of the consequences of boson exchange. Google “boson exchange forces” and I think you will begin to understand the versatility of “exchange forces” as an explanation of how and why things interact.
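For reference, the conventional anchor for that identification (standard textbook quantum mechanics, with its own ℏ and sign conventions that are not part of my notation here) is Ehrenfest's theorem:

[math]\vec{p}\leftrightarrow -i\hbar\vec{\nabla}, \qquad \frac{d}{dt}\langle\vec{p}\rangle = -\langle\vec{\nabla}V(\vec{x})\rangle[/math]

i.e., the rate of change of the expected momentum is generated by the gradient of the potential, which is exactly the role g(x) plays here.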

 

In my original proof of my fundamental equation, I proved that, no matter what distribution of valid ontological elements exists in the past (and that would be what is being explained), there always exists a set of hypothetical elements which, by way of the Dirac delta function interaction, can constrain the past to exactly what is known: i.e., it is no more than a device for keeping track of the information. As such, it should be clear to you that g(x) is exactly what is required to yield the behavior observed. That is why I referred to your wanting to see what g(x) looked like as a “wild goose chase”; it will provide whatever forces are necessary to yield your information (your observations).

Are those all present in [imath]g(x)[/imath] also?
In a word, yes!
Okay, but just to be sure, is this just an example, or can the original presentation validly just take g(x) in this form?
The final result for g(x) (together with the approximations I have expressed) will simply be a sum of anti-commuting operators weighted by specific functions of x obtained from the various integrations performed. All arguments except x will be integrated out, and the exact form of the result cannot be known unless the correct [imath]\vec{\Psi}[/imath] were known. The standard approach used by professional physicists is that everything can be ignored except the specific interaction they are interested in. See the one-pion exchange force. I know you won't understand that paper, but I think the fact that everything except the specific interaction they are interested in is totally omitted should be quite clear. They do this because they have utterly no idea as to how the other factors could possibly be included.

 

You should note that I also have no idea as to how the other factors could possibly be included. What I have done is to deduce a relationship (my fundamental equation) which must be a valid representation if the entire universe is included (that doesn't mean I can solve that differential equation). What I am doing here is showing that the form of Schrödinger's equation is the inevitable result no matter what is or is not left out.

...it looks to me like [imath](\vec{x}_i=\vec{x}_1)[/imath] is saying that the only integral in the sum of integrals that will yield a result from [imath]\vec{\Psi}_r[/imath] is the case where i = 1.
That is false! What the statement is saying is that the only incidence in which the integral yields a non-zero result is when [imath](\vec{x}_i=\vec{x}_1)[/imath]: i.e., when the argument being integrated over has exactly the same value as [imath]\vec{x}_1[/imath], and that depends upon what value [imath]\vec{x}_1[/imath] has. Since [imath]\vec{\Psi}^\dagger \cdot \vec{\Psi}[/imath] is defined to be the probability of the distribution defined by those arguments, what we are talking about here is the probability that a specific boson will be at exactly the point referred to as [imath]\vec{x}_1[/imath]. In essence, the forces on the entity of interest are obtained by summing over all possible boson exchanges weighted by the probability of that specific exchange. Fermions do not take part in the exchange because the Dirac delta function vanishes; however, their distribution controls the distribution of the bosons. Remember, counter to what is done in standard physics, we are including “ALL” possibilities.
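(If a concrete picture helps: here is a minimal numerical sketch in Python. The example density and the Gaussian stand-in for the Dirac delta are entirely illustrative choices of mine; the point is only that the integral collapses to an evaluation of the density at [imath]\vec{x}_1[/imath].)

[code]
import numpy as np

# Sketch: integrating a probability density against a narrowing Gaussian
# (a stand-in for the Dirac delta) just evaluates the density at x1.
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
rho = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # illustrative density for x_i
x1 = 1.3                                       # the point of interest

for eps in (1.0, 0.1, 0.01):
    delta_approx = np.exp(-(x - x1)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    print(eps, np.sum(rho * delta_approx) * dx)  # tends to rho(x1) ~ 0.1714
[/code]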
I get that it's a sum of integrals with anti-commuting elements attached to them and that the cross terms vanish etc.
Actually, that is the only important issue here. You need to understand that, when the left side of the fundamental equation is squared (given the specific approximations we have made), the result will be a sum of squared terms (with no cross terms) which, in the final analysis, consists of a single differential term and “some function” of that argument which was not integrated out. The form of the function is, at this point, totally unimportant. The actual form of that function depends upon what other approximations are made (what is left out and what is included). The actual character of a specific case will arise when I show that Dirac's equation is an approximation to my equation for a rather interesting case. We will get into that as soon as you are persuaded by my proof that Schrödinger's equation is a nonrelativistic approximation to my equation when the referenced V(x) is a good approximation to the sum of those integrals.
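(A miniature of the “no cross terms” mechanics, for anyone who wants to watch it happen: the two matrices below are merely the smallest anticommuting example I can write down, scaled so each squares to one half; they stand in for, and are not, the actual operators of the presentation.)

[code]
import numpy as np

# Two anticommuting operators, each squaring to 1/2, modeled by scaled
# Pauli matrices. Squaring their sum leaves no cross terms.
A = np.array([[0, 1], [1, 0]]) / np.sqrt(2)      # A @ A = I/2
B = np.array([[0, -1j], [1j, 0]]) / np.sqrt(2)   # B @ B = I/2, anticommutes with A

print(A @ B + B @ A)       # zero matrix: the cross terms cancel
print((A + B) @ (A + B))   # = A^2 + B^2 = I; only the direct terms survive
[/code]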

 

The point of this whole exercise is that Schrödinger's equation is indeed an approximate solution to my fundamental equation. Essentially, no matter what the form of g(x) needs to be, there exists a set of functions which will yield that form, so don't worry, at least for the moment, about what g(x) looks like.

 

Have fun -- Dick

Posted

Doctordick,

 

Concerning your statement:

....It is entirely possible that one can speak of two distinct triplet exchanges one which can be called a bose exchange and one which can be called a fermi exchange. One important aspect of that possibility is that exchange between bosons and fermions can exist and it seems to me that the existence of boson fermion exchange is central to the concept of exchange forces....I have never in my whole life heard any professional physicist even bring up the issue.

 

===

 

Possible exchange between boson and fermion is predicted by a model of the atomic nucleus that I am aware of. I will explain at the classical level and you will see how it relates to the quark level. One way to consider how bosons and fermions have exchange forces between them requires that we give up the assumption that protons [P] and neutrons [N] interact as individual entities within nuclear shells. What is left is that nucleons must then occur in clusters. Not so crazy an idea; it was presented in 1937 by John Wheeler, no lightweight in physics, who called it his hypothesis of 'resonating group structures'. So, one possibility is that these Wheeler group clusters of nucleons take different forms as bosons and fermions.

 

So, consider that the deuteron [NP], as a type of Wheeler resonating cluster, is a boson with spin 1+, and that the triton [NPN] and helium-3 [PNP] are fermions with spin 1/2+. Thus, the solution to the problem of boson-fermion exchange forces is such a simple little equation: 3 [NP] = 1 [NPN] + 1 [PNP]. (Note: it helps to draw it out so you see immediately how the mesons are predicted to act as exchange-force particles.) This simple equation predicts that all known beta-stable isotopes do not have only one unique set of nucleon clusters, but that there are many different ways that bosons and fermions can be transformed into each other to form isotopes, a sum of all possibilities. But only follow this path of thinking if you are ready to accept the possibility that [N] and [P] are not independent entities within nuclear shells in isotopes. Btw, you may have interest in putting your exceptional mathematical mind to work to help me with my post on matter and antimatter in the physics and mathematics area of the forum. In fact it relates directly to what I post here. Cordially...........
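(A trivial counting check of that little equation, nucleon content only, with no spins or binding energies; the code is merely bookkeeping:)

[code]
from collections import Counter

# Bare nucleon bookkeeping for 3 [NP] = 1 [NPN] + 1 [PNP].
def nucleon_count(*clusters):
    return sum((Counter(c) for c in clusters), Counter())

print(nucleon_count('NP', 'NP', 'NP'))  # Counter({'N': 3, 'P': 3})
print(nucleon_count('NPN', 'PNP'))      # Counter({'N': 3, 'P': 3}) -- same content
[/code]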

Posted

In standard physics, when physicists talk about exchange symmetry (or anti-symmetry), they are always speaking of cases A and B above. They are always talking about identical particles: i.e., identical in the sense that no physical experiment exists which can differentiate between the original case and the exchanged case. Cases A and B thus divide the universe into two specific types of particles: bosons and fermions. However, as I have commented elsewhere, the type of particle one has (electron, proton, quark, etc.) is actually a statement of the surrounding phenomena associated with the identification (specific aspects of the environment being examined). It is the only way physicists can talk about these events without describing the specific environment which is used to specify that element; simultaneously specifying that environment makes the problem an n-body problem, a problem beyond their mathematics (essentially, a cover up of the true problem).

 

Yup.

 

Suppose we have three elements (a, b, c) where exchange between a and b is symmetric ([imath]\vec{\Psi}(a,b,c)=\vec{\Psi}(b,a,c)[/imath]) and exchange between a and c is antisymmetric ([imath]\vec{\Psi}(a,b,c)=-\vec{\Psi}(c,b,a)[/imath]); we can then ask about exchange between b and c (what is the value of [imath]\vec{\Psi}(a,c,b)[/imath]).

 

The important issue here is that we are speaking of the functional structure of [imath]\vec{\Psi}(a,b,c)[/imath] and not the actual labels a, b and c. Thus [imath]\vec{\Psi}(a,c,b)[/imath] can be obtained via three exchanges: first with the second (which is symmetric and yields [b,a,c]), first with the third (which is antisymmetric and yields -[c,a,b]) and then first with the second again (which is again symmetric and thus yields a final result -[a,c,b]). This implies exchange of b and c is an antisymmetric exchange. However, suppose we do the exchange in a different order. Exchange the first with the third (which is antisymmetric and yields -[c,b,a]) then the first with the second (which is symmetric and yields -[b,c,a]) and then first with the third again (which is again antisymmetric and thus yields +[a,c,b]). This implies exchange of b and c is a symmetric exchange. This leads to the conclusion that C is an illogical case.

 

Yes, it certainly seems so, at least I can't spot an error in your line of reasoning.

 

Now I say that such a conclusion is not final because it suggests there are two ways of performing this multiple exchange. It is entirely possible that one can speak of two distinct triplet exchanges: one which can be called a bose exchange and one which can be called a fermi exchange. One important aspect of that possibility is that exchange between bosons and fermions can exist, and it seems to me that the existence of boson-fermion exchange is central to the concept of exchange forces.

 

Though it seems to me that such an exchange is central to the idea of exchange forces between fermions, I have never in my whole life heard any professional physicist even bring up the issue. For the time being, I will just hold it as a phenomenon worth thinking about; it is entirely possible that my thoughts on the subject are confused.

 

Hmmm, okay, I saw Rade's comments on that too and it sounds like it'd certainly be a topic of its own so I'll put it aside for now :)

 

That is a true statement. Under my construction and thus inherent in my deduction, “valid” elements must be fermions (must be case B from your example above) but “hypothesized” elements can be either bosons or fermions (can be either case A or case B).

 

Hmm, my thoughts are a little bit fuzzy on a few things here... I'm thinking, doesn't it pose a problem if an element is known to be one of the hypothesized elements, which is known by its obeying Bose statistics? I.e., does it invalidate the picture when an element is known to be just something that our explanation requires?

 

On the other hand, I'm also thinking that, since all worldviews function in terms of defined entities, i.e. in terms of ideas of what are "things with persistent identity", that step may well be already ontologically invalid, and thus it is possible that ALL prediction-wise valid worldviews contain hypothesized elements.

 

Ummm... But then all that may be just a distortion of what exactly was meant by "valid" and "hypothesized" elements... Eh, like I said, I find my thoughts on this exact issue a little bit fuzzy. :( But I wouldn't want to stop on this issue now; I need to think about it more later.

 

Let's get to the actual issue;

 

The point of this whole exercise is that Schrödinger's equation is indeed an approximate solution to my fundamental equation. Essentially, no matter what the form of g(x) needs to be, there exists a set of functions which will yield that form, so don't worry, at least for the moment, about what g(x) looks like.

 

Well, I'm not entirely sure what you refer to exactly when you say "what g(x) looks like".

 

But what I meant was that I don't know how you get to write g(x) down as:

 

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r=2\sum_{i=\#2}^n \beta_{i1}\int\vec{\Psi}_r^\dagger (\vec{x}_i = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_i=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math]

 

I mean, I have SOME idea, but the details elude me.

 

If you meant that I should not worry about the details as long as I understand that it must indeed be a sum of integrals with beta elements attached to them, then I think we are good to carry on.

 

But still, I think it would be a valuable thing if I at least comment on what parts of your explanations I don't quite understand, and where I think I have an okay grasp of what you are saying... Just so you can get a better idea about what goes on in my head right now. This covers your past 2 replies to me:

 

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r=2\sum_{i=\#2}^n \beta_{i1}\int\vec{\Psi}_r^\dagger (\vec{x}_i = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_i=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math].

 

The function, g(x) should be defined to be the result of integrating out all variables outside [imath]\vec{x}_1[/imath]. You can think of this case as beginning from [imath]\vec{\Psi}=\vec{\Psi}_0\vec{\Psi}_r[/imath] thus the step through “f” is avoided and the integral is directly over the Dirac delta function. (Where I have reduced the sum to a sum over set #2 for the reason given above: i.e., all other terms vanish. In fact, some of the terms from set #2 may also vanish as they may also be asymmetric under exchange.)

 

That paragraph I think I understand.

 

If you look at the above, you need to understand that there are three different places where the index “i” appears. First, since we are summing over all elements remaining in the universe (after singling out the variable of interest, [imath]\vec{x}_1[/imath]), the sum over i and j, where i is not equal to j, of [imath]\beta_{i j}\delta(\vec{x}_i-\vec{x}_j)[/imath] is replaced by [imath]2\beta_{1i}\delta(\vec{x}_i-\vec{x}_1)[/imath]

 

I commented before: "Thinking of the case [imath]\vec{\Psi}=\vec{\Psi}_0\vec{\Psi}_r[/imath] as analogous to [imath]\vec{\Psi}=\vec{\Psi}_1\vec{\Psi}_2[/imath], and looking at the equation I pasted from post #26, then [imath]2\beta_{1i}\delta(\vec{x}_i-\vec{x}_1)[/imath] sounds like an expected result."

 

But trying to think out the details, I do not know how exactly the j gets replaced by 1. Or, hmmm... Well, I do get that the sum has been reduced to a sum over set #2, but then I'm confused as to why the sum over beta elements only pairs each element from set #2 with element "1"; what happened to pairing each element with each element?

 

I'm probably missing something stupid obvious here :P

 

Secondly we are integrating over every [imath]\vec{x}_i[/imath] where i is not equal to one. That is what I am symbolizing when I write [imath]dV_{r \neq \vec{x}_i}[/imath].

 

I find that very confusing; I was sort of expecting it to read [imath]dV_{r \neq \vec{x}_1}[/imath]... Or, I would expect the math expression to mean that the possibilities of the "i"th element are not integrated over. Perhaps there's a typo somewhere *scratching head*

 

Also, [imath]\vec{x}_1[/imath] was not part of the set "r" anyway...? Right?

 

The subscript “[imath]r \neq \vec{x}_i[/imath]” means that the set being integrated over in this case also excludes the specific argument [imath]\vec{x}_i[/imath], as I have actually done that integral via the rule of integration over the Dirac delta function.

 

Ummmm... Well, helped by the commentary from your previous post, let me take a step back and try to interpret this again with little baby steps:

 

(FIRST ATTEMPT)

 

[math]2\sum_{i=\#2}^n \beta_{i1}\int\vec{\Psi}_r^\dagger (\vec{x}_i = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_i=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math]

 

I'm imagining the first integral in the sum of integrals, which is done for some element from set #2, let's call it element "8"

 

[math]\int\vec{\Psi}_r^\dagger (\vec{x}_8 = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_8=\vec{x}_1)dV_{r \neq \vec{x}_8}[/math]

 

So first of all, that integral essentially sums over all the possibilities of all the arguments of [imath]\vec{\Psi}_r[/imath], except it doesn't touch the value of [imath]\vec{x}_8[/imath]... ...no, I probably already went wrong somewhere, as within that integration we get probabilities only when [imath](\vec{x}_8 = \vec{x}_1)[/imath], and if [imath]\vec{x}_8[/imath] is excluded from the integration we never get any results...

 

(SECOND ATTEMPT)

 

Let me try again; if I take [imath]dV_{r \neq \vec{x}_i}[/imath] as "integrate over every [imath]\vec{x}_i[/imath] where i is not equal to one" (even though I don't understand when i could equal one, as i is taken from set #2)

 

[math]\int\vec{\Psi}_r^\dagger (\vec{x}_8 = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_8=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math]

 

So this integral essentially sums over all the possibilities of all the arguments of [imath]\vec{\Psi}_r[/imath] that belong to set "r" (excluding "1" if it indeed even is part of set "r")

 

And in this integration, all the probabilities get discarded except the ones where the value of [imath]\vec{x}_8[/imath] equals the value of [imath]\vec{x}_1[/imath]. So what we get from this integration is the probability that element "8" has the same value ("is in the same position") as element "1", and the whole context (the whole rest of the universe) contributes to that probability.

 

The sum of such integrals then gives us the probability that SOME element from set #2 exists in the same position as that singled-out element "1"

 

Hmmm, I'm fairly confident about this second attempt, but decided to leave the first attempt in just in case...

 

But before I close, let us get back to that slew of terms where neither i nor j is equal to one. A little thought should convince you that these terms have to do with interactions between other elements in the universe (not the element under examination). The Dirac delta function in those cases will yield a non-zero value only when two of those elements happen to be in the same place (interactions not involving the one we are interested in).

 

I'm a little bit confused over where the j re-appeared into the picture, or if this was even supposed to refer to the above equation at all. But I do think I understand the essence of what you are saying: if we evaluate every element against every other element within set #2, the Dirac delta function will vanish every time the given elements do not exist in the same position, so we just get the probabilities of the elements existing in the same position.

 

The integrations in that case, in the final analysis, yield some number times the proper beta operator times [imath]\vec{\Psi}_1(\vec{x}_1, t)[/imath]. When we square the thing, that beta operator insures that the cross terms vanish and, in the final analysis it generates a number which is the sum over all possibilities (adjusted by their internal interactions) for the rest of the universe given that the element of interest is at the position of interest.

 

I feel like I understand that now...

 

Now that the anti-commuting operators have been removed (by the squaring), the term can be moved to the right side of the equation and seen as an adjustment to the energy. This factor can be removed via exactly the same procedures used to remove that “(S2+Sr)” in the original presentation.

 

...but what that means is somewhat fuzzy (don't know how to look at it as an "adjustment to the energy")... But then, I suspect it's a side issue for now.

 

Okay, that was a little bit more than "commentary on what I understand thus far"; I should mention that all of the above was written over the course of 2 days of head scratching, and actually I feel somewhat less confused now than I did when I started! :D

 

I'd still want to hear your comments just to know if I'm finally getting it right.

 

In the meantime, I can now get back onto your latest post:

 

What I have laid out for you is the mechanism by which g(x) is obtained from the known form of your expectations for the rest of the universe (the environment which exists and influences the behavior of the ontological element referred to by the label [imath]\vec{x}_1[/imath]). All factors of g(x) arise from integrations over Dirac delta functions or integrations over other arguments also mediated via Dirac delta functions.

 

I think I now understand what that means.

 

The function g(x) provides a mathematical mechanism for time dependent changes in the partials with respect to the components of [imath]\vec{x}_1[/imath]. These changes are time dependent because the momentum changes as [imath]\vec{x}_1[/imath] changes, and a change in [imath]\vec{x}_1[/imath] (a change in what we know) defines a change in time.

 

That, I don't see clearly. I don't know how to view g(x) as "mechanism for time dependent changes in the partials with respect to the components of [imath]\vec{x}_1[/imath]".

When you say "the momentum changes", that means, the momentum of [imath]\vec{x}_1[/imath]?

 

Is the issue that g(x) contributes to our expectations as to where we are going to find [imath]\vec{x}_1[/imath] in the future, i.e., how we expect its velocity to change, or something like that? (a little bit fuzzy here)

 

 

What you need to understand is that the partials with respect to the components of [imath]\vec{x}_1[/imath] have been defined to be “momentum” and that the standard classical definition of “force” is, “that which changes momentum”.

 

I'm not sure how to view the partials with respect to the components of [imath]\vec{x}_1[/imath] as momentum :I

I remember you have commented on this issue before here and there, but I was never far enough along in my understanding to really pick up anything from it...

 

In my original proof of my fundamental equation, I proved that, no matter what distribution of valid ontological elements exists in the past (and that would be what is being explained), there always exists a set of hypothetical elements which, by way of the Dirac delta function interaction, can constrain the past to exactly what is known: i.e., it is no more than a device for keeping track of the information.

 

Yes, that part makes perfect sense to me.

 

The final result for g(x) (together with the approximations I have expressed) will simply be a sum of anti-commuting operators weighted by specific functions of x obtained from the various integrations performed. All arguments except x will be integrated out, and the exact form of the result cannot be known unless the correct [imath]\vec{\Psi}[/imath] were known. The standard approach used by professional physicists is that everything can be ignored except the specific interaction they are interested in. See the one-pion exchange force. I know you won't understand that paper, but I think the fact that everything except the specific interaction they are interested in is totally omitted should be quite clear. They do this because they have utterly no idea as to how the other factors could possibly be included.

 

That too.

 

You should note that I also have no idea as to how the other factors could possibly be included. What I have done is to deduce a relationship (my fundamental equation) which must be a valid representation if the entire universe is included (that doesn't mean I can solve that differential equation). What I am doing here is showing that the form of Schrödinger's equation is the inevitable result no matter what is or is not left out.

 

That is quite clear to me too.

 

What the statement is saying is that the only incidence in which the integral yields a non-zero result is when [imath](\vec{x}_i=\vec{x}_1)[/imath]: i.e., when the argument being integrated over has exactly the same value as [imath]\vec{x}_1[/imath], and that depends upon what value [imath]\vec{x}_1[/imath] has. Since [imath]\vec{\Psi}^\dagger \cdot \vec{\Psi}[/imath] is defined to be the probability of the distribution defined by those arguments, what we are talking about here is the probability that a specific boson will be at exactly the point referred to as [imath]\vec{x}_1[/imath]. In essence, the forces on the entity of interest are obtained by summing over all possible boson exchanges weighted by the probability of that specific exchange. Fermions do not take part in the exchange because the Dirac delta function vanishes; however, their distribution controls the distribution of the bosons.

 

So, now I can understand how this is about "the probability that a specific boson will be exactly at the point referred to as [imath]\vec{x}_1[/imath]", but I don't yet understand how that translates to exchange forces, unless it is all simply about our probabilistic expectations, and not about something akin to "boson collisions" (which would I guess go against the definition of bosons :I)

 

Likewise, I'm not sure if I understand the details of how fermions control the distribution of bosons.

 

Actually, that is the only important issue here. You need to understand that, when the left side of the fundamental equation is squared (given the specific approximations we have made), the result will be a sum of squared terms (with no cross terms) which, in the final analysis, consists of a single differential term and “some function” of that argument which was not integrated out. The form of the function is, at this point, totally unimportant. The actual form of that function depends upon what other approximations are made (what is left out and what is included). The actual character of a specific case will arise when I show that Dirac's equation is an approximation to my equation for a rather interesting case. We will get into that as soon as you are persuaded by my proof that Schrödinger's equation is a nonrelativistic approximation to my equation when the referenced V(x) is a good approximation to the sum of those integrals.

 

Yeah, I was supposed to try and proceed to the next step this weekend already, but I feel quite exhausted after all of the above... :P

 

Will try to get around to it soon though

 

-Anssi

Posted

I apologize for being so slow to answer, but life has become complex. As you know, my mother-in-law's house burned, and now a leak has developed in our roof. It has been ten years since I have been on the roof and, when I went up there (after finding the leak in the attic), I discovered it needs some work. Actually, I am surprised it didn't leak earlier. Getting old is a pain in the ***!

 

Anssi, you are a find. Your only problem is that you haven't studied physics and mathematics. You remind me of myself fifty years ago. I went into physics, not because I wanted to “do physics” but because I wanted to understand “reality”, and physicists were the only people who didn't seem to be feeding me circular bullshit. At least not until I got into advanced theoretical physics as a graduate student. At that point they began to get as adamant about their unsupported ideas as did any other field. Every day I read many of the other posts on this forum, and you are the only person seemingly capable of understanding the real source of the problems with modern physics.

 

I understand fully your difficulty with what I say. Believe me, it is one hundred percent due to your ignorance of physics and mathematics (at any rate, that part which isn't due to my lack of clarity).

Hmmm, okay, I saw Rade's comments on that too and it sounds like it'd certainly be a topic of its own so I'll put it aside for now :)
I will admit that my ideas about the situation are perhaps confused, but I am fully confident that Rade's ideas are indeed confused. I don't think he has the first comprehension of what I am doing.
On the other hand, I'm also thinking that, since all worldviews function in terms of defined entities, i.e. in terms of ideas of what are "things with persistent identity", that step may well be already ontologically invalid, and thus it is possible that ALL prediction-wise valid worldviews contain hypothesized elements.
Any “explanation” (that is, any epistemological construct which includes “rules”) must include hypothesized elements (the underlying elements used in the expression of those rules). The only explanation which needs no hypothesized elements is the ”what is” is “what is” tabular explanation which we have discussed extensively earlier.

 

By the way, motion in a world view is a direct consequence of identifying a specific ontological element labeled as “xi” in one present as being the same ontological element labeled with a different numerical label in a different present. By identifying it as the same element, one has specified it as having “moved” during the transition from the first “present” to the second “present”. This is, in fact, the actual definition of “motion”; a phenomenon which simply does not exist in the ”what is” is “what is” explanation.

Ummm... But then all that may be just a distortion of what exactly was meant by "valid" and "hypothesized" elements... Eh, like I said, I find my thoughts on this exact issue a little bit fuzzy. :( But I wouldn't want to stop on this issue now; I need to think about it more later.
There is no reason not to think about it now; however, the issue is actually quite simple: in the conventional perspective, the philosophical circumstance is divided into two categories: the realistic view and the solipsist view. Many people simply presume that the correct answer is either one or the other. What is significant about my view is that it sets forth a universe which is neither: it consists of both realistic ontological elements (those which I have referred to as “valid” ontological elements) and solipsist elements (those which I have referred to as “invalid” or “hypothesized” ontological elements). The important fact here is that “solipsism” (everything is hypothesized) cannot be disproved. My presentation includes “solipsism” as a possibility (in fact, it includes every possibility as possible) but does not restrict the possibilities to “solipsism”. It does, however, restrict “realism” to the ”what is” is “what is” explanation. My presentation is actually totally restricted to “deduction”. “Induction” plays utterly no role except for the assumption that the future will be, to some reasonable extent, consistent with the past: i.e., the present (the specific new information) cannot completely overwhelm the significance of the entire past (what is already known); otherwise, any attempt to explain anything will fail (as what you know is then of no significance).
But what I meant was that I don't know how you get to write g(x) down as:

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r=2\sum_{i=\#2}^n \beta_{i1}\int\vec{\Psi}_r^\dagger (\vec{x}_i = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_i=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math]

 

I mean, I have SOME idea, but the details elude me.

First of all, I am presuming you understand what was meant by the expression

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math]

 

The complete thing should be a sum over both i and j (which are simply two indexes), with no term in the sum where i=j, as that term would be automatically infinite: i.e., the correct expression for the sum is [imath]\sum_{i \neq j}[/imath]. Each individual term of that unbelievably extensive sum is essentially

[math]\beta_{ij}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_j)\vec{\Psi}_r dV_r[/math]

 

where every variable except [imath]x_1[/imath] and [imath]\tau_1[/imath] is being integrated over (that would be “all the remaining arguments”, the set being referred to as “r”). The Dirac delta function is inside the integral and, because it is being integrated over, the fact that it is an infinite spike when the argument is zero does not yield an infinite result (because dx and [imath]d\tau[/imath] are essentially zero). The result of integrating over a specific [imath]x_i[/imath] or [imath]\tau_i[/imath] is exactly [imath]\vec{\Psi}_r^\dagger \cdot \vec{\Psi}_r[/imath] evaluated at the point where [imath]x_i=x_j[/imath] (or, when integrated over [imath]\tau_i[/imath], evaluated at the point [imath]\tau_i=\tau_j[/imath]). The integrals over all the other arguments essentially yield “one”. Since we are not integrating over [imath]\vec{x}_1[/imath], that argument is still in the resultant function (what we have when we finish integrating over all the arguments in “r”, “the remaining arguments”).

 

When we integrate over [imath]\vec{x}_j[/imath], we get exactly [imath]\vec{\Psi}_r^\dagger \cdot \vec{\Psi}_r[/imath] evaluated at the point where [imath]x_i=x_j[/imath] (or, when integrated over [imath]\tau_j[/imath], evaluated at the point [imath]\tau_i=\tau_j[/imath]), which is exactly the same thing we just got above (that is where the factor of two comes from). Note that we get the noted result the first time an integration is performed over [imath]\delta(\vec{x}_i-\vec{x}_j)[/imath]; after that, the Dirac delta function is “gone” (it has been integrated over). The integrals over all other arguments essentially yield “one”; if, in the sum, i comes before j, [imath]x_j[/imath] is “another argument”; if j comes before i, [imath]x_i[/imath] is “another argument”. Essentially, the resultant of doing all the integrals is a probability (not one, because of the existence of that Dirac delta function; if the Dirac delta function were not there, the integrals would each be essentially “one”).
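(That factor of two can be checked mechanically: for any pair function symmetric in its two arguments, as the Dirac delta function is, the sum over i ≠ j counts every pair exactly twice. A minimal sketch with made-up positions:)

[code]
import numpy as np

# For a symmetric pair function, the i != j double sum equals twice the
# sum over unordered pairs i < j; that is the factor of two in g(x).
rng = np.random.default_rng(0)
xs = rng.normal(size=6)               # illustrative "positions"

def f(a, b):                          # any function symmetric in (a, b)
    return np.exp(-(a - b)**2)

full = sum(f(xs[i], xs[j]) for i in range(6) for j in range(6) if i != j)
half = sum(f(xs[i], xs[j]) for i in range(6) for j in range(i + 1, 6))
print(full, 2 * half)                 # identical
[/code]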

 

Finally, since [imath]\vec{x}_1[/imath] is a “valid” element, the Dirac delta function explicitly vanishes for any [imath]\vec{x}_i[/imath] (or j) chosen from set #1. Thus there is no contribution from any terms taken from set #1, which is the reason why I have shown the explicit sum as taken over set #2. And lastly, since [imath]\vec{x}_1[/imath] is not being integrated over, the result of integrating over all the “other arguments” (when neither i nor j is equal to one) is simply a constant multiplying [imath]\vec{\Psi}_0[/imath], which essentially amounts to the potential energy of the rest of the universe and can be taken care of by adjusting the value of the energy (the partial with respect to t) by means of a factor of the form [imath]e^{iSt}[/imath]. Thus it is that the only integral we need worry about is the one where i is taken from set #2 and the other argument of the Dirac delta function is [imath]\vec{x}_1[/imath].
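(That [imath]e^{iSt}[/imath] device is nothing but the product rule; sketching it with the generic [imath]K\frac{\partial}{\partial t}[/imath] of the fundamental equation, and with the sign and placement of S as my illustrative choice:

[math]K\frac{\partial}{\partial t}\left(e^{iSt}\vec{\Psi}\right)=e^{iSt}\left(K\frac{\partial}{\partial t}\vec{\Psi}+iKS\,\vec{\Psi}\right)[/math]

so attaching the phase shifts the time-derivative side by the constant iKS, and an appropriate choice of S cancels the constant contributed by the rest of the universe.)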

 

That should clear up the expression

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math]

 

The second part of the expression which bothers you is no more than exactly this same expression after integration over that specific Dirac delta function has been done (the one which produces a non-zero result only when [imath]\vec{x}_i = \vec{x}_1[/imath]). That should clarify the second part, which is written as

[math]g(x)=2\sum_{i=\#2}^n \beta_{i1}\int\vec{\Psi}_r^\dagger (\vec{x}_i = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_i=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math]

 

Note that dV should not include [imath]\vec{x}_i[/imath] because we have just integrated over that particular [imath]\vec{x}_i[/imath] (that's the integral over the Dirac delta function which constrained [imath]\vec{x}_i = \vec{x}_1[/imath]).

But trying to think out the details, I do not know how exactly the j gets replaced by 1. Or, hmmm... Well, I do get that the sum has been reduced to a sum over set #2, but then I'm confused as to why the sum over beta elements only pairs each element from set #2 with element "1"; what happened to pairing each element with each element?
I hope my comments above have clarified this a bit. The consequences of the pairing of all the other elements are out there influencing the rest of the universe in the background and (in terms of Schrödinger's equation which is to be obeyed by the element of interest) can essentially be ignored.
I'm probably missing something stupid obvious here :P
One man's “obvious” is another's “conundrum”. “Obvious” is probably one of the most overused terms in the scientific community.
I find that very confusing; I was sort of expecting it to read [imath]dV_{r \neq \vec{x}_1}[/imath]... Or, I would expect the math expression to mean that the possibilities of the "i"th element are not integrated over. Perhaps there's a typo somewhere *scratching head*
I have already covered this, but I think it is worth mentioning again. The expression [imath]dV_r[/imath] is identical to the expression [imath]dV_{r \neq \vec{x}_1}[/imath], as “r” is “all the other arguments” (that is, not [imath]\vec{x}_1[/imath]).

[imath]\vec{x}_1[/imath] was not part of the set "r" anyway...? Right?
Right!
(FIRST ATTEMPT)

 

[math]2\sum_{i=\#2}^n \beta_{i1}\int\vec{\Psi}_r^\dagger (\vec{x}_i = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_i=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math]

 

I'm imagining the first integral in the sum of integrals, which is done for some element from set #2, let's call it element "8"

 

[math]\int\vec{\Psi}_r^\dagger (\vec{x}_8 = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_8=\vec{x}_1)dV_{r \neq \vec{x}_8}[/math]

 

So first of all, that integral essentially sums over all the possibilities of all the arguments of [imath]\vec{\Psi}_r[/imath], except it doesn't touch the value of [imath]\vec{x}_8[/imath]... ...no, I probably already went wrong somewhere, as within that integration we get probabilities only when [imath](\vec{x}_8 = \vec{x}_1)[/imath], and if [imath]\vec{x}_8[/imath] is excluded from the integration we never get any results...

What you are overlooking is that [imath]\vec{x}_8[/imath] is excluded from the integration for the very simple reason that we have already done that integration; that integration is exactly what eliminated the Dirac delta function [imath]\delta(\vec{x}_8 -\vec{x}_1)[/imath] from the representation.
(SECOND ATTEMPT)

 

Let me try again; if I take [imath]dV_{r \neq \vec{x}_i}[/imath] as "integrate over every [imath]\vec{x}_i[/imath] where i is not equal to one" (even though I don't understand when i could equal one, as i is taken from set #2)

 

[math]\int\vec{\Psi}_r^\dagger (\vec{x}_8 = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_8=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math]

To be consistent with what you have written, [imath]dV_{r \neq \vec{x}_i}[/imath] should have been [imath]dV_{r \neq \vec{x}_8}[/imath]. The result was obtained when you integrated over [imath]\vec{x}_8[/imath] and essentially evaluated the contribution of the Dirac delta function [imath]\delta(\vec{x}_8 -\vec{x}_1)[/imath]. With regard to your first sentence, the issue is not i=1 (the indexes are not the same) but rather [imath]\vec{x}_i=\vec{x}_1[/imath], the associated positions in the Euclidean frame of reference being the same.
So this integral essentially sums over all the possibilities of all the arguments of [imath]\vec{\Psi}_r[/imath] that belong to set "r" (excluding "1" if it indeed even is part of set "r")
The definition of “r” is that it is “all the other arguments”: i.e., all except argument #1.
And in this integration, all the probabilities get discarded except the ones where the value of [imath]\vec{x}_8[/imath] equals the value of [imath]\vec{x}_1[/imath]. So what we get from this integration is the probability that element "8" has the same value ("is in the same position") as element "1", and the whole context (the whole rest of the universe) contributes to that probability.
”Discarded” is not really a good word to use here. Integration over any set of arguments in the expression [imath]\vec{\Psi}^\dagger \cdot \vec{\Psi}[/imath] essentially yields unity (it amounts to the probability of those arguments summed over all possibilities for those arguments). I say “essentially” because there are some subtleties there that should be discussed if one wants to be absolutely rigorous. Nevertheless, I will pretty well guarantee that the result of the complete integration over all the variables will amount to the sum over all possibilities that another element in the universe will be at exactly the point specified by [imath]\vec{x}_1[/imath], times the probability that each of those vastly extensive specific possibilities will occur.
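(A tiny discrete toy of that statement, using a made-up joint distribution, nothing from the presentation itself: summing the normalized distribution over everything yields one, while fixing the coincidence picks out the probability that the argument sits exactly at the point of interest, summed over all possibilities for the rest.)

[code]
import numpy as np

# Toy joint probability over (x_i, everything_else), normalized to one.
rng = np.random.default_rng(1)
P = rng.random((5, 5))
P /= P.sum()

print(P.sum())         # 1.0: integrating over all arguments yields unity
x1 = 2                 # index of the point of interest
print(P[x1, :].sum())  # probability that x_i coincides with x1, summed
                       # over all possibilities for the rest
[/code]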
The sum of such integrals then gives us the probability that SOME element from set #2 exists in the same position as that singled-out element "1"

 

Hmmm, I'm fairly confident about this second attempt, but decided to leave the first attempt in just in case...

I hope I have clarified the thing for you. The notation I use is not really very satisfactory but I can't think of a better way to express what I am trying to say. Thanks for your attempts to understand; they are appreciated.
...but what that means is somewhat fuzzy (don't know how to look at it as an "adjustment to the energy")... But then, I suspect it's a side issue for now.
After thinking about it, I think this issue should be left until later, as the meaning of the word “energy” has, at this point in the deduction, not yet been defined. Remember, my deduction has utterly nothing to do with actual reality; it is no more than a constructed tautology we are using to represent the numerical labels used to specify an arbitrarily extensive collection of undefined ontological elements.
Okay, that was a little bit more than "commentary on what I understand thus far"; I should mention that all of the above was written over the course of 2 days of head scratching, and actually I feel somewhat less confused now than I did when I started! :D

 

I'd still want to hear your comments just to know if I'm finally getting it right.

I get the impression that you are doing a pretty good job but I should really wait and see your reaction to what I have just written.
That, I don't see clearly. I don't know how to view g(x) as "mechanism for time dependent changes in the partials with respect to the components of [imath]\vec{x}_1[/imath]".

When you say "the momentum changes", that means, the momentum of [imath]\vec{x}_1[/imath]?

Again, this issue should be left until later as, again, at this point in the deduction “momentum” has not yet been defined. Energy, momentum and mass are defined after Schrödinger's equation has been deduced.
So, now I can understand how this is about "the probability that a specific boson will be exactly at the point referred to as [imath]\vec{x}_1[/imath]", but I don't yet understand how that translates to exchange forces, unless it is all simply about our probabilistic expectations, and not about something akin to "boson collisions" (which would I guess go against the definition of bosons :I)
Again, these issues should be put off until after we have Schrödinger's equation.
Likewise, I'm not sure if I understand the details of how fermions control the distribution of bosons.
Remember, all “valid” ontological elements correspond to fermions, and all bosons correspond to “hypothesized” ontological elements which were “hypothesized” in order to allow us to explain the behavior of those “valid” ontological elements. Clearly, it is the details of the distribution of those fermions which are to be explained via the existence of those bosons. Thus it is the actual distribution of those fermions (what you are trying to explain) which sets the requirements for the distribution of the bosons (the existence of which you have hypothesized in order to create your explanation). “How” this is accomplished is part and parcel of that explanation.

 

We can begin to talk about a mental model of this explanation once we finish deducing Schrödinger's equation as that deduction allows us to relate our equation to many of the specific concepts used in modern physics (energy, momentum and mass being the very beginning of that association).

Yeah, I was supposed to try and proceed to the next step this weekend already, but I feel quite exhausted after all of the above... :P
Yeah, it is probably time to get to that next step.

 

Have fun -- Dick

Posted

Anssi, you are a find. Your only problem is that you haven't studied physics and mathematics. You remind me of myself fifty years ago. I went into physics, not because I wanted to “do physics” but because I wanted to understand “reality”, and physicists were the only people who didn't seem to be feeding me circular bullshit. At least not until I got into advanced theoretical physics as a graduate student. At that point they began to get as adamant about their unsupported ideas as did any other field. Every day I read many of the other posts on this forum, and you are the only person seemingly capable of understanding the real source of the problems with modern physics.

 

Well, I was a bit disappointed with what was going on in the "What is spacetime" thread myself. I mean it's disappointing to not be able to communicate an issue.

 

I guess it's characteristic of internet forum discussions that people use relatively little effort to try and comprehend what is being said, and mainly only comprehend and respond to the tidbits they already knew. You know, whatever "sounds valid" from the get-go. And whatever sounds invalid is never thought over. Certainly it would take longer to "think it over" (and understand the perspective of the other party) than most people are willing to spend on their contributions.

 

I do get the impression that there were people there who would be perfectly capable of understanding this issue (and this thread) if they were willing to try, but right now it's just everyone merely "commenting off the top of their head" to everyone else who is doing exactly the same.

 

Btw, that's kind of the reason why I'm so slow to respond to this thread. It's time constraint, as in, I seldom have proper periods of time to be able to actually focus on the issue and think things over sufficiently for a meaningful reply. I usually get to read your responses fairly quickly but I'm not ready to reply very quickly.

 

Any “explanation” (that is, any epistemological construct which includes “rules”) must include hypothesized elements (the underlying elements used in the expression of those rules). The only explanation which needs no hypothesized elements is the ”what is” is “what is” tabular explanation which we have discussed extensively earlier.

 

Yeah, so, in other words: while we can't prove which elements are among the "valid elements" - as the "hypothesized elements" also contain exchange anti-symmetric elements - we do know that the exchange symmetric elements are not part of the "valid elements". And knowing them to be part of the "hypothesized elements" doesn't mean we could conclude that the given explanation is "invalid" (prediction-wise)?

 

The above is referring to your comment in post #92:

set #2 (the hypothesized elements required by the explanation) can include elements both symmetric and antisymmetric with regard to exchange. This has to be true as there must be no way to differentiate between “valid” elements and “hypothesized” (if there were a way, one could prove what elements were valid and the explanation would be flawed).

 

By the way, motion in a world view is a direct consequence of identifying a specific ontological element labeled as “xi” in one present as being the same ontological element labeled with a different numerical label in a different present. By identifying it as the same element, one has specified it as having “moved” during the transition from the first “present” to the second “present”. This is, in fact, the actual definition of “motion”; a phenomenon which simply does not exist in the ”what is” is “what is” explanation.

 

Yup. I was commenting on this issue in the "What is time" thread when the issue of whether "motion" or "change" is fundamental (metaphysical) was brought up.

 

I'd think it should be plain to see that we defined the identity of the elements we see as "persistent objects", and that the ontological correctness of our position cannot really be investigated. I personally think it is quite likely (dare I say "obvious") that "persistent identity" itself is an idea completely in our head with no ontological footing whatsoever, so I think that's why it's been relatively easy for me to follow your presentation. Certainly it seems many people don't see past their defined entities, and conclude that ontologically reality is a set of things that move from one place to another, and that we are simply trying to figure out what those things are...

 

If anyone from the "what is spacetime" or "what is time" threads is reading this, note that this thread has to do with what sorts of characteristics are forced upon the "persistent things" when our worldview self-coherently defines them. In regard to those threads, one characteristic is that the defined persistent entities must obey relativistic time behaviour with respect to each other (you need to follow the logic to understand why).

 

That quite strongly implies that the real source of relativistic time behaviour of objects is not in the ontological ideas that underlie the theory of relativity (meaning: we can't say that the behaviour of the entities (that we defined) proves the existence of "relativistic spacetime" or "relativity of simultaneity" or "isotropy of light speed" in any ontological sense, even if it seems tempting to conclude so), but rather that relativistic time behaviour is a consequence of defining reality into a self-coherent set of persistent entities.

 

The different ontological views that come along with the theory of relativity are basically, let's say, "semantical transformations" between worldviews that each actually obey the same underlying relationships. They look very different because they are essentially different languages (they use different terminology), but they are explaining the same underlying data. The same relationships are apparently found by DD's analysis, which means that those relationships are imposed onto our worldviews by the original symmetry arguments (symmetries springing from our ignorance of the true meaning of the raw data "that is to be explained").

 

Wow, that was so off-topic! And yet so relevant :) Well I assume people from the "time" threads are reading this thread too. I mean, I HOPE they are.

 

There is no reason not to think about it now; however, the issue is actually quite simple: in the conventional perspective, the philosophical circumstance is divided into two categories: the realistic view and the solipsist view. Many people simply presume that the correct answer is either one or the other. What is significant about my view is that it sets forth a universe which is neither: it consists of both realistic ontological elements (those which I have referred to as “valid” ontological elements) and solipsist elements (those which I have referred to as “invalid” or “hypothesized” ontological elements). The important fact here is that “solipsism” (everything is hypothesized) cannot be disproved. My presentation includes “solipsism” as a possibility (in fact, it includes every possibility as possible) but does not restrict the possibilities to “solipsism”. It does, however, restrict “realism” to the ”what is” is “what is” explanation.

 

Yup, that sounds like you also view "identity" as an entirely epistemological issue.

 

My presentation is actually totally restricted to “deduction”. “Induction” plays utterly no role except for the assumption that the future will be, to some reasonable extent, consistent with the past: i.e., the present (the specific new information) can not completely overwhelm the significance of the entire past (what is already known); otherwise, any attempt to explain anything will fail (as what you know is then of no significance).

 

Yup.

 

First of all, I am presuming you understand what was meant by the expression

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math]

 

I presume so too. Inside each individual integration, each evaluation where the value of [imath]\vec{x}_i[/imath] differs from the value of [imath]\vec{x}_1[/imath], gets multiplied by 0.

 

Whereas each evaluation where the value of [imath]\vec{x}_i[/imath] equals to the value of [imath]\vec{x}_1[/imath], gets multiplied by 1. I.e. from each integration we get the probability that the value of [imath]\vec{x}_i[/imath] is the same as the value of [imath]\vec{x}_1[/imath].

 

Btw, a lot of my confusion was caused by me misinterpreting whether we were talking about the indices of ontological elements being the same, or their value being the same. Many times I just thought about the indices themselves, and it was hard to understand what was being said when I did not understand to view the ontological elements as having different values, as I was still thinking about the perspective that we had with the "what is, is what is" table, where the label and the position (the value) of an element was essentially the same thing.

 

The complete thing should be a sum over both i and j (which are simply two indexes), with no term in the sum where i=j as that term would be automatically infinite: i.e., the correct expression for the sum is, [imath]\sum_{i \neq j}[/imath]. Each individual term of that unbelievably extensive sum is essentially

[math]\beta_{ij}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_j)\vec{\Psi}_r dV_r[/math]

 

where every variable except [imath]x_1[/imath] and [imath]\tau_1[/imath] is being integrated over (that would be “all the remaining arguments”, the set being referred to as “r”). The Dirac delta function is inside the integral and, because it is being integrated over, the fact that it is an infinite spike when the argument is zero does not yield an infinite result (because dx and [imath]d\tau[/imath] are essentially zero). The result of integrating over a specific [imath]x_i[/imath] or [imath]\tau_i[/imath] is exactly [imath]\vec{\Psi}_r^\dagger \cdot \vec{\Psi}_r[/imath] evaluated at the point where [imath]x_i=x_j[/imath] (or, when integrated over [imath]\tau_i[/imath], evaluated at the point [imath]\tau_i=\tau_j[/imath]). The integrals over all other arguments essentially yield “one”. Since we are not integrating over [imath]\vec{x}_1[/imath], that argument is still in the resultant function (what we have when we finish integrating over all the arguments in “r”, “the remaining arguments”).

 

When we integrate over [imath]\vec{x}_j[/imath], we get exactly [imath]\vec{\Psi}_r^\dagger \cdot \vec{\Psi}_r[/imath] evaluated at the point where [imath]x_i=x_j[/imath] (or when integrated over [imath]\tau_i[/imath], evaluated at the point [imath]\tau_i=\tau_j[/imath]) which is exactly the same thing we just got above (that is where the factor two comes from).
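In symbols (merely restating that point): the Dirac delta is an even function and j is just as much a dummy index as i, so

[math]\delta(\vec{x}_1-\vec{x}_j)=\delta(\vec{x}_j-\vec{x}_1)[/math]

and the j-sum contributes a term identical to the corresponding term of the i-sum; those two identical contributions are what produce the factor of 2 in g(x).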

 

Okay, I am a bit confused here.

 

Looking at an integration:

[math]\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_j)\vec{\Psi}_r dV_r[/math]

 

Conceptually, since that integration is over "all the possibilities for all the arguments", what I would expect the end result to be is the probability that [imath]\vec{x}_i[/imath] is found in the same position as [imath]\vec{x}_j[/imath] anywhere in the "x,tau,t"-space.

 

Reading your explanation, I get the impression that the end result is instead the probability that [imath]\vec{x}_i[/imath] is in the same position where [imath]\vec{x}_j[/imath] was in the original input arguments plus the probability that [imath]\vec{x}_j[/imath] was in the same position as [imath]\vec{x}_i[/imath] is in the original input arguments.

 

I guess those two probabilities would be very different from each other.

 

Consequently, I do not know what you mean by this:

 

Note that we get the noted result the first time an integration is performed over [imath]\delta(\vec{x}_i-\vec{x}_j)[/imath]; after that, the Dirac delta function is “gone” (it has been integrated over).

 

I really don't know what is meant by "the first time an integration is performed over [imath]\delta(\vec{x}_i-\vec{x}_j)[/imath]". I guess it cannot refer to the "first integral in the sum of integrals"; it must refer to an integration pass of a specific input argument... I.e. the integration pass that is performed for [imath]\vec{x}_i[/imath] or [imath]\vec{x}_j[/imath]? Whichever is first, or once both are passed?

 

Is that occurring because after those passes, we are looking at a function that simply does not have [imath]\vec{x}_i[/imath] or [imath]\vec{x}_j[/imath] anymore?

 

The integrals over all other arguments essentially yield “one”; if, in the sum, i comes before j, [imath]x_j[/imath] is “an other argument”, if j comes before i, [imath]x_i[/imath] is “an other argument”.

 

Not entirely sure what this means; when does "j" come before "i", does that mean that when the index of the element "j" is smaller than the index of the element "i" in a specific integral...? If so, that implies that when an integration is performed there is some specific order to integrating the arguments that is somehow relevant?

 

If I understood the "disappearance of the Dirac delta function" correctly, this seems to play along with that idea; in that case I understand what is meant by "The integrals over all other arguments essentially yield “one”." I mean, I understand it to the extent that if the Dirac delta function disappears at some point during a specific integration, we start getting some probabilities from that point on. Not sure how we know that the rest of the integrations sum up to exactly "one" as it seems that all the "earlier" integration passes still yielded "0".

 

Maybe that is exactly what you are referring to here:

 

Essentially, the resultant of doing all the integrals is a probability (not one because of the existence of that Dirac delta function; if the Dirac delta function were not there, the integrals would each be essentially “one”).

 

But then I don't know what you meant by the result being "one" in the earlier paragraphs...

 

I'm sorry I need to ask so many questions about this still; I really am so unfamiliar with this that my head is spinning with questions at every turn. I was able to resolve some by hard thinking, but I need help with the above questions... :P

 

Finally, since [imath]\vec{x}_1[/imath] is a “valid” element, the Dirac delta function explicitly vanishes for any [imath]\vec{x}_i[/imath] (or j) chosen from set #1.

 

...because in the cases that the valid elements exist in the same position, the probability function should yield exactly 0... Right?

 

Thus there is no contribution from any terms taken from set #1, which is the reason why I have shown the explicit sum as taken from set #2.

 

Yup.

 

And lastly, since [imath]\vec{x}_1[/imath] is not being integrated over, the result of integrating over all the “other arguments” (when neither i nor j is equal to one) is simply a constant multiplying [imath]\vec{\Psi}_0[/imath] which essentially amounts to the potential energy of the rest of the universe and can be taken care of by adjusting the value of the energy (the partial with respect to t) by means of a factor of the form [imath]e^{iSt}[/imath].

 

Okay, I don't see that clearly, but I can take it on faith at this point.
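For what it's worth, the bookkeeping can be sketched generically (H and C below are placeholders of my own, standing for “everything else in the equation” and “that background constant” respectively; they are not DD's notation). If

[math]\left(H + C\right)\vec{\Phi} = K\frac{\partial}{\partial t}\vec{\Phi} \qquad {\rm and} \qquad \vec{\Phi} = e^{iSt}\vec{\Phi}'[/math]

then [imath]K\frac{\partial}{\partial t}\vec{\Phi}=e^{iSt}\left(iKS\vec{\Phi}'+K\frac{\partial}{\partial t}\vec{\Phi}'\right)[/imath], so choosing S such that [imath]iKS=C[/imath] leaves [imath]H\vec{\Phi}'=K\frac{\partial}{\partial t}\vec{\Phi}'[/imath]: the constant has been absorbed into a pure phase, which changes no probabilities since [imath]|e^{iSt}|=1[/imath].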

 

Thus it is that the only integral we need worry about is the one where i is taken from set #2 and the other argument of the Dirac delta function is [imath]\vec{x}_1[/imath].

 

That should clear up the expression

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math]

 

Okay, well now I have an inkling of understanding where that expression came from... When it first appeared I think I had about 0% chance of ever figuring it out on my own. :P

 

The second part of the expression which bothers you is no more than exactly this same expression after integration over that specific Dirac delta function has been done (the one which produces a non-zero result only when [imath]\vec{x}_i = \vec{x}_1[/imath]). That should clarify the second part, which is written as

[math]g(x)=2\sum_{i=\#2}^n \beta_{i1}\int\vec{\Psi}_r^\dagger (\vec{x}_i = \vec{x}_1)\cdot \vec{\Psi}_r (\vec{x}_i=\vec{x}_1)dV_{r \neq \vec{x}_i}[/math]

 

Note that dV should not include [imath]\vec{x}_i[/imath] because we have just integrated over that particular [imath]\vec{x}_i[/imath] (that's the integral over the Dirac delta function which constrained [imath]\vec{x}_i = \vec{x}_1[/imath]).

 

Ahha... So that also implies that indeed the order in which we integrate over the arguments is unimportant for the end result... If that's correct, then I guess the point about the "rest of the arguments" yielding essentially "one" (or at least very close to "1"?) within an integration makes sense... It does sound a bit odd, as, if I understand the integration procedure correctly, every integration pass (single argument) inside a specific integral would simply get "blocked" by the Dirac delta function as long as it is sitting there, and it's sitting there until we finally perform the integration over the specific argument "i" and/or "j".

 

...at this point it dawns on me that I may well be so confused over this that my questions don't even make sense to you... I hope you can figure out where the problem is! :P

 

I hope my comments above have clarified this a bit. The consequences of the pairing of all the other elements are out there influencing the rest of the universe in the background and (in terms of Schrödinger's equation which is to be obeyed by the element of interest) can essentially be ignored.

 

Okay...

 

I hope I have clarified the thing for you. The notation I use is not really very satisfactory but I can't think of a better way to express what I am trying to say. Thanks for your attempts to understand; they are appreciated.

 

Well thank you for your attempts to explain; they are appreciated :D

 

Yeah, it is probably time to get to that next step.

 

...almost. I feel like I've been here before many times; I already thought I just need the small final thrust to understand something completely, but then that final thrust only uncovers more confusion :I

 

But I feel good about learning more with every post.

 

-Anssi

Posted

Hi Anssi, I am going to answer your latest post somewhat out of order because it seems to me that the importance of the issues you bring up is not in the same order in which you bring them up. First of all, you are perhaps misconstruing the definition I am using for the word “valid” as it applies to ontological elements (certainly it is a difficulty most people have). Since it is absolutely central to understanding what I am saying, I am going to try and express what I mean again.

 

The issue of what actually exists is central to the subject of ontology. It is clear from the work of many thinkers over the centuries that actually answering such a question is impossible (i.e., solipsism, the idea that nothing actually exists, cannot be disproved). I am saying, maybe some things we think exist actually do exist and some things we think exist don't; the fact that we cannot answer such a question can not be taken as proof it cannot be so. Thus I used the word “valid” for the sole purpose of identifying which case I am talking about because the two parts (though one cannot ever actually identify which is which) must obey subtly different rules. Each and every truly flaw-free explanation must explain everything which actually exists while, on the other hand, it is possible that some ontological elements presumed to exist (by some specific explanation) may only be hypothesized and yet there can still exist a flaw-free explanation which requires these elements: the central point being that the fact that an explanation is “flaw-free” is no guarantee that every element presumed in that explanation actually exists. This is an issue often forgotten by many serious scientists.

 

When I read

And knowing them to be part of "hypothesized elements" doesn't mean we could conclude that the given explanation is "invalid" (predictionwise)?
I presume you are referring to exactly that issue and I want to be sure that presumption is not in error.
I'd think it should be plain to see that we defined the identity of the elements we see as "persistent objects", and the ontological correctness of our position cannot really be investigated. I personally think it is quite likely (dare I say "obvious") that "persistent identity" itself is an idea completely in our head and has no ontological footing whatsoever, so I think that's why it's been relatively easy for me to follow your presentation. Certainly it seems many people don't see over their defined entities, and conclude that ontologically reality is a set of things that move from one place to another, and we are simply trying to figure out what those things are...
Here you are absolutely correct and the issue you bring up is the single most far reaching misrepresentation of reality which is out there blocking perception of what I am talking about. I would put this difficulty squarely in the same category as the presumption that “God exists” used in most medieval arguments prior to modern science; it so blocks their thinking that they are blind to the issues you and I are talking about. People who think along such lines simply can not be reached by any intellectual argument and it is really a waste of time to try.

 

I say that though even I am occasionally driven to try myself. :lol: :lol: I think, if I were in charge here, I would move the “What is 'spacetime' really?” thread to the Theology forum, as the issue is not really philosophy; it is actually just another form of theology. :shrug:

Well, I was a bit disappointed with what was going on in the "What is spacetime" thread myself. I mean it's disappointing to not be able to communicate an issue.
The “What is spacetime” thread is a perfect example of exactly what I was just talking about. The people posting there have absolutely no comprehension of the inadequacy of their beliefs.
I do get the impression that there were people there who would be perfectly capable of understanding this issue (and this thread) if they were willing to try, but right now it's just everyone merely commenting "off the top of their head" to everyone else doing exactly the same.
You are absolutely correct, no one is paying the slightest attention to anyone else. It reminds me of a room full of parrots. As I have said many times, thinking is not an easy process and most people would rather avoid it if at all possible. Even Modest, who is clearly a rather rational person, would rather spout off than actually try to understand anything new. I had hoped he was reading some of my stuff but, in his latest post to me (a rather long and involved missive), he made it quite clear that actually thinking about what I said was too much trouble.

 

If one defines “the past” to be “what we know” (or think we know), it is quite clear that the “ontological correctness of our position cannot really be investigated”. We cannot go back into the past and actually compare any given supposed ontological element occurrence to another perceived to exist at another time. The only valid question we can ask ourselves is: is the collection of “presents” that goes to make up what we know consistent with our explanation? Essentially, motion is a mental construct allowing us to presume a given [imath]x_i(t_1)[/imath] is the same ontological element as is [imath]x_i(t_2)[/imath] (during the "time" between [imath]t_1[/imath] and [imath]t_2[/imath], "the ontological element represented by [imath]x_i[/imath] moved from one point represented in that coordinate system by "x" to another represented by a different "x").

 

If we don't make such presumptions, the only explanation possible is the ”what is” is “what is” explanation and it is quite difficult to predict the future from such an explanation. That is, in fact, exactly what I am doing: I am setting up a representation of such predictions consistent with any ”what is” is “what is” explanation (that issue is actually a definable mathematical problem). The concept of “exchange” allows us to include any and all possible conceivable persistences: i.e., [imath]x_i(t_1)[/imath] may be [imath]x_j(t_2)[/imath] (likewise, [imath]x_j(t_1)[/imath] may be [imath]x_i(t_2)[/imath] which becomes the very definition of “exchange”).

That quite strongly implies that the real source of relativistic time behaviour of objects is not in the ontological ideas that underlie the theory of relativity (meaning: we can't say that the behaviour of the entities (that we defined) proves the existence of "relativistic spacetime" or "relativity of simultaneity" or "isotropy of light speed" in any ontological sense, even if it seems tempting to conclude so), but rather that relativistic time behaviour is a consequence of defining reality into a self-coherent set of persistent entities.
This is an excellent presentation of the situation. I am curious, have you read my thread, “An 'analytical-metaphysical' take on Special Relativity!”? And, if you have, did you find it clear?
Wow, that was so off-topic! And yet so relevant :) Well I assume people from the "time" threads are reading this thread too. I mean, I HOPE they are.
That would be nice but I am afraid I doubt it quite seriously. And, yes, it was both “off-topic” and absolutely relevant!
Yup, that sounds like you also view "identity" as an entirely epistemological issue.
Absolutely correct.

 

So, now let's get down to the issues central to your confusion!

 

The real problem seems to be your lack of mathematics. Because of your lack of practice in the field, your mind almost invariably jumps to misinterpretations of the symbolic representations. First of all, your comment,

Whereas each evaluation where the value of [imath]\vec{x}_i[/imath] equals to the value of [imath]\vec{x}_1[/imath], gets multiplied by 1.
is erroneous. In addition, I suspect you missed the point as to why I concerned myself with the single term of that sum over i
First of all, I am presuming you understand what was meant by the expression:

[math]g(x)= 2\sum_{i=\# 2}^n \beta_{i1}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_1)\vec{\Psi}_r dV_r[/math]

 

The complete thing should be a sum over both i and j (which are simply two indexes), with no term in the sum where i=j as that term would be automatically infinite: i.e., the correct expression for the sum is, [imath]\sum_{i \neq j}[/imath]. Each individual term of that unbelievably extensive sum is essentially

[math]\beta_{ij}\int \vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_j)\vec{\Psi}_r dV_r[/math]

 

where every variable except [imath]x_1[/imath] and [imath]\tau_1[/imath] is being integrated over (that would be “all the remaining arguments”, the set being referred to as “r”).

The point here is that the sum over i and j is a sum over that specified term where i and j have specific values (analogous to exactly what you did when you wrote down that expression where you set i=8). That is essentially exactly what arises in a mathematician's mind when he writes down something indexed by i and j without specifying what i and j are. The indices i and j can be whatever you wish them to be.
Btw, a lot of my confusion was caused by me misinterpreting whether we were talking about the indices of ontological elements being the same, or their value being the same. Many times I just thought about the indices themselves, and it was hard to understand what was being said when I did not understand to view the ontological elements as having different values, as I was still thinking about the perspective that we had with the "what is, is what is" table, where the label and the position (the value) of an element was essentially the same thing.
Yes, I understand that. It is exactly that “double notation” which allows me to go from the ”what is” is “what is” table to the persistent identification inherent in any common explanation and I should have made that issue clearer ("obvious" in one person's mind does not equate to "obvious" in another's). The “x” label is the numerical label attached to the ontological elements in the ”what is” is “what is” table (essentially that finite and discrete description of the past upon which your explanation is based). The index “i” allows you to identify which of these labels persist into which discrete elements when the “hypothetical” concept of the continuum of time is introduced in that “flaw-free” explanation put forth to "explain" what you think you know.

 

As an aside at this point, I am reminded of Zeno's paradox. I have always been of the opinion that the true issue Zeno was trying to point out with his paradox was the necessity of that finite (and thus discrete) description of the actual information available. I am astonished that this issue was actually recognized almost twenty five hundred years ago and essentially ignored ever since. For over two thousand years, the best intellectuals in the world have utterly failed to recognize that single most important fact about what we are trying to explain. I have argued with Qfwfq on this very same issue with utterly no success (I think it is exactly that same mental block everyone seems to possess which you referred to earlier).

 

To quote Aristotle,

Zeno's contribution to Eleatic philosophy is entirely negative. He did not add anything positive to the teachings of Parmenides, but devoted himself to refuting the views of the opponents of Parmenides. Parmenides had taught that the world of sense is an illusion because it consists of motion (or change) and plurality (or multiplicity or the many). True Being is absolutely one; there is in it no plurality. True Being is absolutely static and unchangeable. Common sense says there is both motion and plurality. This is the Pythagorean notion of reality against which Zeno directed his arguments. Zeno showed that the common sense notion of reality leads to consequences at least as paradoxical as his master's.

 

So, back to your confusion with that sum.

Okay, I am a bit confused here. ...

 

Reading your explanation, I get the impression that the end result is instead the probability that [imath]\vec{x}_i[/imath] is in the same position where [imath]\vec{x}_j[/imath] was in the original input arguments plus the probability that [imath]\vec{x}_j[/imath] was in the same position as [imath]\vec{x}_i[/imath] is in the original input arguments.

The fact that you include the line, “in the original input arguments”, leads me to think that you do not understand the integration operation itself. Integration sums the function (in this case, [imath]\vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_j)\vec{\Psi}_r dV_r[/imath]) over all possible values of the arguments being integrated over (in our case of interest, all arguments except [imath]\vec{x}_1[/imath]).

 

When you perform that sum over [imath]\vec{x}_k[/imath] (where k is neither i nor j) the Dirac delta function (which only depends upon [imath]\vec{x}_i[/imath] and [imath]\vec{x}_j[/imath]) can be factored out. In essence, all you are integrating over is [imath]\vec{\Psi}_r^\dagger \cdot \vec{\Psi}_rdV_r (not\; x_i, \tau_i, x_j,\; nor\; \tau_j)[/imath] which amounts to the sum of all possibilities times the probability of each of those possibilities which (under the definition of probabilities) is one. Only when you go to integrate over either [imath]\vec{x}_i[/imath] or [imath]\vec{x}_j[/imath] does the Dirac delta function play a role. In that specific case only (when you are either integrating over [imath]\vec{x}_i[/imath] or [imath]\vec{x}_j[/imath]) the Dirac delta function yields zero anytime these two arguments are different and infinity when they are the same.

 

You must remember that you are not, at that moment, integrating over both of them; you are integrating over only one or the other. Note that the probability that [imath]\vec{x}_i=\vec{x}_j[/imath], where one or the other is some fixed value, is only one possibility out of an infinite number of possibilities (that one fixed value is the one you are not integrating over). Thus the answer is “infinity times zero”, ordinarily undefined; however, in this case, by definition of the Dirac delta function, it is simply the remainder of the function (i.e., don't include the Dirac delta function) evaluated when the argument being integrated over is equal to the one not being integrated over. Essentially, when summed over all possibilities (for the argument being integrated over) the answer is the probability that the argument being integrated over is exactly equal to the argument not being integrated over, no matter what in the universe that second argument might be.
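In one dimension, what is being invoked here is nothing more than the defining (“sifting”) property of the delta function:

[math]\int_{-\infty}^{\infty} f(x_i)\,\delta(x_i-x_j)\,dx_i = f(x_j)[/math]

with [imath]x_j[/imath] held fixed during that integration; that is all “integrating over the Dirac delta function” means.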

 

Now, the Dirac delta function has been integrated over and is no longer there! When you go to integrate over each and every argument yet to be integrated over (including the other argument in the Dirac delta function we just integrated over) we are back to a situation where the integral is over what amounts to the sum of all possibilities for the argument being integrated over. The net result is, “one” times the probability that [imath]\vec{x}_i[/imath] and [imath]\vec{x}_j[/imath] are the same “period”.
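Compressed to two variables (with all the other arguments already integrated out and the normalization presumed), the whole chain of steps amounts to

[math]\int\int P(x_i,x_j)\,\delta(x_i-x_j)\,dx_i\,dx_j = \int P(x_j,x_j)\,dx_j[/math]

which is exactly the total probability that the two arguments coincide.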

 

Remember, we have not integrated over [imath]\vec{x}_1[/imath] so the net result of the integration to this point is still a function of [imath]\vec{x}_1[/imath]; however, we are presuming the value of this function is essentially not really dependent upon the value of [imath]\vec{x}_1[/imath] of interest to us. Consider the probability that an element somewhere in a neighboring galaxy is in the same place as some other hypothetical element in that galaxy: what impact do you think that should have upon the experiment we are doing in our laboratory? My point being, only when hypothetical elements are in the same place as the element we are interested in [imath](\vec{x}_1)[/imath] do we expect the behavior of [imath]\vec{x}_1[/imath] to depend upon that occurrence. So the great majority of those terms essentially amount to a background constant.

 

The only terms of serious importance (in the deduction of Schrödinger's equation) are those arising from the Dirac delta functions [imath]\delta(\vec{x}_1-\vec{x}_i)[/imath] and [imath]\delta(\vec{x}_i-\vec{x}_1)[/imath], and they will be identical. (Please remember that “i” is just an index, a replacement for some number like the 8 you used earlier, and when I use one, it makes no difference what letter I use to refer to it. The common letters used by mathematicians are i, j, k, l, m, n, just like the common letters used for variables are x, y, z.)

Not entirely sure what this means; when does "j" come before "i", does that mean that when the index of the element "j" is smaller than the index of the element "i" in a specific integral...? If so, that implies that when an integration is performed there is some specific order to integrating the arguments that is somehow relevant?
No, the order of integration is of no importance. The issue here is that, when the integration is performed, the result depends upon whether or not the Dirac delta function is part of that integration. Although the Dirac delta function seems to have two arguments, when it comes to integration it essentially has only one (the one not being integrated over). All the Dirac delta function actually does is set the function being integrated over to the value it would have if the argument being integrated over were set equal to the one not being integrated over. Until that integration is performed, the Dirac delta function can be factored out; after it has been done, the Dirac delta function is gone.
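To see that behavior concretely, here is a quick numerical sanity check (my own illustration, no part of the deduction itself; the test function, the point, and the spike width are all arbitrary choices): the delta is modeled as a narrow normalized Gaussian, and integrating against it simply picks out the value of the function at the point where the two arguments are equal.

[code]
# Approximate the Dirac delta by a narrow normalized Gaussian and verify
# that integrating f(x) * delta(x - a) over x returns f(a).
import numpy as np

eps = 1e-3                           # width of the Gaussian "spike"
a = 0.7                              # the argument NOT being integrated over
x = np.linspace(-5.0, 5.0, 200001)   # the argument being integrated over

f = np.exp(-x**2)                    # any smooth test function will do
delta_approx = np.exp(-(x - a)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

print(np.trapz(f * delta_approx, x))  # ~0.61263...
print(np.exp(-a**2))                  # f(a) = 0.61263...
[/code]

Make the Gaussian narrower (with a correspondingly finer grid) and the agreement only improves.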
Not sure how we know that the rest of the integrations sum up to exactly "one" as it seems that all the "earlier" integration passes still yielded "0".
No, it only yields zero when its two arguments are different and infinity when they are the same. The actual value is indeterminate until an integral over one of those arguments is performed.
...at this point it dawns on me that I may well be so confused over this that my questions don't even make sense to you... I hope you can figure out where the problem is! :P
...at this point, I hope I have cleared up your confusion a bit. The real problem here is that you are not familiar with integral calculus and to go straight into implied results of an integration over an infinite number of variables is a bit much to expect you to understand, but, from your comments, I get the impression that the whole thing is beginning to make sense to you. If you still have questions, I would be happy to try again. I would say that you are getting integral calculus from a rather strange perspective but I don't think it will hurt in the end.

 

Looking forward to hearing from you again!

 

Have fun --Dick

Posted
The issue of what actually exists is central to the subject of ontology. It is clear from the work of many thinkers over the centuries that actually answering such a question is impossible (i.e., solipsism, the idea that nothing actually exists, cannot be disproved). I am saying, maybe some things we think exist actually do exist and some things we think exist don't; the fact that we cannot answer such a question can not be taken as proof it cannot be so. Thus I used the word “valid” for the sole purpose of identifying which case I am talking about because the two parts (though one cannot ever actually identify which is which) must obey subtly different rules. Each and every truly flaw-free explanation must explain everything which actually exists while, on the other hand, it is possible that some ontological elements presumed to exist (by some specific explanation) may only be hypothesized and yet there can still exist a flaw-free explanation which requires these elements: the central point being that the fact that an explanation is “flaw-free” is no guarantee that every element presumed in that explanation actually exists. This is an issue often forgotten by many serious scientists.

 

When I read

I presume you are referring to exactly that issue and I want to be sure that presumption is not in error.

 

Yes, I believe we are on the same page on this.

 

Here you are absolutely correct and the issue you bring up is the single most far reaching misrepresentation of reality which is out there blocking perception of what I am talking about. I would put this difficulty squarely in the same category as the presumption that “God exists” used in most medieval arguments prior to modern science; it so blocks their thinking that they are blind to the issues you and I are talking about. People who think along such lines simply can not be reached by any intellectual argument and it is really a waste of time to try.

 

I say that though even I am occasionally driven to try myself. :lol: :lol:

 

Well yeah, I wouldn't like to conclude it's a complete waste of time. Granted, when that issue about "persistent identity" has come up, I've usually had some people respond with "that's obviously nonsense" in the blink of an eye. But then some people seem to understand the issue more readily. It can't be an intelligence issue; I think it's just a communication difficulty. I think there must be a way to explain this also to those who have never seen any problems with naive realism.

 

Hmmm, but I can see more and more clearly the problems you said you've had, with mathematicians, physicists and philosophers all thinking this is not really their department... At least philosophers should be interested, but I guess they are easily scared by all the math, as it looks deceptively like someone is yet again trying to prove a specific ontology with a lot of math, so they decide it is coming from someone who doesn't even understand the problem of ontology.

 

The “What is spacetime” thread is a perfect example of exactly what I was just talking about. The people posting there have absolutely no comprehension of the inadequacy of their beliefs.

You are absolutely correct, no one is paying the slightest attention to anyone else. It reminds me of a room full of parrots. As I have said many times, thinking is not an easy process and most people would rather avoid it if at all possible. Even Modest, who is clearly a rather rational person, would rather spout off than actually try to understand anything new. I had hoped he was reading some of my stuff but, in his latest post to me (a rather long and involved missive), he made it quite clear that actually thinking about what I said was too much trouble.

 

I saw it, and thought it didn't display a very good understanding of what you were trying to say. Well, at least he acknowledged at the beginning of the post that he did not have time to properly think about what was being said.

 

This is an excellent presentation of the situation. I am curious, have you read my thread, “An 'analytical-metaphysical' take on Special Relativity!”? And, if you have, did you find it clear?

 

I've skimmed it through, and I would like to walk through the math after the Schrödinger's Equation bit (I will need help). Nevertheless, from what I understand (and have understood from your earlier commentary on that same issue), it seems entirely reasonable to me. When I first learned what Lorentz' transformation is (just by digging up information from the net), and realized exactly how it is a consequence of the assumption of isotropic speed of light, it was pretty evident that all the essential relationships of relativity could be expressed with a framework that implies absolute simultaneity, or any kind of simultaneity one wishes for. I just had no idea how those frameworks would be laid down mathematically.

 

I actually made a brief comment about your thread, on my reply to Freeztar here:

http://hypography.com/forums/philosophy-of-science/17037-what-is-spacetime-really-44.html#post259609

 

Also I can of course see your commentary is an epistemological explanation of relativistic time behaviour, and not a suggestion of aether ontology.

 

A lot of the objections that I've seen don't seem to be very thoughtful to me, and oftentimes just plain odd. I guess the problem there also is that people just don't or can't give it the time to understand exactly how the perspective differs from whatever idea they have in their head about relativistic time relationships. Like that post of Modest's: while I thought it reflected some desire to really understand what you are saying, it did also look like a first-reaction commentary...

 

Well I assume people from the "time" threads are reading this thread too. I mean, I HOPE they are.

That would be nice but I am afraid I doubt it quite seriously.

 

Well you are probably right... That's a bit unfortunate. I wonder if Pyrotex has the chops to easily follow the math/logic itself... He's got a physics background and he seems somewhat properly aligned philosophically to understand the discussion...

 

So, now let's get down to the issues central to your confusion!

 

The real problem seems to be your lack of mathematics. Because of your lack of practice in the field, your mind almost invariably jumps to misinterpretations of the symbolic representations. First of all, your comment,

Whereas each evaluation where the value of [imath]\vec{x}_i[/imath] equals to the value of [imath]\vec{x}_1[/imath], gets multiplied by 1.

is erroneous.

 

Ah, because the result of the Dirac delta function is actually infinity there? It is really just the result of the integration over all the possibilities that allows one to say what I said in the next sentence: "from each integration we get the probability that the value of [imath]\vec{x}_i[/imath] is the same as the value of [imath]\vec{x}_1[/imath]."

 

?

 

The fact that you include the line, “in the original input arguments”, leads me to think that you do not understand the integration operation itself. Integration sums the function (in this case, [imath]\vec{\Psi}_r^\dagger \cdot \delta(\vec{x}_i-\vec{x}_j)\vec{\Psi}dV_r)[/imath] over all possible values of the arguments being integrated over (in our case of interest, all arguments except [imath]\vec{x}_1)[/imath].

 

When you perform that sum over [imath]\vec{x}_k[/imath] (where k is neither i nor j) the Dirac delta function (which only depends upon [imath]\vec{x}_i[/imath] and [imath]\vec{x}_j[/imath]) can be factored out.

 

You mean, when I perform an integration pass over [imath]\vec{x}_k[/imath] (i.e. some argument not in the Dirac delta function)...? So the Dirac delta function simply doesn't play a role at all during the other integration passes... When I was writing my previous post, that possibility crossed my mind and I actually wrote it down too... but for some reason that I can't remember, I decided it couldn't be the correct interpretation and scratched it before posting :D

 

(Just a tiny typo there btw, the [imath]\vec{\Psi}[/imath] is missing _r)

 

In essence, all you are integrating over is [imath]\vec{\Psi}_r^\dagger \cdot \vec{\Psi}dV_r (not\; x_i, \tau_i, x_j,\; nor\; \tau_j)[/imath] which amounts to the sum of all possibilities times the probability of each of those possibilities which (under the definition of probabilities) is one. Only when you go to integrate over either [imath]\vec{x}_i[/imath] or [imath]\vec{x}_j[/imath] does the Dirac delta function play a role. In that specific case only (when you are either integrating over [imath]\vec{x}_i[/imath] or [imath]\vec{x}_j[/imath]) the Dirac delta function yields zero anytime these two arguments are different and infinity when they are the same.

 

You must remember that you are not, at that moment, integrating over both of them; you are integrating over only one or the other. Note that the probability that [imath]\vec{x}_i=\vec{x}_j[/imath], where one or the other is some fixed value, is only one possibility out of an infinite number of possibilities (that one fixed value is the one you are not integrating over). Thus the answer is “infinity times zero”, ordinarily undefined; however, in this case, by definition of the Dirac delta function, it is simply the remainder of the function (i.e., don't include the Dirac delta function) evaluated when the argument being integrated over is equal to the one not being integrated over.

 

Right, I think I understand that now.

 

Essentially, when summed over all possibilities (for the argument being integrated over) the answer is the probability that the argument being integrated over is exactly equal to the argument not being integrated over, no matter what in the universe that second argument might be.

 

Right, that's, at a conceptual level, exactly what I was expecting to find, but I didn't know how to interpret the math that way exactly. But, I think I am starting to understand how it works mathematically as well. E.g. after having performed the integration pass over [imath]\vec{x}_i[/imath], we'd be looking at a function that still includes [imath]\vec{x}_j[/imath], and hence its value is still affecting the "result" we get from the integration of [imath]\vec{x}_i[/imath]. Hmm, that's probably a bad way to put it, but essentially that allows us to say "the answer is the probability that the argument being integrated over is exactly equal to the argument not being integrated over, no matter what in the universe that second argument might be".

 

Now, the Dirac delta function has been integrated over and is no longer there! When you go to integrate over each and every argument yet to be integrated over (including the other argument in the Dirac delta function we just integrated over) we are back to a situation where the integral is over what amounts to the sum of all possibilities for the argument being integrated over. The net result is, “one” times the probability that [imath]\vec{x}_i[/imath] and [imath]\vec{x}_j[/imath] are the same “period”.

 

Right, conceptually exactly what I was expecting to find.

 

Remember, we have not integrated over [imath]\vec{x}_1[/imath] so the net result of the integration to this point is still a function of [imath]\vec{x}_1[/imath]; however, we are presuming the value of this function is essentially not really dependent upon the value of [imath]\vec{x}_1[/imath] of interest to us. Consider the probability that an element somewhere in a neighboring galaxy is in the same place as some other hypothetical element in that galaxy: what impact do you think that should have upon the experiment we are doing in our laboratory? My point being, only when hypothetical elements are in the same place as the element we are interested in [imath](\vec{x}_1)[/imath] do we expect the behavior of [imath]\vec{x}_1[/imath] to depend upon that occurrence. So the great majority of those terms essentially amount to a background constant.

 

Right. The only part of that that I don't understand is, how and why those terms amount to a constant rather than "0".

 

The only terms of serious importance (in the deduction of Schrödinger's equation) are those arising from the Dirac delta functions [imath]\delta(\vec{x}_1-\vec{x}_i)[/imath] and [imath]\delta(\vec{x}_i-\vec{x}_1)[/imath], and they will be identical.

 

Right.

 

...at this point, I hope I have cleared up your confusion a bit. The real problem here is that you are not familiar with integral calculus and to go straight into implied results of an integration over an infinite number of variables is a bit much to expect you to understand, but, from your comments, I get the impression that the whole thing is beginning to make sense to you.

 

Yeah, I also get the impression that this is starting to make sense to me :D

But "phew", there are so many potential pitfalls here! Like I said before, I so feel like I'm walking on a mine field. And most steps I decide to try out are straight towards a mine!

 

Anyway, at this point, I went back to where this conundrum with g(x) started; I reviewed my post #85 to make sure I still remembered how we got there. I did, so, where we stand with the OP is exactly here:

 

 

...we can write the differential equation to be solved as

[math] \nabla^2\vec{\Phi}(\vec{x},t) + G(\vec{x})\vec{\Phi}(\vec{x},t)= 2K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}(\vec{x},t).[/math]

 

At this point we must turn to analysis of the impact of our [imath]\tau[/imath] axis, a pure creation of our own imagination and not a characteristic of the actual data defining the collection of referenced elements we need to explain. Since we are interested in the implied probability distribution of x, we must (in the final analysis) integrate over the probability distribution of tau.

 

I.e. find out what the probability distribution of x (of the element of interest) is, when tau is allowed to be anything at all?

 

Since tau is a complete fabrication of our imagination, the final [imath]P(x,\tau,t)[/imath] certainly cannot depend upon tau. It follows directly from this observation that the dependence of [imath]\vec{\Phi}[/imath] on tau must (at worst) be of the form [imath]e^{iq\tau}[/imath]. It follows directly from this observation that the differential equation can be written.

[math] \left\{\frac{\partial^2}{\partial x^2} - q^2 + G(x)\right\}\vec{\Phi}(x,t)= 2K^2\frac{\partial^2}{\partial t^2}\vec{\Phi}(x,t).[/math]
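To spell out where that [imath]-q^2[/imath] comes from (writing the tau-independent remainder as a placeholder [imath]\vec{\phi}(x,t)[/imath], my notation): if [imath]\vec{\Phi}=\vec{\phi}(x,t)e^{iq\tau}[/imath], then

[math]\frac{\partial^2}{\partial \tau^2}\vec{\phi}(x,t)e^{iq\tau} = (iq)^2\,\vec{\phi}(x,t)e^{iq\tau} = -q^2\,\vec{\Phi}[/math]

so the second tau derivative contributes exactly the [imath]-q^2[/imath] term inside the brackets.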

 

So I suppose this issue is similar to how the time derivative was removed earlier. I reviewed posts #54 - #60 where Bombadil helped me with it.

 

But, I was not able to apply that information here, apparently.

 

Focusing on:

 

[math]

\left\{ \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial \tau^2} \right\} \vec{\Phi}(x,t)

[/math]

 

Suppose I can write that down as:

 

[math]

\frac{\partial^2}{\partial x^2} \vec{\Phi}(x,t) + \frac{\partial^2}{\partial \tau^2} e^{iq\tau}

[/math]

 

So focusing on the latter term:

 

[math]

\frac{\partial^2}{\partial \tau^2} e^{iq\tau} = \left\{ iqe^{iq\tau} \right\}^2

[/math]

 

Actually don't really know how all those squares work in the algebraic manipulations... A lot of uneducated guesses in the above, and I really don't know how to proceed from that point on... :I (And I have a distinct feeling that I already went very wrong somewhere :D)

 

-Anssi

Posted

Doc, do you mind if I demonstrate why watching TV means you have to be tangent to bundle map B over V?

with colors, of course; [math] \psi_{tot}\, =\, \psi_r,\, \psi_g,\, \psi_b [/math]

 

you have to be in the neighborhood of a surface S, with color in it. These colors are because of stripes (in CRT ones) or a matrix of pixels.

 

There is a Euclidean plane aligned transverse to your line of sight; there is a vector space V.

 

this is all you need, the 'machine' is polarizing map M w/azimuthal angle generator z, that polarizes the vectors (the ones you see are projected normal to your xy plane)

 

stripes over S are 3-colored; sections across the stripes divide them equally and this is the submanifold or color map B(s);

z extends V a vector map or tangent bundle over E;

a brightness function b takes z to B

 

This B is a subgroup of the algebra of states (sigma-finite);

Colors over C are the complex phase-space,

 

The phase-space is then, for each color c (as above)

 

[math] \psi_{abc}\, =\, \alpha |0 \rangle\, +\, \beta |1 \rangle [/math]

 

so we can derive the appropriate Hamiltonian for z and b mixing.

 

if T then if V then TV, the tangent bundle's phase-space.

We need a map that takes abc to rgb (M) in the space; this is the Hamiltonian H(b,s) in a time-dependent form (or we could forget about colors and use electrons instead)

it's all in there; you simply need to align this with a time-dependent transfer function, which is time-independent when you switch off the TV.

 

Here we can claim that the function "taking color to" is "taking it from" M or B, so that S = (M|B), for the surface; this is the 'subspace in the machine' - we are seeing a map when we watch TV (or, are we really watching electrons?)
