Science Forums


Posted
If the purpose of an explanation is to produce internally consistent predictions (think expectations) and what is being predicted can be described with numbers (think computer files) then it is quite obvious that each and every possible explanation can be identified with a mathematical function. That is exactly the definition of my [math]\vec{\Psi}[/math]: i.e., the mapping of the circumstances into the expectations.
Hello Doctordick. I continue to think about your use of the word 'explanation', and this if-then statement is very important, for, if the purpose of "an explanation" (that is, any single possible explanation) is not to produce internally consistent predictions, then your [math]\vec{\Psi}[/math] by definition would not apply. Of course, many types of explanations have nothing at all to do with producing predictions. Thus, 'John, explain what happened', said the mother to the son when he came into the kitchen with a broken nose. The words that John would use to explain would not have as a purpose to produce predictions or expectations about the circumstance, but to make clear the events that resulted in the broken nose (I walked into the garage door :( ).
Posted

The words that John would use to explain would not have as a purpose to produce predictions or expectations about the circumstance, but to make clear the events that resulted in the broken nose (I walked into the garage door :( ).

Rade, just think this out a little. To "make clear the events that resulted in the broken nose" is exactly to describe the sequence of events which makes "a broken nose" a reasonable expectation! Clearly, if the sequence of events described did not imply to John's mother that "a broken nose" was a reasonable expectation, she would never accept his reasoning as a rational explanation. Would you?

 

I am afraid you simply do not think out the consequences of your assertions but rather expect your emotional reactions to yield a valid estimation of the truth of your propositions.

 

Have fun -- Dick

Posted

I do not find any "most basic building blocks" (concepts) in understanding. Space is not a fundamental concept; one cannot have a concept of space without having a concept of boundary and a concept of something contained. Time is not fundamental; one cannot have a concept of time without entities with potential for motion. Matter is not fundamental; one cannot have a concept of matter without a concept of energy and motion. All concepts (as building blocks) are on an equal metaphysical footing; none individually is fundamental. Sure, when you build a wall from bricks, some first row of bricks must be placed. But this does not make the bricks of this first row "basic" building blocks; they have no greater importance than any of the bricks placed on top of them.

 

Yes exactly, and I did not say anything to the contrary.

 

Remember, we are basically talking about the very fundamentals of epistemology, and later on we will be arriving at the most fundamental concepts of our current world view. Those fundamental concepts would be the ideas that allow us to understand and perform any sort of human activity, including "reading an e-mail" and forming an idea of what it says etc.

 

I was just commenting that we won't be attempting to analyze the epistemological issues behind an activity we would commonly call "reading an e-mail". We will be arriving at some far more simple concepts, and then I just gave examples of simpler definitions. Certainly my examples are associated with other definitions which all need to be understood in some self-coherent manner.

 

Let's try not to get all tangled up in individual words here. Natural language is just sloppy.

 

Yes, of course, some words must be used. What I was saying is that I do not see that you need to use the words 'defined' and 'undefined'. So, the presentation begins with "information". Because it is a concept, it already has a definition--information is what information is. Next the information needs to be explained. The word explanation comes with a definition, the one provided by DD--a procedure to yield rational expectations for hypothetical circumstances. So, yes, use some words, just do not use the words undefined and defined, all they do is add confusion to the presentation, imo.

 

Yes, they add unwanted implications, but so does every other choice of words. So, all I really care about right now is whether you understand what the actual issue is that is being communicated by using the concept of "undefined information/data/whatever".

 

My understanding from your posts is that you understand what I'm getting at.

 

OK, for you, the noumena of Kant represent "facts of reality", the A is a representation of some fact, some phenomenon. However, Kant does not claim that you cannot "understand" noumena. He does not claim that you must have the translation of [noumena ---> A]. Understanding of noumena occurs without any translation via intuition. Thus, if noumena represent facts of reality, then you can have understanding of them via intuition, which means without the aid of any of the senses. But, I am not sure this is what you mean, so, I would suggest you be very careful bringing forth the Kant concept of noumena to help explain what you mean by |undefined information|. The reason is that Kant would say you most surely can understand |undefined information| as an intuition without the need for any prior transformation into any phenomenon that you can sense, such as the letter A.

 

Ahem... Far too many philosophers have already contributed far too much intellectual masturbation towards the issue of "what Kant may or may not have meant with what he said". Maybe instead focus on the actual topic he is trying to discuss, and try not to get all entangled in what his opinion may or may not have been on some surrounding details.

 

So, what I was referring to is what seems to be the most common interpretation of what he said; that he is referring to the nature-in-itself by calling it "noumena", and the human conception of that as "phenomena".

 

...the term [noumena] is better known from Kantian and post-Kantian philosophy, where noumena are regarded as unknowable to humans. The term is generally used in contrast with, or in relation to "phenomenon", which in philosophy refers to anything that appears, or objects of the senses.

 

Note that his definition of the word "phenomenon" is almost the opposite of how you are using it. He uses it to refer to how something appears to us via our senses, i.e. in terms of our human conceptualization, i.e. AFTER the "unknown transformation". Thought I would mention it because I know you can be careless with these things :)

 

Now, regardless of whether you agree that he meant to say exactly that, the problem he is referring to is the same as what DD attempts to communicate by saying "undefined information". If you have no definitions, you have no conceptual understanding whatsoever either. Even though we can't mentally handle anything "undefined" directly, there must still be something behind our concepts. Or, as Kant puts it;

 

"...though we cannot know these objects as things in themselves, we must yet be in a position at least to think them as things in themselves; otherwise we should be landed in the absurd conclusion that there can be appearance without anything that appears."

 

I am fully aware that I am cherry picking things to my liking here, and like I said, swarms of philosophers have made careers out of arguing the petty details around the issue. Let's try to look past that, and instead of arguing about what exactly went through Kant's mind, try to focus on the topic of "reality in itself" vs. "human conceptualization of reality".

 

With that in mind, I simply said that my perception of "letter A" is just an interpretation of reality, and the information standing behind that perception is ultimately some amount of undefined information, which I just so happen to define as "A", and thus that is my interpretation of that information, whatever it "really" is.

 

Whereas you referred to the "A" as "fact of reality". That was not what I meant by "facts of reality", I was referring essentially to "whatever stands behind our definitions".

 

OK, so you agree (with the 'of course') that the presentation of DD cannot be limited to definitions of entities as things, it also must include definitions of activities as things, for these are two completely different types of things (entities and activities). This was all I was trying to clarify.

 

I just said that a definition of activity "X" would be just a decision that some specific behaviour of some defined entities is to be called activity "X". A good example of such defined activity is "time measurement", from the derivation of special relativity.

 

This leads me to a question. Did we not say that the unknown transformation of |undefined information| is an activity? If so, that would make this activity a thing that could be understood via some specific behavior of a defined entity (which is what you said above).

 

The definitions and ideas we commonly call "neuroscience" is one example of that. Let's not go there right now... Let's first learn how to crawl :)

 

Only that no presentation can be true by definition (such as this one by DD), for this would assume that definitions have a fundamental and universal meaning that all agree with, but they do not. Again, this was the point Wittgenstein was making with his example of the word 'game'. And, unless I read you wrong, you did agree that there is no universally accepted definition for any word (concept) such as 'game' or 'explanation' or 'information' or 'expectation' or 'rational' or 'circumstance' or 'element' etc., etc. (all the concepts being used in the presentation).

 

Of course, just like if one does not agree with the definitions of the English language words that I am using right now, this post is also just gibberish.

 

Let me give you a really simple analogy. Let's say I give the following presentation;

 

This is a quadratic equation;

[math]

ax^2 + bx + c = 0

[/math]

 

Here's a derivation...

 

...of a solution;

 

[math]

x = \frac{-b \pm \sqrt{b^2 - 4ac} }{2a}

[/math]

 

Certainly that presentation is valid only if the reader actually agrees with the intended definitions of the symbols I am using. Otherwise, it does not make sense.

 

So people have the ability to misinterpret my message. Does that mean the presentation may be "invalid"? I.e. does the ability to misinterpret my representation mean that the relationship that I am trying to communicate, might not exist?

 

Now look;

 

All I am saying is that we keep in mind that we cannot assume that the presentation of DD is logically valid, that the definitions as used are the proper definitions to use for the concepts presented. They may be valid, they may not be valid, there may be types of explanations (those that use a different definition of the term) where the fundamental equation does not apply.

 

The definitions that DD is using are there just for the purpose of being able to handle the logical considerations that are performed in the derivations. What he is trying to communicate is that such and such epistemological requirements yield such and such unobvious relationships (which just so happen to tell us something about the things that physicists currently see as mysterious aspects of nature).

 

What you are saying there is simply that, if one refuses to use his definition, they will not understand what relationships he is talking about... Well indeed :shrug:

 

I believe the real reason you are bringing that up is that you are not really up to speed with why his definition of "explanation" actually just analytically and universally covers all that is required from a functional world view. Just ask yourself, do we know something about the information-to-be-explained, before we manage to explain it? And do we have anything else to judge the validity of our explanation, other than its prediction-wise validity?

 

I'm afraid there are very many ways to twist the semantics here and end up in exactly the kind of intellectual masturbation that Kant's writings have been subjected to. I guess the only thing I can say is that this just requires quite a bit of mental effort from the reader.

 

I believe that it is the constraint of induction logic that causes other physicists to not agree with the approach of DD. I do not want to put words into the mouth of Qfwfq, but I would think that this is one aspect of the presentation of DD that he does not agree with, that it uses inductive logic to derive the fundamental equation.

 

The derivation of the fundamental equation uses the fact that any explanation of undefined information, which does not contain undefendable assumptions, is operating under inductive logic. The fundamental equation itself is just a representation of the requirements yielded by that fact.

 

-Anssi

Posted

Rade, I have just read Anssi’s response to your complaints about my use of the terms “defined and undefined” information. I agree with everything Anssi has said; however, it seems to me that you are totally missing the absolute necessity of these two categories. My presentation concerns “exactly what follows from the definition of an explanation and nothing else”. I am sure you must comprehend that one cannot even begin to discuss that issue without having a clear and exact definition of “an explanation”. I have given you mine, which you apparently refuse to accept in spite of the fact that you cannot produce an explanation which violates that definition. Until you come to your senses, nothing can be done about that issue.

 

The requirement of the categories, defined and undefined information, goes directly to that "and nothing else" constraint on the analysis. Please consider the following three situations.

 

Case #1:

 

We have an ancient Egyptian Alchemist who is explaining to his apprentice some interesting process he can accomplish.

 

Case #2

 

We have a Chemistry professor at a major university explaining a process he is demonstrating to his graduate student in 2011.

 

Case #3

 

We have a brilliant scholar who has just been recognized for the greatest breakthrough in science achieved in the last ten thousand years. He is explaining to one of his compatriots how a certain process can be seen through his discovery.

 

Now, suppose all three just happen to be explaining exactly the same process. Note that each of them can have totally different concepts regarding exactly what the relevant fundamental concepts necessary to understand his explanation might be.

 

Since the process being explained is the same in all three cases, the actual events or circumstances being explained are the same. Now it should be clear to you that all three of them will describe their process in terms which are apt to be quite different: i.e., they each have their own collection of “fundamental elements” in mind. Nevertheless, since they are all three explaining the same process, there must be a set of fundamental elements underlying all three explanations (plus some hypothetical elements demanded by their personal explanation).

 

If one tries to create a representation capable of representing “all possible explanations”, one must first have a notation which can represent all the elements required by the specific explanation being represented (that would be the numeric labels “i” which are defined by that specific explanation, some of which are actual and some of which are hypothetical). But we must also be able to associate those elements from one explanation to another (for all explanations of whatever is being explained). That is what the “x” index is all about. The x numerical label refers to the underlying element the explanation is explaining. So how do you propose we define that specific x element? What kinds of concepts do you think that brilliant scholar of the far-flung future uses to present his explanation?

 

The alchemist certainly wouldn’t be using the same collection of elements in his explanation as does the modern chemist. This problem is a fundamental aspect of representing “all possible explanations”. It follows that actually defining the underlying elements is impossible: ergo, the requirement of a category I call “undefined information”.

 

From that perspective you should also comprehend Kant’s noumena (what actually lies behind our explanations) as an essentially undefined thing.

 

I hope that makes a little sense to you.

 

Have fun -- Dick

Posted

I just have two simple questions about mathematics so I decided to just ask via PM;

 

[math]

\frac{d}{da}P(z_1,z_2,\cdots ,z_n,t)=0

[/math].

 

must be an absolute requirement of all explanations put forward in the functional representation [math]\vec{\Psi}[/math].

 

Part 3: A major problem embedded in this conclusion

 

 

The deduced derivative can be multiplied by [math]da[/math] and integrated over [math]a[/math] yielding the result that P=k (where k is a constant).

I suspect I am missing something very simple!

I suspect what you are missing is the switch from viewing the arguments [math]x_i[/math] as variables to viewing them as constants; see the definition of partial derivatives. In standard differential notation, the expression [math]\frac{d}{dq}[/math], as opposed to [math]\frac{\partial}{\partial q}[/math], is commonly called the total derivative.

 

At any rate, the real issue here is that, if [math]z_i=x_i+a[/math], [math]da[/math] amounts to nothing more than a change in the position being used for the origin of the reference frame. Since the actual position of the origin used can have no impact on the calculated probability of the distribution (that is the definition of shift symmetry) you must obtain the same result no matter what [math]da[/math] is: i.e., the derivative with respect to [math]a[/math] must be zero.
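
For what it is worth, here is a minimal numeric sketch (Python, standard library only; the particular shift-symmetric P below is purely a hypothetical stand-in, not anything from my presentation) of why that derivative must vanish:

[code]
import math

def P(z1, z2):
    # Hypothetical shift-symmetric probability: it depends only on the
    # difference z1 - z2, so adding the same "a" to every argument
    # cannot change its value.
    return math.exp(-(z1 - z2) ** 2)

x1, x2 = 1.3, -0.7
print([P(x1 + a, x2 + a) for a in (0.0, 0.5, 10.0)])   # identical for every shift a

# Numerical derivative with respect to a (central difference): essentially zero.
h, a = 1e-6, 2.0
print((P(x1 + a + h, x2 + a + h) - P(x1 + a - h, x2 + a - h)) / (2 * h))
[/code]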

 

Excuse a stupid question, but when you multiply by [math]da[/math], do you simply get [math]P(z_1,z_2,\cdots ,z_n,t)=0[/math]?

The issue is that [math]\frac{d}{da}P\equiv\frac{dP}{da}=0[/math]. When you multiply that equation by [math]da[/math] you get [math]dP = 0da[/math] and zero times [math]da[/math] is zero.

 

I'm not sure by what route the integration over [imath]a[/imath] yields P=k...

The final integration ends up being over dP and, since [math]dP=0[/math], it must be identical to the integration over zero. The integral [math]\int_{-\infty}^{\infty}0da[/math] being integration over zero, must be a constant. (Integration is the inverse of differentiation and the derivative of a constant is zero.) By the way, the indefinite integral of "unity" with respect to P (all possibilities) is [math]\int dP=P[/math].

 

 

If we take the trouble to analyze this scalar product from the perspective that

 

[math]

\vec{\Psi}=\sum_{k=1}^{dim}\psi_k\hat{q}_k

[/math]

 

we have:

 

[math]

P=\sum^{dim}_{k=1}\psi^2_k=0.

[/math]

...but here I get lost. I don't understand the notation.

 

Sorry about that. Originally we wrote [math]P=\vec{\Psi}\cdot\vec{\Psi}[/math] where [math]\vec{\Psi}[/math] was a vector in some abstract vector space (the dimensionality of which was never specified). All I am doing here is writing out that “dot” product in detail: i.e., if each dimension in that abstract space were identified as "[math]d_i[/math]" why not just call it dimension “i”?

[math]

\vec{\Psi}= \psi_{d_1} \hat{d}_1 + \psi_{d_2} \hat{d}_2+\cdots \equiv \psi_1 \hat{1} +\psi_2 \hat{2}+\cdots

[/math]

 

where that “[math]\cdots[/math]” just includes all the terms out to whatever the dimensionality of that abstract space happens to be. It then follows directly that the required dot product can be written

[math]

\vec{\Psi}\cdot\vec{\Psi}=\psi_1^2 + \psi_2^2 +\psi_3^2 +\cdots

[/math]

 

Thus the indicated dot product defining P can be written [math]P=\sum_{d=1}^{dim}\psi^2_d[/math] where the sum is from d=1 to whatever the dimensionality is (which I have written “dim”).
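
Just to make the notation concrete, here is a trivial numeric illustration (Python; the component values are arbitrary placeholders):

[code]
# Arbitrary real components of the abstract vector Psi.
psi = [0.3, -0.5, 0.2, 0.1]

# P = Psi . Psi = the sum of the squared components.
P = sum(component ** 2 for component in psi)
print(P)   # 0.39 (up to rounding)
[/code]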

 

If you think your response might be beneficial to other readers too, feel free to respond to the thread.

 

Thanks Anssi. If anyone finds this response unclear let me know.

 

Have fun -- Dick

Posted

At any rate, the real issue here is that, if [math]z_i=x_i+a[/math], [math]da[/math] amounts to nothing more than a change in the position being used for the origin of the reference frame. Since the actual position of the origin used can have no impact on the calculated probability of the distribution (that is the definition of shift symmetry) you must obtain the same result no matter what [math]da[/math] is: i.e., the derivative with respect to [math]da[/math] must be zero.

 

I guess you meant to say, the derivative with respect to [imath]a[/imath]. In which case, yup.

 

The issue is that [math]\frac{d}{da}P\equiv\frac{dP}{da}=0[/math]. When you multiply that equation by [math]da[/math] you get [math]dP = 0da[/math] and zero times [math]da[/math] is zero.

 

Ahha, yeah, that's what was tripping me up; ending up with [imath]dP[/imath] seemed nonsensical to me and I was just trying to find a way out of that... :P

 

The final integration ends up being over dP and, since [math]dP=0[/math], it must be identical to the integration over zero. The integral [math]\int_{-\infty}^{\infty}0da[/math] being integration over zero, must be a constant. (Integration is the inverse of differentiation and the derivative of a constant is zero.)

 

Ahha, right...

 

By the way, the indefinite integral of dP over all possibilities is [math]\int dP=P[/math].

 

Hmm, I don't really understand what that means :I

 

Originally we wrote [math]P=\vec{\Psi}\cdot\vec{\Psi}[/math] where [math]\vec{\Psi}[/math] was a vector in some abstract vector space (the dimensionality of which was never specified). All I am doing here is writing out that “dot” product in detail

...

Thus the indicated dot product defining P can be written [math]P=\sum_{d=1}^{dim}\psi^2_d[/math] where the sum is from d=1 to whatever the dimensionality is (which I have written “dim”).

 

Right okay, that's clear now.

 

Reading onwards, the next spot that tripped me up was;

 

Mapping a collection of real numbers into half as many complex numbers

 

Consider a two dimensional abstract vector space. Multiplication can be defined which is totally analogous to what is ordinarily called multiplication of “complex numbers”. One axis can be taken to represent “real” numbers and the other can represent “imaginary” numbers. Any point in that two dimensional space can be seen as representing a complex number. If any two complex numbers are represented in polar coordinates, [math](r_1,\theta_1)[/math] and [math](r_2,\theta_2)[/math], their product can be represented by [math](r_1r_2,\theta_1+\theta_2)[/math]. The two representations (multiplication of two complex numbers and the above operation) uniformly give identical results. Thus it is that we can see [math]\psi_k[/math] as producing complex numbers in the form c+id (essentially collecting them by pairs). We can then define the dagger notation to change every c+id into c-id. In such a case, the product [math]\psi^\dagger_k \psi_k=(c-id)(c+id)=c^2+d^2[/math] which is once more positive definite.

 

I couldn't find that same definition for multiplication of two vectors in polar coordinates (i.e. [math](r_1r_2,\theta_1+\theta_2)[/math]), so I just have to take it on faith, unless you can explain it a bit more.

 

I'm not sure what you are referring to with the link to Wikipedia's trigonometry page, there's a lot of information there.

 

Also I do not know where that quote is from, so I do not know what you mean by "the above operation". If it's not really supposed to be a quote but just a highlighted paragraph, then I guess you are referring to [math]\vec{\Psi}^\dagger \cdot \vec{\Psi}[/math]?

 

Little help...?

 

-Anssi

Posted

I guess you meant to say, the derivative with respect to [imath]a[/imath]. In which case, yup.

I apologize for my sloppiness once again. I have fixed both that entry and the later entry.

 

By the way, the indefinite integral of dP over all possibilities is [math]\int dP = P[/math].

Which I changed to “of ‘unity’ over P is”. The real issue here is a little sloppiness in the common terminology used by a lot of people. The derivative operator is generally written as [math]\frac{d}{dx}[\cdot \cdot][/math] and the “anti-derivative” (the indefinite integral) operator is generally written [math]\int [\cdot \cdot]dx[/math] where I am using [math][\cdot \cdot][/math] as a stand in for the function being operated on. As you say, differentiation and integration are with respect to (in this case) x. I have heard people often refer to this as differentiation and integration with respect to dx. It is of course wrong but is seldom misinterpreted and people seldom comment about it when that error is made. I apologize anyway as sloppiness is sloppiness. Thank you for catching it.

 

But I might be misinterpreting your question. The indefinite integral is defined to be the function which, when differentiated with respect to the indicated differential element (whatever follows the “d” in the operator specification), yields the function being integrated over. That would be the thing I represented by [math][\cdot \cdot][/math] in the above paragraph. Since [math]\frac{dP}{dP}=1[/math], it follows by definition that [math]\int dP=P[/math]. What is going on here is quite simple but the details of the description really aren't. In some ways, the description is actually more confusing than what is being represented. That is the real source of the tendency towards sloppiness here.

 

I couldn't find that same definition for multiplication of two vectors in polar coordinates (i.e. [math](r_1r_2,\theta_1+\theta_2)[/math]), so I just have to take it on faith, unless you can explain it a bit more.

 

I'm not sure what you are referring to with the link to Wikipedia's trigonometry page, there's a lot of information there.

 

Also I do not know where that quote is from, so I do not know what you mean by "the above operation". If it's not really supposed to be a quote but just a highlighted paragraph, then I guess you are referring to [math]\vec{\Psi}^\dagger \cdot \vec{\Psi}[/math]?

 

Little help...?

Once again, I was somewhat sloppy. My link to Wikipedia was the wrong link. If you look down through the trigonometry page referred to, you will find a paragraph headed, “Extending the definitions” which brings up the expression [math]e^{x+iy}=e^x(\cos y+i\sin y)[/math]. A little algebra and that relation can produce the relationship I gave you; however, it is much easier to go to the “Euler” link immediately below that expression and go down to the paragraph headed, “Applications in complex number theory”. There you will find some excellent drawings (which should clarify the relationships demonstrated there) together with the expressions

[math]

z=x+iy=|z|(\cos \phi+i\sin \phi)=re^{i \phi}

[/math]

 

and

 

[math]

\bar{z}=x-iy=|z|(\cos \phi-i\sin \phi)=re^{-i \phi}

[/math]

 

The bar above the z indicates what is commonly called the “complex” conjugate; exactly the same as what I indicate with the exponential [math]\dagger[/math] in my presentation. At any rate, if you look at the final term you should realize the expression I give is exactly the same as multiplying standard complex numbers. The radial part multiplies just like standard numbers and, when you multiply the angular components, since they are the angles expressed as imaginary exponents of “e”, you just add exponents.
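
If it helps, the agreement is easy to check numerically (a quick sketch in Python using the standard cmath module; the sample values are arbitrary):

[code]
import cmath

z1 = complex(1.2, -0.7)
z2 = complex(-0.4, 2.1)

# Ordinary complex multiplication.
direct = z1 * z2

# Polar form: multiply the moduli, add the phases.
r1, t1 = cmath.polar(z1)
r2, t2 = cmath.polar(z2)
via_polar = cmath.rect(r1 * r2, t1 + t2)

print(abs(direct - via_polar))   # ~0: the two procedures agree

# And the conjugate product is positive definite: (c - id)(c + id) = c^2 + d^2.
c, d = 0.6, -1.3
print(complex(c, -d) * complex(c, d), c ** 2 + d ** 2)
[/code]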

 

When I referred to “the above operation” I was referring to an earlier portion of that very paragraph, “multiplying the radial part and adding the angular part”. Finally, I put it in as a quote only to highlight it.

 

I hope that clears things up a bit.

 

Have fun -- Dick

Posted

Since [math]\frac{dP}{dP}=1[/math], it follows by definition that [math]\int dP=P[/math].

 

Ah, right, yeah that's what I hadn't figured out...

 

There you will find some excellent drawings (which should clarify the relationships demonstrated there) together with the expressions

[math]

z=x+iy=|z|(\cos \phi+i\sin \phi)=re^{i \phi}

[/math]

 

and

 

[math]

\bar{z}=x-iy=|z|(\cos \phi-i\sin \phi)=re^{-i \phi}

[/math]

 

The bar above the z indicates what is commonly called the “complex” conjugate; exactly the same as what I indicate with the exponential [math]\dagger[/math] in my presentation.

 

Right, okay. Yeah I remember we have talked about this issue earlier too, just can't remember how it all worked in detail. Well, I won't dwell on it more, just seeing that [imath](x+iy)(x-iy)=x^2+y^2[/imath] is enough for me.

 

Clearly, if the dimensionality of the abstract vector space is greater than two, that dimensionality may be reduced to half by collecting the components in pairs and converting all of the output to complex numbers. If the dimensionality happens to be odd, we can allow that stand alone component to be real. The usefulness of that odd term is exactly the same as the usefulness of that single real expression brought up earlier.

 

I couldn't figure out what that last sentence is referring to exactly... Are you referring to the issue of "...since the functions [math]\psi_k[/math] are to be real, also requires [math]\vec{\Psi}[/math] to vanish." ?

 

If someone can find a rational use for such a term let them do so. I will simply ignore such a circumstance as not representing a useful explanation since I personally have not managed to define a mechanism for extracting the constraint necessary to conserve our ignorance: i.e., including such a term essentially implies we know something which we do not know.

 

I suppose you mean that such a term could end up violating the shift symmetry? Or rather that even having such a term in the first place would imply that an undefendable assumption has been made at some point when generating that particular [math]\vec{\Psi}[/math]?

 

Meanwhile, dealing with the complex output is relatively straight forward. Once again, we can use the arguments above to show that the derivative with respect to [math]a[/math] (that shift parameter introduced earlier) of every element of that scalar product must vanish: i.e., [math]\frac{d}{da}c^2=\frac{d}{da}d^2=0[/math] implies c and d must still be constants. However, now the lone terms have been combined as a complex number and the expression c+id can be represented by [math]Ae^{ika}[/math]: i.e., the correlations of interest I spoke of earlier have been extracted into the expression [math]e^{ika}[/math] (see common trigonometric relationships).

 

Of significance is the fact that [math]A=\sqrt{c^2+d^2}[/math] has absolutely no dependence on “[math]a[/math]” whatsoever as c and d are independent constants. What is important here is that the exponential term drops out in the multiplication (in the calculation of P). Thus it is that the fact that [math]\frac{d}{da}P=0[/math] (derived from the fact that P could not be a function of the parameter shift "a") no longer extends to [math]\vec{\Psi}[/math]. The entire consequence of any shift in "[math]a[/math]" resides in the complex phase correlations expressed in the function [math]e^{ika}[/math].

 

Hmm, this involves some of those details that I have partially forgotten...

 

So, [imath]Ae^{ika} = \sqrt{c^2+d^2} e^{ika} [/imath]

 

I suppose the [imath]k[/imath] is still referring to a particular component of [math]\vec{\Psi}[/math].

 

So, the exponential term canceling out is;

 

[math]

\psi^\dagger_k \psi_k=(Ae^{-ika}) (Ae^{ika}) = A^2

[/math]

 

My actual understanding of the mechanism involved there is very slim; that result is simply something I got after playing around with Wolfram Alpha again, just trying different things out. So I trust it is correct, but I don't understand why.
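
For the record, the same cancellation can be checked with a couple of lines of Python (cmath only; the values of A, k and the shifts a are arbitrary):

[code]
import cmath

A, k = 0.8, 2.5                               # arbitrary constants, A = sqrt(c^2 + d^2)

for a in (0.0, 1.0, -3.7):
    psi = A * cmath.exp(1j * k * a)           # psi_k = A e^{ika}
    print(a, (psi.conjugate() * psi).real)    # always A^2 = 0.64, independent of a
[/code]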

 

Anyway, that result also explains to me why you defined [math]A[/math] with the square root.

 

I couldn't figure out though, why there's the [imath]a[/imath] in the exponent...?

 

The derivative of [math]\Psi[/math] with respect to “[math]a[/math]” turns out to be somewhat different from what we had earlier. If the abstract space was originally two dimensional (in which case the abstract dimensionality of the complex [math]\vec{\Psi}[/math] is unity), we have now the expression

 

[math]\frac{d}{da}\Psi=ik \Psi[/math]

 

Hmmm... But now I'm lost again... The derivative of [imath]\Psi[/imath] with respect to [imath]a[/imath] is [imath]ik[/imath] times the [imath]\Psi[/imath], where [imath]k[/imath] is the number of the particular component...? I don't think I understand what that expression is trying to say exactly. :P

 

-Anssi

Posted
...So how do you propose we define that specific x element? What kinds of concepts do you think that brilliant scholar of the far flung future uses to present his explanation? The alchemist certainly wouldn’t be using the same collection of elements in his explanation as does the modern chemist. This problem is a fundamental aspect of representing “all possible explanations”. It follows that actually defining the underlying elements is impossible: ergo, the requirement of a category I call “undefined information”. From that perspective you should also comprehend Kant’s noumena (what actually lies behind our explanations) as an essentially undefined thing.
Hello Doctordick. I just noticed a question to me, and I can provide one answer, the answer Kant would give to you. Kant would say that you would DEFINE that specific x element as "something that is empirically real and, at the same time, transcendentally ideal". But, while that is how you would DEFINE element x, Kant would say that BECAUSE element x is transcendentally ideal you cannot KNOW element x as a thing in itself. Thus the noumena of Kant can be a defined thing, but it cannot be a known thing. That is, we can have intuition (outside of knowledge) of something outside of us, regardless of whether we know that something as a thing in itself (as an object), or whether that something is actually anything in itself, such as space.

 

So, your argument about "undefined things" does not mesh with the concept of noumena of Kant. For Kant, the noumena is a dialectic of what is real and ideal, what can be defined and known.

 

Perhaps it would be more useful for you to change 'undefined thing' as that which lies behind explanation to 'unknown thing'. This wording would be more in line with Kant.

Posted

The central problem here is that it isn't P we should be looking at anyway. We need to see how this constraint affects [math]\vec{\Psi}[/math], the actual mathematical representation of the explanation. That vector is in an abstract space and the probability density is to be given by the scalar product [math]\vec{\Psi}\cdot \vec{\Psi}[/math]. If we take the trouble to analyze this scalar product from the perspective that

 

I guess the dagger should be included there in the expression of the scalar product.

 

Continuing from where I left...

 

[math]\frac{d}{da}\Psi=ik \Psi[/math]

 

If the abstract dimensionality of this complex [math]\vec{\Psi}[/math] is greater than one, the partial with respect to the shift parameter yields an independent parameter k for each direction in that abstract dimensionality.

 

I think you should explain that bit to me a bit more, because I'm not really picking up the details. I suppose the point of it all though is that the derivative of [imath]\Psi[/imath] with respect to [imath]a[/imath] can yield some constant, which gets canceled out in [math]\vec{\Psi}^\dagger\cdot \vec{\Psi}[/math], thus still yielding shift symmetry in [imath]P[/imath].

 

Part 5: The succinct expression of the constraint required by shift symmetry

 

There is but one quite simple step required to achieve the exact mathematical constraint required by shift symmetry. The required step is a well known relationship between derivatives and partial derivatives.

 

[math]

\frac{d}{dk}=\sum^n_{i=1}\frac{\partial x_i}{\partial k}\frac{\partial}{\partial x_i}

[/math]

 

I didn't find that exact expression from behind the link...

 

Nevertheless, thinking about that a bit, I must be missing some math details... I'm thinking about what the operation expressed as [imath]\sum^n_{i=1}\frac{\partial x_i}{\partial k}\frac{\partial}{\partial x_i}[/imath] actually means. If it's essentially;

 

[imath]\frac{\partial}{\partial x_1}\Psi + \frac{\partial}{\partial x_2}\Psi + \frac{\partial}{\partial x_3}\Psi...[/imath]

 

then, intuitively, it would seem like each individual partial differentiation could throw the output wildly out of whack, and summing these results together would yield something completely different from [imath]\frac{d}{dk}\Psi[/imath].

 

What am I missing?

 

Meanwhile, taking it on faith that these procedures do yield identical results;

 

In this case, we are particularly concerned with the shift parameter "[math]a[/math]" and the undefined numerical labels "[math]x_i[/math]" to which that shift is applied. The expression of interest is

 

[math]

\frac{d}{da}=\sum^n_{i=1}\frac{\partial z_i}{\partial a} \frac{\partial}{\partial z_i}

[/math]

 

where [math]z_i=x_i+a[/math]

 

...

 

Of supreme importance here is that the partial with respect to “[math]a[/math]” of "[math]z_i[/math]" is exactly unity: i.e., the impact of shift symmetry is exactly the same across all arguments. It is this unique combination which implies that we may write

 

[math]

\frac{d}{da}=\sum^n_{i=1} \frac{\partial}{\partial z_i}

[/math]

 

Yup.

 

[math]

\frac{d}{da}\Psi(z_1,z_2,\cdots,z_n,t)=\sum^n_{i=1} \frac{\partial}{\partial z_i}\Psi(z_1,z_2,\cdots,z_n,t)=ik\Psi(z_1,z_2,\cdots,z_n,t)

[/math].

 

Returning to our original notation we can write

 

[math]

\sum^n_{i=1} \frac{\partial}{\partial x_i}\Psi(x_1,x_2,\cdots,x_n,t)=ik\Psi(x_1,x_2,\cdots,x_n,t)

[/math].

 

The details of that [imath]ik[/imath] are still a bit shrouded in my mind, but taking its validity on faith until you respond, it all looks correct.
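
One concrete (and purely hypothetical) [math]\Psi[/math] that does satisfy the constraint is a simple plane-wave form; a quick sympy check, analogous to my Wolfram Alpha experiments, confirms that the summed partials reproduce [imath]ik\Psi[/imath]:

[code]
import sympy as sp

x1, x2, x3, k, A = sp.symbols('x1 x2 x3 k A', real=True)

# Hypothetical plane-wave form: each of the n = 3 arguments carries k/3 of the phase.
Psi = A * sp.exp(sp.I * (k / 3) * (x1 + x2 + x3))

lhs = sp.diff(Psi, x1) + sp.diff(Psi, x2) + sp.diff(Psi, x3)
rhs = sp.I * k * Psi

print(sp.simplify(lhs - rhs))   # 0: the shift-symmetry constraint holds
[/code]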

 

This is an exact expression of the dynamic constraint on the function [math]\Psi[/math] enforced by the requirement of shift symmetry in the "x" labels. Essentially, it is the basis of the underlying phenomena in dynamics commonly designated by the idea of conservation of momentum. If the mathematical operator [math]-i\frac{\partial}{\partial x_i}[/math] operating on the function [math]\Psi[/math] is defined to be “a momentum operator” (and please note that, except for the factor [math]\hbar[/math], this is exactly the “momentum operator” defined in classical quantum mechanics) then the constraint required by shift symmetry in any data collection is “conservation of momentum”.

 

Excuse another stupid question, but is there that [imath]-i[/imath] in the definition just for later convenience, or does it have something to do with the aforementioned [imath]ik[/imath]?

 

Likewise, the two people brought up above could use a different zero for the t index and they would still get exactly the same probabilities so long as the specific “t” index used by each of them referred to exactly the same circumstance the other was examining. This fact can be used to prove, in exactly the manner done above, that the following is also a required constraint on the functional representation of any valid flaw-free explanation.

 

...

 

Thus it is that we know immediately that the correct [math]\Psi[/math] must also satisfy the constraint that

 

[math]

\frac{\partial}{\partial t}\Psi(x_1,x_2,\cdots,x_n,t)=iq\Psi(x_1,x_2,\cdots,x_n,t)

[/math].

 

Yup.

 

After the above little details have been sorted out, I have no further questions about this OP.

 

-Anssi

Posted

I guess the dagger should be included there in the expression of the scalar product.

Indeed, this is the kind of distraction that goes unnoticed in a research publication because the mathematician's eye puts the missing symbol in automatically, but stands out a mile to the teacher when correcting students' test papers. :hihi:

 

I think you should explain that bit to me a bit more
No doubt he worded it poorly. He just means that the derivative of [imath]\Psi[/imath] is of the same kind as [imath]\Psi[/imath] itself. This is simply because it is the limit of a difference between two values of that kind. If [imath]\vec{\Psi}[/imath] is an aggregate of [imath]n[/imath] complex values, so is its derivative.

 

I didn't find that exact expression from behind the link...
Indeed, that expression does not require an exact differential, he should have linked to something on the total derivative; you need to bear a bit of patience because he does make a muddle or two here and there.

 

If it's essentially;

 

[imath]\frac{\partial}{\partial x_1}\Psi + \frac{\partial}{\partial x_2}\Psi + \frac{\partial}{\partial x_3}\Psi...[/imath]

No, because here you are leaving out the factors given by each of the derivatives [imath]\frac{\partial x_i}{\partial k}[/imath] which "fix up the total" so to speak.

 

The details of that [imath]ik[/imath] are still a bit shrouded in my mind, but taking its validity on faith until you respond, it all looks correct.
The requirement implies that the modulus of [imath]\Psi[/imath] is independent of the variable of derivation, so this has the effect of multiplying by the derivative of the unimodular factor which is hidden inside the belly of [imath]\Psi[/imath] and certainly has a pure imaginary value. Thus he calls it [imath]ik[/imath] with [imath]k[/imath] being pure real. This argument doesn't really determine much else about [imath]k[/imath] and its value could depend on various things or even be zero, but at the moment I can't entirely remember my reckonings from when I examined the OP, so I'm not sure on which grounds he assumes there being one constant value of [imath]k[/imath]; on a purely mathematical footing, a complex [imath]\Psi[/imath] with constant modulus could have various spectral properties.

 

Excuse another stupid question, but is there that [imath]-i[/imath] in the definition just for later convenience, or does it have something to do with the aforementioned [imath]ik[/imath]?
It should suffice to consider the defining property of the imaginary unit, by which its reciprocal is equal to its negative, so just multiply both sides of the equation you quote by [imath]-i[/imath] and you can see that the operator in what you quote has the same effect as multiplying by [imath]k[/imath].
Posted

No doubt he worded it poorly. He just means that the derivative of [imath]\Psi[/imath] is of the same kind as [imath]\Psi[/imath] itself. This is simply because it is the limit of a difference between two values of that kind. If [imath]\vec{\Psi}[/imath] is an aggregate of [imath]n[/imath] complex values, so is its derivative.

 

Right, so let me take a step back and make sure I got this absolutely right;

 

The solution of the difficulty is to show that any collection of real functions (the components [math]\psi_k[/math] mentioned above) can be represented as one half as many functions with complex values.

 

The dimensionality of [imath]\vec{\Psi}[/imath] refers to the dimensionality of the output vector.

 

The [math]\psi_k[/math] refers to a function returning the value of one dimension.

 

And "functions with complex values" refers to a function that would return a complex value, essentially corresponding to two [math]\psi_k[/math] functions since it would return a real and imaginary value.

 

So;

 

If the abstract space was originally two dimensional (in which case the abstract dimensionality of the complex [math]\vec{\Psi}[/math] is unity), we have now the expression

 

[math]\frac{d}{da}\Psi=ik \Psi[/math]

 

"the abstract dimensionality of the complex [math]\vec{\Psi}[/math] is unity" means it ouputs one complex value?

 

And if the complex [math]\vec{\Psi}[/math] had a higher dimensionality to its output, its derivative would contain a complex value for each dimension.

 

That makes sense to me; to express a change of n-dimensional (complex) vector you need n (complex) values.

 

The last bit I don't understand just has to do with notation. I don't understand how [imath]ik \Psi[/imath] means "a complex value". :P

 

Indeed, that expression does not require an exact differential, he should have linked to something on the total derivative; you need to bear a bit of patience because he does make a muddle or two here and there.

 

Okay that looks correct.

 

No, because here you are leaving out the factors given by each of the derivatives [imath]\frac{\partial x_i}{\partial k}[/imath] which "fix up the total" so to speak.

 

Hmmm, I'm still very confused...

 

I'll switch to talking about the derivative with respect to [imath]a[/imath], i.e;

 

The expression of interest is

 

[math]

\frac{d}{da}=\sum^n_{i=1}\frac{\partial z_i}{\partial a} \frac{\partial}{\partial z_i}

[/math]

 

where [math]z_i=x_i+a[/math] was introduced in the earlier proof that

 

[math]

\frac{d}{da}P(z_1,z_2,\cdots,z_n,t)=0.

[/math]

 

(Please note that the fact that I have called the arguments of P in one case "x" and in a second case "z" is of utterly no real significance.) Of supreme importance here is that the partial with respect to “[math]a[/math]” of "[math]z_i[/math]" is exactly unity: i.e., the impact of shift symmetry is exactly the same across all arguments.

 

Now, thinking about this more, I have another question about one detail;

 

The "partial with respect to “[math]a[/math]” of "[math]z_i[/math]" is exactly unity", I suppose that cannot mean that:

 

[math]\frac{\partial}{\partial a}\vec{\Psi} = \frac{\partial}{\partial z_i}\vec{\Psi}[/math]

 

So I suppose it just means we get the unity once we sum those partial derivatives, like DD wrote in the OP;

 

[math]\frac{\partial}{\partial a}\vec{\Psi} = \sum^n_{i=1}\frac{\partial}{\partial z_i}\vec{\Psi}[/math]

 

Now that expression still raises the same question in my head: why is it that each individual [imath]\frac{\partial}{\partial z_i}\vec{\Psi}[/imath] is guaranteed to yield the appropriate fraction of the total sum, so that the total sum ends up being exactly the same as that total derivative?

 

I can see it basically states that way in the mathworld link you provided, so it's not too hard to take it on faith. Still, would be nice to understand it because it just sounds like each individual partial derivative could easily be completely out of whack by some large degree. I'm probably missing something fairly simple again :I

 

The requirement implies that the modulus of [imath]\Psi[/imath] is independent of the variable of derivation, so this has the effect of multiplying by the derivative of the unimodular factor which is hidden inside the belly of [imath]\Psi[/imath] and certainly has a pure imaginary value. Thus he calls it [imath]ik[/imath] with [imath]k[/imath] being pure real. This argument doesn't really determine much else about [imath]k[/imath] and its value could depend on various things or even be zero, but at the moment I can't entirely remember my reckonings from when I examined the OP, so I'm not sure on which grounds he assumes there being one constant value of [imath]k[/imath]; on a purely mathematical footing, a complex [imath]\Psi[/imath] with constant modulus could have various spectral properties.

 

I don't have any math training so I can easily pick up technical terms wrong. After some mathworld reading, I suppose "modulus of [imath]\Psi[/imath]" refers to the length of the output complex vector. I didn't figure out what's a "unimodular factor".

 

Also I don't know what "various spectral properties" means, but what I'm picking up on this issue is that the requirement of shift symmetry on P allows this value for the derivation of [imath]\vec{\Psi}[/imath] with respect to the shift parameter, since it gets canceled out in the scalar product. But, I'm not really sure how that canceling out works out...

 

It should suffice to consider the defining property of the imaginary unit, by which its reciprocal is equal to its negative, so just multiply both sides of the equation you quote by [imath]-i[/imath] and you can see that the operator in what you quote has the same effect as multiplying by [imath]k[/imath].

 

Well had to refer back to the properties of imaginary numbers but yeah, it would remove the [imath]i[/imath] from the [imath]ik[/imath]... Not really sure why that is relevant :I

 

Thank you for your help!

 

-Anssi

Posted

I suspect that my last post got lost over the holidays as, other than a part of it that AnssiH replied to, I can’t find a reply to it. It probably doesn’t make much difference at this point as I don’t think that there is really anything that is hard to understand, so I’ll just continue from where I left off.

 

The dagger is there to denote the conversion of each specific number in the “value” of [math]\vec{G}[/math] to a representation whose product with identical specific number in the output of the original function is a positive definite number. This extension in possibilities for the specific numbers in the “value” of [math]\vec{G}[/math] (for example those values can now be imaginary numbers and, in fact, many other more complex constructs) allows us the option of building into [math]\vec{G}(\vec{x})[/math] some important internal correlations which will be shown to be of extreme value later. For the moment it should be recognized that those same correlations can be expressed by multiple real numbers: i.e., I am not changing the definition of [math]\vec{G}(\vec{x})[/math], I am merely allowing some very interesting (and it turns out necessary) expressions of internal correlations.

 

How do we know that any consequence of using complex numbers or any other set of numbers has any necessary consequences? Are we just trying to give P a sufficiently complicated structure that any constraints on P are a function of the number set that is being used? Or are you saying that the constraints on P only exist in some number constructs and we have to find one that we can consistently define P in?

 

Clearly, if the dimensionality of the abstract vector space is greater than two, that dimensionality may be reduced to half by collecting the components in pairs and converting all of the output to complex numbers. If the dimensionality happens to be odd, we can allow that stand alone component to be real. The usefulness of that odd term is exactly the same as the usefulness of that single real expression brought up earlier.

 

I’m not really sure what you are referring to here; I thought that you were placing [math] \vec{\Psi} [/math] in a complex space, but here you seem to have a more complicated space in mind. It seems that you are trying to place a vector in a smaller dimensional complex space. But I don’t see how this is going to help. Isn’t the probability function P still determined by [math] \vec{\Psi} [/math], with P defined to be a given real number between 0 and 1, so that the only thing that has changed is that [math] \vec{\Psi} [/math] can now be rotated in the complex plane?

 

I just can’t really see how you have changed the problem at this point other than that you have pointed out that you will allow [math] \vec{\Psi} [/math] to take any form and complex rotations are now possible.

 

Of significance is the fact that [math]A=\sqrt{c^2+d^2}[/math] has absolutely no dependence on “[math]a[/math]” whatsoever as c and d are independent constants. What is important here is that the exponential term drops out in the multiplication (in the calculation of P). Thus it is that the fact that [math]\frac{d}{da}P=0[/math] (derived from the fact that P could not be a function of the parameter shift "a") no longer extends to [math]\vec{\Psi}[/math]. The entire consequence of any shift in "[math]a[/math]" resides in the complex phase correlations expressed in the function [math]e^{ika}[/math].

 

But isn’t the probability of something now a function of A, so that while we can now consider [math] \vec{\Psi} [/math] to no longer be constant, we can’t say anything about the actual probability of something, as it is still a totally undefined value?

Posted (edited)
"the abstract dimensionality of the complex [math]\vec{\Psi}[/math] is unity" means it ouputs one complex value?
Keep in mind a fact, counterintuitive but fundamental: The field of complex numbers can be regarded as a one dimensional vector space, despite the fact that a complex value can be represented by means of a pair of real values! Indeed there is a sense in which it is more appropriate to regard it as a single dimension: any complex (not only real) value can be used as a scalar. This seems absolutely zany to folks that are used to defining them as [imath]z=a+ib[/imath] whereas it is obvious to folks like Hilbert.

 

Nevertheless, it also remains useful to understand complex numbers in terms of real ones, both as real and imaginary part, and as modulus and phase.

 

I don't understand how [imath]ik \Psi[/imath] means "a complex value".
If [imath]\Psi[/imath] is a complex value, multiplying it by any other one (including pure real and pure imaginary values) gives another complex value as a result.

 

Hmmm, I'm still very confused...
Not surprising, but I can see exactly what you're missing...:P

 

The "partial with respect to “[math]a[/math]” of "[math]z_i[/math]" is exactly unity", I suppose that cannot mean that:

 

[math]\frac{\partial}{\partial a}\vec{\Psi} = \frac{\partial}{\partial z_i}\vec{\Psi}[/math]

Actually... it can, in the specific case, because Dick defines each of the [math]z_i[/math] as a fixed quantity plus [math]a[/math]. It's pretty simple really: a change in [math]a[/math] gives the same change in each [math]z_i[/math].

 

So it doesn't mean we get the unity once we sum those partial derivatives, it's just in the specific case (and for each addend) that the general definition reduces to:

 

[math]\frac{\partial}{\partial a}\vec{\Psi} = \sum^n_{i=1}\frac{\partial}{\partial z_i}\vec{\Psi}[/math]

 

Therefore it doesn't state that way in the mathworld site, so it's not so good to take it on faith. In general the [imath]\frac{\partial z_i}{\partial a}[/imath] factors are essential (they are the intermediate dependencies). A proper and formal understanding would take a better calculus course, with the notion of infinitesimals of first and nth order and of differentials. These discussions have been far from it.
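
A small sympy sketch may help (the function F below is an arbitrary placeholder, not anything from the presentation); it computes the total derivative directly and via the chain rule, where each [imath]\frac{\partial z_i}{\partial a}[/imath] factor is exactly one because [imath]z_i = x_i + a[/imath]:

[code]
import sympy as sp

x1, x2, a = sp.symbols('x1 x2 a', real=True)
Z1, Z2 = sp.symbols('Z1 Z2', real=True)

# Shifted arguments and an arbitrary placeholder function of them.
z1, z2 = x1 + a, x2 + a
G = sp.sin(Z1) * sp.exp(-Z2 ** 2)
F = G.subs({Z1: z1, Z2: z2})

# Total derivative with respect to a, computed directly.
total = sp.diff(F, a)

# Chain rule: sum over i of (dz_i/da) * (dG/dZ_i); here each dz_i/da is exactly 1.
chain = sum(sp.diff(z, a) * sp.diff(G, Z).subs({Z1: z1, Z2: z2})
            for z, Z in ((z1, Z1), (z2, Z2)))

print(sp.simplify(total - chain))   # 0: the two computations agree
[/code]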

 

I didn't figure out what's a "unimodular factor".
The value of any exponential with a pure imaginary exponent has a modulus of one. The imaginary exponent gives the phase in radians (by removing the imaginary unit). For any complex value, Euler's formula can be used to write:

 

[math]Ae^{ix}=A\cos x + iA\sin x[/math]

 

which relates the "real and imaginary parts" to the "modulus and phase". A drawing on paper based on elementary knowledge of trig should make it clear.
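
A two-line numeric check of the same relation (Python; the values of A and x are arbitrary):

[code]
import cmath, math

A, x = 1.7, 0.9                                   # arbitrary modulus and phase
lhs = A * cmath.exp(1j * x)
rhs = complex(A * math.cos(x), A * math.sin(x))
print(abs(lhs - rhs))                             # ~0: modulus/phase vs. real/imaginary parts
[/code]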

 

Also I don't know what "various spectral properties" means, but what I'm picking up on this issue is that the requirement of shift symmetry on P allows this value for the derivation of [imath]\vec{\Psi}[/imath] with respect to the shift parameter, since it gets canceled out in the scalar product. But, I'm not really sure how that canceling out works out...
It's pretty complicated in the general case. Let's reduce it to a single [imath]x[/imath] "coordinate" and consider a [imath]\Psi[/imath] built from a real-valued function [imath]f[/imath] of [imath]x[/imath]:

 

[math]\Psi(x)=Ae^{if(x)}[/math]

 

It is easy to see that the probability density is [imath]A^2[/imath] for any value of [imath]x[/imath]. If one further constrains [imath]f[/imath] to be continuous and differentiable, the same differential operator can be applied to it but it is not an eigenfunction (as in the case Dick examines, which has [imath]k[/imath] as an eigenvalue). So far I haven't found the restriction to be fatal to what comes next in Dick's presentation, so I might assume he is just keeping it simple by avoiding discussion in terms of spectral analysis.
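
To make that concrete, a small sympy sketch (both [imath]\Psi[/imath] forms below are arbitrary illustrations): a linear phase gives an eigenfunction of [imath]-i\frac{d}{dx}[/imath] with eigenvalue [imath]k[/imath], a non-linear phase does not, yet both have the constant probability density [imath]A^2[/imath]:

[code]
import sympy as sp

x, A, k = sp.symbols('x A k', real=True)

# Linear phase f(x) = k*x: an eigenfunction of -i d/dx with eigenvalue k.
Psi_lin = A * sp.exp(sp.I * k * x)
print(sp.simplify(-sp.I * sp.diff(Psi_lin, x) / Psi_lin))   # k

# Non-linear phase f(x) = x**2: same modulus A, but not an eigenfunction.
Psi_gen = A * sp.exp(sp.I * x**2)
print(sp.simplify(-sp.I * sp.diff(Psi_gen, x) / Psi_gen))   # 2*x, depends on x

# In both cases the probability density is A^2 for every x.
print(sp.simplify(Psi_lin * sp.conjugate(Psi_lin)), sp.simplify(Psi_gen * sp.conjugate(Psi_gen)))
[/code]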

 

How do we know that any consequence of using complex numbers or any other set of numbers has any necessary consequences? Are we just trying to give P a sufficiently complicated structure that any constraints on P are a function of the number set that is being used? Or are you saying that the constraints on P only exist in some number constructs and we have to find one that we can consistently define P in?
This is a good question, although I don't find it to be a problem as serious as that of the choice of Lie algebra. Complex value constructs are not the only way to suit the purpose of having a probability density from something with a more general algebra, but they constitute the handiest tool, and I would strain to think of a case, in greater generalization, which can't also be covered by the formalism so constructed. For example, in quantum electrodynamics, one can think of the photon's wavefunction as being the classical EM field via the correspondence principle.
Posted

I suspect that my last post got lost over the holidays, as other than a part of it that AnssiH replied to I can't find a reply to it. It probably doesn't make much difference at this point, as I don't think there is really anything that is hard to understand, so I'll just continue from where I left off.

Sorry Bombadil but I have been sick and have not been on the forum for roughly a week. There has been so much activity (most of it not worth reading) that I haven't bothered to check any of it out carefully. Just ask any questions you still have and I will try to answer.

 

How do we know that any consequence of using complex numbers or any other set of numbers has any necessary consequences?

We don't! Not unless we specifically show those consequences.

 

The issue is actually quite simple. Since all circumstances can be represented by a collection of numerical labels and the probability of any circumstance can be represented by a number bounded by zero and one, the mechanism for coming up with the probability of a specific circumstance can be seen as a mathematical function (one set of numbers transformed into another set). In my logical examination, I wish to omit no mathematical relation from the collection under consideration (to omit a mathematical relation would be to make a presumption that there existed no explanation which required that relation). It is the fact that probability is bounded by zero and one (by definition) which yields a problem here.

 

How do I eliminate all functions not bounded by zero and one without omitting any possible mathematical relation? The answer is to recognize the fact that any mathematical expression can be converted into one bounded by zero and one. That is why I choose the notation of a normalized abstract vector [math]\vec{\Psi}[/math]. If there exists a functional relationship [math]P(x_1,x_2,\cdots,x_n,t)[/math] which represents a specific explanation then [math]P=\vec{\Psi}\cdot\vec{\Psi}[/math] must be a possible representation of P as there is no constraint upon the mathematical relationship being represented. Look at it this way: I am simply getting rid of the constraint that probability is bounded by zero and one by interpreting that constraint as the normalized magnitude of some abstract vector function (that abstract dimensionality can be anything, thus no mathematical relations are omitted).
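A minimal discrete sketch of that idea (my own, assuming a finite set of circumstances; the sizes and variable names below are illustrative only, not anything from the presentation):

[code]
# Minimal discrete sketch: assign an arbitrary complex vector to each of a
# finite set of "circumstances", normalize, and the squared magnitudes
# automatically behave like probabilities.
import numpy as np

rng = np.random.default_rng(0)
n_circumstances, n_components = 5, 3                   # arbitrary sizes

# completely unconstrained complex components for each circumstance
psi = rng.normal(size=(n_circumstances, n_components)) \
    + 1j * rng.normal(size=(n_circumstances, n_components))

psi /= np.sqrt(np.sum(np.abs(psi)**2))                 # normalize the whole thing

P = np.sum(np.abs(psi)**2, axis=1)                     # P = Psi dot Psi, per circumstance
print(P)          # each value lies between 0 and 1
print(P.sum())    # and they sum to 1
[/code]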

 

The representation certainly allows for correlations in the various components of those abstract vectors. It turns out that a correlation totally analogous to complex numbers serves a very valuable purpose. It allows us to insert into the representation an internal correlation which identically satisfies the required shift symmetry without constraining the representation in any way. The only real constraint is that the transformation used to define [math]\vec{\Psi}^{\dagger}[/math] will leave [math]\vec{\Psi}^{\dagger}\cdot\vec{\Psi}[/math] positive definite.
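A toy illustration of how such a complex-number-like correlation satisfies shift symmetry automatically, using the simplest eigenfunction case [imath]\Psi(x)=Ae^{ikx}[/imath] (my own sketch, with arbitrary values):

[code]
# Toy illustration: if Psi(x) = A*exp(i*k*x), a shift x -> x + a only
# multiplies Psi by the unimodular factor exp(i*k*a), which cancels in
# Psi-dagger dot Psi, so the shift leaves P untouched.
import cmath

A, k, a, x = 0.5, 2.0, 0.37, 1.1                     # arbitrary values
Psi = lambda t: A * cmath.exp(1j * k * t)

print(Psi(x + a), cmath.exp(1j * k * a) * Psi(x))    # same complex number
print(abs(Psi(x + a))**2, abs(Psi(x))**2)            # P unchanged by the shift
[/code]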

 

I'm not really sure what you are referring to here. I thought that you were placing [math] \vec{\Psi} [/math] in a complex space, but here you seem to have a more complicated space in mind. It seems that you are trying to place a vector in a smaller-dimensional complex space, but I don't see how this is going to help. Isn't the probability function P still determined by [math] \vec{\Psi} [/math]? P is defined to be a given real number between 0 and 1, so the only thing that has changed is that [math] \vec{\Psi} [/math] can now be rotated in the complex plane.

I think you have a twisted mental picture of what is going on here. You are thinking in terms of having some given function [math] \vec{\Psi} [/math]; whereas my perspective is constraining “any function” to a form which satisfies shift symmetry without making any constraint on that function at all.

 

I just can't really see how you have changed the problem at this point, other than that you have pointed out that you will allow [math] \vec{\Psi} [/math] to take any form and that complex rotations are now possible.

I haven't changed anything. In fact, I have not asserted that the number of dimensions of that abstract space cannot be odd. There very well may be other, more complex correlations between those abstract vectors which will eliminate other logical problems the representation may bring up: things that have not occurred to me yet.

 

But isn't the probability of something now a function of A, so that, while we can now consider [math] \vec{\Psi} [/math] to no longer be constant, we can't say anything about the actual probability of something, as it is still a totally undefined value?

It is the explanation which defines the probability, not my logic. We are not talking about “actual probability” here, we are talking about our expectations which arise from the explanation we believe is valid: i.e., it is our expectations which the explanation provides, not the actual probabilities (those are fundamentally unknowable).

 

Have fun -- Dick

Posted

Keep in mind a fact, counterintuitive but fundamental: the field of complex numbers can be regarded as a one-dimensional vector space, despite the fact that a complex value can be represented by means of a pair of real values! Indeed there is a sense in which it is more appropriate to regard it as a single dimension: any complex (not only real) value can be used as a scalar. This seems absolutely zany to folks that are used to defining them as [imath]z=a+ib[/imath], whereas it is obvious to folks like Hilbert.

 

Okay, I see, I think... I assume that when complex numbers are regarded as a one dimensional vector space, each different complex number really is unique in that vector space? I.e. what you are referring to is something more complicated than a matter of only seeing the modulus of each complex number?

 

Nevertheless, it also remains useful to understand complex numbers in terms of real ones, both as real and imaginary part, and as modulus and phase.

 

Yup.

 

If [imath]\Psi[/imath] is a complex value, multiplying it by any other complex value (including a purely real or purely imaginary one) gives another complex value as a result.

 

Riiiight! Since I'm trying to learn these necessary concepts on the run, I am sometimes juggling with so many unfamiliar concepts at once that I miss the most obvious things...

 

Let's try and see if I'm getting it absolutely right this time... So, I suppose,

 

[math]\frac{d}{da}\Psi=ik \Psi[/math]

 

is stating that the output of [imath]\Psi[/imath] can change along with the shift parameter [imath]a[/imath], in such a manner that the rate of change can be expressed as a product between an imaginary number [imath]ik[/imath] and the original output?

 

So, it's an imaginary number because it would be invalid to see that derivative as a product between a real number and [imath]\Psi[/imath]...?

 

I'm really just guessing at this point, but could it be that a multiplication between a complex number and an imaginary number means that just the imaginary component changes? And thus, during [imath]\vec{\Psi}^\dagger\cdot \vec{\Psi}[/imath], that imaginary component gets canceled out anyway, thus hiding the effect of the shift parameter from [imath]P[/imath]? Is that it?

 

I'm sure you can see, I can't be very confident in my own interpretations of this stuff, I'm just trying to see it all in some way that makes sense and probe you guys if I'm picking it up correctly :)

 

The "partial with respect to “[math]a[/math]” of "[math]z_i[/math]" is exactly unity", I suppose that cannot mean that:

 

[math]\frac{\partial}{\partial a}\vec{\Psi} = \frac{\partial}{\partial z_i}\vec{\Psi}[/math]

Actually... it can, in the specific case, because Dick defines each of the [math]z_i[/math] as a fixed quantity plus [math]a[/math]. It's pretty simple really: a change in [math]a[/math] gives the same change in each [math]z_i[/math].

 

Hmm, there's something here that I still don't quite understand... I mean, I had picked up that any change in [math]a[/math] yields an identical change in each [math]z_i[/math], but I still don't understand why it is that;

 

[math]\frac{\partial}{\partial a}\vec{\Psi}[/math] - A simultaneous shift of all the input parameters of [imath]\vec{\Psi}[/imath] always yields the same effect on the output as;

 

[math]\frac{\partial}{\partial z_i}\vec{\Psi}[/math] - A shift of just one input parameter, while the rest of the parameters are left as they were.

 

This is of course related to what you commented before, and now again, that "In general the [imath]\frac{\partial z_i}{\partial a}[/imath] factors are essential (they are the intermediate dependencies)", which makes sense to me. Also it makes sense to me that in this particular case those intermediate dependencies don't have an effect.

 

It could be I have some erroneous ideas about this whole procedure in my mind, somewhere fairly deep and it'll take some digging to figure out what I'm getting wrong.

 

Well, I have two more or less sensible interpretations in my head right now, tell me if either one is correct;

 

1. That;

[math]\sum^n_{i=1}\frac{\partial}{\partial z_i}\vec{\Psi}[/math]

implies a process where the change to [imath]z_i[/imath] occurs to all the components of the stated sum simultaneously (instead of taking the partial derivatives one by one and then summing the results)

 

Or

 

2. Even if those partial derivatives are performed one by one, it always yields the same result as a simultaneous shift, because of some subtleties with the idea of "infinitesimal change", which I am just not familiar with.

 

If it's neither, I need to think this more...
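For what it's worth, the second reading is essentially the standard calculus fact at play here: to first order, shifting all the arguments at once has the same effect as adding up the effects of shifting them one at a time. A quick finite-difference check (my own sketch, with an arbitrary smooth function and arbitrary values):

[code]
# Finite-difference check: shifting every argument of Psi by the same
# small h changes Psi, to first order, by the SUM of the changes obtained
# by shifting the arguments one at a time.
import cmath

def Psi(z1, z2, z3):
    # arbitrary smooth complex-valued function of three "labels"
    return cmath.exp(1j * (z1 + 0.5 * z2**2)) * (z3 + 2.0)

x, h = (0.4, 1.2, -0.7), 1e-6

# simultaneous shift of every argument: this is d/da Psi with z_i = x_i + a
d_da = (Psi(*(xi + h for xi in x)) - Psi(*x)) / h

# partials taken one at a time, then summed
total = 0
for i in range(3):
    shifted = list(x)
    shifted[i] += h
    total += (Psi(*shifted) - Psi(*x)) / h

print(d_da, total)   # the two agree, up to the finite-difference error
[/code]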

 

The value of any exponential with a pure imaginary exponent has a modulus of one; the exponent, with the imaginary unit removed, gives the phase in radians. For any complex value, Euler's formula can be used to write:

 

[math]Ae^{ix}=A\cos x + iA\sin x[/math]

 

which relates the "real and imaginary parts" to the "modulus and phase". A drawing on paper based on elementary knowledge of trig should make it clear.

 

Ahha, right. Yeah I remember we used this concept at one point of the derivations already. I'll get it back to my mind the next time I need it again :)

 

It's pretty complicated in the general case. Let's reduce it to a single [imath]x[/imath] "coordinate", take a real-valued function [imath]f[/imath] of [imath]x[/imath], and consider:

 

[math]\Psi(x)=Ae^{if(x)}[/math]

 

It is easy to see that the probability density is [imath]A^2[/imath] for any value of [imath]x[/imath]. If one further constrains [imath]f[/imath] to be continuous and differentiable, the same differential operator can be applied to it, but it is not an eigenfunction (as in the case Dick examines, which has [imath]k[/imath] as an eigenvalue). So far I haven't found the restriction to be fatal to what comes next in Dick's presentation, so I might assume he is just keeping it simple by avoiding discussion in terms of spectral analysis.

 

Right okay, but my interpretation of how the "ik" ends up not having any effect on "P" via [imath]P=\vec{\Psi}^\dagger\cdot \vec{\Psi}[/imath], does that look correct to you?

 

-Anssi
