
Posted (edited)

I have been exploring a possibility and wanted to know what others' thoughts were here. I have been trying to compose a mathematical theory which treats the very beginning of space (which according to current belief would involve a time dimension) as highly unstable due to the uncertainty regarding matter and the space between particles. In short, there was little to no space at all in the beginning, meaning that particles were literally stacked on top of each other. This completely violates the uncertainty principle, and I conjecture it caused ''space to grow exponentially'' between particles to allow them degrees of freedom and to bring a halt to the violation of that quantum mechanical principle.

 

Of course, how do you speak about space or even time if neither existed fundamentally? Fotini Markopoulou has been using a special model. In her recent ideas, she argues that space is not fundamental.

 

In her model, simply put, particles are represented by nodes which can be on or off, indicating whether the nodes are actually interacting. Only at very high temperatures does spacetime cease to exist, and many will recognize this idea as geometrogenesis. The model also relates to causal dynamical triangulation, a major part of loop quantum gravity theory, which must obey the triangle inequality in some spin-state space. Spin-state spaces may lead to models we can develop from the Ising model, or perhaps even the Lyapunov exponent, which measures the separation of objects, preferably in some Hilbert space. We may in fact be able to do a great many things.

 

The Heisenberg uncertainty principle is a form of the geometric Cauchy-Schwarz inequality, and this might be a clue to how to treat spacetime as unstable at the very earliest times, when temperatures were very high.

 

http://www.scribd.co...ainty-Principle

 

Since Markopoulou's work suggests that particles exist in Hilbert spaces in some kind of special sub-structure before the emergence of geometry, I can now approach my own theory and frame it in terms of the uncertainty principle using the Cauchy-Schwarz inequality, because from this inequality one can obtain the triangle inequality.
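
To make those two inequalities concrete, here is a quick numerical sanity check in Python (only a sketch; the vectors are arbitrary and merely stand in for Hilbert-space states):

[code]
import numpy as np

# Two arbitrary complex vectors standing in for Hilbert-space states
rng = np.random.default_rng(0)
u = rng.standard_normal(5) + 1j * rng.standard_normal(5)
v = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# Cauchy-Schwarz: |<u, v>|^2 <= <u, u><v, v>
lhs = abs(np.vdot(u, v)) ** 2
rhs = np.vdot(u, u).real * np.vdot(v, v).real
print(lhs <= rhs)  # True

# Triangle inequality for the induced norm: ||u + v|| <= ||u|| + ||v||
print(np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v))  # True
[/code]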

 

So, this is my idea. Space and time emerged between particles because particles could not be allowed to remain indefinitely confined so close to other particles; the uncertainty principle forbade it and created degrees of freedom in the form of the vacuum we see expanding all around us.

 

The mathematical approach

 

 

So let me explain how this model works. First of all, it seems best to note that in most cases we are dealing with ''three neighbouring points'' on what I call a Fotini graph. Really, the graph has a different name, is usually denoted with something like [math]E(G)[/math], and is sometimes called the graphical tensor notation. In our phase space we will be dealing with a finite number of particles [math]i[/math] and [math]j[/math], but keep in mind that the neighbouring particles usually number at least three, and that each particle should be seen as a configuration of spins - this configuration space is called the spin network. I should perhaps say that, to any point, there are two neighbours.

 

Of course, as I said, we have two particles in this model [math](i,j)[/math], probably defined by a set of interactions [math]k \equiv (i,j)[/math] (an approach Fotini has taken in the form of on-off nodes). In my approach we simply define it with an interaction term:

 

[math]V = \sum^{N-1}_{i=1} \sum^{N}_{j=i+1} g(r_{ij})[/math]

 

It is customary to place a coupling constant [math]g[/math] here for any constant forces which may be experienced over the distance between the two particles, a semi-metric which mathematicians often denote as [math]r_{ij}[/math].
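
As a toy illustration of that double sum (only a sketch; the coupling [math]g(r) = g_0/r[/math] and the particle positions below are arbitrary choices of mine, not part of the model):

[code]
import numpy as np

def pairwise_potential(positions, g):
    """V = sum over i < j of g(r_ij), i.e. N(N-1)/2 pairwise contributions."""
    N = len(positions)
    V = 0.0
    for i in range(N - 1):
        for j in range(i + 1, N):
            r_ij = np.linalg.norm(positions[i] - positions[j])
            V += g(r_ij)
    return V

g0 = 1.0
positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
print(pairwise_potential(positions, lambda r: g0 / r))
[/code]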

 

If [math]A(G)[/math] is the set of adjacent vertices and [math]E(G)[/math] is the set of edges in our phase space (to get some idea of this space, look up causal triangulation and how particles would be laid out in such a configuration space), then

 

[math](i,j) \in E(G)[/math]

 

It so happens that Fotini's approach will in fact treat [math]E(G)[/math] as assigning an energy to the graph

 

[math]E(G) = \langle\psi_G|H|\psi_G\rangle[/math]

 

which most will recognize as an expectation value. Fotini's total spin state space is

 

[math]H = \bigotimes^{\frac{N(N-1)}{2}} H_{ab}[/math]

 

Going back to my interaction term, the potential energy between particles [math](i,j)[/math], or over all [math]N[/math] particles due to pairwise interactions, involves a total of [math]\frac{N(N-1)}{2}[/math] contributions, and you will see this term in Fotini's previous, remarkably simple equation.

 

[math]K_N[/math] is the complete graph on the [math]N[/math] vertices of a Fotini graph, i.e. the graph in which there is one edge connecting every pair of vertices, so there are a total of [math]\frac{N(N-1)}{2}[/math] edges and each vertex has a degree corresponding to [math](N-1)[/math].
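
A trivial check of that counting (a sketch, with [math]N = 5[/math] picked arbitrarily): in [math]K_N[/math] the number of edges is [math]\frac{N(N-1)}{2}[/math] and every vertex has degree [math]N-1[/math].

[code]
from itertools import combinations

def complete_graph_edges(N):
    """Edge set E(G) of the complete graph K_N on vertices 0..N-1."""
    return list(combinations(range(N), 2))

N = 5
edges = complete_graph_edges(N)
degree = {v: sum(v in e for e in edges) for v in range(N)}

print(len(edges) == N * (N - 1) // 2)            # True: N(N-1)/2 edges
print(all(d == N - 1 for d in degree.values()))  # True: each vertex has degree N-1
[/code]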

 

Thus we will see that to each vertex [math]i \in A(G)[/math] there is always an associated Hilbert space, and I express this as

 

[math]H_G = \bigotimes_{i \in A(G)} H_i[/math]
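
To see what this tensor product does in practice, here is a minimal sketch assuming each vertex carries a two-dimensional (spin-1/2) space; the helper name is mine. The dimension of [math]H_G[/math] multiplies with every vertex added.

[code]
import numpy as np
from functools import reduce

# Assume each vertex i in A(G) carries a 2-dimensional (spin-1/2) Hilbert space
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

def graph_state(vertex_states):
    """Kronecker (tensor) product of single-vertex states, one factor per vertex."""
    return reduce(np.kron, vertex_states)

psi_G = graph_state([up, down, up])  # a three-vertex graph
print(psi_G.shape)                   # (8,) -> dimension 2**3
[/code]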

 

From here I construct a way to measure these spin states in the spin network, still speaking about two particles [math](i,j)[/math], by writing the force of interaction between the two states as

 

[math]F_{ij} = \frac{\partial V(r_{ij})}{\partial r_{ij}} \hat{n}[/math]

 

where [math]\hat{n}[/math] is the unit vector. The angle between two spins can be calculated as [math]\mu(\hat{n} \cdot \sigma_{ij}) \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \mu(\frac{1 + \cos \theta}{2})[/math]

 

Thus my force equation can take into account a single spin state but, denoted for two particles [math](i,j)[/math] as we have been doing, it can describe a small spin network

 

[math]F_{ij} = \frac{\partial V(r_{ij})}{\partial r_{ij}} \mu(\hat{n} \cdot \sigma_{ij})^2 = \frac{\partial V(r_{ij})}{\partial r_{ij}} \mathbf{I}[/math]

 

with a magnetic coefficient [math]\mu[/math] on the spin structure of the equation, where [math]\mathbf{I}[/math] is the unit matrix.
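
Since the whole point of squaring the spin factor is that [math](\hat{n} \cdot \sigma)^2 = \mathbf{I}[/math] for a unit vector [math]\hat{n}[/math], here is a quick numerical check of that standard Pauli identity (a sketch only; the angles are arbitrary):

[code]
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def n_dot_sigma(theta, phi):
    """n-hat . sigma for the unit vector with polar angles (theta, phi).
    In components this is [[n3, n_minus], [n_plus, -n3]],
    with n_minus = n1 - i*n2 and n_plus = n1 + i*n2."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return n[0] * sx + n[1] * sy + n[2] * sz

M = n_dot_sigma(0.7, 1.3)
print(np.allclose(M @ M, np.eye(2)))  # True: (n.sigma)^2 = I for unit n
[/code]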

 

I therefore now present a new form of the force equation I created with an interaction term, having come to the realization that we can square our spin-state part:

 

[math]-\frac{\partial V (r_{ij})}{\partial r_{ij}} \mu(\hat{n} \cdot \vec{\sigma}_{ij})^2[/math]

 

[math] = -\frac{\partial V (r_{ij})}{\partial r_{ij}} \begin{bmatrix}\ \mu(n_3) & \mu(n_{-}) \\ \mu(n_{+}) & \mu(-n_3) \end{bmatrix}^2[/math]

 

Sometimes it is customary to represent the matrix in this form:

 

[math]\begin{bmatrix}\ \mu(n_{3}) & \mu(n_{-}) \\ \mu(n_{+}) & \mu(-n_{3}) \end{bmatrix}[/math]

 

This is what appears in our equation above. The entries are just shorthand notation, with [math]n_{\pm} = n_1 \pm in_2[/math]. Notice that there is a magnetic moment coupling on each entry. We will soon see how the Larmor energy can be derived from the previous equation.

 

Sometimes you will find spin matrices not with the magnetic moment description but with a gyromagnetic ratio, so we might have

 

[math]\frac{ge}{2mc}(\hat{n} \cdot \sigma_{ij}) = \begin{bmatrix}\ g \gamma(n_3) & g \gamma(n_{-}) \\ g \gamma(n_{+}) & g \gamma(-n_3) \end{bmatrix}[/math]

 

The compact form of the Larmor energy is [math]-\mu \cdot B[/math], and its negative sign will cancel against the negative sign in my equation

 

[math]-\frac{\partial V (r_{ij})}{\partial r_{ij}} \mu(\hat{n} \cdot \vec{\sigma}_{ij})^2[/math]

 

[math]= -\frac{\partial V (r_{ij})}{\partial r_{ij}} \begin{bmatrix}\ \mu(n_3) & \mu(n_{-}) \\ \mu(n_{+}) & \mu(-n_3) \end{bmatrix}^2[/math]

 

The [math]L \cdot S[/math] part of the Larmor energy is in fact more or less equivalent to the spin notation expression I have been using, [math](\hat{n} \cdot \sigma_{ij})[/math], except that when we carry this over to our own modified approach, we will be accounting for two spins.

 

We can swap our magnetic moment part for [math]\frac{2\mu}{\hbar Mc^2 e}[/math] and what we end up with is a slightly modified Larmor energy

 

[math]\Delta H_L = \frac{2\mu}{\hbar Mc^2 e} \frac{\partial V (r_{ij})}{\partial r_{ij}} (\hat{n}\cdot \sigma_{ij}) \begin{pmatrix} \alpha \\ \beta \end{pmatrix}[/math]

 

This is madness, I can hear people shout: in the Larmor energy equation we don't have [math](\hat{n}\cdot \sigma) \begin{pmatrix} \alpha \\ \beta \end{pmatrix}[/math], we usually have [math](L\cdot S)[/math].

 

Well yes, this is true, but we are noticing something special. You see, [math](L\cdot S)[/math] is really

 

[math]|L| |S|\cos \theta[/math]

 

This involves the cosine of the angle between the two vectors. What is [math](\hat{n}\cdot \sigma) \begin{pmatrix} \alpha \\ \beta \end{pmatrix}[/math] again? We know this; it gives the angle between two spin vectors through

 

[math]\frac{1 + \cos \theta}{2}[/math]

 

So by my reckoning, this seems a perfectly consistent approach.

 

Now that we have derived this relationship, it adds some texture to the original equations. If we return to the force equation, one might want to plug in some position operators - so we may describe how far particles are from each other by calculating the force of interaction - but as we shall see soon, if the lengths of the triangulation between particles are all zero, then this must imply the same space state, or position state, for your whole [math]N[/math]-particle system. We will use a special type of uncertainty principle to denote this, called the triangle inequality, which speaks about the space between particles.

 

As the distances between particles shrink, our interaction term becomes stronger as well; the force between particles comes at the cost of extra energy being required. Indeed, for two particles [math](i,j)[/math] to occupy the same position [math]x[/math] requires a massive amount of energy, perhaps something on the scale of the Planck energy, though I have not calculated this.

 

In general, fundamental interactions do not come in from great distances and focus to the same point, or onto the same trajectories; this behaviour has a special name, Liouville's theorem. Of course, particles can be created from a point, but that is a different scenario. Indeed, in this work I am attempting to build a picture which requires just that: the gradual separation of particles from a single point by a vacuum appearing between them, forced by a general instability caused by the uncertainty principle in our phase space.

 

As I have mentioned before, we may measure the gradual separation of particles using the Lyapunov exponent [math]\lambda[/math] (sketched numerically further below), under which an initial separation [math]\epsilon[/math] grows as

 

[math]|\delta(t)| \approx \epsilon e^{\lambda t}[/math]

 

and for previously attached systems emanating from the same origin, we may even speculate on the importance of the correlation function

 

[math]\langle\phi_i, \phi_j\rangle = e^{-mD}[/math]

 

where [math]D[/math] is the distance. Indeed, you may even see the graphical energy in terms of the Ising model, which measures the background energy relative to the spin state [math]\sigma_0[/math] - or, said more correctly, the background energy

 

[math]\sum_N \sigma_{(1,2,3...)}[/math]

 

acts as a coefficient of [math]\sigma_0[/math]. Thus the energy is represented by a Hamiltonian of spin states

 

[math]\mathcal{H} = \sigma(i)\sigma(j)[/math]
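
Two tiny numerical sketches of the tools just named, purely as illustrations and not as parts of the model: an estimate of a Lyapunov exponent from the exponential separation of two nearby trajectories (I use the logistic map as a stand-in system), followed by an Ising-type energy summed over the edges of a small triangular graph with the coupling set to 1.

[code]
import random
import numpy as np

# --- Lyapunov exponent: fit |delta(t)| ~ epsilon * exp(lambda * t) ---
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

eps = 1e-12
x, y = 0.3, 0.3 + eps
log_sep = []
for t in range(30):
    x, y = logistic(x), logistic(y)
    log_sep.append(np.log(abs(y - x)))

lam = np.polyfit(range(30), log_sep, 1)[0]
print(lam)  # roughly ln 2 ~ 0.69 for the fully chaotic logistic map

# --- Ising-type energy on a graph: H = sum over edges of sigma_i * sigma_j ---
def ising_energy(spins, edges, J=1.0):
    return J * sum(spins[i] * spins[j] for i, j in edges)

edges = [(0, 1), (1, 2), (0, 2)]                    # three neighbouring points
spins = [random.choice([-1, +1]) for _ in range(3)]
print(spins, ising_energy(spins, edges))
[/code]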

 

Now, moving on to the implications of the uncertainty principle in our triply intersected phase space (with adjacent edges sometimes labelled [math](p,q,r)[/math]), there is a restriction that [math](p+q+r)[/math] must be even and that none of the three may be larger than the sum of the other two. A simpler way of stating this inequality: [math]a[/math] must be less than or equal to [math]b+c[/math], [math]b[/math] less than or equal to [math]a+c[/math], and [math]c[/math] less than or equal to [math]a+b[/math]. A small check of these conditions is sketched below.
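
Here is that small check, a sketch of the two admissibility conditions (even sum, and each label no larger than the sum of the other two):

[code]
def admissible(p, q, r):
    """Spin-network edge labels (p, q, r): p+q+r must be even and
    no label may exceed the sum of the other two (triangle inequality)."""
    even = (p + q + r) % 2 == 0
    triangle = p <= q + r and q <= p + r and r <= p + q
    return even and triangle

print(admissible(2, 3, 3))  # True
print(admissible(1, 1, 4))  # False: 4 > 1 + 1
print(admissible(1, 2, 2))  # False: 1 + 2 + 2 = 5 is odd
[/code]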

 

It actually turns out that this is really a basic tensor algebra relationship of the irreducible representations of [math]SL(2,C)[/math], according to Smolin. If every length between points is necessarily zero, then we must admit some uncertainty (an infinite degree of uncertainty) unless some spacetime appeared between the points. Indeed, because every particle at the very first instant of creation occupied the same space, we may presume the initial conditions of the big bang were highly unstable. This holds within the high-temperature range and can be justified by applying a strong force of interaction in my force equation. The triangle inequality is at the heart of spin networks and current quantum gravity theory.

 

For spins that do not commute, i.e. they display antisymmetric properties, there are a number of ways of describing this with some traditional mathematics. One way will be shown soon.

 

Spin has close relationships with antisymmetric mathematical properties. An interesting way to describe the antisymmetry between two spins, in the form of Pauli matrices attached to particles [math]i[/math] and [math]j[/math], is as an action on a pair of vectors, assuming the vectors in question are spin vectors.

 

This is actually a map, taking the form of

 

[math]T_x M \times T_x M \rightarrow \mathbb{R}[/math]

 

This is a map acting on a pair of vectors. In our case, we will arbitrarily choose these two to be eigenvectors, derived from studying spin along a certain axis. Here, our eigenvectors will be along the [math]x[/math] and [math]z[/math] axes, which always yield the corresponding spin operator.

 

[math](d \theta \wedge d\phi)(\psi^{+x}_{i}, \psi^{+z}_{j})[/math]

 

with an abuse of notation in my eigenvectors.

 

It is a 2-form (or bivector) which results in

 

[math]= d\theta(\sigma_i)d\phi(\sigma_j) - d\theta(\sigma_j)d\phi(\sigma_i)[/math]

 

This is a result where [math]\sigma_i[/math] and [math]\sigma_j[/math] do not commute.
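
A minimal sketch of that antisymmetry, writing each tangent vector simply by its [math](\theta, \phi)[/math] components (the component values below are arbitrary stand-ins for the eigenvectors, not computed from them):

[code]
def wedge(u, v):
    """(d theta ^ d phi)(u, v) = u_theta * v_phi - v_theta * u_phi
    for tangent vectors given as (theta-component, phi-component)."""
    return u[0] * v[1] - v[0] * u[1]

u = (1.0, 0.0)   # stand-in components for the +x eigenvector
v = (0.5, 2.0)   # stand-in components for the +z eigenvector

print(wedge(u, v), wedge(v, u))     # 2.0 -2.0 : swapping arguments flips the sign
print(wedge(u, v) == -wedge(v, u))  # True, and wedge(u, u) == 0
[/code]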

 

The following work will demonstrate a way to mathematically represent particles converging to a single point, highlighting why uncertainty at the big bang is inherently important.

 

We should remind ourselves that there are three neighbours which form a triangle in our phase space. Our original phase space was constructed from Fotini's approach for pairwise interactions, which came with the factor [math]\frac{N(N-1)}{2}[/math]. It is still quite convenient not to involve any other particle yet, just our simple two-particle system; more specifically, two quantum harmonic oscillators. It seems a normal approach, according to Fotini, to treat the energy of the system as a pair of interactions given as [math](i,j) \equiv k[/math], where [math]k \in \mathcal{I}[/math] and [math]\mathcal{I}[/math] is the set of interactions. Using this approach, I construct a Hamiltonian which describes the convergence of two oscillators onto a single neighbouring point/position. First I begin with the simple form of the Hamiltonian

 

[math]\mathcal{H} = \sum_i E_{i_{(x,y)}} + \sum_{k \in \mathcal{I}} h_k + x \Leftrightarrow y[/math]

 

where [math]h_k[/math] is a Hermitian operator. This equation describes the Hamiltonian of our pairwise interacting system, which can be exchanged between particle [math]i[/math] occupying, say, position [math]x[/math] and particle [math]j[/math] in position [math]y[/math]. These two particles form two sides of the triangle, so if we invoke the idea of two particles converging to a single point, the position [math]z[/math], then it will follow the transformation [math](x,y) \rightarrow z[/math]. Before I do this, since I am working in a phase space modelled as a spin network, it makes sense to exchange the energy term in the Hamiltonian for [math]\sigma(i)\sigma(j)[/math], which is just the Ising energy. So our Hamiltonian would really look like:

 

[math]\mathcal{H} = \sum_{ij}\sigma(i)\sigma(j) + \sum_{k \in \mathcal{I}} h_k + x \Leftrightarrow y[/math]

 

Now, for a Hamiltonian describing two particles converging on the adjacent edge [math]E(G)[/math], we should have

 

[math]\mathcal{H} = \sum_{ij}\sigma(i)\sigma(j) + \sum_{k \in \mathcal{I}} h_k + (x,y) \Leftrightarrow z[/math]

 

This is one of a few possibilities; there are six possible solutions in all for different coordinates. The spins in our space assign energy to our particles [math](i,j)[/math]; in fact, perhaps the most important observation of the model we are using is that energy is assigned to points in this space. As has been mentioned before, if [math]A(G)[/math] are the adjacent vertices and [math]E(G)[/math] are the neighbouring edges, then on each edge there is some energy assigned in our Hilbert space. It seems, then, that you can really only deal with energy if there are adjacent vertices and neighbouring edges to think about. Remember, I am saying it might be possible to state that the uncertainty principle could have tempted spacetime to expand, but this was because there was really no spacetime, no degrees of freedom for energy to move in - which seems to be the way nature intended. So if there are no degrees of freedom, we cannot really think about energy normally in our model, since we define energy as assigned to points in a Hilbert space, which deals with a great many more particles/points. But for this thought experiment we have chosen two particles and another possible position for convergence, so the equation

 

[math]H = \sum_{ij}\sigma(i)\sigma(j) + \sum_{k \in \mathcal{I}} h_k + (x,y) \rightarrow z[/math]

 

actually looks very innocent. But it cannot happen in nature, not normally. Nature strictly refuses to let two objects converge to a single point like [math](x,y) \rightarrow z[/math]. One way to understand why is the force required to make two objects with angular momentum occupy the same region of space. I won't recite it again right below my OP, but my force equation along a spin axis could determine such a force, or at least the force required to do so - which in hindsight would even seem impractical, thinking about it... But it does give us some insight into what kind of conditions we might consider mathematically if somehow the singularity of the big bang can be overcome with some solution. My force equation with the spin between two vectors would state that as the angle between the vectors closes in to complete convergence, the force should increase exponentially. I haven't come to an equation which describes this exponential increase; however, I do believe this is what experiment would agree with.

 

The same is happening in our Hamiltonian. The force equation, with its rapid increase of energy, is proportional to the Hamiltonian experiencing an increase of energy from the spin terms [math]\sigma(i)\sigma(j)[/math] through its drastic transformation [math](x,y) \rightarrow z[/math]. In field theory, this would be the same as saying that the distortions of some quantum field(s) are converging to a single point in spacetime.

 

Let's study this equation a bit more:

 

[math]\mathcal{H} = \sum \sigma (i) \sigma (j) + \sum_{k \in \mathcal{I}} h_k + (x,y) \rightarrow z[/math]

 

What we have in our physical set-up above is some particle oscillations which, presumably under a great deal of force, are measured to converge to position [math]z[/math]. In our phase space, we are using the triangulation method of dealing with the organization of particles. At [math]z[/math] we may assume the presence of a third spin state; let's denote it as [math]\sigma_z \in (-1,+1)[/math], which seems a favourable way to mathematically represent the spin state of a system, meaning quite literally ''the spin state at vertex z''. [1] Let us just quickly imagine that, for any of the positions [math](x,y,z)[/math], making a particle move to a position which another particle already inhabits requires a force along a spin axis. (I can't stress enough that this is not what happens in nature; this is only a demonstration to explain things better later. Sometimes working backwards from perhaps illogical presumptions can lead to a better argument.) The calculation to measure the angle between two spin states is

 

[math]\mu(\hat{n} \cdot \sigma_{ij}) \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \frac{1 + \cos \theta}{2}[/math]

 

Thus my force equation can take into account a single spin state, but denoted for two particles [math](i,j)[/math], so you may deal with either spin respectively.

 

[math]F_{ij} = \frac{\partial V(r_{ij})}{\partial r_{ij}} \mu(\hat{n} \cdot \sigma_{ij})^2 = \frac{\partial V(r_{ij})}{\partial r_{ij}} \mathbf{I}[/math]

 

But perhaps more importantly, you may decompose the equation for both particles. Let us say particle [math]i[/math] is in position/vertex [math]x[/math] and particle [math]j[/math] is in position [math]y[/math], meaning our final spin state is [math]z[/math]. In the force equation, making all the lengths of your phase space go to zero means that you are merging your spin states together. Hopefully this can be intuitively imagined, but here is a good diagram provided by wiki: http://en.wikipedia....pin_network.svg If we stood in the z-vertex and made the xy-vertices merge with the zth, [math](xy) \rightarrow z[/math], then obviously the lengths of each side would tend to zero. This means that, while the force between particles may increase by large amounts, the angle between the vectors also goes to zero. The unit length, or unit vector, which separates particles from an origin on an axis will also tend to zero. Indeed, if you draw a graph and make the [math]xy[/math]-axes the two lengths of both particles [math]i[/math] and [math]j[/math], where the origin is the vertex spin state [math]z[/math], then making the lengths go to zero would be like watching the [math]xy[/math] axes shrink and fall into the origin. So when complete convergence has been met, the force equation has been completely stripped of its former glory. We no longer have an angle separating spin states, nor can we speak about unit vectors, because they have shrunk as well. Using a bit of calculus, we may see that

 

[math]F_{ij} = \frac{\partial V(r_{ij})}{\partial r_{ij}}\lim_{\hat{n} \rightarrow 0} \hat{n}[/math]

 

Then it naturally follows that the force once describing the separation of particles no longer exists, because anything multiplied by zero is of course zero. Here we have violated some major principles of quantum mechanics, namely the uncertainty principle and the fact that particles do not converge like this. Making more than one particle occupy the same space is like saying that either particle has a definite position, and this is of course forbidden by the quantum mechanical cornerstone, the uncertainty principle. May we then speculate that the universe was born of uncertainty? Uncertainty has massive implications for statistical physics. At the beginning of the universe, most physicists would agree that statistical mechanics dominates the quantum mechanical side... quantum mechanics is after all a statistical theory at best. Perhaps, then, there is no better way to imagine the beginning of the universe than through the eyes of Heisenberg?

 

[1] - http://www.math.bme....swork/ising.pdf

 

I have more to post

Edited by Aethelwulf
Posted (edited)

In my thread, I explain how (at least) one problem with the unification of all physics is that when you wind the clock of the universe back to the big bang, you come to a point with zero spacetime. This condition - the point from which everything is believed to have come, called the big bang - has been known to be a problem for physics for a while, because the more you squeeze particles into a single confined position (or point), the more you inevitably violate the uncertainty principle. One of the consequences of doing this, I explained, is that the force required would be tremendously large, and I derived a force equation to help explain this phenomenon:

 

[math]F_{ij} = \frac{\partial V(r_{ij})}{\partial r_{ij}} \mu(\hat{n} \cdot \sigma_{ij})^2 = \frac{\partial V(r_{ij})}{\partial r_{ij}} \mathbf{I}[/math]

 

This took advantage of a spin network to also explain an uncertainty of the spacetime - keep in mind that [math]\hat{n}^{2}_{i} = 1[/math].

 

[math]\nabla \times \vec{F}_{ij} = \begin{vmatrix} \hat{n}_1 & \hat{n}_2 & \hat{n}_3 \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ \hat{F_x} & \hat{F}_y & 0 \end{vmatrix}[/math]

 

(Incidentally, the lowercase [math]ij[/math] denotes particles 1 and 2 in the [math]F_{ij}[/math] term and shouldn't be mistaken for the unit vectors; however, if they were, they would work out in the same columns as found in the determinant matrix.)

 

This gives

 

[math]\nabla \times \vec{F}_{ij} = -\frac{\partial F_y}{\partial z} \hat{n}_1 + \frac{\partial F_x}{\partial z} \hat{n}_2 + (\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}) \hat{n}_3[/math]

 

Now, since [math]F_x[/math] and [math]F_y[/math] have no [math]z[/math]-dependence here, this just gives

 

[math]\nabla \times \vec{F}_{ij} = (\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}) \hat{n}_3[/math]

 

and

 

[math]\nabla \times \vec{F}_{ij} \cdot \hat{n} = (\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}) [/math]
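
For anyone wanting to check the reduction above, here is a sketch using sympy with [math]F_z = 0[/math] and [math]F_x, F_y[/math] taken as arbitrary functions of [math]x[/math] and [math]y[/math] only (the same assumptions as in the determinant), confirming that only the [math]\hat{n}_3[/math] component survives:

[code]
import sympy as sp

x, y, z = sp.symbols('x y z')
Fx = sp.Function('F_x')(x, y)   # no z-dependence, as assumed above
Fy = sp.Function('F_y')(x, y)
Fz = sp.Integer(0)              # F_z = 0

curl = sp.Matrix([
    sp.diff(Fz, y) - sp.diff(Fy, z),   # n1 component
    sp.diff(Fx, z) - sp.diff(Fz, x),   # n2 component
    sp.diff(Fy, x) - sp.diff(Fx, y),   # n3 component
])
print(curl)  # [0, 0, Derivative(F_y, x) - Derivative(F_x, y)]
[/code]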

Edited by Aethelwulf
Posted (edited)

The Larmor equation is

 

[math]\Delta H = \frac{2\mu}{\hbar Mc^2 e} \frac{\partial V(r_{ij})}{\partial r_{ij}} ( L \cdot S)[/math]

 

What I kept deriving was:

 

[math]\vec{F}_{ij} \cdot \hat{n} = \frac{\partial V(r_{ij})}{\partial r_{ij}}[/math]

 

What we really need is the original derivation

 

[math]\vec{F}_{ij} = \frac{\partial V(r_{ij})}{\partial r_{ij}} \hat{n}[/math]

 

Then taking the curl of F gives

 

[math]\nabla \times \vec{F}_{ij} = \frac{\partial V(r_{ij})}{\partial r_{ij}}[/math]

 

This removes the unit vector because [math]\nabla[/math] again carries dimensions of 1/length.

 

[math]\Delta H_L = \frac{2\mu}{\hbar Mc^2 e} (\nabla \times \vec{F}_{ij}) L \cdot S[/math]

 

Which is a type of new Larmor equation.

Edited by Aethelwulf
Posted

I have been exploring a possibility and wanted to know what others' thoughts were here.....I have more to post

Hi Aethelwulf,

 

I have to admit that I never read posts longer than one screenful. It's just too much. If I may give you some advice: put all of that in an MS Word file or a PDF file and then attach it to the first post in a thread. In the first post, write an abstract for the subject matter. Then people can read the abstract and decide if they want to read it. And if they do want to read it, they can download the file, open it, and print it out so they can read it in comfort instead of reading something so long off a computer screen. Personally, I find it very difficult to read a paper like that.

 

Just a little friendly advice

Posted

My thinking is that the field is getting more complex and it is requiring the immersion of minds into the field long before college. I used to think greed was the road we were on as a species and that the goal of the elites was low stress that would lead to island living, without contaminating nuclear, etc., where they would practice science sans stress. I now think that the fields are so complex that island isolation is necessary to prevent polluted thinking, so the focus can be acute. This would lead to an elite separated from the many specifically so as to zero in on a particular aspect. The population would be immersed from birth. I now believe that is the only way science can progress in understanding the universe.

Although both Russia, through its scientists working alone, and China, evidently building an entire new intellectual segment from the ruins of the 'red shirts' re-education, show that the individual is the key, there must be support for like-thinkers or the brilliant light from one could grow dim shining alone in the dark.

Posted

In a way, all my topics are somehow linked - from my thoughts that time is not fundamental to the idea of spin triangulation and my thoughts on unification using this model. Here is a must-see video explaining this ''emergent spacetime'' from a spacetime triangulation model.

 

Posted

The video does a good job explaining how different ''points'' on a map bring about different geometrical properties. If one applies the Cauchy-Schwarz inequality correctly, then my new idea of bringing the geometric version of the uncertainty principle together with the spin network will tell you how stable a spacetime is.

Posted

Hi Aethelwulf,

 

If one applies the Cauchy-Schwarz inequality correctly, then my new idea of bringing the geometric version of the uncertainty principle together with the spin network will tell you how stable a spacetime is.

What does your analysis reveal with regards to the missing wavelengths in the video?

Posted

Hi Aethelwulf,

 

 

What does your analysis reveal with regards to the missing wavelengths in the video?

 

 

By the way, I am a musician as well, so I can understand his arguments partially without the math.

 

I believe his work tells him there is a beginning to time... within the wavelengths of light which reach us from the most distant galaxies. As we turn our heads to them, the most distant galaxies turn out ''clumpier'' as we turn the hands of time back. This is evidence of a premature and adolescent universe, that something happened which brought about the young universe - a big bang. Galaxies did not always have their almost perfect disk-shaped appearance - our current evidence points to some ''disorder'' in these galaxies as we look further back (1).

 

These ''missing wavelengths'' could be something to do with the superposition principle which dominated the universe (to answer your question in its fullest). As quantum mechanics played a large part in the evolution of our universe, with all those ''possible conditions'' which arose at the beginning of time (without any super-observer there to collapse the wave function), we must expect that certain conditions arose side by side without being detected as our universe evolved and became more... concrete. Perhaps these missing wavelengths are the probabilities which never took form, and as a consequence we are only just noticing these missing pieces today, because today we observe the reality which did come about - the reality we see is the most probable.

 

 

 

(1) - We are often told that the universe arose from near-perfect order - that as time goes by, entropy becomes more and more disordered. This can be challenged in a way - it seems from our perspective that the universe could have become more ordered as time goes on, like a story which is told to us from the beginning and whose finale only makes sense when you finish the book. Order in this universe, which should be taken as synonymous with entropy, is something of a ''relative point of view'' and should have no directionality in time, much as the arrow of time makes no sense when you consider how the order of events breaks down at the subatomic level. Causality, therefore, is not preserved in quantum mechanics, which has actually been proved mathematically.

Posted (edited)

Hi Aethelwulf, CDT is a most interesting concept due to its geometric perspective.

 

I believe his work tells him there is a beginning to time... within the wavelengths of light which reach us from the most distant galaxies. As we turn our heads to them, the most distant galaxies turn out ''clumpier'' as we turn the hands of time back. This is evidence of a premature and adolescent universe, that something happened which brought about the young universe - a big bang. Galaxies did not always have their almost perfect disk-shaped appearance - our current evidence points to some ''disorder'' in these galaxies as we look further back (1).

 

These ''missing wavelengths'' could be something to do with the superposition principle which dominated the universe (to answer your question in its fullest). As quantum mechanics played a large part in the evolution of our universe, with all those ''possible conditions'' which arose at the beginning of time (without any super-observer there to collapse the wave function), we must expect that certain conditions arose side by side without being detected as our universe evolved and became more... concrete. Perhaps these missing wavelengths are the probabilities which never took form, and as a consequence we are only just noticing these missing pieces today, because today we observe the reality which did come about - the reality we see is the most probable.

From what I gathered in the video he indicated that the wavelengths were missing because they were not present at the big bang singularity point and as a result the universe was finite in size.

 

Order in this universe, which should be taken as synonymous with entropy, is something of a ''relative point of view'' and should have no directionality in time, much as the arrow of time makes no sense when you consider how the order of events breaks down at the subatomic level. Causality, therefore, is not preserved in quantum mechanics, which has actually been proved mathematically.

Observation of universal order can be made in 3 different ways. When I studied the maths of finance in the early '90s, I came to the conclusion that correct solutions could only be calculated by working either entirely forward in time or entirely backwards in time. If you used both in the one calculation you would always get the wrong answer. So, while CDT goes a good way towards reconciling geometry and the wave function, the problem of unifying physics seems to boil down to the issue of consistent observation of universal order.

 

Think of it like this: your |+ time| answer must equal your |- time| answer. If you then consider that these discrete and equal answers (calculation procedures) are in a superposition and use both at the same time in your calculation, you cannot get the correct answer (i.e. 50% one way and 50% the other). An example would be one of Doctordick's later efforts, where he had one step towards the end with a function that had limits from -i to +i. This is interesting in the context of the differences between improper integrals and indefinite integrals.

 

You will find the following in chapter 4. http://www.math.wisc.edu/~keisler/calc.html

 

Figure 4.4.7

We do not know how to find the indefinite integrals in this example. Nevertheless the answer is 0 because on changing variables both limits of integration become the same.

This quote and figure were in relation to a cyclic function with a changing variable and a sub-part with infinite limits, similar to a divergent integral. So divergent indefinite integrals can equal 0 if their infinite limits are equal and they are part of a higher-level cycle, but convergent integrals are only improper and equal to 0 by themselves when they have finite limits.

 

The interesting bit is that the cyclic field constructs that have limits from -infinity to +infinity and represent one complete cycle of a higher-level process count, which can only equal 1, appear to have the same properties as improper integrals with defined limits. I have to retrospectively give Doctordick credit for consolidating the translation of Minkowski space to Euclidean space into an integral mathematical structure/process that conforms to the rules of pure calculus and (much of) the status quo. This type of cyclic construct can also be used to represent a time-based continuum and in effect change a linear limit from 0 to infinity into a cyclic limit from 0 to a very large number. Hmmm.

 

The attached image shows the conceptual differences that lie at the core of the problem.

[Attached image: post-2995-0-23362300-1343371031.jpg]

Edited by LaurieAG
Posted

Hi Laurie

 

Keep in mind I was pressed to speculate an answer - I don't wish to be taken too seriously when speculating. I will say one thing: your idea, especially this part

 

''Think of it like this: your |+ time| answer must equal your |- time| answer. If you then consider that these discrete and equal answers (calculation procedures) are in a superposition and use both at the same time in your calculation, you cannot get the correct answer (i.e. 50% one way and 50% the other). An example would be one of Doctordick's later efforts, where he had one step towards the end with a function that had limits from -i to +i. This is interesting in the context of the differences between improper integrals and indefinite integrals.''

 

is very, very similar to what is called the transactional interpretation. One wave packet moves in the positive time direction whilst another moves negatively through time. Both packets are conjugates of each other, and when you take their square they create something real.

Posted

Hi Aethelwulf,

 

keep in mind I was pressed to speculate an answer - I don't wish to be taken too seriously when speculating.

Don't underrate speculation, as many scientific discoveries these days hang off fine threads that only exist because of the amount of data that is rejected and not measured. There is a movement happening in the medical research sphere to get all of the failed research published, not just the successful trials. This would at least give you an opportunity to compare the incremental differences between the failures and the eventual success to see what was actually changed to get the success.

 

is very, very similar to what is called the transactional interpretation. One wave packet moves in the positive time direction whilst another moves negatively through time. Both packets are conjugates of each other, and when you take their square they create something real.

But if there were only one real particle/packet going from point A to point B, observed from both points A and B during its journey, you would get similar maths, but it's still only one real particle being observed.

 

In communications terminology, a single negative packet would be equivalent to sending a packet through your receiver or receiving a packet through your sender. You could send packets through your sender and receiver at the same time, but you would just have two packets (conjugates of each other) going in opposite directions. It seems that the transactional interpretation as described is equivalent to observing a single object from two different locations, or observing two opposite packets departing from one location with one of them going down the incorrect channel.

 

http://en.wikipedia.org/wiki/Transactional_interpretation

 

In TIQM, the source emits a usual (retarded) wave forward in time, but it also emits an advanced wave backward in time; furthermore, the receiver also emits an advanced wave backward in time and a retarded wave forward in time. The phases of these waves are such that the retarded wave emitted by the receiver cancels the retarded wave emitted by the sender, with the result that there is no net wave after the absorption point. The advanced wave emitted by the receiver also cancels the advanced wave emitted by the sender, so that there is no net wave before the emitting point either. In this interpretation, the collapse of the wavefunction does not happen at any specific point in time, but is "atemporal" and occurs along the whole transaction, and the emission/absorption process is time-symmetric. The waves are seen as physically real, rather than a mere mathematical device to record the observer's knowledge as in some other interpretations of quantum mechanics.
Posted (edited)

I've written this before, based on the musings of Dr. Cramer, who had an online page, but it no longer exists.

 

The way the transactional interpretation treats the wave function is with the idea that a positive-time wave and a negative-time wave are able to move from the future to the past and from the past to the future. The wave moving forward in time is the retarded wave function and the one moving back is the advanced wave function; and, as you may guess, the waves are solutions for different quantum information packets which, upon taking the absolute square of the amplitude, define real existing things.

 

The emitter could be an electron radiating a photon, which it does by producing a field. The field is time-symmetric under the Wheeler-Feynman description, which John Cramer describes as a ''time-symmetric combination of a retarded field which propagates into the future and an advanced field which propagates into the past.''

 

He considers a net field which consists of a retarded plane wave [math]F_1[/math]

 

[math]F_1 = e^{i(kr-\omega t)}[/math]

 

for [math]t \geq t_1[/math], where [math]t_1[/math] is the instant of emission. The advanced solution [math]G_1[/math] is simply

 

[math]G_1 = e^{-i(kr - \omega t)}[/math]

 

for [math]t_2[/math], the instant of absorption. The idea is that the absorbing electron responds to the incident retarded field [math]F_1[/math] in such a way that it gains energy, recoils, and produces a new retarded field [math]F_2 = -F_1[/math] which exactly cancels the incident field [math]F_1[/math]. The net field is zero.

 

[math]F_{net} = (F_1 + F_2) = 0[/math]
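
A trivial numerical rendering of that cancellation (only a sketch; the values of [math]k[/math], [math]\omega[/math] and the sampling grid are arbitrary):

[code]
import numpy as np

k, omega = 2.0, 5.0                     # arbitrary wavenumber and angular frequency
r = np.linspace(0.0, 10.0, 200)
t = 1.7

F1 = np.exp(1j * (k * r - omega * t))   # retarded plane wave from the emitter
F2 = -F1                                # absorber's response, F2 = -F1
print(np.allclose(F1 + F2, 0.0))        # True: F_net = F1 + F2 = 0
[/code]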

Edited by Aethelwulf
