Science Forums


Posted
We don't necessarily have [x, y] = xy-yx.
Oh yes we do! This is part of the definition of the Lie bracket.

 

Maybe I misunderstood your somewhat unusual notation. Lemme show you how EVERY text I have ever seen puts it:

 

A Lie algebra [math]\mathfrak{g}[/math] is a vector space together with a skew-symmetric bilinear map [math][\,,\,]: \mathfrak{g} \times\mathfrak{g} \to \mathfrak{g}[/math] that satisfies the Jacobi identity.
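Written out in the same notation, the two axioms being referred to here (skew-symmetry and the Jacobi identity) are:

```latex
[x, y] = -[y, x], \qquad [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0
\quad \text{for all } x, y, z \in \mathfrak{g}.
```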

 

This is what my proof addressed. Perhaps you meant something different?

 

[x, x] = 0 always implies skew-symmetry,
As you can see from the above, [math][X,X]=0[/math] is not part of the definition; rather it is, as I showed earlier, a consequence of it.
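For the record, the two-line derivation: skew-symmetry gives [math][X,X] = -[X,X][/math], hence (when the field does not have characteristic 2)

```latex
2[X, X] = 0 \implies [X, X] = 0.
```

Conversely, if one takes [math][x,x] = 0[/math] as the axiom, then expanding [math]0 = [x+y,\,x+y] = [x,x] + [x,y] + [y,x] + [y,y][/math] by bilinearity gives [math][x,y] + [y,x] = 0[/math] in any characteristic, which is why some texts state the alternating axiom instead.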
Posted

No, I think there's a bit of a misunderstanding on your part. Read the definition of a Lie algebra:

 

Introduction to Lie algebras and ... - Google Books

 

[x, y] does NOT mean "xy - yx". That presupposes we have some sort of operation between the vectors (since a Lie algebra is first of all a vector space) that is analogous to multiplication. For instance, a concrete way of turning a two-dimensional vector space (this is just the vector space F^2, over an arbitrary field F) into a Lie algebra is to define [(a, b), (c, d)] = (ad - bc, 0). This satisfies the axioms for a Lie algebra. If we took the usual multiplication that makes F^2 into a ring (a semilocal Noetherian ring, in fact!) and assumed [x, y] meant xy - yx in that sense, we would be defining the trivial bracket, since that ring is commutative; this is not the bracket operation I defined above. So the bracket does not necessarily have the structure "xy - yx" for two vectors x, y in F^2. Note that the bracket I defined satisfies [(1, 0), (0, 1)] = (1, 0) for these two basis vectors.
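As a quick sanity check, here is a sketch in Python (with integers standing in for an arbitrary field F) verifying that this bracket is skew-symmetric, satisfies the Jacobi identity, and sends the basis pair to (1, 0):

```python
def bracket(u, v):
    """The bracket on F^2: [(a, b), (c, d)] = (a*d - b*c, 0)."""
    (a, b), (c, d) = u, v
    return (a * d - b * c, 0)

def vadd(*vs):
    """Componentwise sum of vectors in F^2."""
    return tuple(sum(c) for c in zip(*vs))

# Basis vectors: [e1, e2] = e1
e1, e2 = (1, 0), (0, 1)
assert bracket(e1, e2) == (1, 0)

# Skew-symmetry on a sample pair: [u, v] = -[v, u]
u, v, w = (2, 3), (5, 7), (11, 13)
assert bracket(u, v)[0] == -bracket(v, u)[0]

# Jacobi identity: [u,[v,w]] + [v,[w,u]] + [w,[u,v]] = 0
jac = vadd(bracket(u, bracket(v, w)),
           bracket(v, bracket(w, u)),
           bracket(w, bracket(u, v)))
assert jac == (0, 0)
```

The checks on sample vectors are illustrative, not a proof; the identities hold for all vectors by direct expansion.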

Posted

Your definition does work in a linear case: for instance, if we consider the set of all n x n matrices over the complex numbers with trace zero, then [x, y] = xy - yx (with the usual matrix multiplication and addition) defines the structure of a Lie algebra on this vector space.
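A small illustration in plain Python (2x2 case): the trace of any commutator vanishes, since tr(XY) = tr(YX), so the traceless matrices are closed under this bracket.

```python
def matmul(A, B):
    """Ordinary matrix product of square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutator(X, Y):
    """[X, Y] = XY - YX."""
    XY, YX = matmul(X, Y), matmul(Y, X)
    n = len(X)
    return [[XY[i][j] - YX[i][j] for j in range(n)] for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# Two traceless 2x2 matrices (elements of sl_2)
X = [[1, 2], [3, -1]]
Y = [[0, 1], [0, 0]]
assert trace(X) == 0 and trace(Y) == 0

# Their commutator is again traceless, so the bracket stays in the space.
Z = commutator(X, Y)
assert trace(Z) == 0
```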

Posted

Yes, well, I confess these are the only Lie algebras I know of; that's why your counterexample threw me. I CAN prove my point for the case you describe, but it is rather technical, and probably of no interest to the general reader. (It involves the concept of a derivation, a.k.a. the Leibniz law.)

 

And, so long as I have my hair shirt on, lemme also confess the following: this is a deep and at times difficult subject. I skated over a lot of detail, and was far from rigorous, for the following reason:

 

Although I don't agree with your rather unkind comment that "people here are bad at math", I acknowledge they are mostly physicists who, reasonably enough, are not interested in the minutiae of mathematics; I was trying (unsuccessfully, it seems) to grab their interest with a brief overview of a theory many of them use daily.

 

Sorry if I mangled things - I don't believe I did as badly as all that...

Posted
Yes, well, I confess these are the only Lie algebras I know of; that's why your counterexample threw me. I CAN prove my point for the case you describe, but it is rather technical, and probably of no interest to the general reader. (It involves the concept of a derivation, a.k.a. the Leibniz law.)

 

I'm relatively sure this doesn't require the concept of a derivation. Maybe; I'm not too sure. I'm familiar with it, if you want to give it a go.

 

 

And, so long as I have my hair shirt on, lemme also confess the following: this is a deep and at times difficult subject. I skated over a lot of detail, and was far from rigorous, for the following reason:

 

Although I don't agree with your rather unkind comment that "people here are bad at math", I acknowledge they are mostly physicists who, reasonably enough, are not interested in the minutiae of mathematics; I was trying (unsuccessfully, it seems) to grab their interest with a brief overview of a theory many of them use daily.

 

Notice I said "bad at math", and this was more or less a comment on the level of mathematical interest here, and perhaps on the large number of people posting crackpot ideas about mathematics (for instance Don Blazys, and some guy attempting to prove pi was 3.3333 or something along those lines). It is rather disappointing that there's not more mature discussion of mathematics, and it is nice to see you attempting such a thing.

 

Sorry if I mangled things - I don't believe I did as badly as all that...

 

You did quite well! Just be careful with definitions next time.

Posted

This, then, is as best I remember it.

 

We will have a Lie group [math]G[/math] with identity (obviously) [math]e[/math], and a vector space that I shall call [math]T_eG[/math]. Let us suppose that [math]X \in T_eG[/math] is a vector; it is moreover a differential operator, call it, oh I dunno, say [math]\sum \nolimits _i \frac{\partial}{\partial x^i}[/math], where the [math]\{x^i\}[/math] are the coordinate functions at [math]e[/math]. (Recall that [math]G[/math] is a manifold!)

 

So consider the derivation property [math]X(f\cdot g) = (Xf)\cdot g + f \cdot (Xg)[/math], where [math]f,\,g[/math] are [math]C^{\infty}[/math] functions [math]G \to \mathbb{R}[/math]. Thus this operator is a derivation, and the derivations are the elements of our algebra. Fine. (The "cdot" is pointwise multiplication, btw.)

 

Now consider [math]XY[/math]. By the above I will have that

 

[math]XY(f \cdot g) = X[(Yf) \cdot g + f \cdot (Yg)] =[/math]

 

[math](XYf) \cdot g + (Yf) \cdot (Xg) +(Xf) \cdot (Yg) +f \cdot (XYg)[/math]

 

Clearly this is NOT a derivation, and therefore not in our algebra. But lookee here....

 

The only reason(s) why not are the second and third terms above. But notice that those offending terms are symmetric in [math]X[/math] and [math]Y[/math], and since these are arbitrary vectors, let's interchange them, giving

 

[math](YXf) \cdot g + (Xf) \cdot (Yg) +(Yf) \cdot (Xg) +f \cdot (YXg)[/math], so by subtracting one from the other, I kill the offending middle terms, but at this cost:

 

[math]XY(f \cdot g) - YX(f \cdot g) = [(XYf) \cdot g - (YXf) \cdot g]+[ f \cdot (XYg)- f \cdot (YX g)][/math]

 

[math]= [(XY-YX)f]\cdot g +f\cdot[(XY-YX)g][/math], which now IS a derivation, and thus an element in our algebra.

 

Convention dictates that the operator on the LHS is written as [math] XY - YX \equiv [X,Y][/math].
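The computation above can be replayed concretely. Here is a sketch in Python, using polynomial vector fields a(x) d/dx on the line as a stand-in for the smooth functions: the composite XY fails the Leibniz rule, while XY - YX satisfies it.

```python
# Polynomials in one variable as coefficient lists: p[i] is the coeff of x**i.
def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def psub(p, q):
    return padd(p, [-a for a in q])

def pderiv(p):
    return [i * a for i, a in enumerate(p)][1:] or [0]

def trim(p):
    """Drop trailing zero coefficients so equal polynomials compare equal."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def field(a):
    """The vector field a(x) d/dx, acting on a polynomial h as h -> a * h'."""
    return lambda h: pmul(a, pderiv(h))

X = field([0, 1])      # x d/dx
Y = field([0, 0, 1])   # x^2 d/dx

f = [1, 2]             # f = 1 + 2x
g = [0, 3, 1]          # g = 3x + x^2
fg = pmul(f, g)

# XY is NOT a derivation: it violates the Leibniz rule on f*g ...
lhs_XY = X(Y(fg))
rhs_XY = padd(pmul(X(Y(f)), g), pmul(f, X(Y(g))))
assert trim(lhs_XY) != trim(rhs_XY)

# ... but XY - YX IS a derivation: Leibniz holds.
def bracket(h):
    return psub(X(Y(h)), Y(X(h)))

lhs = bracket(fg)
rhs = padd(pmul(bracket(f), g), pmul(f, bracket(g)))
assert trim(lhs) == trim(rhs)
```

One pair of test polynomials is enough to exhibit the failure of Leibniz for XY; the identity for XY - YX holds for all f, g by the cancellation shown above.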

 

OK folks?

Posted

Well, okay, not quite. This is moving away from the abstract approach I was really looking for, and not quite answering the question I posed. Consider the following argument:

 

If (L, [,]) is a Lie algebra of dimension two over a field F with dim([L, L]) = 1, and x, y is a basis for L, then [x, y] is nonzero, since it spans [L, L]. Thus we may assume [x, y] = ax for some a in F - {0} (otherwise replace x by [x, y], and replace y by some vector linearly independent of it); then we may also assume a = 1 (otherwise replace y by (1/a)y). Thus we have [x, y] = x.

 

Then using this, you can show it is isomorphic to the Lie algebra in my first post; this is not difficult after the previous step.
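To make the connection explicit, here is a sketch (the coordinate bookkeeping is my own illustration): writing elements of L in coordinates with respect to the basis x, y and extending [x, y] = x bilinearly reproduces exactly the bracket from the first post, so the coordinate map itself is the isomorphism.

```python
def bracket_L(u, v):
    """Bilinear extension of [x, y] = x (with [x, x] = [y, y] = 0),
    where u = (a, b) stands for a*x + b*y.
    [a*x + b*y, c*x + d*y] = (a*d - b*c) [x, y] = (a*d - b*c) x."""
    (a, b), (c, d) = u, v
    return (a * d - b * c, 0)

x, y = (1, 0), (0, 1)
assert bracket_L(x, y) == x   # the normal form [x, y] = x
```

Since this is literally the formula [(a, b), (c, d)] = (ad - bc, 0) from the earlier post, the two algebras are the same up to relabelling.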
