Science Forums

Posted

Hope you all enjoyed your brief respite from Boring Ben. Welcome back to my world.....

 

So, the Lie groups are of some passing interest to physicists; they are also nice to study in their own right.

 

Let's see.

 

The Lie groups describe continuous symmetries. Suppose, for example, I hand you a real physical object, and you want to discover whether it is symmetrical. Your first impulse, let us say, is to rotate it around every which way, making sure you don't blink (a smooth operation). And if you discover that it "looks the same" under each of these rotations, you will report that it has rotational symmetry.

 

Of course, "shape symmetry" may not be all that I am interested in. In fact, physicists use the term symmetry to refer to something rather more subtle. Here's an example:

 

One of the postulates of the Special Theory is that Newton's Laws are invariant under any arbitrary, continuous, linear coordinate transformation. This is a symmetry in the broadest sense - in fact there is a Lie group that describes this symmetry, the details of which may perhaps startle some of you!

 

OK, so there is a class of such transformations, each of which gives you information about a different sort of continuous symmetry.

 

I guess I can turn this around, sort of: there is a class of smooth linear transformations, each of which preserves some symmetry properties of some abstract object. Each set in this class has the algebraic structure of a group, which, in due course, I'll explain.

 

A Lie group is a manifold endowed with the structure of a group (with the group operations smooth, of course). The converse - a group endowed with the structure of a manifold - will serve equally well as a definition, obviously.

 

As it is always nice to start on familiar ground, so to speak, we will start with a pair of real vector spaces, and consider the totality of linear maps (transformations, operators) [math]V \to W[/math]. It is relatively easy to show that this collection of maps is itself a vector space over the reals, but since this fact is at present of no interest to us, we may pass over it. Let's write this vector space as [math]L(V,W)[/math].

 

By my notation, the space [math]L(V,V)[/math] has as elements (vectors) all the maps [math]V \to V[/math]. Of course, I introduce no ambiguity by writing this as [math]L(V)[/math].

 

Now we know that any linear transformation on a real space can be written as a matrix whose entries are real; specifically, for an n-dimensional real space, these transformation matrices are real n x n matrices. We now want to restrict our attention to those maps which are invertible, i.e. they are "isomorphisms" - though here the correct term is automorphism.

 

In terms of the matrix representation of our automorphisms, this must mean that our matrices have non-zero determinant (why, anybody?). By stipulating the existence of all inverses, and recognizing the identity matrix, we will find that the set of all automorphisms, written as matrices, is in fact a group. This group we will call [math]GL(V)[/math], the general linear group.

 

So the group [math]GL(V)[/math] has matrix multiplication as the group operation, the identity [math]I=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}[/math], say, and all inverse matrices such that [math]AA^{-1} = A^{-1}A = I[/math]. Let us observe that, in general, for matrices [math]AB \ne BA[/math].
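
For anyone who likes to see axioms verified with their own eyes, here is a quick numerical sketch of all this (Python with numpy is my choice of tool here, nothing canonical about it, and the particular matrices are arbitrary picks):

[code]
# A numpy sketch of the GL(n,R) group operations described above.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # det = 1, so A is invertible
B = np.array([[0.0, 1.0], [-1.0, 0.0]])  # det = 1, also invertible

I = np.eye(2)

# Non-zero determinant guarantees an inverse exists:
assert abs(np.linalg.det(A)) > 0
A_inv = np.linalg.inv(A)

# A A^-1 = A^-1 A = I (up to floating-point error):
assert np.allclose(A @ A_inv, I) and np.allclose(A_inv @ A, I)

# ...but matrix multiplication is not commutative in general:
print(np.allclose(A @ B, B @ A))  # False
[/code]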

 

Now we will ultimately be rewarded by concentrating not on what groups do, but rather on what they are. The obvious abstraction is to rewrite this group as [math]GL(n,R)[/math] to signify this. Thus

 

the general linear group [math]GL(n,R) [/math] is the group of all real n x n matrices with non-zero determinant (we call such a matrix "non-singular", btw). This is an example of a Lie group, though to show this, we need to convince ourselves it is indeed a manifold.

 

So, let [math] A \in GL(n,R)[/math] be a group element, a matrix as defined above. [math]A[/math] is an n x n matrix, i.e. there are [math]n^2[/math] entries in [math]A[/math], which I will write as [math]a^i_j[/math], where i labels the rows, and j labels the columns, say. For ease of notation let's set [math] n^2 = m[/math].

 

Then evidently there is a mapping [math]GL(n,R) \to R^m,\,\, A \mapsto (a^1, a^2,...., a^m)[/math]. Now provided we are consistent in our mapping - say the i-th slot in our m-tuple always receives the jk-th entry of our matrix - we see that the m-tuple [math](a^1,a^2,....,a^m)[/math] uniquely determines a matrix in [math]GL(n,R)[/math].

 

Thus we have a bijection [math]GL(n,R) \simeq R^m[/math] (strictly, onto a subset of [math]R^m[/math], since we threw away the matrices with vanishing determinant).
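
To make the flattening concrete, here's a small sketch of the matrix-to-m-tuple identification (numpy again, purely illustrative):

[code]
# Flattening an n x n matrix into an m-tuple (m = n^2) and back,
# as in the identification GL(n,R) <-> (a subset of) R^m.
import numpy as np

n = 3
A = np.random.rand(n, n) + n * np.eye(n)  # diagonally dominant => invertible

m_tuple = A.flatten()            # the point (a^1, ..., a^m) in R^m, m = n^2
assert m_tuple.shape == (n * n,)

A_back = m_tuple.reshape(n, n)   # the consistent "i-th slot <-> jk-th entry" rule
assert np.array_equal(A, A_back)  # the m-tuple determines the matrix uniquely
[/code]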

 

If we can show continuity of this map, we will have a homeomorphism.

 

Without attempting any degree of rigour, I offer this. Let [math]B[/math] be the matrix with entries [math]b^i_j[/math]. Then whenever the point [math](b^1,b^2,....,b^m) \in R^m[/math] is arbitrarily close to the point [math](a^1,a^2,....,a^m)[/math], so the matrix whose entries are [math]b^i_j[/math] is arbitrarily close to the matrix whose entries are [math] a^i_j \in GL(n,R)[/math].

 

This would imply a continuous isomorphism - a homeomorphism - so this group would indeed be a manifold.

 

PS. I am not at all satisfied with this last bit (though it's not entirely wrong) - see if I can cook up a more rigorous argument

Posted
Hope you all enjoyed your brief respite from Boring Ben. Welcome back to my world.....
Welcome back Ben! :)

 

(why, anybody?)
Gosh, it is one of my dogmas of faith; it just plain couldn't be otherwise. Don't be so blasphemous as to question the truth of such a fundamental tenet: you'd be asking me to scavenge back through memories of theorems that have faded away, their place taken by the manifest self-evidence that the linear transformation is injective and surjective, one-to-one and onto, iff the determinant is non-zero!


 

PS. I am not at all satisfied with this last bit (though it's not entirely wrong) - see if I can cook up a more rigorous argument
It would involve specifying what the metric on [math]GL(V)[/math] is, and I guess the wisest thing would be to give it in terms of the metric on images of the same given argument.

 

What I'll be most looking forward to are the details of the exponential mapping from the Lie algebra to the group, especially in the case of [math]SO(3,\mathbb{R})[/math], but I don't want to solicit overstepping anything just and wise along the righteous path, so I'll let you work toward that goal.

Posted
the linear transformation is injective and surjective, one-to-one and onto, iff the determinant is non-zero!
All true, but being a pedant, I do this:

 

By the definition of a matrix inverse, [math]MM^{-1} = M^{-1}M = I[/math], the identity matrix.

 

By the property of determinants we must have that [math]\text{det}(MM^{-1}) = \text{det}(I) =1[/math].

 

Also by the property of determinants we have that [math] \text{det}(MM^{-1}) = (\text{det}M)(\text{det}M^{-1}) = \text{det}(I)=1 \Rightarrow \text{det}(M^{-1}) = \frac{1}{\text{det}M} \Rightarrow \text{det}M \ne 0[/math]
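
And for the unbelievers, a numerical spot-check of that chain of equalities (numpy, with an arbitrary non-singular matrix of my choosing):

[code]
# Check det(M^-1) = 1/det(M) for an invertible M, and det(M M^-1) = det(I) = 1.
import numpy as np

M = np.array([[3.0, 1.0], [2.0, 2.0]])  # det = 4, non-singular
d = np.linalg.det(M)

assert not np.isclose(d, 0.0)
assert np.isclose(np.linalg.det(np.linalg.inv(M)), 1.0 / d)
assert np.isclose(np.linalg.det(M @ np.linalg.inv(M)), 1.0)
[/code]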

 

What I'll be most looking forward to are the details of the exponential mapping from the Lie algebra to the group,
Yeah, this is a neat trick, but note this: you can ONLY exponentiate a diagonal matrix, so do you really want me to go into these details?

 

Anyway, we have the Lie group [math]GL(n,\mathbb{R})[/math] of real n x n matrices with non-zero determinant. Notice first that this determinant, being integer, may be positive or negative. Now since the identity matrix has determinant = 1, we may "group" together the elements (matrices) with determinant exactly 1 - a set which of course contains the identity - to form a subgroup, namely [math]SL(n,\mathbb{R})[/math], which is called the "special linear group". (Don't forget the elements of this group are still linear transformations on a real vector space.)

 

This is an important construction. Notice also that, since the group [math]GL(n,\mathbb{R})[/math] has but a single identity, and that identity has determinant +1, those matrices with negative determinant cannot possibly form a subgroup.

 

Anyway, it's my turn to cook dinner, so...... apron on, and away I go

Posted

I hadn't thought of that! I was trying to relate the determinant to matters of linearly independent vectors and kinda supposing one would have to argue in terms of the n-volumes!

 

you can ONLY exponentiate a diagonal matrix
Sez who? The generators of rotations aren't diagonal, as is true in most cases (indeed they don't commute when there's more than one of them). The exponential can be defined by its Taylor series, so the argument need only have n-th powers - it's done all the time!
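
For instance, here is a sketch of the Taylor-series exponential applied to a decidedly non-diagonal matrix - the generator of rotations about the x-axis (numpy, and the truncation depth is my arbitrary choice):

[code]
# The matrix exponential via its Taylor series, applied to a NON-diagonal matrix.
import numpy as np

def expm_taylor(X, terms=30):
    """exp(X) = I + X + X^2/2! + ..., truncated after `terms` terms."""
    result = np.eye(X.shape[0])
    power = np.eye(X.shape[0])
    for k in range(1, terms):
        power = power @ X / k   # accumulates X^k / k!
        result = result + power
    return result

theta = 0.7
L_x = np.array([[0.0, 0.0,  0.0],
                [0.0, 0.0, -1.0],
                [0.0, 1.0,  0.0]])   # skew-symmetric, certainly not diagonal

R = expm_taylor(theta * L_x)

# It agrees with the closed-form rotation matrix about the x-axis:
expected = np.array([[1, 0, 0],
                     [0, np.cos(theta), -np.sin(theta)],
                     [0, np.sin(theta),  np.cos(theta)]])
assert np.allclose(R, expected)
[/code]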

 

Notice first that this determinant, being integer, may be positive or negative.
Why integer? If the entries can be real, so can the determinant!!
Posted
Sez who? The generators of rotations aren't diagonal, as is true in most cases (indeed they don't commute when there's more than one of them). The exponential can be defined by its Taylor series, so the argument need only have n-th powers - it's done all the time!
Oops, as always, you are right. I mis-remembered my notes. Let us leave that for now.

 

Why integer? If the entries can be real, so can the determinant!!
Right again, I mis-spoke (curse you; may you awake a vegetarian)

 

Anyway, blushing fiercely.......

 

So, recall we have our first Lie group, the general linear group of real non-singular n x n matrices, which we called [math]GL(n,R)[/math].

 

First thing to note is that often, though not always, when we see an "R" given for any Lie group, we may replace it with a "C" to denote the fact we are working over the complex numbers.

 

Second, that the groups [math]GL(n,R),\,GL(n,C)[/math] are the mummy and daddy of all the matrix Lie groups, since all matrix Lie groups come to us as subgroups of one of them. We had a look at one of them, the group of all real n x n matrices with determinant equal to 1, [math]SL(n, \mathbb{R})[/math]. By the above, we may also have its big sister [math]SL(n,\mathbb{C})[/math].

 

Now, the transformations of particular interest to physicists are those that preserve the "length" of vectors and the "angle" between them - in the jargon, they preserve the norm and the inner product. These transformations are represented by the groups whose elements are the matrices that satisfy [math]AA^{\dagger} = A^{\dagger}A = I[/math], where the "dagger" means the transpose of the matrix whose entries have been complex-conjugated.

 

Since complex conjugation has no meaning for real numbers, we recognize 2 general constructions, the unitary groups [math]U(n)[/math] and the orthogonal groups [math]O(n)[/math] which latter simply satisfy [math]A^TA = AA^T = I[/math] (Superscripted "T" means transpose, btw).

 

Notice this: in the first case I have no need to specify the field, as the term "unitary" includes it. In the second case I simply choose not to; although orthogonal groups (i.e. transposed but not conjugated) can be defined over the complex field, in the words of my tutor, "only a madman or a masochist would study them".

 

So the group [math]U(n)[/math] consists of the unitary matrices with entries from the complex field and determinant [math]\pm1[/math]. (Proof, anyone?) As before, we find the subgroup [math]SU(n)[/math] with [math]\text{det} = +1[/math]. Likewise, the group [math]O(n)[/math] is the group of real orthogonal matrices with [math]\det = \pm1[/math] with subgroup [math]SO(n),\,\, \text{det} = +1[/math].
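
A quick numerical check of the orthogonal case (a random orthogonal matrix can be had from a QR factorization; numpy, as before, is just my choice of tool):

[code]
# Checking the defining condition and the determinant claim for O(n).
import numpy as np

rng = np.random.default_rng(0)

# QR-factoring a random real matrix gives an orthogonal Q:
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

assert np.allclose(Q.T @ Q, np.eye(4))         # Q^T Q = I, so Q is in O(4)
assert np.isclose(abs(np.linalg.det(Q)), 1.0)  # det = +1 or -1
[/code]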

 

Not only are these groups important in physics, they are relatively easy to work with, but not so easy as to render them trivial. For this reason, I propose to restrict my attention to them.

Posted
These transformations are represented by the groups whose elements are the matrices that satisfy [math]AA^{\dagger} = A^{\dagger}A = I[/math], where the "dagger" means the transpose of the matrix whose entries have been complex-conjugated.
Oooops again! :) I guess you meant the Hermitian conjugate here but no worry, the show goes on...;)

 

in the words of my tutor "only a madman or a masochist would study them".
We want you to work out everything about [imath]O(n, \mathbb{C})[/imath] and [imath]SO(n, \mathbb{C})[/imath] for us!

 

So the group [math]U(n)[/math] consists of the unitary matrices with entries from the complex field and determinant [math]\pm1[/math]. (Proof, anyone?)
Nah, I won't attempt to prove something that ain't true.

 

Instead I can prove that the determinant must be unimodular: the determinant of the Hermitian conjugate is the complex conjugate of the determinant, so [imath]d^*d=1[/imath], where [imath]d[/imath] is the determinant.

Posted
Oooops again! :) I guess you meant the Hermitian conjugate here but no worry, the show goes on...;)
I have no idea what you mean here. Do we really need to get into operator theory? I had rather hoped not.

 

OK, I confess to being a little sketchy about the unitary operators/matrices. Lemme make amends....

 

Given the operator [math]H[/math] acting on a complex vector space, it is defined to be Hermitian iff [math] H= H^{\dagger} [/math] where [math] H^{\dagger} = (H^*)^T[/math] again by definition. (The "star" denotes complex conjugation, btw)

 

By contrast, the operator [math]U[/math], again acting on a complex vector space, is said to be unitary iff [math]U^{\dagger}= U^{-1} \Rightarrow U^{\dagger}U = UU^{\dagger} = U^{-1}U = UU^{-1} = I[/math].

Posted

Alright, given your edit just after I was reading, it seems we are in less of a quarrelsome mood now. :)

 

...where [math] H^{\dagger} = (H^*)^T[/math] again by definition. (The "star" denotes complex conjugation, btw)
I agree with this definition in terms of the symbol, but it shows that [imath]H^{\dagger}[/imath] means more than transposing, and I've always called it the Hermitian conjugate or the adjoint, with the definition of "Hermitian matrix" becoming "equal to its own HC/adjoint" - except that the two names aren't synonymous for operators in general, but only in finite dimensionality (offhand, I can't remember for sure from my QM courses whether countable dimension is sufficient for them to coincide).

 

Hermitian conjugation in [imath]GL(n, \mathbb{C})[/imath] obviously reduces to complex conjugation when [imath]n=1[/imath] (IOW in [imath]\mathbb{C}[/imath]), so it is to all effect the generalization of complex conjugation, and maybe some folks call it just conjugation. I've caught many a QM textbook for physicists in the act of saying Hermitian even for the general case, in which it ought to be self-adjoint.

Posted

Yeah, sorry about the edit - I typed up some stuff late last night, which dawn (and complete sobriety) informed me was rubbish.

 

So, given the identity [math]U^{\dagger}U = I[/math], we have [math]\text{det}U^{\dagger}\,\text{det}U = 1[/math].

 

Since obviously [math]\text{det}U^T = \text{det}U[/math] then

 

[math]\text{det}U^{\dagger} = \text{det}U^* = (\text{det}U)^* \Rightarrow \text{det}(U^{\dagger}U) = (\text{det}U)^*\text{det}U [/math]

 

which implies that [math]\sqrt{(\text{det}U)^*\text{det}U} \equiv |\text{det}U| = \sqrt{\text{det}I} =1[/math].
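
A numerical spot-check, using the fact that QR-factoring a complex matrix yields a unitary factor (numpy, illustrative only):

[code]
# A random unitary matrix really does have |det U| = 1,
# while det U itself need not be real.
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(Z)   # U is unitary: U^dagger U = I

assert np.allclose(U.conj().T @ U, np.eye(3))
d = np.linalg.det(U)
assert np.isclose(abs(d), 1.0)   # |det U| = 1
print(d)                         # generally complex, i.e. some e^{i theta}
[/code]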

Posted

I apologize for multiple posting, but I wanted to add this.

[math]\text{det}U^{\dagger} = \text{det}U^* = (\text{det}U)^* \Rightarrow \text{det}(U^{\dagger}U) = (\text{det}U)^*\text{det}U \{=1\}[/math]

Now since we have that [math](e^{i \theta})^* =e^{-i \theta}[/math] and that [math]e^{i \theta} e^{-i \theta} = e^{i \theta - i \theta} = e^0 = 1[/math], we might well ask if there is at least one Lie group with determinant [math]e^{i \theta}[/math].

 

The answer is yes; for example, a 1 x 1 matrix has as determinant its single entry (obviously). So there is a unitary Lie group whose elements are precisely the unimodular complex numbers [math]e^{i \theta}[/math]. This group is called the unitary group [math]U(1)[/math].
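
A trivial sketch of that group law, if anyone wants it:

[code]
# U(1): the unimodular complex numbers e^{i theta} form a group under
# multiplication -- products and inverses stay on the unit circle.
import numpy as np

a, b = np.exp(1j * 0.3), np.exp(1j * 1.9)
assert np.isclose(abs(a * b), 1.0)       # closure
assert np.isclose(a * np.conj(a), 1.0)   # conj(a) = a^{-1}; identity is e^{i0} = 1
[/code]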

Posted

So, to every Lie group we can associate a vector space called, reasonably enough, its Lie algebra, so I guess I should say a word or two about the Lie algebras.

 

Now there is no easy way I can see to do this, but I can give a little motivation.

 

Let [math]S^2 = \{(x,y,z) : x^2 + y^2 + z^2 = 1\}[/math] be the unit sphere embedded in [math]\mathbb{R}^3[/math]. Then for a rotation through an angle [math] \theta[/math] around the [math]x[/math]-axis, for example, we have the mapping

 

[math]x' = x[/math]

[math]y' = y \cos \theta - z \sin \theta[/math]

[math] z' = y \sin \theta + z \cos \theta[/math]

 

Notice that a simple calculation shows that [math](x')^2 + (y')^2 + (z')^2 = 1[/math], which is still "on" the sphere [math]S^2[/math]; that is, [math]S^2[/math] has rotational symmetry (you hardly needed ME to tell you that, right?).

 

So the Lie group that describes this symmetry will have as an element the matrix

 

[math]\begin{pmatrix}1& 0 & 0 \\ 0 & \cos \theta & \sin \theta\\ 0 & -\sin \theta & \cos \theta \end{pmatrix}[/math] (where now I am considering coordinate transformations).

 

We easily calculate that this matrix is orthogonal with determinant +1, and therefore conclude it is an element of the Lie group [math]SO(3)[/math]. Thus this group defines a set of maps on the 2-sphere as follows: to any element in [math]SO(3)[/math] there corresponds a map [math] S^2 \to S^2[/math].
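
Here is a sketch verifying those three claims - orthogonality, unit determinant, and preservation of the sphere - for one such matrix (numpy; the angle and the sample point are arbitrary picks of mine):

[code]
# The rotation matrix above: orthogonal, det = +1, and it keeps points on S^2.
import numpy as np

theta = 1.2
R = np.array([[1, 0, 0],
              [0,  np.cos(theta), np.sin(theta)],
              [0, -np.sin(theta), np.cos(theta)]])

assert np.allclose(R.T @ R, np.eye(3))     # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)   # det = +1, so R is in SO(3)

p = np.array([0.0, 0.6, 0.8])              # on S^2: 0 + 0.36 + 0.64 = 1
assert np.isclose(np.dot(R @ p, R @ p), 1.0)  # the image is still on S^2
[/code]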

 

But, on the other hand, since, by the embedding, the point [math]p = (x,y,z) \in S^2[/math] can be regarded as an element of [math]\mathbb{R}^3[/math], I can just as easily think of [math]SO(3)[/math] as a map [math]\mathbb{R}^3 \to \mathbb{R}^3[/math].

 

This then is another realization of this group. But unlike [math]S^2[/math], which is merely a manifold, [math]\mathbb{R}^3[/math] is a vector space, so the map [math]\mathbb{R}^3 \to \mathbb{R}^3[/math] is a linear transformation, which, of course, can be written as a matrix.

 

So I make the definition: the realization of a Lie group as a set of transformations on a vector space is referred to as a representation of the group. In fact, every Lie group has at least one such representation - the realization of the group as transformations on its own tangent (vector) space. This is called the adjoint representation of the group.

 

Now things start to get a little weird. The group [math]SO(3)[/math] is itself a manifold (in fact it has the topology of real projective 3-space). So in the above, can I replace the sphere [math]S^2[/math] by [math]SO(3)[/math] and still be talking sense - id est, can I think of [math]SO(3)[/math] as inducing a map from itself to itself?

 

The answer is yes, provided we proceed with the utmost caution.

 

PS. Some physicists make no distinction between the group and its algebra - this is a Bad Thing. But of course none here would be so foolish.....

Posted

Ah well, talk to yourself, why not. Anyway, I am determined to finish this thread, so tough tits.

 

So I have a generic Lie group [math]G \ni g,\,h[/math]. I now define the map [math]\Psi_g: G \to G[/math] by [math]\Psi_g(h)= g\cdot h\cdot g^{-1}[/math] (the RHS is called "conjugation", btw), since by the group axioms [math]g\cdot h\cdot g^{-1} \in G[/math] and also has an inverse.

 

If I further note that [math]\Psi_{g^{-1}} = (\Psi_g)^{-1}[/math], then I may say that the mapping [math]\Psi_g: G \to G[/math] is an isomorphism - in fact the correct name is automorphism.

 

Now since each [math]g \in G[/math] acts on every [math]h \in G[/math] by conjugation, we arrive at the mapping [math]\Psi: G \to \text{Aut}G[/math] where I am now interpreting the notation [math]\Psi_g[/math] as the image [math] g \mapsto \Psi(g) \equiv \Psi_g [/math]; it is not hard to see that this is in fact a GROUP. In fact it is a Lie group, so we have here a Lie group homomorphism.

 

Now these automorphisms each "fix" the identity, since, whereas [math]g \cdot h \cdot g^{-1} \ne h[/math] in general, by the definition of the identity, [math]g \cdot e \cdot g^{-1} = e[/math] ALWAYS.
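
A small numerical illustration of conjugation in a matrix group (the particular matrices are my own arbitrary picks):

[code]
# Conjugation Psi_g(h) = g h g^-1: the result stays in the group,
# and the identity is always fixed.
import numpy as np

g = np.array([[1.0, 1.0], [0.0, 1.0]])   # det = 1
h = np.array([[2.0, 0.0], [0.0, 0.5]])   # det = 1

def psi(g, h):
    return g @ h @ np.linalg.inv(g)

assert np.isclose(np.linalg.det(psi(g, h)), 1.0)   # still in the group
assert not np.allclose(psi(g, h), h)               # g h g^-1 != h in general
assert np.allclose(psi(g, np.eye(2)), np.eye(2))   # g e g^-1 = e ALWAYS
[/code]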

 

Now. My group [math]G[/math] is a LIE group, id est it is also a manifold, so to any point [math]g \in G[/math] I can associate a tangent (vector) space [math]T_gG[/math]. Since by the above, the only guaranteed fixed point is [math]e \in G[/math], it is best I work here, using the space [math]T_eG[/math].

 

But since the elements (vectors) in this space are the directional derivatives at this point, it makes sense to define the mapping [math]T_eG \to T_eG[/math] by the differentials (evaluated at the identity) of the automorphisms [math] \Psi_g: G \to G[/math].

 

Thus I make the definition [math] \text{Ad}_g \equiv (d\Psi_g)_e: T_eG \to T_eG[/math], and by the same reasoning as before arrive at the mapping [math] \text{Ad}: G \to \text{Aut}T_eG[/math]. (Oh my word!) This is called the adjoint representation of our group [math]G[/math].
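
For matrix groups, one can show this differential works out to [math] \text{Ad}_g(X) = gXg^{-1}[/math]. Here is a sketch of that in [math]SO(3)[/math], where I am assuming (a fact to be justified later) that the tangent space at the identity consists of the skew-symmetric matrices:

[code]
# Ad_g(X) = g X g^-1 for a matrix group: with g in SO(3) and X skew-symmetric
# (i.e. X in the tangent space at the identity), Ad_g(X) is again skew-symmetric.
import numpy as np

theta = 0.4
g = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1.0]])                   # an element of SO(3)

X = np.array([[ 0.0,  2.0, -1.0],
              [-2.0,  0.0,  3.0],
              [ 1.0, -3.0,  0.0]])            # skew: X^T = -X

Ad_gX = g @ X @ g.T                           # g^-1 = g^T for orthogonal g
assert np.allclose(Ad_gX.T, -Ad_gX)          # still skew-symmetric
[/code]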

 

Outa puff for now, but don't relax. Bens are not quitters.......

Posted

Look, if anyone is trying to follow this shite (which seems unlikely), let me alert them to some grievous typos in my last post, now corrected.

 

So. We have the adjoint representation of a Lie group on its own tangent space, which I wrote as [math]\text{Ad}:G \to \text{Aut}T_eG,\,\, g \mapsto \text{Ad}(g) \equiv \text{Ad}_g[/math]. The elements in the codomain [math]\text{Aut}T_eG[/math] are quite simply the set of all invertible mappings [math]\text{Ad}_g:T_eG \to T_eG[/math].

 

Notice that these automorphisms form a group, but each depends on a particular choice of [math]g \in G[/math]. This is not quite what we are after; notice that this group sits inside the set of ALL linear mappings [math]T_eG \to T_eG[/math], which I will call [math]\text{End}T_eG[/math]. The latter is a vector space.

 

So finally let's take the derivative of [math]\text{Ad}: G \to \text{Aut}T_eG[/math] and arrive at our algebra [math]\text{ad}: T_eG \to \text{End}T_eG[/math], that is, the mapping from our favoured tangent space into the mappings from this space to itself.

 

A little thought shows that this is a necessary condition for an algebra, but it may not be sufficient.

 

For now, let's call the vectors in [math]T_eG[/math] [math]X,\,\,Y,\,\,Z[/math]. Then I define [math]\text{ad}(X)(Y) \equiv \text{ad}_X(Y) \equiv [X,Y] \in T_eG[/math], where [math][X,Y] = XY - YX[/math].

 

Now, don't this look a lot like the commutator of matrices! That's because it IS.
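
Here's a sketch with the standard so(3) generators (the particular basis is a conventional choice, not anything forced on us):

[code]
# The bracket ad_X(Y) = [X,Y] = XY - YX for the standard so(3) generators:
# [L1, L2] = L3 and cyclic permutations.
import numpy as np

L1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
L2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
L3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

def bracket(X, Y):
    return X @ Y - Y @ X

assert np.allclose(bracket(L1, L2), L3)
assert np.allclose(bracket(L2, L3), L1)
assert np.allclose(bracket(L3, L1), L2)
[/code]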

 

Before I can assert this is a LIE algebra, I need to do a little more work. After that, I promise your misery will be at an end, and we can start to have some fun

2 weeks later...
Posted

Most definitely not wasted! Yeah, it's a shame. Most people here are "bad" at math and it's relatively difficult to delve into anything remotely interesting. I'm finishing up my final for a linear algebra class and there were a handful of problems introducing the concept of a Lie algebra. So actually, I am very familiar with the adjoint endomorphism on a Lie algebra.

 

Here's a mildly interesting problem:

 

Let (L, [,]) be a Lie algebra, and define [L,L] to be the set of all linear combinations of elements [x,y], with x, y in L. Show that, up to isomorphism, there is only one Lie algebra of dimension 2 such that the dimension of [L,L] is 1.

 

More interesting: Same problem as above, but with a Lie algebra of dimension 3.

 

I can't remember if you gave the definition of a Lie algebra isomorphism, but in case you didn't, it is a vector space isomorphism that preserves the bracket operation.

 

And on another note, are you interested in commutative algebra? That's currently my main focus. I'd be happy to strike up a conversation about commutative rings any time!

Posted

Here's a mildly interesting problem:

 

Let (L, [,]) be a Lie algebra, and define [L,L] to be the set of all linear combinations of elements [x,y], with x, y in L. Show that, up to isomorphism, there is only one Lie algebra of dimension 2 such that the dimension of [L,L] is 1.

Um, let's write your definition as [math][\,,\,]: L \times L \to L[/math]. Then if [math]\dim L = 2[/math], we are given that the image of [math][\,,\,][/math] is 1-dimensional (recall that [math]L[/math] is a vector space, after all!)

 

The abelian case is trivial, so let's look at the non-abelian case.

 

So suppose that [math]\{X,Y\}[/math] is a basis for [math]L[/math], with [math]X[/math] spanning the image of [math][\,,\,][/math].

 

Now, since [math][X,Y][/math] lies in the 1-dimensional image spanned by [math]X[/math], we have [math][X,Y] = cX[/math] for some non-zero scalar [math]c[/math]; so by applying some scalar to [math]Y[/math] we can always arrange that [math][X,Y] = X[/math]. Then this uniquely determines the algebra [math]L[/math] as follows.

 

Since [math][X,Y] \equiv XY-YX[/math], we have that

 

[math]X-X=[X,Y]-[X,Y][/math]

[math]= (XY -YX)-(XY-YX)[/math]

[math]= (XY-YX) + (YX -XY)[/math]

[math]= [X,Y] + [Y,X] = X-X = 0 \Rightarrow [X,Y] = -[Y,X][/math]

 

(This is skew-symmetry, of course)

 

This in turn implies that [math][X,X] = 0[/math], and the Jacobi identity follows trivially. Thus the mapping [math][\,,\,]: L \times L \to L[/math] completely determines the algebra [math]L[/math].

 

Thus, any Lie algebra [math]L[/math] of dimension 2 with [math]\dim [L,L] = 1[/math] is unique, up to isomorphism.
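
For the doubters, here is a concrete matrix realization of this unique non-abelian 2-dimensional algebra (the particular matrices are my pick; any faithful realization would do):

[code]
# A matrix realization of the non-abelian 2-dimensional Lie algebra with [X,Y] = X.
import numpy as np

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[-1.0, 0.0], [0.0, 0.0]])

def bracket(A, B):
    return A @ B - B @ A

assert np.allclose(bracket(X, Y), X)                   # [X,Y] = X
assert np.allclose(bracket(X, Y), -bracket(Y, X))      # skew-symmetry
assert np.allclose(bracket(X, X), np.zeros((2, 2)))    # [X,X] = 0

# Jacobi: [X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0, checked here with Z = X:
Z = X
jac = (bracket(X, bracket(Y, Z)) + bracket(Y, bracket(Z, X))
       + bracket(Z, bracket(X, Y)))
assert np.allclose(jac, np.zeros((2, 2)))
[/code]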

 

And on another note, are you interested in commutative algebra? That's currently my main focus. I'd be happy to strike up a conversation about commutative rings any time!
Ya, well, I always hated Galois Theory, but I am a masochist, so give it a go, why not?
Posted

We don't necessarily have [x, y] = xy-yx. And if we did, your proof fails in characteristic 2 since 1 = -1. [x, x] = 0 always implies skew-symmetry, but the converse fails in characteristic 2.

 

And commutative algebra isn't really Galois theory, though the two have interactions. Commutative algebra is the study of commutative rings.
