Least Action

Nontrivializing triviality..and vice versa.

Archive for September 2010

Proof of the Jacobi Identity in Classical Mechanics


I finally thought I should write it down…for once in my life.

Written by Vivek

September 23, 2010 at 22:58

A lemma on a rather frequently encountered set of Hermitian matrices


I’m documenting a small ‘lemma’ that I think is worth mentioning. Suppose M^{i} are 4 Hermitian matrices (i = 1, \ldots, 4) satisfying

M^{i}M^{j} + M^{j}M^{i} = 2\delta^{ij}I

where I denotes the identity matrix. Then these matrices have eigenvalues \pm 1, are traceless and are necessarily of even dimension.

For i = j, the anticommutator above gives (M^{i})^2 = I. So, for any eigenvector X and eigenvalue \lambda, we have

M^{i}X = \lambda X \implies (M^{i})^2 X = \lambda^2 X

Since (M^{i})^2 = I, this forces \lambda^2 = 1, i.e. \lambda = \pm 1.

Tracelessness has a neat proof:

Suppose j \neq i. Then

tr(M^{i}) = tr(M^{i}(M^{j})^2) = tr(M^{i}M^{j}M^{j}) = tr(M^{j}M^{i}M^{j})

where the last equality follows from tr(ABC) = tr(CAB). But M^{j}M^{i} = -M^{i}M^{j}, so

tr(M^{i}) = -tr(M^{i}M^{j}M^{j}) = -tr(M^{i}) \implies tr(M^{i}) = 0.

Finally, since each M^{i} is Hermitian, it is diagonalizable, so suppose the numbers of +1 eigenvalues and -1 eigenvalues are a and b respectively. The dimension of the matrix is then a + b. Since the trace equals the sum of the eigenvalues, we have

a - b = 0

So, the dimension is a + b = 2a = 2b, which is clearly always an even number.
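For concreteness, here is a quick numerical check of all three claims. The Dirac \alpha_i and \beta matrices are one standard realization of such a set of four Hermitian matrices; the snippet below is just an illustrative sketch built on that choice.

```python
# Sketch: verify the lemma for the Dirac alpha_i, beta matrices, a standard
# 4x4 realization of four Hermitian M^i with M^i M^j + M^j M^i = 2 delta^ij I.
import itertools
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_z
Z, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)

# M^1..M^3 = [[0, sigma_i], [sigma_i, 0]],  M^4 = [[I, 0], [0, -I]]
M = [np.block([[Z, s], [s, Z]]) for s in sigma] + [np.block([[I2, Z], [Z, -I2]])]

for i, j in itertools.product(range(4), repeat=2):
    anti = M[i] @ M[j] + M[j] @ M[i]
    assert np.allclose(anti, 2 * (i == j) * np.eye(4))   # anticommutator

for Mi in M:
    assert np.allclose(np.sort(np.linalg.eigvalsh(Mi)), [-1, -1, 1, 1])  # eigenvalues +/-1
    assert abs(np.trace(Mi)) < 1e-12                                     # traceless
    assert Mi.shape[0] % 2 == 0                                          # even dimension
print("lemma verified for the Dirac matrices")
```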

Written by Vivek

September 19, 2010 at 20:33

Group Theory II – SU(2)


In a previous post, I cursorily touched upon the orthogonal and unitary groups. It turns out that in quantum mechanics, the operators of principal interest are either Hermitian or unitary. So it is only natural that we should be interested in transformations from the Special Unitary Group SU(N). In particular, we dwell on the N = 2 case.

First of all, SU(2) has 3 independent parameters, something that is obvious if one demands that a matrix U,

U = \left(\begin{array}{cc}U_{11}&U_{12}\\U_{21}&U_{22}\end{array}\right)

be unitary, that is

\left(\begin{array}{cc}U_{11}&U_{12}\\U_{21}&U_{22}\end{array}\right)\left(\begin{array}{cc}U_{11}^{*}&U_{21}^{*}\\U_{12}^{*}&U_{22}^{*}\end{array}\right) = \left(\begin{array}{cc}1 & 0\\ 0 & 1\end{array}\right)

and have a determinant equal to +1. It follows that any such matrix can be written as

U = \left(\begin{array}{cc}U_{11}&U_{12}\\-U_{12}^{*}&U_{11}^{*}\end{array}\right)

in which there are three independent parameters. If U is expanded about the identity, i.e.

U = 1 + i\epsilon G

then unitarity of U demands that G be Hermitian, while \det(U) = 1 requires that G also be traceless. For a finite transformation, this generalizes to

U = \exp(iH) = \exp(i\alpha G)

More generally, \det(U) = \det(\exp(i\alpha G)) = \exp(i\alpha\,tr(G)), so \det(U) = 1 requires tr(G) = 0. Thus H is a traceless Hermitian matrix.
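Here is a minimal numerical sketch of these claims (my own construction: a random traceless Hermitian H): it confirms that U = \exp(iH) is unitary with \det(U) = 1, and that U has precisely the form written above.

```python
# Sketch: a random 2x2 traceless Hermitian H, and the resulting U = exp(iH).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2               # Hermitian
H -= (np.trace(H) / 2) * np.eye(2)     # traceless

U = expm(1j * H)
assert np.allclose(U @ U.conj().T, np.eye(2))   # unitary
assert np.isclose(np.linalg.det(U), 1.0)        # det(U) = exp(i tr H) = 1
assert np.isclose(U[1, 0], -U[0, 1].conj())     # U_21 = -U_12^*
assert np.isclose(U[1, 1], U[0, 0].conj())      # U_22 = U_11^*
```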

We can write

H = \sum_{k=1}^{3}\alpha_{k}G_{k} = \boldsymbol{\alpha}\cdot\boldsymbol{G}

From nonrelativistic quantum mechanics, we know that the effect of a rotation through an angle \theta on a spin-1/2 particle about an axis \boldsymbol{\hat{n}} is given by the unitary matrix,

U(\theta) = \exp(-i\theta\,\boldsymbol{\hat{n}}\cdot\boldsymbol{\sigma}/2)

where \boldsymbol{\sigma} is the Pauli spin matrix “vector” given by,

\boldsymbol{\sigma} = \hat{x}\sigma_{x} + \hat{y}\sigma_{y} + \hat{z}\sigma_{z}

Clearly, in this development, we can identify G_j = \sigma_j/2 for j = 1, 2, 3 as the generators of the group. The commutation relation for the generators is

\left[\frac{\sigma_i}{2},\frac{\sigma_j}{2}\right] = i\epsilon_{ijk}\frac{\sigma_k}{2}

The two-dimensional representation generated by the \sigma_j/2 is called the fundamental representation of SU(2). More generally, an n-dimensional representation of SU(2) consists of n \times n unitary matrices whose generators satisfy the same commutation relations.
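These relations are easy to verify numerically. The following sketch (an illustration, with the Pauli matrices hard-coded) checks the commutator for all index pairs.

```python
# Sketch: check [sigma_i/2, sigma_j/2] = i eps_ijk sigma_k/2 for all i, j.
import itertools
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
G = [s / 2 for s in sigma]              # generators G_j = sigma_j / 2

def eps(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2}."""
    return int((i - j) * (j - k) * (k - i) / 2)

for i, j in itertools.product(range(3), repeat=2):
    lhs = G[i] @ G[j] - G[j] @ G[i]
    rhs = sum(1j * eps(i, j, k) * G[k] for k in range(3))
    assert np.allclose(lhs, rhs)
print("commutation relations verified")
```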

Written by Vivek

September 10, 2010 at 17:42

Group Theory I – O(n), SO(n), U(n), SU(n)


I have been trying to learn group theory for a long time. Invariably, the books I come across are either too formal or too cursory. That said, there are a number of very well written introductions to group theory available on the internet. In this post, I will try not to bore you with what a group is (chances are you already know, if you're reading this), but will present a somewhat different perspective on how it fits into our 'daily' physics.

Orthogonal and Special Orthogonal Groups

The set of all orthogonal matrices, i.e. matrices satisfying

A^{T}A = AA^{T} = I

forms a group denoted by O(n). Now, if A is a real orthogonal matrix of dimension n, it has n(n-1)/2 independent parameters. This is easily seen by taking n = 2. Let me try a more general proof here.

A = \left(\begin{array}{cccc}a_{11}&a_{12}&a_{13}&\ldots\\a_{21}&a_{22}&a_{23}&\ldots\\a_{31}&a_{32}&a_{33}&\ldots\\ \ldots & \ldots & \ldots &\ldots\end{array}\right)

Clearly,

(A^{T}A)_{ij} = \sum_{k}a_{ki}a_{kj} = \sum_{k}a_{ik}a_{jk} = \delta_{ij}

There are n of these equations with 1 on the RHS, and (n^2-n) equations with 0 on the RHS. Of the latter, only half are independent, because swapping i \leftrightarrow j yields the same equation. So the number of independent conditions is n + n(n-1)/2 = n(n+1)/2, leaving n^2 - n(n+1)/2 = n(n-1)/2 independent parameters.

Why is O(n) important? It is the rotation group in n-dimensional Euclidean space. If we further demand that the matrices have a determinant of +1, they are called special, and form the group SO(n). It is well known that rotation matrices in 3 dimensions are orthogonal and have unit determinant. Note that orthogonality only guarantees a determinant of \pm 1; proper rotations belong to SO(n).
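A quick numerical illustration: the Q factor of a QR decomposition of a random real matrix is orthogonal, so we can generate elements of O(n) and check the determinant claim directly (a sketch, not a statistically uniform sampler).

```python
# Sketch: random elements of O(n) via QR, checking A^T A = I and det = +/-1.
import numpy as np

rng = np.random.default_rng(1)
n = 4
A, _ = np.linalg.qr(rng.normal(size=(n, n)))    # Q factor is orthogonal

assert np.allclose(A.T @ A, np.eye(n))          # orthogonality
assert np.isclose(abs(np.linalg.det(A)), 1.0)   # det(A) = +1 or -1

if np.linalg.det(A) < 0:                        # flip one column: improper -> proper
    A[:, 0] *= -1
assert np.isclose(np.linalg.det(A), 1.0)        # now A is in SO(n)
```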

There is a caveat to this. Elements of both O(n) and SO(n) are parametrized in general by continuous variables. For a rotation in 2-dimensional space, one angle is sufficient. In 3 dimensions, we have the three Euler angles. So each rotation matrix is a function of n(n-1)/2 angles, which are continuous. Groups that depend on continuously varying parameters are called Lie Groups. Since the angles vary over closed, finite intervals, these groups are also said to be compact.

Unitary and Special Unitary Groups

The set of unitary matrices, i.e. matrices satisfying

A^{\dagger}A = AA^{\dagger} = I

forms a group denoted by U(n). Let’s use the above idea to find the number of independent parameters of a unitary matrix of dimension n.

A =  \left(\begin{array}{cccc}a_{11}&a_{12}&a_{13}&\ldots\\a_{21}&a_{22}&a_{23}&\ldots\\a_{31}&a_{32}&a_{33}&\ldots\\  \ldots & \ldots & \ldots &\ldots\end{array}\right)

The unitarity condition translates to

(A^{\dagger}A)_{ij} = \sum_{k}a^{*}_{ki}a_{kj} = \sum_{k}a_{ik}a^{*}_{jk} = \delta_{ij}

In this case, the diagonal (i = j) equations give n real conditions, while each off-diagonal equation is complex; the (i, j) and (j, i) equations are complex conjugates of each other, so the upper triangle alone supplies n(n-1)/2 complex, i.e. n(n-1) real, conditions. A complex n \times n matrix has 2n^2 real parameters, so in all there are 2n^2 - n^2 = n^2 independent parameters in U(n).

Additionally, if the determinant of the matrix is +1, the matrix is said to be special, and part of the special unitary group of order n, i.e. SU(n). The determinant condition fixes one more parameter (an overall phase), so SU(n) has n^2 - 1 independent parameters. By the way, I haven't shown that any of these groups are indeed groups. For that, you have to show closure under group multiplication, associativity, and the existence of a unit element and of an inverse for every element. Why is SU(n) important? Well, that requires a motivation through SU(2), the 'spin group' in nonrelativistic quantum mechanics. I will address this in a subsequent post.
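The same trick works here (again just a sketch): the Q factor of a complex Gaussian matrix is unitary, and dividing out an n-th root of its determinant illustrates how fixing \det = 1 removes exactly one (phase) parameter.

```python
# Sketch: random elements of U(n) via complex QR; removing the determinant's
# phase produces an SU(n) element, using up exactly one parameter.
import numpy as np

rng = np.random.default_rng(2)
n = 3
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(Z)                           # Q factor is unitary

assert np.allclose(U.conj().T @ U, np.eye(n))    # unitarity
assert np.isclose(abs(np.linalg.det(U)), 1.0)    # det(U) is a pure phase

V = U / np.linalg.det(U) ** (1.0 / n)            # divide out an n-th root of det
assert np.isclose(np.linalg.det(V), 1.0)         # V is in SU(n)
```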

Much of group theory involves a lot of jargon, which can be a bit tricky to connect to the “real” world, if you think like me. But it helps to know the jargon, as in any field you want to grapple with.

A subset G' of a group G which is closed under multiplication and under taking inverses is called a subgroup of G.

Let g' \in G' and g \in G. If gg'g^{-1} \in G' for every g \in G and g' \in G', then G' is called an invariant (or normal) subgroup of G. For example, SO(n) is an invariant subgroup of O(n), since \det(gg'g^{-1}) = \det(g') = 1.

Representations

So far, we have looked at matrix representations of the orthogonal and unitary groups. These are useful because they let us employ familiar rules of matrix algebra to study the properties of the group in question. Matrix representations are closely associated with symmetries. A good example is the time-independent Schrödinger equation,

H\psi = E\psi

which is an eigenvalue problem for a stationary state \psi. If there exists a group G of transformations under which the Hamiltonian H stays invariant, then

H_{trans} = RHR^{-1} = H

for every R \in G; this is the group action. In particular, it implies that every element of G commutes with the Hamiltonian, i.e. RH = HR (and hence, by a result from linear algebra, it is possible to diagonalize H and R simultaneously for every R \in G; quite a strong result!).

Now,

R(H\psi) = R(E\psi) = E(R\psi)

but

R(H\psi) = H(R\psi) = E(R\psi)

So R\psi is again an eigenstate of H with the same energy E: all the transformed states are degenerate and constitute a multiplet.
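A concrete instance of this argument, as a numerical sketch of my own (a discretized 1D Hamiltonian with a symmetric potential, and parity as the symmetry): R commutes with H, and R\psi is again an eigenstate with the same energy.

```python
# Sketch: a discretized H = -d^2/dx^2 + V(x) with V(-x) = V(x), and the
# parity operator R. They commute, and R maps eigenstates to eigenstates
# of the same energy.
import numpy as np

N = 201
x = np.linspace(-5, 5, N)
dx = x[1] - x[0]

V = (x**2 - 4) ** 2 / 8                          # symmetric double well
H = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2 + np.diag(V)

R = np.fliplr(np.eye(N))                         # (R psi)(x) = psi(-x)
assert np.allclose(R @ H, H @ R)                 # [H, R] = 0

E, psi = np.linalg.eigh(H)
phi = R @ psi[:, 0]                              # parity-transformed ground state
assert np.allclose(H @ phi, E[0] * phi)          # same energy eigenstate
```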

Suppose \Omega denotes the vector space of all transformed solutions, and has a finite dimension N. Then, it has a basis, which we can denote by \psi_1, \psi_2, \ldots, \psi_{N}. Since R\psi_j belongs to the multiplet, it can be expanded in terms of this basis, as

R\psi_j = \sum_{k}r_{jk}\psi_{k}

This means that there is a matrix \{r_{jk}\} associated with every group element R.

Group representations are of two kinds:

1. Irreducible, which means that by acting on any nonzero element of \Omega with all the elements of G, we can recover all other elements of \Omega (equivalently, \Omega has no proper invariant subspace), or

2. Reducible, in which case, the vector space \Omega splits into a direct sum of vector subspaces each of which is mapped into itself (but not into another) under the action of G, i.e. \Omega = \Omega_1 \oplus \Omega_2 \oplus \ldots.
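As a small worked example (my own, not from the discussion above): the rotation matrices acting on the plane form a representation that is reducible over the complex numbers; in the basis of the combinations x \pm iy it splits into the direct sum of two one-dimensional representations e^{\pm i\theta}.

```python
# Sketch: the 2D rotation representation of SO(2) is reducible over C.
# In the basis (x + iy, x - iy)/sqrt(2) it becomes diag(e^{i theta}, e^{-i theta}).
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

S = np.array([[1, 1j], [1, -1j]]) / np.sqrt(2)   # unitary change of basis
D = S @ R @ S.conj().T                           # S^{-1} = S^dagger
assert np.allclose(D, np.diag([np.exp(1j * theta), np.exp(-1j * theta)]))
```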

Written by Vivek

September 10, 2010 at 14:12