# Least Action

Nontrivializing triviality… and vice versa.

## Why V(x) = -1/x^2 has no bound state

Here we present a scaling-based argument to show that the attractive potential

$V(x) = -\frac{\lambda}{x^2}$

($\lambda > 0$) has no bound states, i.e. no states with energy $E < 0$. Consider the time-independent Schrödinger equation for this potential, which is the eigenvalue equation for the corresponding Hamiltonian,

$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} - \frac{\lambda}{x^2}\psi = E\psi$

This can be rearranged as

$\frac{d^2\psi}{dx^2} + \frac{2m\lambda}{\hbar^2}\frac{1}{x^2}\psi = -\frac{2mE}{\hbar^2}\psi$

Now, it is easy to see that the quantity

$\frac{2m\lambda}{\hbar^2}$

is dimensionless. So, this problem has no independent scale, even though naively one might think that $\lambda$ specifies a scale for this problem. We claim that for such a system, there can be no bound state. This is proved below.

If we perform the scale transformation $x \rightarrow \alpha x$, where $\alpha$ is some nonzero real number, we see from the equation above that if $E$ is an eigenvalue, then so is $\alpha^2 E$.
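Explicitly, suppose $\psi(x)$ satisfies the equation with energy $E$, and define $\phi(x) = \psi(\alpha x)$. Then, since $\phi''(x) = \alpha^2\psi''(\alpha x)$,

```latex
\phi''(x) + \frac{2m\lambda}{\hbar^2}\frac{1}{x^2}\,\phi(x)
  = \alpha^2\left[\psi''(\alpha x) + \frac{2m\lambda}{\hbar^2}\frac{1}{(\alpha x)^2}\,\psi(\alpha x)\right]
  = -\frac{2m(\alpha^2 E)}{\hbar^2}\,\phi(x)
```

so $\phi$ is an eigenfunction with energy $\alpha^2 E$.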

Suppose now that a bound ground state exists, with energy $E_{G} < 0$. Then scale invariance implies that $\alpha^2 E_{G}$ must also be the energy of some bound state. But for any $|\alpha| > 1$,

$\alpha^2 E_{G} < E_{G}$

since multiplying a negative energy by a number greater than one only makes it more negative. This contradicts the assumption that $E_{G}$ is the ground state energy. In fact, we can make a stronger statement: by taking $|\alpha|$ arbitrarily large, the energy can be pushed down without bound. Therefore, there is no finite energy ground state for this system, and consequently there can be no bound states.

Note that there is no such problem with scattering states, i.e. states with positive energy. One can take an arbitrarily small positive energy scattering state and from it obtain valid energies of the continuum of higher energy scattering states by performing a scale transformation.

Incidentally, this is why potentials like $-\lambda[\delta(x)]^2$ and $-\lambda\delta(x)/x$ also have no bound states.

Written by Vivek

February 8, 2012 at 08:48

## A Mathematica notebook for Angular Momentum Matrix Algebra

This is a Mathematica notebook for angular momentum matrix algebra. I have not yet added a component for the addition of angular momenta, which I plan to do soon.

Feel free to use and distribute the notebook. Please just add a citation and a link to this webpage (also optional, of course).

Angular Momentum Matrix Algebra Notebook

Written by Vivek

November 17, 2010 at 19:12

## Thermal Noise Engines

I just stumbled upon an interesting paper today on arXiv, from a researcher at the Department of Electrical Engineering at Texas A&M University. I am copying the abstract entry on the pre-print archive below.

> **Thermal noise engines**
>
> Authors: Laszlo B. Kish
> (Submitted on 29 Sep 2010 (v1), last revised 20 Oct 2010 (this version, v5))
>
> Electrical heat engines driven by the Johnson-Nyquist noise of resistors are introduced. They utilize Coulomb’s law and the fluctuation-dissipation theorem of statistical physics, which is the reverse phenomenon of heat dissipation in a resistor. No steam, gases, liquids, photons, combustion, phase transition, or exhaust/pollution are present here. In these engines, instead of heat reservoirs, cylinders, pistons and valves, resistors, capacitors and switches are the building elements. For the best performance, a large number of parallel engines must be integrated to run in a synchronized fashion and the characteristic size of the elementary engine must be at the 10 nanometer scale. At room temperature, in the most idealistic case, a two-dimensional ensemble of engines of 25 nanometer characteristic size integrated on a 2.5×2.5 cm silicon wafer with a 12 Celsius temperature difference between the warm source and the cold sink would produce a specific power of about 0.4 Watt. Regular and coherent (correlated-cylinder states) versions are shown and both of them can work in either four-stroke or two-stroke modes. The coherent engines have properties that correspond to coherent quantum heat engines without the presence of quantum coherence. In the idealistic case, all these engines have Carnot efficiency, which is the highest possible efficiency of any heat engine, without violating the second law of thermodynamics.

This is a very interesting paper. Who knows what the future has in store for us… quantum thermal power stations?

Written by Vivek

October 23, 2010 at 00:20

## A lemma on a rather frequently encountered set of Hermitian matrices

I’m documenting a small ‘lemma’ that I think is worth mentioning. Suppose $M^{i}$ are 4 Hermitian matrices ($i = 1, \ldots, 4$) satisfying

$M^{i}M^{j} + M^{j}M^{i} = 2\delta^{ij}I$

where $I$ denotes the identity matrix. Then these matrices have eigenvalues $\pm 1$, are traceless and are necessarily of even dimension.

For $i = j$, the anticommutator above gives $(M^{i})^2 = I$. So, for any eigenvector $X$ and eigenvalue $\lambda$, we have

$M^{i}X = \lambda X \implies (M^{i})^2 X = \lambda^2 X$

or equivalently $\lambda = \pm 1$.

Tracelessness has a neat proof:

Suppose $j \neq i$. Then

$tr(M^{i}) = tr(M^{i}(M^{j})^2) = tr(M^{i}M^{j}M^{j}) = tr(M^{j}M^{i}M^{j})$

where the last equality follows from $tr(ABC) = tr(CAB)$. But $M^{j}M^{i} = -M^{i}M^{j}$, so

$tr(M^{i}) = -tr(M^{i}M^{j}M^{j}) = -tr(M^{i}) \implies tr(M^{i}) = 0$.

Finally, suppose the numbers of +1 eigenvalues and -1 eigenvalues are $a$ and $b$ respectively. The dimension of the matrix is then $a + b$. Since the trace equals the sum of eigenvalues, we have

$a - b = 0$

So, the dimension is $a + b = 2a = 2b$, which is clearly always an even number.

Written by Vivek

September 19, 2010 at 20:33

## Group Theory II – SU(2)

In a previous post, I cursorily touched upon the orthogonal and unitary groups. It turns out that in quantum mechanics, the operators of principal interest are either Hermitian or unitary. So it is only natural that we should be interested in transformations from the special unitary group $SU(N)$. In particular, we dwell on the $N = 2$ case.

First of all, $SU(2)$ has 3 independent parameters, something that is obvious if one demands that a matrix $U$,

$U = \left(\begin{array}{cc}U_{11}&U_{12}\\U_{21}&U_{22}\end{array}\right)$

be unitary, that is

$\left(\begin{array}{cc}U_{11}&U_{12}\\U_{21}&U_{22}\end{array}\right)\left(\begin{array}{cc}U_{11}^{*}&U_{21}^{*}\\U_{12}^{*}&U_{22}^{*}\end{array}\right) = \left(\begin{array}{cc}1 & 0\\ 0 & 1\end{array}\right)$

and have a determinant equal to $+1$. It follows that any such matrix can be written as

$U = \left(\begin{array}{cc}U_{11}&U_{12}\\-U_{12}^{*}&U_{11}^{*}\end{array}\right)$

in which, because of the remaining constraint $|U_{11}|^{2} + |U_{12}|^{2} = 1$, there are three independent real parameters. If $U$ is expanded about the identity, i.e.

$U = 1 + i\epsilon G$

then unitarity of $U$ demands that $G$ be Hermitian, and also traceless. For a finite transformation, this generalizes to

$U = \exp(iH) = \exp(i\alpha G)$

with $H = \alpha G$. More generally, the identity $det(U) = \exp(i\alpha\,tr(G))$ implies that $tr(G) = 0$ whenever $det(U) = 1$. So, $H$ is a traceless Hermitian matrix.

We can write

$H = \sum_{k=1}^{3}\alpha_{k}G_{k} = \boldsymbol{\alpha}\cdot\boldsymbol{G}$

From nonrelativistic quantum mechanics, we know that the effect of a rotation through an angle $\theta$ on a spin-1/2 particle about an axis $\boldsymbol{\hat{n}}$ is given by the unitary matrix,

$U(\theta) = \exp(-i\theta\,\boldsymbol{\hat{n}}\cdot\boldsymbol{\sigma}/2)$

where $\boldsymbol{\sigma}$ is the Pauli spin matrix “vector” given by,

$\boldsymbol{\sigma} = \hat{x}\sigma_{x} + \hat{y}\sigma_{y} + \hat{z}\sigma_{z}$

Clearly, in this development, we can identify $G_j = \sigma_j/2$ for $j = 1, 2, 3$ as the generators of the group. The commutation relation for the generators is

$\left[\frac{\sigma_i}{2},\frac{\sigma_j}{2}\right] = i\epsilon_{ijk}\frac{\sigma_k}{2}$

This is called the fundamental representation of $SU(2)$. More generally, an $n$-dimensional representation of $SU(2)$ consists of $n \times n$ unitary matrices whose generators satisfy the same commutation relations.

Written by Vivek

September 10, 2010 at 17:42

## Group Theory I – O(n), SO(n), U(n), SU(n)


I have been trying to learn group theory for a long time. Invariably, the books I come across are either too formal or too cursory. Setting that aside, there are of course a number of very well written introductions to group theory available on the internet. In this post, I will try not to bore you with what a group is (chances are, you already know, if you’re reading this), but will present a somewhat different perspective on how it fits into our ‘daily’ physics.

Orthogonal and Special Orthogonal Groups

The set of all orthogonal matrices, i.e. matrices satisfying

$A^{T}A = AA^{T} = I$

forms a group denoted by $O(n)$. Now, if $A$ is a real orthogonal matrix of dimension $n$, it has $n(n-1)/2$ independent parameters. This is easily seen by taking $n = 2$. Let me try a more general argument here:

$A = \left(\begin{array}{cccc}a_{11}&a_{12}&a_{13}&\ldots\\a_{21}&a_{22}&a_{23}&\ldots\\a_{31}&a_{32}&a_{33}&\ldots\\ \ldots & \ldots & \ldots &\ldots\end{array}\right)$

Clearly,

$(A^{T}A)_{ij} = \sum_{k}a_{ki}a_{kj} = \sum_{k}a_{ik}a_{jk} = \delta_{ij}$

There are $n$ of these equations with $1$ on the RHS (the diagonal, $i = j$), and $(n^2 - n)$ equations with a $0$ on the RHS. Of the latter, only half are distinct, because $i \leftrightarrow j$ yields the same equation. So the number of independent conditions is $n + n(n-1)/2 = n(n+1)/2$, and the number of independent parameters among the $n^2$ entries is $n^2 - n(n+1)/2 = n(n-1)/2$.

Why is $O(n)$ important? It is the rotation group in $n$-dimensional Euclidean space. The only further restriction we need to impose is that the matrices have a determinant of $+1$, which makes them special, and part of the group $SO(n)$. It is well known that rotation matrices in 3 dimensions are orthogonal and have a unit determinant. Note that orthogonality only guarantees a determinant of $\pm 1$; proper rotations belong to $SO(n)$.

A remark on parametrization: elements of both $O(n)$ and $SO(n)$ are parametrized in general by continuous variables. For a rotation in $2$-dimensional space, one angle is sufficient; in $3$ dimensions, we have the three Euler angles. So each rotation matrix is a function of $n(n-1)/2$ angles, which vary continuously. Groups that depend on continuously varying parameters are called Lie groups. Also, since the angles vary over closed, finite intervals, these groups are said to be compact.

Unitary and Special Unitary Groups

The set of unitary matrices, i.e. matrices satisfying

$A^{\dagger}A = AA^{\dagger} = I$

forms a group denoted by $U(n)$. Let’s use the above idea to find the number of independent parameters of a unitary matrix of dimension $n$.

$A = \left(\begin{array}{cccc}a_{11}&a_{12}&a_{13}&\ldots\\a_{21}&a_{22}&a_{23}&\ldots\\a_{31}&a_{32}&a_{33}&\ldots\\ \ldots & \ldots & \ldots &\ldots\end{array}\right)$

The unitarity condition translates to

$(A^{\dagger}A)_{ij} = \sum_{k}a^{*}_{ki}a_{kj} = \sum_{k}a_{ik}a^{*}_{jk} = \delta_{ij}$

In this case, the diagonal ($i = j$) conditions are real, giving $n$ equations, while the off-diagonal conditions are complex, each contributing two real equations; here $i \leftrightarrow j$ gives the complex conjugate equation, not a new one, so there are $n(n-1)/2$ distinct complex conditions, i.e. $n(n-1)$ real ones. In all, that is $n + n(n-1) = n^2$ real conditions on the $2n^2$ real entries, leaving $n^2$ independent real parameters for a matrix in $U(n)$.

Additionally, if the determinant of the matrix is $+1$, the matrix is said to be special, and part of the special unitary group of order $n$, i.e. $SU(n)$; this condition removes one more parameter, leaving $n^2 - 1$. By the way, I haven’t shown that any of these sets are indeed groups. For that, you have to show closure under group multiplication, associativity, existence of a unique unit element and of an inverse for every element. Why is $SU(n)$ important? Well, that requires a motivation through $SU(2)$, the ‘spin group’ in nonrelativistic quantum mechanics. I will address this in a subsequent post.

Much of group theory involves a lot of jargon, which can be a bit tricky to connect to the “real” world, if you think like me. But it helps to know the jargon, as in any field you want to grapple with.

A subset $G'$ of a group $G$ which is itself a group under the same multiplication (in particular, closed under products and inverses) is called a subgroup of $G$.

Let $g' \in G'$ and $g \in G$. If $gg'g^{-1} \in G'$ for every $g \in G$ and $g' \in G'$, then $G'$ is called an invariant subgroup of G.

Representations

So far, we have looked at matrix representations of the orthogonal and unitary groups. These are useful because they let us employ familiar rules of matrix algebra to study the properties of the group in question. Matrix representations are closely associated with symmetries. A good example is the time-independent Schrödinger equation,

$H\psi = E\psi$

which is an eigenvalue problem for a stationary state $\psi$. If there exists a group $G$ of transformations under which the Hamiltonian $H$ stays invariant, then

$H_{trans} = RHR^{-1} = H$

for every $R \in G$. In particular, this implies that every element of $G$ commutes with the Hamiltonian, i.e. $RH = HR$ (and hence, by a result from linear algebra, it is possible to diagonalize $H$ and $R$ simultaneously for every $R \in G$, quite a strong result!).

Now,

$R(H\psi) = R(E\psi) = E(R\psi)$

but

$R(H\psi) = H(R\psi) = E(R\psi)$

So, all the transformed states are degenerate and constitute a multiplet.

Suppose $\Omega$ denotes the vector space of all transformed solutions, and has a finite dimension $N$. Then, it has a basis, which we can denote by $\psi_1, \psi_2, \ldots, \psi_{N}$. Since $R\psi_j$ belongs to the multiplet, it can be expanded in terms of this basis, as

$R\psi_j = \sum_{k}r_{jk}\psi_{k}$

This means that there is a matrix $\{r_{jk}\}$ associated with every group element $R$.

Group representations are of two kinds:

1. Irreducible, which means that by acting on any single element of $\Omega$ with all the elements of $G$, we obtain vectors spanning the whole of $\Omega$, or

2. Reducible, in which case, the vector space $\Omega$ splits into a direct sum of vector subspaces each of which is mapped into itself (but not into another) under the action of $G$, i.e. $\Omega = \Omega_1 \oplus \Omega_2 \oplus \ldots$.

Written by Vivek

September 10, 2010 at 14:12