# Least Action

Nontrivializing triviality... and vice versa.

## Fun with Homology

There is a theorem (currently attributed to Wikipedia, but I’m sure I can do better given more time) which states that

every closed surface can be produced by gluing the sides of some polygon, and even-sided polygons ($2n$-gons) can be glued in different ways to make different manifolds.

Conversely, a closed surface with $n$ nonzero homology classes can be cut open into a $2n$-gon.

Two interesting cases of this are:

1. Gluing opposite sides of a hexagon produces a torus $T^2$.
2. Gluing opposite sides of an octagon produces a genus-2 surface, i.e. a double torus (a surface with two holes).

I had trouble visualizing this on a piece of paper, so I found two videos which are fascinating and instructive, respectively.

The two-torus from a hexagon

The genus-2 Riemann surface from an octagon

I would like to figure out how one can make such animations, and generalizations of these, using Mathematica or Sagemath.

There are a bunch of other very cool examples on the YouTube channels of these users. Kudos to them for making such instructive videos!

PS – I see that $\LaTeX$ on WordPress has become (or is still?) very sloppy! 😦

Written by Vivek

October 20, 2016 at 21:57

## Why V(x) = -1/x^2 has no bound state

Here we present a scaling-based argument to show that the attractive potential

$V(x) = -\frac{\lambda}{x^2}$

($\lambda > 0$) has no bound states (i.e. states with energy $E < 0$). Consider the time-independent Schrödinger equation for this potential, which is the eigenvalue equation for the corresponding Hamiltonian,

$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} - \frac{\lambda}{x^2}\psi = E\psi$

This can be rearranged as

$\frac{d^2\psi}{dx^2} + \frac{2m\lambda}{\hbar^2}\frac{1}{x^2}\psi = -\frac{2mE}{\hbar^2}\psi$

Now, it is easy to see that the quantity

$\frac{2m\lambda}{\hbar^2}$

is dimensionless. So, this problem has no independent scale, even though naively one might think that $\lambda$ specifies a scale for this problem. We claim that for such a system, there can be no bound state. This is proved below.

Suppose we perform the scale transformation $x \rightarrow \alpha x$ where $\alpha$ is some nonzero real number, we see from the equation above that if $E$ is an eigenvalue, then so is $\alpha^2 E$.
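Explicitly, this is a one-line check: if $\psi$ solves the rescaled equation with energy $E$, define the rescaled function $\varphi(x) = \psi(\alpha x)$. Then $\varphi''(x) = \alpha^2 \psi''(\alpha x)$, so

$\frac{d^2\varphi}{dx^2} + \frac{2m\lambda}{\hbar^2}\frac{1}{x^2}\varphi = \alpha^2\left[\psi''(\alpha x) + \frac{2m\lambda}{\hbar^2}\frac{1}{(\alpha x)^2}\psi(\alpha x)\right] = -\frac{2m(\alpha^2 E)}{\hbar^2}\varphi$

and $\varphi$ is an eigenfunction with energy $\alpha^2 E$.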

Suppose now that a bound ground state exists, with energy $E_{G} < 0$. Scale invariance then implies that $\alpha^2 E_{G}$ must also be the energy of some bound state. But for any $\alpha^2 > 1$,

$\alpha^2 E_{G} < E_{G}$

since multiplying the negative ground state energy by a number greater than 1 only makes it more negative. This contradicts the assumption that $E_{G}$ is the ground state energy. In fact, we can make a stronger statement: by taking $\alpha$ arbitrarily large, the energy can be made as negative as we like, so the spectrum is unbounded below. Therefore, there is no finite-energy ground state for this system, and consequently there can be no bound states.

Note that there is no such problem with scattering states, i.e. states with positive energy. One can take an arbitrarily small positive energy scattering state and from it obtain valid energies of the continuum of higher energy scattering states by performing a scale transformation.

Incidentally, this is why potentials like $-\lambda[\delta(x)]^2$ and $-\lambda\delta(x)/x$ also have no bound states.

Written by Vivek

February 8, 2012 at 08:48

## Evaluation of Complex Error Functions erf(z) using GSL/C

For a small QFT calculation, I needed to numerically evaluate the imaginary error function $\mathrm{erfi}(x) = -i\,\mathrm{erf}(ix)$. But it turns out that GSL (and most other numerical recipe code I could find) can only deal with $\mathrm{erf}(x)$, where $x$ is real. Here’s a poor man’s implementation of $\mathrm{erf}(z)$ through the standard Taylor expansion,

$\mathrm{erf}(z) = \frac{2}{\sqrt{\pi}}\sum_{n = 0}^{\infty}\frac{(-1)^n z^{2n+1}}{n!(2n+1)}$

The catch here is to deal with propagation of errors in the complex Taylor series, and to also somehow benchmark the results. Well, I haven’t been able to think about this yet, but I was able to confirm that for real arguments, my Taylor series code is about as good as the GSL error function gsl_sf_erf(x).

I am working on a CUDA implementation of this now, because in my project I need to perform a numerical integration over the error function, which is quite intensive even for Mathematica. For now, I’m just sharing the serial implementation. (Hint: if you can write a wrapper for each of the GSL functions used inside my Taylor series method, you can embed them within a __global__ kernel call through a struct. Alternatively, and less fun, you can just parallelize the for loop.)

/* Function to compute erf(z) using a Taylor series expansion
 * Author: Vivek Saxena
 * Last updated: January 28, 2011 21:11 hrs */
#include <gsl/gsl_complex_math.h>
#include <gsl/gsl_sf_erf.h>
#include <gsl/gsl_sf_pow_int.h>
#include <gsl/gsl_sf_gamma.h>
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

const int TERMS = 10; /* number of terms to use in the Taylor series */

gsl_complex erfTaylor( gsl_complex z, int trunc ){
    double cz = 2.0 / sqrt(PI); /* prefactor 2/sqrt(pi) */
    gsl_complex res  = gsl_complex_rect(0,0),
                num  = gsl_complex_rect(0,0),
                den  = gsl_complex_rect(1,0),
                snum = gsl_complex_rect(1,0),
                temp = gsl_complex_rect(0,0);
    for (int i = 0; i < trunc; i++){
        /* i-th term: (2/sqrt(pi)) (-1)^i z^(2i+1) / (i! (2i+1)) */
        snum = gsl_complex_rect( cz * gsl_sf_pow_int(-1, i), 0 );
        num  = gsl_complex_mul(snum, gsl_complex_pow_real(z, 2*i + 1));
        den  = gsl_complex_rect((2*i + 1) * gsl_sf_fact(i), 0);
        temp = gsl_complex_div(num, den);
        res  = gsl_complex_add(res, temp); /* accumulate the partial sum */
    }
    return res;
}

int main( void ){
    printf("Real error function\n\n");
    for (float i = 0; i <= 1; i += 0.01){
        float gslerror = gsl_sf_erf(i);
        float taylor   = GSL_REAL( erfTaylor( gsl_complex_rect(i, 0), TERMS ) );
        printf("erf(%f): gsl = %f, taylor = %f, mag error = %f\n",
               i, gslerror, taylor, fabs(gslerror - taylor));
    }

    gsl_complex t, arg;
    printf("\n\nImaginary error function\n\n");
    for (float i = 0; i <= 1; i += 0.01){
        /* this would be your generic argument z in erf(z):
         * if z = x + iy, then arg = gsl_complex_rect(x, y); */
        arg = gsl_complex_rect(0, i);
        t   = erfTaylor( arg, TERMS );
        printf("erf(%f + i %f) = %f + i %f\n",
               GSL_REAL(arg), GSL_IMAG(arg), GSL_REAL(t), GSL_IMAG(t));
    }
    return 0;
}


Written by Vivek

January 28, 2011 at 21:12

## Cadabra – a Computer Algebra System for field theory

This seems very exciting:

Cadabra is a computer algebra system (CAS) designed specifically for the solution of problems encountered in field theory. It has extensive functionality for tensor computer algebra, tensor polynomial simplification including multi-term symmetries, fermions and anti-commuting variables, Clifford algebras and Fierz transformations, implicit coordinate dependence, multiple index types and many more. The input format is a subset of TeX. Both a command-line and a graphical interface are available.

There are two interesting papers on this. The first is a semi-technical overview, and the other (hep-th/0701238) is a more comprehensive one geared towards an audience familiar with various problems in modern field theory. The abstract of the second paper reads:

Introducing Cadabra: a symbolic computer algebra system for field theory problems

(Submitted on 25 Jan 2007 (v1), last revised 14 Jun 2007 (this version, v2))

Abstract: Cadabra is a new computer algebra system designed specifically for the solution of problems encountered in field theory. It has extensive functionality for tensor polynomial simplification taking care of Bianchi and Schouten identities, for fermions and anti-commuting variables, Clifford algebras and Fierz transformations, implicit coordinate dependence, multiple index types and many other field theory related concepts. The input format is a subset of TeX and thus easy to learn. Both a command-line and a graphical interface are available. The present paper is an introduction to the program using several concrete problems from gravity, supergravity and quantum field theory.

Written by Vivek

November 8, 2010 at 00:35

## Gaussian Integrals and Wick Contractions

Theorem 1: For positive real $a$,

$\langle x^{2n}\rangle = \frac{\int_{-\infty}^{\infty}dx\,e^{-\frac{1}{2}ax^2}x^{2n}}{\int_{-\infty}^{\infty}dx\,e^{-\frac{1}{2}ax^2}} = \frac{1}{a^{n}}(2n-1)(2n-3)\cdots 5\cdot 3\cdot 1$

Theorem 2: For a real symmetric $N\times N$ matrix $A$, a real $N \times 1$ vector $x$ and a real $N\times 1$ vector $J$, we have

$\int_{-\infty}^{\infty}\dotsi\int_{-\infty}^{\infty}\,dx_1\cdots dx_N\, e^{-\frac{1}{2}x^{T}Ax + J^{T}x} = \left(\frac{(2\pi)^N}{\det(A)}\right)^{1/2}e^{\frac{1}{2}J^{T}A^{-1}J}$

Proof:

Define $O$ to be the real orthogonal matrix which diagonalizes $A$ to its diagonal form $D$. That is,

$A = O^{T} D O$

where $O^{T}O = OO^{T} = I$. Further, let us choose $O$ to be a special orthogonal matrix, so that $\det(O) = +1$. Also, define $Y = OX$, where $X = (x_1, \ldots, x_N)^{T}$. Now,

$dy_1 \cdots dy_N = |\det(O)|\, dx_1 \cdots dx_N = dx_1 \cdots dx_N$

The argument of the exponential is

$-\frac{1}{2}X^{T}AX + J^{T}X = -\frac{1}{2}Y^{T}O(O^{T}DO)O^{T}Y + J^{T}O^{T}Y$

which equals

$-\frac{1}{2}Y^{T}DY + J^{T}O^{T}Y$

The first term is

$-\frac{1}{2}Y^{T}DY = -\frac{1}{2}\sum_{i}y_{i}D_{ii}y_{i}$

Let $C^{T} = J^{T}O^{T}$ (that is, $C = OJ$). So the second term is

$J^{T}O^{T}Y = C^{T}Y$

So the argument of the exponential becomes

$-\frac{1}{2}\sum_{i}y_{i}D_{ii}y_{i} + \sum_{i}C_{i}y_{i}$

which can be written as

$-\frac{1}{2}\sum_{i}\left[D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2 - \frac{C_i^2}{D_{ii}}\right]$

The integrand becomes

$e^{-\frac{1}{2}\sum_{i}\left[D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2 - \frac{C_i^2}{D_{ii}}\right]} = \prod_{i}e^{-\frac{1}{2}D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2}e^{\frac{1}{2}C_{i}D_{ii}^{-1}C_{i}}$

Hence the quantity to be evaluated is

$\left\{\int_{-\infty}^{\infty}\dotsi\int_{-\infty}^{\infty}dy_1 \cdots dy_N \prod_{i}e^{-\frac{1}{2}D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2}\right\}e^{\frac{1}{2}\sum_{i}C_{i}D_{ii}^{-1}C_{i}}$

The quantity in curly brackets involves $N$ definite Gaussian integrals, the $i^{th}$ term of which is

$\sqrt{\frac{2\pi}{D_{ii}}}$

using the identity $\int_{-\infty}^{\infty} dt\,e^{-\frac{1}{2}at^2} = \sqrt{\frac{2\pi}{a}}$.

So the continued product is

$\left(\frac{(2\pi)^N}{\det(D)}\right)^{1/2} = \left(\frac{(2\pi)^N}{\det(A)}\right)^{1/2}$

whereas the exponential factor sticking outside is

$e^{\frac{1}{2}\sum_{i}C_{i}D_{ii}^{-1}C_{i}} = e^{\frac{1}{2}C^{T}D^{-1}C} = e^{\frac{1}{2}(J^{T}O^{T}OA^{-1}O^{T}OJ)} = e^{\frac{1}{2}J^{T}A^{-1}J}$

This completes the proof.

Theorem 3: $\langle x_{i} x_{j} \cdots x_{k}x_{l}\rangle = \sum_{\text{Wick}}(A^{-1})_{ab}\cdots(A^{-1})_{cd}$

Proof: If $m$ is the number of factors in $\langle x_{i} x_{j} \cdots x_{k}x_{l}\rangle$, differentiate the identity proved in Theorem 2 above once with respect to each of $J_i, J_j, \ldots, J_k, J_l$ ($m$ derivatives in all), and set $J = 0$ at the end.

As an example, differentiate the RHS of the identity in Theorem 2 with respect to $J_{i}$. The derivative is

$\frac{d}{dJ_{i}}\exp\left\{\frac{1}{2}\sum_{m,n}J_{m}(A^{-1})_{mn}J_{n}\right\} = \frac{1}{2}\left(\sum_{m,n}(\delta_{mi}(A^{-1})_{mn}J_{n} + J_{m}(A^{-1})_{mn}\delta_{ni})\right)\exp\left\{\frac{1}{2}\sum_{m,n}J_{m}(A^{-1})_{mn}J_{n}\right\}$

the prefactor of the exponential is

$\frac{1}{2}(\sum_{n}(A^{-1})_{in}J_{n} + \sum_{m}J_{m}(A^{-1})_{mi})$

which becomes

$\sum_{j}(A^{-1})_{ij}J_{j}$

using the fact that $A$ is symmetric. Repeating this exercise, we see that bringing down a factor $x_{i}$ amounts to differentiating with respect to $J_{i}$, which effectively introduces a matrix element of $A^{-1}$. Setting $J = 0$ at the end kills every term in which a factor of $J$ survives, so what remains is a sum over all possible ways of pairing the indices $i, j, \ldots, k, l$, with one factor of $A^{-1}$ per pair. These pairings are precisely the Wick contractions. This completes the “proof”.

Written by Vivek

October 9, 2010 at 21:45

## A lemma on a rather frequently encountered set of Hermitian matrices

I’m documenting a small ‘lemma’ that I think is worth mentioning. Suppose $M^{i}$ are 4 Hermitian matrices ($i = 1, \ldots, 4$) satisfying

$M^{i}M^{j} + M^{j}M^{i} = 2\delta^{ij}I$

where $I$ denotes the identity matrix. Then these matrices have eigenvalues $\pm 1$, are traceless and are necessarily of even dimension.

For $i = j$, the anticommutator above gives $(M^{i})^2 = I$. So, for any eigenvector $X$ and eigenvalue $\lambda$, we have

$M^{i}X = \lambda X \implies (M^{i})^2 X = \lambda^2 X$

so $\lambda^2 = 1$, i.e. $\lambda = \pm 1$.

Traceless-ness has a neat proof:

Suppose $j \neq i$. Then

$tr(M^{i}) = tr(M^{i}(M^{j})^2) = tr(M^{i}M^{j}M^{j}) = tr(M^{j}M^{i}M^{j})$

where the last equality follows from $tr(ABC) = tr(CAB)$. But $M^{j}M^{i} = -M^{i}M^{j}$, so

$tr(M^{i}) = -tr(M^{i}M^{j}M^{j}) = -tr(M^{i}) \implies tr(M^{i}) = 0$.

Finally, suppose the numbers of +1 eigenvalues and -1 eigenvalues are $a$ and $b$ respectively. The dimension of the matrix is then $a + b$. Since the trace equals the sum of eigenvalues, we have

$a - b = 0$

So, the dimension is $a + b = 2a = 2b$, which is clearly always an even number.

Written by Vivek

September 19, 2010 at 20:33

## Group Theory II – SU(2)

In a previous post, I cursorily touched upon the special orthogonal and unitary groups. It turns out that in quantum mechanics, the operators of principal interest are either Hermitian or unitary. So it is only natural that we should be interested in transformations from the Special Unitary Group $SU(N)$. In particular, we dwell on the $N = 2$ case.

First of all, $SU(2)$ has 3 independent parameters, something that is obvious if one demands that a matrix $U$,

$U = \left(\begin{array}{cc}U_{11}&U_{12}\\U_{21}&U_{22}\end{array}\right)$

be unitary, that is

$\left(\begin{array}{cc}U_{11}&U_{12}\\U_{21}&U_{22}\end{array}\right)\left(\begin{array}{cc}U_{11}^{*}&U_{21}^{*}\\U_{12}^{*}&U_{22}^{*}\end{array}\right) = \left(\begin{array}{cc}1 & 0\\ 0 & 1\end{array}\right)$

and have a determinant equal to $+1$. It follows that any such matrix can be written as

$U = \left(\begin{array}{cc}U_{11}&U_{12}\\-U_{12}^{*}&U_{11}^{*}\end{array}\right)$

in which there are three independent parameters. If $U$ is expanded about the identity, i.e.

$U = 1 + i\epsilon G$

then unitarity of $U$ demands that $G$ be Hermitian, while the unit-determinant condition makes it traceless. For a finite transformation, this exponentiates to

$U = \exp(iH) = \exp(i\alpha G)$

More generally, $\det(U) = \exp(i\alpha\,tr(G))$, so $\det(U) = 1$ requires $tr(G) = 0$. Hence $H$ is a traceless Hermitian matrix.

We can write

$H = \sum_{k=1}^{3}\alpha_{k}G_{k} = \boldsymbol{\alpha}\cdot\boldsymbol{G}$

From nonrelativistic quantum mechanics, we know that the effect of a rotation through an angle $\theta$ on a spin-1/2 particle about an axis $\boldsymbol{\hat{n}}$ is given by the unitary matrix,

$U(\theta) = \exp(-i\theta\,\boldsymbol{\hat{n}}\cdot\boldsymbol{\sigma}/2)$

where $\boldsymbol{\sigma}$ is the Pauli spin matrix “vector” given by,

$\boldsymbol{\sigma} = \hat{x}\sigma_{x} + \hat{y}\sigma_{y} + \hat{z}\sigma_{z}$

Clearly, in this development, we can identify $G_j = \sigma_j/2$ for $j = 1, 2, 3$ as the generators of the group. The commutation relation for the generators is

$\left[\frac{\sigma_i}{2},\frac{\sigma_j}{2}\right] = i\epsilon_{ijk}\frac{\sigma_k}{2}$

This is called the fundamental representation of $SU(2)$. An $n$-dimensional representation of $SU(2)$ consists of $n \times n$ unitary matrices whose generators satisfy these same commutation relations.

Written by Vivek

September 10, 2010 at 17:42