# Least Action

Nontrivializing triviality… and vice versa.

## Divergence Theorem in Complex Coordinates

The divergence theorem in complex coordinates,

$\int_R d^2{z} (\partial_z v^z + \partial_{\bar{z}}v^{\bar{z}}) = i \oint_{\partial R}(v^z d\bar{z} - v^{\bar{z}}dz)$

(where the contour integral runs counterclockwise around the region $R$) appears in two-dimensional conformal field theory, where it is used, for example, to derive Noether's theorem and the Ward identity for a conformally invariant scalar field theory; it is useful throughout 2D CFT/string theory. This is equation (2.1.9) of Polchinski's Volume 1, but the book does not give a proof.

This is straightforward to prove by converting both sides separately to Cartesian coordinates $(\sigma^1, \sigma^2)$, through

$z = \sigma^1 + i \sigma^2$
$\bar{z} = \sigma^1 - i \sigma^2$

$\partial_z = \frac{1}{2}(\partial_1 - i \partial_2)$

$\partial_{\bar{z}} = \frac{1}{2}(\partial_1 + i \partial_2)$

$d^2 z = 2 d\sigma^1 d\sigma^2 = 2 d^2 \sigma$

and using Green's theorem in the plane

$\oint_{\partial R}(L dx + M dy) = \int \int_{R} \left(\frac{\partial M}{\partial x} - \frac{\partial L}{\partial y}\right) dx dy$

with the identifications

$x \rightarrow \sigma^1, y \rightarrow \sigma^2$
$L \rightarrow -v^2, M \rightarrow v^1$

There is perhaps a faster and more elegant way of doing this directly in the complex plane, but this particular line of reasoning makes contact with the underlying Green’s theorem in the plane, which is more familiar from real analysis.
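As a numerical sanity check (my addition, not part of the original derivation), one can test the identity on the unit disk with a made-up vector field, say $v^z = z^2\bar{z}$, $v^{\bar{z}} = \bar{z}^2$, for which both sides evaluate to $2\pi$:

```python
import numpy as np

# Test field (an arbitrary choice): v^z = z^2 zbar, v^zbar = zbar^2,
# so that d_z v^z = 2 z zbar and d_zbar v^zbar = 0.

# LHS: int_R d^2z (d_z v^z + d_zbar v^zbar), with d^2z = 2 dsigma^1 dsigma^2,
# over the unit disk, using a polar midpoint rule.
nr, nth = 400, 400
r = (np.arange(nr) + 0.5) / nr                   # radial midpoints
th = (np.arange(nth) + 0.5) * 2 * np.pi / nth    # angular midpoints
R, TH = np.meshgrid(r, th, indexing="ij")
Z = R * np.exp(1j * TH)
lhs = np.sum(2 * (2 * Z * np.conj(Z)) * R) * (1 / nr) * (2 * np.pi / nth)

# RHS: i * oint_{|z|=1} (v^z dzbar - v^zbar dz), counterclockwise.
t = np.arange(2000) * 2 * np.pi / 2000
z = np.exp(1j * t)
dz = 1j * z * (2 * np.pi / 2000)
rhs = 1j * np.sum(z**2 * np.conj(z) * np.conj(dz) - np.conj(z) ** 2 * dz)

print(lhs.real, rhs.real)  # both approach 2*pi
```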

Written by Vivek

October 12, 2014 at 20:47

## The Yang-Mills Field Strength: a motivation for differential forms

Although their importance and applications are widely recognized in theoretical physics, differential forms are not part of the standard physics curriculum, apart from the rare mention in a general relativity course (e.g. one based on Sean Carroll's book). In this blog post, we describe the construction of the field strength tensor for Yang-Mills theory using differential forms. The derivation (based on the one given by Tony Zee in his QFT book) is elegant, and mathematically less tedious than the more conventional derivation based on the matrix gauge field. It also illustrates the power of the differential forms approach.

To begin with, let me define a normalized matrix gauge field $A_\mu$ which is $-i$ times the usual matrix gauge field. So in the equations that appear below, the covariant derivative has no lurking factors of $-i$. This greatly simplifies the algebra for us, as we don’t have to keep track of conjugations and sign changes associated with $i$. The covariant derivative is thus $D_\mu = \partial_\mu + A_\mu$ in terms of this new gauge field. The matrix 1-form $A$ is

$A = A_\mu dx^\mu$

So, $A^2 = A_\mu A_\nu dx^\mu dx^\nu$. But since $dx^\mu dx^\nu = -dx^\nu dx^\mu$, only the antisymmetric part of the product survives and hence we can write

$A^2 = \frac{1}{2}[A_\mu, A_\nu] dx^\mu dx^\nu$

We want to construct an appropriate 2-form $F = \frac{1}{2}F_{\mu\nu}dx^\mu dx^\nu$ from the 1-form $A$. Now, if $d$ denotes the exterior derivative, then $dA$ is a 2-form, as is $A^2$. These are the only two 2-forms we can construct from $A$ alone, so $F$ must be a linear combination of them. This is a very simple and neat argument!

Now, the transformation law for the gauge potential is

$A \rightarrow U A U^\dagger + U dU^\dagger$

where $U$ is a 0 form (so $dU^\dagger = \partial_\mu U^\dagger dx^\mu$). Applying d to the transformation law, we get

$dA \rightarrow U dA U^\dagger + dU A U^\dagger - U A dU^\dagger + dU dU^\dagger$

where the negative sign in the third term comes from the graded Leibniz rule: moving $d$ past the 1-form $A$ costs a sign. Squaring the transformation law yields

$A^2 \rightarrow UA^2 U^\dagger + U A dU ^\dagger + U dU^\dagger U A U^\dagger + U dU^\dagger U dU^\dagger$

Now, $UU^\dagger = 1$, so applying $d$ to both sides gives $dU\,U^\dagger + U\,dU^\dagger = 0$, i.e. $U dU^\dagger = -dU\,U^\dagger$. So, we can write the squared transformation law as

$A^2 \rightarrow U A^2 U^\dagger + U A dU^\dagger - dU A U^\dagger - dU dU^\dagger$
whereas if we recall the expression for the transformation of $dA$, it was just
$dA \rightarrow U dA U^\dagger + dU A U^\dagger - U A dU^\dagger + dU dU^\dagger$

Clearly, if we simply add $A^2$ and $dA$, the last three terms on the RHS of each cancel pairwise, and we get

$A^2 + dA \rightarrow U(A^2 + dA)U^\dagger$

which is the expected transformation law for a field strength of the form $F = A^2 + dA$:

$F \rightarrow U F U^{\dagger}$
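As a concrete check (my addition; the field $A_\mu$ and transformation $U$ below are made-up examples), the component form of this statement, $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu]$ with $F \rightarrow UFU^\dagger$, can be verified numerically for an su(2)-valued field in two spacetime dimensions, using finite differences for the derivatives:

```python
import numpy as np

# Pauli matrices; with the convention A_mu = -i (usual field), A_mu below
# is anti-Hermitian and D_mu = d_mu + A_mu carries no factors of i.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def A(mu, x):
    """A smooth, made-up su(2) gauge field in 2 spacetime dimensions."""
    if mu == 0:
        return -1j * ((np.sin(x[0]) + x[1]) * s1 + np.cos(x[1]) * s2 + x[0] * x[1] * s3)
    return -1j * (x[0] ** 2 * s1 + np.sin(x[1]) * s2 + np.cos(x[0]) * s3)

def U(x):
    """A smooth SU(2) gauge transformation exp(-i a . sigma), in closed form."""
    a = np.array([x[0], 0.0, np.sin(x[1])])
    n = np.linalg.norm(a)
    return np.cos(n) * np.eye(2) - 1j * np.sin(n) / n * (a[0] * s1 + a[1] * s2 + a[2] * s3)

def d(fun, mu, x, h=1e-5):
    """Central finite difference of a matrix-valued function of x."""
    xp = list(x); xm = list(x)
    xp[mu] += h; xm[mu] -= h
    return (fun(xp) - fun(xm)) / (2 * h)

def F01(Afun, x):
    """F_{01} = d_0 A_1 - d_1 A_0 + [A_0, A_1]."""
    A0, A1 = Afun(0, x), Afun(1, x)
    return (d(lambda y: Afun(1, y), 0, x) - d(lambda y: Afun(0, y), 1, x)
            + A0 @ A1 - A1 @ A0)

def Aprime(mu, x):
    """Gauge-transformed field: A'_mu = U A_mu U^dag + U d_mu U^dag."""
    Ux = U(x)
    return Ux @ A(mu, x) @ Ux.conj().T + Ux @ d(lambda y: U(y).conj().T, mu, x)

x0 = [0.3, 0.7]
diff = F01(Aprime, x0) - U(x0) @ F01(A, x0) @ U(x0).conj().T
print(np.max(np.abs(diff)))  # ~0 up to finite-difference error: F -> U F U^dag
```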

The differential form approach uses compact notation that suppresses the Lorentz index $\mu$ as well as the group index $a$, and gives us a fleeting glimpse into the connection between gauge theory and fibre bundles.

For a gentle yet semi-rigorous introduction to differential forms, the reader is referred to the book on General Relativity by Sean Carroll.

Written by Vivek

June 11, 2014 at 10:53

## Cadabra – a Computer Algebra System for field theory

This seems very exciting:

Cadabra is a computer algebra system (CAS) designed specifically for the solution of problems encountered in field theory. It has extensive functionality for tensor computer algebra, tensor polynomial simplification including multi-term symmetries, fermions and anti-commuting variables, Clifford algebras and Fierz transformations, implicit coordinate dependence, multiple index types and many more. The input format is a subset of TeX. Both a command-line and a graphical interface are available.

There are two interesting papers on this. The first is a semi-technical overview, and the other (hep-th/0701238) is a more comprehensive one geared towards an audience familiar with various problems in modern field theory. The abstract of the second paper reads:

**Introducing Cadabra: a symbolic computer algebra system for field theory problems**

(Submitted on 25 Jan 2007 (v1), last revised 14 Jun 2007 (this version, v2))

Abstract: Cadabra is a new computer algebra system designed specifically for the solution of problems encountered in field theory. It has extensive functionality for tensor polynomial simplification taking care of Bianchi and Schouten identities, for fermions and anti-commuting variables, Clifford algebras and Fierz transformations, implicit coordinate dependence, multiple index types and many other field theory related concepts. The input format is a subset of TeX and thus easy to learn. Both a command-line and a graphical interface are available. The present paper is an introduction to the program using several concrete problems from gravity, supergravity and quantum field theory.

Written by Vivek

November 8, 2010 at 00:35

## Coulomb’s Law in (n+1) dimensions

The pairwise interaction energy in $n$ spatial dimensions is given by

$E = -\int\frac{d^n k}{(2\pi)^{n}}\frac{e^{i\vec{k}\cdot\vec{r}}}{k^2+m^2}$

To evaluate this $n$-dimensional Fourier integral, we must employ hyperspherical coordinates. The volume element is

$dV = k^{n-1}\sin^{n-2}\phi_1\sin^{n-3}\phi_2\ldots\sin\phi_{n-2}\,dk d\phi_1\ldots d\phi_{n-1}$

The angles $\phi_{1}, \ldots, \phi_{n-2}$ range from $0$ to $\pi$ and $\phi_{n-1}$ ranges from $0$ to $2\pi$. Writing $\vec{k}\cdot\vec{r} = kr\cos\phi_1$, the integral can be written as

$E = -\frac{(2\pi)^{d} C}{(2\pi)^{n}}\int_{0}^{\infty}\frac{dk\,k^{n-1}}{k^2+m^2}\int_{0}^{\pi}d\phi_1\,e^{ikr\cos\phi_1}\sin^{n-2}\phi_1$

The constant $C$ is the product of integrals of the form $\int_{0}^{\pi} \sin^{n-k}\phi\,d\phi$. The number of such integrals depends on the dimension $n$. In two and three dimensions, $C = 1$. In more than 3 dimensions, there are $n-3$ such terms, for $k = 3, 4, \ldots, n-1$ (the integral over $\phi_{n-1}$ produces a $2\pi$, which we've already factored out, and the $\sin^{n-2}\phi_1$ factor stays with the exponential). The value of $d$ is $0$ for 2 dimensions, and is $1$ for 3 and higher dimensions.
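As a check on this angular bookkeeping (my addition), the $2\pi$ together with all of the $\sin^m$ integrals must reproduce the total solid angle in $n$ dimensions, $\Omega_n = 2\pi^{n/2}/\Gamma(n/2)$:

```python
import numpy as np
from math import gamma, pi
from scipy.integrate import quad

def solid_angle(n):
    """Total solid angle in n spatial dimensions, assembled from the angular
    integrals of the hyperspherical volume element: a 2*pi from phi_{n-1},
    times int_0^pi sin^m(phi) dphi for m = 1, ..., n-2."""
    total = 2 * pi
    for m in range(1, n - 1):
        total *= quad(lambda p: np.sin(p) ** m, 0, pi)[0]
    return total

for n in range(2, 8):
    print(n, solid_angle(n), 2 * pi ** (n / 2) / gamma(n / 2))  # the two columns agree
```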

The integral over $\phi_1$ produces a generalized hypergeometric function ${}_{2}F_{3}(a_1,a_2;b_1,b_2,b_3;\alpha)$. Specifically, for $n \geq 2$, the integral over $\phi_1$ produces

$\frac{\sqrt{\pi}e^{-ikr}\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\left[_{2}F_{3}\left(\frac{n-1}{4},\frac{n+1}{4};\frac{1}{2},\frac{n-1}{2},\frac{n}{2};-k^2 r^2\right) + (ikr) _{2}F_{3}\left(\frac{n+1}{4},\frac{n+3}{4};\frac{3}{2},\frac{n+1}{2},\frac{n}{2};-k^2 r^2\right)\right]$

(As a check, for $n =2$, this becomes $\pi J_{0}(kr)$, which is what we obtained for 2 spatial dimensions in a previous post.)
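The $n = 2$ check is easy to confirm numerically (my addition, using SciPy):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# For n = 2 the angular integral is int_0^pi e^{i k r cos(phi)} dphi,
# which should reduce to pi * J_0(kr).
kr = 1.7  # arbitrary test value
re_part, _ = quad(lambda p: np.cos(kr * np.cos(p)), 0, np.pi)
im_part, _ = quad(lambda p: np.sin(kr * np.cos(p)), 0, np.pi)
print(re_part, np.pi * j0(kr), im_part)  # re_part matches pi*J0(kr); im_part ~ 0
```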

The resulting integration over $k$ is rather tricky. At this point, I don't know if it's possible to do it by hand, so I am using Mathematica to perform it.

Edit: a few hours later…

So, it seems this integral does not have a closed form representation, or at least not one that Mathematica can find.

Written by Vivek

October 21, 2010 at 18:39

## Feynman Diagrams using Adobe Illustrator

I recently became aware that Professor Daniel Schroeder used Adobe Illustrator to make the Feynman diagrams in Chapter 9 of his Thermal Physics book. A Google search yielded two videos by a YouTube user named AjabberWok. I'm taking the liberty of embedding them here. If you would rather view them in a new window, just click on the videos to be taken to the appropriate YouTube link.

In a previous post, I briefly outlined the installation of FeynArts, an open source Mathematica package for creating Feynman diagrams. There are also other tools, such as the open source JaxoDraw program and the axodraw $\LaTeX$ package, which JaxoDraw actually uses to export diagrams as $\LaTeX$ line art. I personally endorse the use of open source programs such as FeynArts and JaxoDraw, but those of you who already have access to Adobe Illustrator might find these videos quite useful.

Written by Vivek

October 17, 2010 at 11:53

## FeynArts on Mathematica in Windows

As much as I hate to admit it, some of us are constrained to use Mathematica on Windows for some reason or the other. If you happen to be part of this ‘some of us’ subset (at least temporarily, in my case) and are eager to get your hands on FeynArts, a nifty package for Mathematica which lets you make and play with Feynman diagrams, you’re going to have some trouble installing it on the current version of Mathematica. If you are using Windows, I suggest downloading the tarball and unzipping its contents to:

The important thing to note is that the tarball actually unpacks into a new directory (FeynArts-3.5 for now). You need to ensure that the contents of this directory are placed in the above directory, so that FeynArts-3.5 is not a subfolder of the above folder. In short, this is what my folder looks like:

The next step is to fire up Mathematica and select “Install” from the File menu. Select “Package” under the “Type of item to Install” tab and navigate to the above directory using the “File” option under the Source tab. Select FeynArts35.m and type “FeynArts” in the Install Name field. Finally, under “Default Installation Directory”, check “Your user Mathematica base directory”.

This last step is important: if you check the “System-wide Mathematica base directory” option, you’ll end up with the \$FeynArtsDir variable pointing to “C:\Program Files\Wolfram Research\Mathematica\7.0\AddOns\Applications”. In this case, FeynArts will work only if you dump the contents of “C:\Users\<your Windows user name>\AppData\Roaming\Mathematica\Applications\” into “C:\Program Files\Wolfram Research\Mathematica\7.0\AddOns\Applications”, something you shouldn’t do as this makes the Application folder messy.

A workaround should be to change the Directory[] variable in Setup.m for FeynArts, to point to the folder of your choice. I tried this, but ran into other path-related problems. Anyway, so long as it works, who cares!

Written by Vivek

October 15, 2010 at 19:08

## Gaussian Integrals and Wick Contractions

Theorem 1: For real $a > 0$,

$\langle x^{2n}\rangle = \frac{\int_{-\infty}^{\infty}dx\,e^{-\frac{1}{2}ax^2}x^{2n}}{\int_{-\infty}^{\infty}dx\,e^{-\frac{1}{2}ax^2}} = \frac{1}{a^{n}}(2n-1)(2n-3)\cdots 5\cdot 3\cdot 1$
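A quick numerical check of Theorem 1 (my addition, using SciPy's quad):

```python
import numpy as np
from scipy.integrate import quad

def moment(a, n):
    """<x^{2n}> for the Gaussian weight exp(-a x^2 / 2), computed numerically."""
    num = quad(lambda x: np.exp(-0.5 * a * x * x) * x ** (2 * n), -np.inf, np.inf)[0]
    den = quad(lambda x: np.exp(-0.5 * a * x * x), -np.inf, np.inf)[0]
    return num / den

def closed_form(a, n):
    """(2n-1)(2n-3)...3.1 / a^n."""
    val = 1.0
    for k in range(1, 2 * n, 2):
        val *= k
    return val / a ** n

a = 1.7  # arbitrary positive test value
for n in range(1, 5):
    print(n, moment(a, n), closed_form(a, n))  # the two columns agree
```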

Theorem 2: For a real symmetric, positive-definite $N\times N$ matrix $A$, a real $N \times 1$ vector $x$ and a real $N\times 1$ vector $J$, we have

$\int_{-\infty}^{\infty}\dotsi\int_{-\infty}^{\infty}\,dx_1\cdots dx_N\, e^{-\frac{1}{2}x^{T}Ax + J^{T}x} = \left(\frac{(2\pi)^N}{\det(A)}\right)^{1/2}e^{\frac{1}{2}J^{T}A^{-1}J}$

Proof:

Define $O$ to be the real orthogonal matrix which diagonalizes $A$ to its diagonal form $D$. That is,

$A = O^{T} D O$

where $O^{T}O = OO^{T} = I$. Further, let us choose $O$ to be a special orthogonal matrix, so that $\det(O) = +1$. Also, writing $X = (x_1, \ldots, x_N)^{T}$, define $Y = OX$. Now,

$dy_1 \cdots dy_N = |\det(O)|\, dx_1 \cdots dx_N = dx_1 \cdots dx_N$

The argument of the exponential is

$-\frac{1}{2}X^{T}AX + J^{T}X = -\frac{1}{2}Y^{T}O(O^{-1}DO)(O^{T}Y) + J^{T}O^{T} Y$

which equals

$-\frac{1}{2}Y^{T}DY + J^{T}O^{T}Y$

The first term is

$-\frac{1}{2}Y^{T}DY = -\frac{1}{2}\sum_{i}y_{i}D_{ii}y_{i}$

Let $C^{T} = J^{T}O^{T}$ (that is, $C = OJ$). So the second term is

$J^{T}O^{T}Y = C^{T}Y$

So the argument of the exponential becomes

$-\frac{1}{2}\sum_{i}y_{i}D_{ii}y_{i} + \sum_{i}C_{i}y_{i}$

which can be written as

$-\frac{1}{2}\sum_{i}\left[D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2 - \frac{C_i^2}{D_{ii}}\right]$

The integrand becomes

$e^{-\frac{1}{2}\sum_{i}\left[D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2 - \frac{C_i^2}{D_{ii}}\right]} = \prod_{i}e^{-\frac{1}{2}D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2}e^{\frac{1}{2}C_{i}D_{ii}^{-1}C_{i}}$

Hence the quantity to be evaluated is

$\left\{\int_{-\infty}^{\infty}\dotsi\int_{-\infty}^{\infty}dy_1 \cdots dy_N \prod_{i}e^{-\frac{1}{2}D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2}\right\}e^{\frac{1}{2}\sum_{i}C_{i}D_{ii}^{-1}C_{i}}$

The quantity in curly brackets involves $N$ definite Gaussian integrals, the $i^{th}$ term of which is

$\sqrt{\frac{2\pi}{D_{ii}}}$

using the identity $\int_{-\infty}^{\infty} dt\,e^{-\frac{1}{2}at^2} = \sqrt{\frac{2\pi}{a}}$.

So the continued product is

$\left(\frac{(2\pi)^N}{\det(D)}\right)^{1/2} = \left(\frac{(2\pi)^N}{\det(A)}\right)^{1/2}$

whereas the exponential factor sticking outside is

$e^{\frac{1}{2}\sum_{i}C_{i}D_{ii}^{-1}C_{i}} = e^{\frac{1}{2}C^{T}D^{-1}C} = e^{\frac{1}{2}(J^{T}O^{T}OA^{-1}O^{-1}OJ)} = e^{\frac{1}{2}J^{T}A^{-1}J}$

This completes the proof.
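Theorem 2 can be spot-checked numerically for $N = 2$ (my addition; $A$ and $J$ are arbitrary test data):

```python
import numpy as np
from scipy.integrate import dblquad

# Arbitrary test data: a symmetric positive-definite A and a source J.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
J = np.array([0.3, -0.2])

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v + J @ v)

numeric, _ = dblquad(integrand, -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)
closed = np.sqrt((2 * np.pi) ** 2 / np.linalg.det(A)) * np.exp(0.5 * J @ np.linalg.inv(A) @ J)
print(numeric, closed)  # the two values agree
```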

Theorem 3: $\langle x_{i} x_{j} \cdots x_{k}x_{l}\rangle = \sum_{\mbox{Wick}}(A^{-1})_{ab}\cdots(A^{-1})_{cd}$

Proof: If $m$ is the number of factors in $\langle x_{i} x_{j} \cdots x_{k}x_{l}\rangle$, differentiate the identity proved in Theorem 2 above once with respect to each of $J_i, J_j, \ldots, J_k, J_l$ ($m$ derivatives in all), and then set $J = 0$.

As an example, differentiate the RHS of the identity in Theorem 2 with respect to $J_{i}$. The derivative is

$\frac{d}{dJ_{i}}\exp\left\{\frac{1}{2}\sum_{m,n}J_{m}(A^{-1})_{mn}J_{n}\right\} = \frac{1}{2}\left(\sum_{m,n}(\delta_{mi}(A^{-1})_{mn}J_{n} + J_{m}(A^{-1})_{mn}\delta_{ni})\right)\exp\left\{\frac{1}{2}\sum_{m,n}J_{m}(A^{-1})_{mn}J_{n}\right\}$

The prefactor of the exponential is

$\frac{1}{2}(\sum_{n}(A^{-1})_{in}J_{n} + \sum_{m}J_{m}(A^{-1})_{mi})$

which becomes

$\sum_{j}(A^{-1})_{ij}J_{j}$

using the fact that $A$ (and hence $A^{-1}$) is symmetric. Repeating this exercise, we see that bringing down a factor of $x_{i}$ amounts to differentiating with respect to $J_{i}$, which effectively introduces a matrix element of $A^{-1}$. By induction, after setting $J = 0$, the surviving terms are exactly those in which every index is paired off: the sum runs over all distinct ways of pairing the indices $i, j, \ldots, k, l$ into $(a, b), \ldots, (c, d)$ (the Wick contractions). This completes the “proof”.
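Theorem 3 can be illustrated numerically (my addition): sampling $x$ from the Gaussian weight $e^{-\frac{1}{2}x^T A x}$, whose covariance matrix is $A^{-1}$, the Monte Carlo average of $x_0 x_1 x_2 x_3$ should match the sum over the three pairings:

```python
import numpy as np

# A made-up 4x4 symmetric positive-definite A; the weight exp(-x^T A x / 2)
# has covariance matrix S = A^{-1}.
A = np.array([[2.0, 0.5, 0.0, 0.3],
              [0.5, 1.5, 0.2, 0.0],
              [0.0, 0.2, 1.8, 0.4],
              [0.3, 0.0, 0.4, 2.2]])
S = np.linalg.inv(A)

rng = np.random.default_rng(0)
L = np.linalg.cholesky(S)
x = rng.standard_normal((1_000_000, 4)) @ L.T   # samples with covariance S

mc = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])
wick = S[0, 1] * S[2, 3] + S[0, 2] * S[1, 3] + S[0, 3] * S[1, 2]
print(mc, wick)  # Monte Carlo average vs. the sum over the three Wick pairings
```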

Written by Vivek

October 9, 2010 at 21:45