# Least Action

Nontrivializing triviality… and vice versa.

## Fun with Homology

There is a theorem (currently attributed to Wikipedia, but I’m sure I can do better given more time) which states that

all closed surfaces can be produced by gluing the sides of some polygon in pairs, and the sides of an even-sided polygon (a 2n-gon) can be glued in different ways to produce different closed surfaces.

Conversely, a closed surface with $n$ non-trivial homology classes can be cut open into a $2n$-gon.

Two interesting cases of this are:

1. Gluing opposite sides of a hexagon produces a torus $T^2$.
2. Gluing opposite sides of an octagon produces a genus-2 surface, i.e. a double torus (a closed surface with two holes).

I had trouble visualizing this on a piece of paper, so I found two videos which are fascinating and instructive, respectively.

- The two-torus from a hexagon

- The genus-2 Riemann surface from an octagon

I would like to figure out how one can make such animations, and generalizations of these, using Mathematica or Sagemath.
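Meanwhile, the bookkeeping behind these gluings is easy to check in a few lines of Python: the glued $2n$-gon has one face, $n$ edges, and a number of corner classes fixed by the identifications, so the Euler characteristic $\chi = V - E + F$ determines the genus via $\chi = 2 - 2g$. The sketch below assumes that gluing opposite sides by translation identifies corner $i$ with corner $i+n+1 \pmod{2n}$ (my own bookkeeping convention, worth double-checking):

```python
# Euler-characteristic bookkeeping for gluing opposite sides of a 2n-gon.
# Assumption: gluing by translation identifies corner i with corner (i+n+1) mod 2n.

def genus_opposite_gluing(n):
    """Genus of the closed surface from a 2n-gon with opposite sides glued."""
    parent = list(range(2 * n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    # Union each corner with its image under the side identifications
    for i in range(2 * n):
        ra, rb = find(i), find((i + n + 1) % (2 * n))
        if ra != rb:
            parent[ra] = rb

    V = len({find(i) for i in range(2 * n)})  # corner classes
    E, F = n, 1                               # n glued edges, one face
    chi = V - E + F
    return (2 - chi) // 2

print(genus_opposite_gluing(3))  # hexagon: genus 1 (the torus)
print(genus_opposite_gluing(4))  # octagon: genus 2
```

The two cases above come out as expected, and the same routine answers the question for any $2n$-gon.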

There are a bunch of other very cool examples on the YouTube channels of these users. Kudos to them for making such instructive videos!

PS – I see that $\LaTeX$ on WordPress has become (or is still?) very sloppy! 😦

Written by Vivek

October 20, 2016 at 21:57

## Divergence Theorem in Complex Coordinates

The divergence theorem in complex coordinates,

$\int_R d^2{z} (\partial_z v^z + \partial_{\bar{z}}v^{\bar{z}}) = i \oint_{\partial R}(v^z d\bar{z} - v^{\bar{z}}dz)$

(where the contour integral circles the region $R$ counterclockwise) appears in two dimensional conformal field theory, where it is used, for example, to derive Noether’s theorem and the Ward identity for a conformally invariant scalar field theory, and it is useful throughout 2D CFT and string theory. This is equation (2.1.9) of volume 1 of Polchinski’s String Theory, but a proof is not given in the book.

This is straightforward to prove by converting both sides separately to Cartesian coordinates $(\sigma^1, \sigma^2)$, through

$z = \sigma^1 + i \sigma^2$
$\bar{z} = \sigma^1 - i \sigma^2$

$\partial_z = \frac{1}{2}(\partial_1 - i \partial_2)$

$\partial_{\bar{z}} = \frac{1}{2}(\partial_1 + i \partial_2)$

$d^2 z = 2 d\sigma^1 d\sigma^2 = 2 d^2 \sigma$

and using Green’s theorem in the plane

$\oint_{\partial R}(L dx + M dy) = \int \int_{R} \left(\frac{\partial M}{\partial x} - \frac{\partial L}{\partial y}\right) dx dy$

with the identifications

$x \rightarrow \sigma^1, y \rightarrow \sigma^2$
$L \rightarrow -v^2, M \rightarrow v^1$

There is perhaps a faster and more elegant way of doing this directly in the complex plane, but this particular line of reasoning makes contact with the underlying Green’s theorem in the plane, which is more familiar from real analysis.
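As a quick numerical sanity check of the identity, the Python sketch below evaluates both sides on the unit disk for a sample field (the non-holomorphic choice $v^z = z^2\bar{z}$, $v^{\bar{z}} = \bar{z}$ is just an arbitrary example):

```python
import numpy as np

# Sample smooth components; this (non-holomorphic) pair is an arbitrary test choice.
def vz(z):    return z**2 * np.conj(z)
def vzbar(z): return np.conj(z)

eps = 1e-6
def d_z(f, z):     # del_z = (del_1 - i del_2)/2, by central differences
    return 0.5 * ((f(z + eps) - f(z - eps)) / (2 * eps)
                  - 1j * (f(z + 1j * eps) - f(z - 1j * eps)) / (2 * eps))
def d_zbar(f, z):  # del_zbar = (del_1 + i del_2)/2
    return 0.5 * ((f(z + eps) - f(z - eps)) / (2 * eps)
                  + 1j * (f(z + 1j * eps) - f(z - 1j * eps)) / (2 * eps))

# LHS: area integral over the unit disk, with d^2z = 2 dsigma^1 dsigma^2
nr, nt = 400, 400
r  = (np.arange(nr) + 0.5) / nr          # midpoint rule in r
th = 2 * np.pi * np.arange(nt) / nt      # periodic trapezoid in theta
R, T = np.meshgrid(r, th, indexing="ij")
Z = R * np.exp(1j * T)
lhs = 2 * np.sum((d_z(vz, Z) + d_zbar(vzbar, Z)) * R) * (1.0 / nr) * (2 * np.pi / nt)

# RHS: i times the counterclockwise contour integral around |z| = 1
zb = np.exp(1j * th)
dz = 1j * zb                             # dz/dtheta on the unit circle
rhs = 1j * np.sum(vz(zb) * np.conj(dz) - vzbar(zb) * dz) * (2 * np.pi / nt)

print(lhs.real, rhs.real)  # both sides come out to 4*pi for this field
```

For this particular field both sides equal $4\pi$ analytically, and the two numerical estimates agree to the accuracy of the quadrature.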

Written by Vivek

October 12, 2014 at 20:47

## The Yang-Mills Field Strength: a motivation for differential forms

Although their importance and applications are widely recognized in theoretical physics, differential forms are not part of the standard curriculum in physics courses, except for the rare mention in a general relativity course, e.g. if you have used the book by Sean Carroll. In this blog post, we describe the construction of the field strength tensor for Yang-Mills theory using differential forms. The derivation (based on the one given by Tony Zee in his QFT book) is elegant, and mathematically less tedious than the more conventional derivation based on the matrix gauge field. It also illustrates the power of the differential forms approach.

To begin with, let me define a normalized matrix gauge field $A_\mu$ which is $-i$ times the usual matrix gauge field. So in the equations that appear below, the covariant derivative has no lurking factors of $-i$. This greatly simplifies the algebra for us, as we don’t have to keep track of conjugations and sign changes associated with $i$. The covariant derivative is thus $D_\mu = \partial_\mu + A_\mu$ in terms of this new gauge field. The matrix 1-form $A$ is

$A = A_\mu dx^\mu$

So, $A^2 = A_\mu A_\nu dx^\mu dx^\nu$. But since $dx^\mu dx^\nu = -dx^\nu dx^\mu$, only the antisymmetric part of the product survives and hence we can write

$A^2 = \frac{1}{2}[A_\mu, A_\nu] dx^\mu dx^\nu$

We want to construct an appropriate 2-form $F = \frac{1}{2}F_{\mu\nu}dx^\mu dx^\nu$ from this 1-form $A$. Now, if $d$ denotes the exterior derivative, then $dA$ is a 2-form, as is $A^2$. These are the only two 2-forms we can construct from $A$ alone, so $F$ must be a linear combination of them. This is a very simple and neat argument!

Now, the transformation law for the gauge potential is

$A \rightarrow U A U^\dagger + U dU^\dagger$

where $U$ is a 0 form (so $dU^\dagger = \partial_\mu U^\dagger dx^\mu$). Applying d to the transformation law, we get

$dA \rightarrow U dA U^\dagger + dU A U^\dagger - U A dU^\dagger + dU dU^\dagger$

where the negative sign in the third term comes from moving the 1-form d past the 1-form A. Squaring the transformation law yields

$A^2 \rightarrow UA^2 U^\dagger + U A dU ^\dagger + U dU^\dagger U A U^\dagger + U dU^\dagger U dU^\dagger$

Now, $UU^\dagger = 1$, so applying $d$ to both sides we get $U dU^\dagger = -dU U^\dagger$. So, we can write the transformed $A^2$ as

$A^2 \rightarrow U A^2 U^\dagger + U A dU^\dagger - dU A U^\dagger - dU dU^\dagger$
whereas if we recall the expression for the transformation of $dA$, it was just
$dA \rightarrow U dA U^\dagger + dU A U^\dagger - U A dU^\dagger + dU dU^\dagger$

Clearly if we merely add $A^2$ and $dA$, the last 3 terms on the RHS of each cancel out, and we get

$A^2 + dA \rightarrow U(A^2 + dA)U^\dagger$

which is the expected transformation law for a field strength of the form $F = A^2 + dA$:

$F \rightarrow U F U^{\dagger}$

The differential form approach uses compact notation that suppresses the Lorentz index $\mu$ as well as the group index $a$, and gives us a fleeting glimpse into the connection between gauge theory and fibre bundles.
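In components, $F = dA + A^2$ reads $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu]$ (with no factors of $i$, thanks to our anti-Hermitian convention). As a numerical sanity check, the Python sketch below builds $F_{\mu\nu}$ by finite differences for a made-up two-dimensional $SU(2)$ gauge field and verifies that it transforms covariantly; the particular $U(x)$ and $A_\mu(x)$ are arbitrary test functions, not anything canonical:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli matrices
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def U(x):
    """Made-up position-dependent SU(2) element, U = exp(i theta(x) sigma_1)."""
    th = x[0] + 0.5 * x[1] ** 2
    return np.cos(th) * np.eye(2) + 1j * np.sin(th) * s1

def A(mu, x):
    """Made-up anti-Hermitian gauge field (the '-i times usual' convention)."""
    if mu == 0:
        return 1j * (x[1] * s1 + np.sin(x[0]) * s3)
    return 1j * (x[0] * x[1] * s2 + s1)

h = 1e-5
def d(mu, f, x):
    """Central finite difference of a matrix-valued function of x in R^2."""
    e = np.zeros(2); e[mu] = h
    return (f(x + e) - f(x - e)) / (2 * h)

def F(Af, mu, nu, x):
    """F_mn = d_m A_n - d_n A_m + [A_m, A_n]."""
    Am, An = Af(mu, x), Af(nu, x)
    return (d(mu, lambda y: Af(nu, y), x) - d(nu, lambda y: Af(mu, y), x)
            + Am @ An - An @ Am)

def Aprime(mu, x):
    """Gauge transform A -> U A U+ + U dU+."""
    Ux = U(x)
    return Ux @ A(mu, x) @ Ux.conj().T + Ux @ d(mu, lambda y: U(y).conj().T, x)

x0 = np.array([0.7, -0.3])
lhs = F(Aprime, 0, 1, x0)                       # F built from the transformed field
rhs = U(x0) @ F(A, 0, 1, x0) @ U(x0).conj().T   # U F U+
print(np.max(np.abs(lhs - rhs)))                # small: only finite-difference error
```

The inhomogeneous $U dU^\dagger$ piece of the transformation visibly cancels out of $F$, which is the whole point of the construction.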

For a gentle yet semi-rigorous introduction to differential forms, the reader is referred to the book on General Relativity by Sean Carroll.

Written by Vivek

June 11, 2014 at 10:53

## From engineering to physics graduate school

This blog post is about switching to physics graduate school when you do not have a physics undergraduate degree per se, but something like an engineering degree. I have been meaning to write this post for years, as I never found anything like it when I was trying to figure things out for myself. In some sense, this is a personal chronicle of experiences, opinions and beliefs which have been shaped over the past 8 years or so. Before I proceed, though, I should warn readers that I am by no means an authority on either physics graduate school or this particular career path. Nor can I say with any degree of finality that things have worked out optimally for me. This is also not a philosophical career-advice post, and at this point I have no intention of adapting it to different kinds of individuals who may have different tastes, inclinations or backgrounds. Ultimately, and I cannot emphasize this enough, no advice is good advice.

Physics is a challenging and tricky subject that requires a lot of dedication and focus. I dislike the use of the term “genius” because while there are luminaries and experts in physics, use of this term suggests that it is possible for some people to do nothing and yet do extremely well in physics. This is deceptive. I believe everyone you see who is really very good has worked hard in his or her own way by solving lots of problems, understanding things properly and patiently, and persisting. Some people require less time, but it is incorrect to say that they do not have to work hard to achieve the same amount as someone else because they are geniuses.

But having said that, physics isn’t for the weak-hearted (nor is engineering actually, but there’s a greater commercial value associated with it, so it manages to attract more weak-hearted and feeble souls than the pure sciences do). Switching to physics is going to be a rocky road and if you are putting off physics for practicality so that you can do engineering (or something else) now and “always switch to physics later”, this is a bad idea. There are exceptions, which this post is about, so we’ll get to them later.

Why do I say this? Well, that’s because physics has several foundational pillars which you really can’t do without, and a patchy education means you will spend many years gathering confidence and learning things which you ought to have learnt systematically through courses and books. So the bottom line is: if you are into physics because of the popular science books on dark matter and string theory, then you are either in for a rude shock or you’re being misled. So start reading real physics books!

Also, be advised that you will probably not have a lot of time for “hobbies” and for doing things like playing guitar and skydiving and fixing radios and watching all your favorite movies and TV shows all at the same time. You could have done all of that as an undergrad (perhaps even a physics undergrad at a less demanding college) and yet managed to scrape through by studying at the last minute. But progress in the long term will demand some major task scheduling. And yet, I should add that a lot of very enthusiastic, excellent and bright people I know do manage to do all these things and also pace their physics work. So yes, it’s more about how you manage yourself.

Reading books by yourself is a good idea, but there is a reason why courses are more effective in getting you started: they expose you to the 40% or so of the things you can find in a book, but with discipline. You attend classes, work out problems in homeworks, take exams (okay, not the best thing in the world), and more importantly, discuss with peers and your professors.

3. Take core courses seriously

There is a lot of physics in engineering courses, more than you may realize. If you are a mechanical engineer, you have excellent opportunities to learn about stress tensors, Lagrangians and Hamiltonians, thermodynamics and some statistical mechanics, etc. If you are an electrical engineer, semiconductor device physics is a playground for statistical mechanics, solid state physics and even some quantum mechanics (these days, with people in engineering research playing with things like NEGF and simulations, “some” is an understatement). There’s a lot of physics and classical mechanics in particular, in things like linear systems analysis, control system theory, and of course, RF and electromagnetism. From a practical standpoint, doing badly in engineering courses, and better in physics courses only shows lack of commitment and not physicsy brilliance.

4. Take courses on Classical Mechanics, Quantum Mechanics, Electromagnetic Theory and Statistical Mechanics

You may be able to impress people with fancy courses like quantum field theory and particle physics as an undergrad, but everyone who does any serious physics knows the importance of the four pillar courses. A good foundation in Classical Mechanics at the level of Goldstein is vital to a good understanding of QM, SM, EMT and even more advanced courses. I don’t have a “below Goldstein” suggestion if you didn’t do calculus in high school (in India calculus is taught in the 11th year of school, and every science-stream student learns a good deal of calculus irrespective of whether he/she later does science or engineering). But Giancoli’s books are good to finish off by the end of your freshman year if you haven’t done so before (I personally read them in 11th and 12th grade, but that’s because of the Indian system). There’s also a very tried and tested book on Mechanics by Kleppner and Kolenkow from MIT, which is worth checking out. For a first course in Quantum Mechanics, I strongly suggest being able to do almost every problem of Griffiths’s book, but for a text there is no dearth of choices. I recommend having a good library of all the ‘standard’ books (these days you can get ebooks, but it’s probably a good idea to have hard copies nonetheless) like Shankar, Griffiths, Sakurai, Cohen-Tannoudji, Powell and Crasemann, Townsend, Merzbacher, and Gottfried, to name a few. For Statistical Mechanics, I believe Reif’s two books (the non-Berkeley series one as well as the Berkeley series one), Thermal Physics by Schroeder, and of course Pathria are good choices, though over time other books such as the one by Kubo have emerged as options too.

For EM, I definitely suggest reading Griffiths cover to cover, and working out as many problems as you can (ideally, all). This will form a great foundation for starting that Bible of EM called Jackson. Something I’ve realized in graduate school is that almost everyone (including wannabe serious physicists) hates electromagnetism. This came as a surprise to me as an electrical engineering undergraduate, because EM for me was one of the most important courses, and it was taught in a way that I couldn’t dislike it. So, for me doing Jackson problems — albeit in a very limited way given the time constraints of first semester in grad school — was fun. I could not understand why electromagnetic theory, which gives you quick returns in the form of visualizable, sensible, physical predictions, and is a good way to flex your muscle with special functions, mathematical methods, computer programming, etc. is loathed so much. I can find two reasons: (a) lack of time, (b) an enormous negative reputation built up by generations of practicing physicists and graduate students. Of course, Jackson problems are hard, and you will probably benefit a lot from discussing them (instead of just looking up solutions off the web). But this is somewhere you can make yourself different from the herd. That of course doesn’t mean you should keep Jackson’s book under your pillow and have sleepless nights filled with Bessel function integrals in your head.

There is also a very nicely written graduate textbook on EM Theory by Andrew Zangwill, which is also definitely worth checking out. Having used it alongside Jackson, I will say that it complements Jackson very well, though having never taught the course or developed enough conviction about it to say such things, I cannot opine on whether it can be an adequate replacement for Jackson, which remains a source of challenging problems and insights.

5. Improve your programming skills, get familiar with Mathematica, Python, C, Matlab, etc.

6. Take difficult courses after doing the foundational courses

It helps to develop more experience by challenging yourself by taking tougher courses in theoretical physics, even if you want to end up doing experiments later. For one, it teaches you the importance of the core courses, and provides you room to apply the skills you learnt in those courses. But it also exposes you to a wider range of physics, and sometimes even some research.

7. Do research

If you end up in your sophomore or junior year as an engineering student who can’t wiggle out and do some physics, do not restrict yourself to reading books and spending time watching popular science expositions of string theory as a theory of everything which will solve global hunger and malnutrition. Instead, do research and take on projects, even if they are in engineering topics. To a mature mind, there is no fundamental distinction between physics and engineering. Graduate schools and professors prefer students who have had some research exposure (even without stellar grades) over students who have perfect grades but have not proven themselves in an environment outside their comfort zone.

8. Form discussion groups with like-minded students, have blackboard talks, be involved, and ask lots of questions! No question is a stupid question, and we all started somewhere. Nobody is born knowing what supersymmetry is! I have learnt more by talking to friends doing different kinds of physics and engineering than I have by merely reading books and following the internet.

There are, of course, other administrative matters like taking the GREs, recommendations, etc., but those are important yet clerical enough that you can find adequate (and more expert) comments and treatments across the web, so I don’t think they merit inclusion here.

Written by Vivek

May 24, 2014 at 11:00


## Gnuplot with C/C++

Sometimes it is convenient to call gnuplot from within your own C/C++ code, rather than having the code write a data file and then manually executing gnuplot every time to re-plot its contents. This is easy if you use Linux: just use a pipe. For instance:

```cpp
#include <cstdio>

int main() {
    // Open a pipe to gnuplot; -persist keeps the plot window open
    FILE *pipe = popen("gnuplot -persist", "w");
    if (!pipe) return 1;

    fprintf(pipe, "plot sin(x)\n");  // your plot commands go here

    // Don't forget to close the pipe (popen is paired with pclose)
    pclose(pipe);
    return 0;
}
```

Note that we have had to use C routines to call gnuplot. But there’s a problem with this code: gnuplot generates the plot only when the pipe is closed. So if you have multiple plot statements, or if you want to refresh the same plot (as in an animation, for instance), you will find that with the above approach all the plots get updated only at the very end.

So how does one ensure that after every plot command, a plot is actually generated (or an existing one refreshed) immediately? The key is to flush the buffer using fflush. Simply insert

```cpp
fflush(pipe);
```

below every plot call to gnuplot. If you have an animation, then to make the transitions smoother, you might want to include a delay subroutine; a simple one can be built using the clock routines in time.h.

Thanks to Vidushi Sharma for bringing these issues to my attention.

Written by Vivek

June 22, 2012 at 14:13

## Why V(x) = -1/x^2 has no bound state

Here we present a scaling-based argument to show that the attractive potential

$V(x) = -\frac{\lambda}{x^2}$

($\lambda > 0$) has no bound states (i.e. states with energy $E < 0$). Consider the time-independent Schrödinger equation for this potential, which is the eigenvalue equation for the corresponding Hamiltonian,

$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} - \frac{\lambda}{x^2}\psi = E\psi$

This can be rearranged as

$\frac{d^2\psi}{dx^2} + \frac{2m\lambda}{\hbar^2}\frac{1}{x^2}\psi = -\frac{2mE}{\hbar^2}\psi$

Now, it is easy to see that the quantity

$\frac{2m\lambda}{\hbar^2}$

is dimensionless. So, this problem has no independent scale, even though naively one might think that $\lambda$ specifies a scale for this problem. We claim that for such a system, there can be no bound state. This is proved below.

If we perform the scale transformation $x \rightarrow \alpha x$, where $\alpha$ is some nonzero real number, we see from the equation above that if $E$ is an eigenvalue, then so is $\alpha^2 E$.

Suppose now that a bound ground state exists, with energy $E_{G}$. By definition, $E_{G} < 0$. Then scale invariance implies that $\alpha^2 E_{G}$ must also be the energy of some bound state, for any nonzero $\alpha$. Choosing $\alpha^2 > 1$,

$\alpha^2 E_{G} < E_{G}$

since multiplying the negative ground state energy by a number greater than 1 only makes it more negative. This contradicts the assumption that $E_{G}$ is the lowest energy. In fact, the energy can be made as negative as we want. Therefore, there is no finite energy ground state for this system, and consequently there can be no bound states.
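This absence of a scale is easy to see numerically. The Python sketch below (in units $\hbar = 2m = 1$, with an arbitrary dimensionless coupling $g$) discretizes $H = -d^2/dx^2 - g/x^2$ on an interval $[a, L]$ with Dirichlet boundary conditions, and checks that rescaling the interval by $\alpha$ rescales every eigenvalue by $1/\alpha^2$; the short-distance cutoff $a$ has to be put in by hand precisely because the problem has no scale of its own.

```python
import numpy as np

g = 2.0        # dimensionless coupling 2*m*lambda/hbar^2 (arbitrary choice)
alpha = 3.0    # scale factor

def spectrum(a, L, n=800):
    """Lowest eigenvalues of H = -d^2/dx^2 - g/x^2 on [a, L], Dirichlet ends."""
    x = np.linspace(a, L, n)
    h = x[1] - x[0]
    H = (np.diag(2.0 / h**2 - g / x**2)            # diagonal: kinetic + potential
         - np.diag(np.full(n - 1, 1.0 / h**2), 1)  # off-diagonal hopping
         - np.diag(np.full(n - 1, 1.0 / h**2), -1))
    return np.linalg.eigvalsh(H)[:5]

E  = spectrum(0.01, 10.0)
Es = spectrum(0.01 * alpha, 10.0 * alpha)
print(E)
print(Es * alpha**2)  # same numbers: x -> alpha x maps E -> E / alpha^2
```

The two printed spectra coincide, so every candidate "ground state energy" just tracks the arbitrary cutoff, exactly as the scaling argument says.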

Note that there is no such problem with scattering states, i.e. states with positive energy. One can take an arbitrarily small positive energy scattering state and from it obtain valid energies of the continuum of higher energy scattering states by performing a scale transformation.

Incidentally, this is also why potentials like $-\lambda[\delta(x)]^2$ and $-\lambda\delta(x)/x$, whose couplings are likewise dimensionless, have no bound states.

Written by Vivek

February 8, 2012 at 08:48

## Evaluation of Complex Error Functions erf(z) using GSL/C

For a small QFT calculation, I needed to numerically evaluate the imaginary error function $\mathrm{erfi}(x) = -i\,\mathrm{erf}(ix)$. But it turns out that GSL (and most other numerical-recipes code I could find) can only deal with erf(x), where x is real. Here’s a poor man’s implementation of erf(z) through a standard Taylor expansion,

$erf(z) = \frac{2}{\sqrt{\pi}}\sum_{n = 0}^{\infty}\frac{(-1)^n z^{2n+1}}{n!(2n+1)}$

The catch here is to deal with propagation of errors in the complex Taylor series, and to also somehow benchmark the results. Well, I haven’t been able to think about this yet, but I was able to confirm that for real arguments, my Taylor series code is about as good as the GSL error function gsl_sf_erf(x).
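The same benchmark is easy to reproduce in a few lines of Python (the helper erf_taylor is hypothetical, just a direct truncation of the series above), comparing against the standard library erf on the real axis:

```python
import math

def erf_taylor(z, terms=20):
    """Truncated series erf(z) = 2/sqrt(pi) * sum_n (-1)^n z^(2n+1) / (n! (2n+1))."""
    s = 0j
    for n in range(terms):
        s += (-1) ** n * z ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return 2.0 / math.sqrt(math.pi) * s

# Benchmark against the library erf on the real axis
for x in (0.1, 0.5, 1.0):
    assert abs(erf_taylor(x).real - math.erf(x)) < 1e-12

# erf(ix) = i erfi(x) is purely imaginary for real x
print(erf_taylor(1j).imag)  # erfi(1) = 1.6504...
```

For |z| of order 1 the truncated series agrees with the library function to well below single precision, which is the same sanity check the C code below performs.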

I am working on a CUDA implementation of this now, because in my project I need to perform a numerical integration over the error function, which is quite intensive even for Mathematica. For now, I’m just sharing the serial implementation. (Hint: if you can write a wrapper for each of the GSL functions inside my Taylor series calculating method, you can embed them within a __global__ kernel call through a struct. Alternatively — and less fun — you can just parallelize the for loop.)

```c
/* Function to compute erf(z) using a Taylor series expansion
 * Author: Vivek Saxena
 * Last updated: January 28, 2011 21:11 hrs */
#include <gsl/gsl_complex_math.h>
#include <gsl/gsl_sf_erf.h>
#include <gsl/gsl_sf_pow_int.h>
#include <gsl/gsl_sf_gamma.h>
#include <math.h>
#include <stdio.h>

const int TERMS = 10; /* number of terms to use in the Taylor series */

gsl_complex erfTaylor(gsl_complex z, int trunc) {
    double cz = 2.0 / sqrt(M_PI);
    gsl_complex res = gsl_complex_rect(0, 0), num, den, snum, temp;
    for (int i = 0; i < trunc; i++) {
        snum = gsl_complex_rect(cz * gsl_sf_pow_int(-1, i), 0);
        num  = gsl_complex_mul(snum, gsl_complex_pow_real(z, 2 * i + 1));
        den  = gsl_complex_rect((2 * i + 1) * gsl_sf_fact(i), 0);
        temp = gsl_complex_div(num, den);
        res  = gsl_complex_add(res, temp); /* accumulate the series */
    }
    return res;
}

int main(void) {
    printf("Real error function\n\n");
    for (double x = 0; x <= 1; x += 0.01) {
        double gslerf  = gsl_sf_erf(x);
        double taylor  = GSL_REAL(erfTaylor(gsl_complex_rect(x, 0), TERMS));
        printf("erf(%f): gsl = %f, taylor = %f, mag error = %g\n",
               x, gslerf, taylor, fabs(gslerf - taylor));
    }

    gsl_complex t, arg;
    printf("\n\nImaginary error function\n\n");
    for (double y = 0; y <= 1; y += 0.01) {
        /* this would be your generic argument z in erf(z):
         * if z = x + iy, then arg = gsl_complex_rect(x, y); */
        arg = gsl_complex_rect(0, y);
        t   = erfTaylor(arg, TERMS);
        printf("erf(%f + i %f) = %f + i %f\n",
               GSL_REAL(arg), GSL_IMAG(arg), GSL_REAL(t), GSL_IMAG(t));
    }
    return 0;
}
```


Written by Vivek

January 28, 2011 at 21:12