
## Fun with Homology

There is a theorem (currently being attributed to Wikipedia, but I’m sure I can do better given more time) which states that

*Every closed surface can be produced by gluing the sides of some polygon in pairs, and even-sided polygons (2n-gons) can be glued up in different ways to make different manifolds.*

*Conversely, a closed surface with non-trivial homology classes can be cut along them into a 2n-gon.*

Two interesting cases of this are:

- Gluing opposite sides of a hexagon produces a torus.
- Gluing opposite sides of an octagon produces a surface with two holes, topologically equivalent to a torus with two holes.
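A quick sanity check on these two gluings, using the Euler characteristic $\chi = V - E + F$ of the identified polygon (this computation is my own addition, not part of the theorem above): for the hexagon with opposite sides identified, the vertices fall into 2 classes, the 6 edges into 3 classes, and there is 1 face, so

$$\chi = 2 - 3 + 1 = 0 = 2 - 2g \implies g = 1,$$

a torus. For the octagon with opposite sides identified, all 8 vertices are glued to a single point, the 8 edges fall into 4 classes, and there is 1 face, so

$$\chi = 1 - 4 + 1 = -2 \implies g = 2,$$

the two-holed torus.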

I had trouble visualizing this on a piece of paper, so I found two videos which are fascinating and instructive, respectively.

**The two-torus from a hexagon**

**The genus-2 Riemann surface from an octagon**

I would like to figure out how one can make such animations, and generalizations of these, using Mathematica or Sagemath.

There are a bunch of other very cool examples on the Youtube channels of these users. Kudos to them for making such instructive videos!

PS – I see that $\LaTeX$ on WordPress has become (or is still?) very sloppy! :(

## If I were a Springer-Verlag Graduate Text in Mathematics….

If I were a Springer-Verlag Graduate Text in Mathematics, I would be William S. Massey’s *A Basic Course in Algebraic Topology*.

*I am intended to serve as a textbook for a course in algebraic topology at the beginning graduate level. The main topics covered are the classification of compact 2-manifolds, the fundamental group, covering spaces, singular homology theory, and singular cohomology theory. These topics are developed systematically, avoiding all unnecessary definitions, terminology, and technical machinery. Wherever possible, the geometric motivation behind the various concepts is emphasized.*

## An evolving .vimrc – VIM with Python

I’ve switched almost completely to Python for my Master’s thesis. The question that comes up when you use Python is…what IDE to use? There seem to be a whole bunch of them but none of them worked for me, for one reason or another. So for now, I am using vim, and will explore different ways of customizing it for use with Python. For now here is a minimalist ~/.vimrc.

```vim
filetype off
set nu
filetype plugin indent on
syntax on
set modeline
set background=dark
autocmd FileType python set complete+=k~/.vim/syntax/python.vim isk+=.,(
map <buffer> <S-e> :w<CR>: !/usr/bin/env python % <CR>
```

I’ll explain and elaborate in due course of time.

For now, you might want to ignore the line containing autocmd (just put a `"` before it; `"` is vim's comment character). Also, for starters, ~/ refers to your home directory. And files beginning with a dot (.) are hidden by default, so an “ls -l” won’t show them, but an “ls -aCF” will. So just use your favorite editor and copy-paste the above text into a file called “.vimrc” placed in your home directory (~/).

## Gnuplot with C/C++

Sometimes, it is convenient to call gnuplot from within your own C/C++ code rather than having the code create a data file, and manually execute gnuplot every time to re-plot using the contents of the file. This is easy if you use Linux: just use a pipe. For instance:

```cpp
#include <iostream>
#include <stdio.h>
#include <cstdlib>

using namespace std;

int main(){
    // Code for gnuplot pipe
    FILE *pipe = popen("gnuplot -persist", "w");

    fprintf(pipe, "\n");

    // Your code goes here

    // Don't forget to close the pipe:
    // a stream opened with popen must be closed with pclose, not fclose
    pclose(pipe);

    return 0;
}
```

Note that we have had to use C stdio routines (popen and fprintf) to talk to gnuplot. But there’s a problem with this code: gnuplot will generate the plot only when the pipe is closed. So if you have multiple plot statements, or if you want to refresh the same plot (as, for instance, in an animation scenario), you will find that all the plots appear in one burst right at the end, instead of being updated as the program runs.

So how does one ensure that after every plot command, a plot is actually generated (or an existing one refreshed) immediately? The key is to flush the buffer using fflush. Simply insert

fflush(pipe);

below every plot call to gnuplot. If you have an animation, then in order to make the transitions smoother, you might want to include a delay subroutine. A simple one can be built using the clock routines in time.h. For an example, click here.

Thanks to Vidushi Sharma for bringing these issues to my attention.

## Constructing a vector from a spinor

The purpose of this post is to examine why the quantity

$\chi^\dagger \sigma_k \chi$

transforms like a vector under rotations. Here, $\chi$ is a 2-component spinor, and $\sigma_k$ is one of the 3 Pauli matrices. The explanation essentially follows the arguments put forth by Sakurai in chapter 3 of his book on quantum mechanics.

**Spadework**

An arbitrary state ket in spin space can be written as

$|\alpha\rangle = |+\rangle\langle +|\alpha\rangle + |-\rangle\langle -|\alpha\rangle$

where $|+\rangle$ and $|-\rangle$ are the base “kets” (actually they are spinors, for the spin-1/2 case). This identification suggests that we can specify a general two-component spinor as

$\chi = \begin{pmatrix} \langle +|\alpha\rangle \\ \langle -|\alpha\rangle \end{pmatrix} \equiv \begin{pmatrix} c_+ \\ c_- \end{pmatrix}$

Further, from Sakurai’s equation 3.2.30, the spin operators are represented in this basis by

$S_k \doteq \frac{\hbar}{2}\,\sigma_k$

Therefore, we get the most important result:

$\langle S_k \rangle = \langle\alpha|S_k|\alpha\rangle = \frac{\hbar}{2}\,\chi^\dagger\sigma_k\chi$

Physically, this equation implies that the right-hand side equals the expectation value of the spin operator in spinor-space (or in more precise terms, in the basis generated by the set of all possible $\chi$’s). So, the quantity that we are examining is just the expectation value of the operator $S_k$ (modulo a constant factor of $\hbar/2$).

**The argument**

Now, Sakurai has shown that under a rotation of the state kets, the expectation value of $S_k$ transforms like the $k$-th component of a classical vector. That is,

$\langle S_k \rangle \rightarrow \sum_{l} R_{kl}\,\langle S_l \rangle$

(This is equation 3.2.11 of Sakurai)

where $R_{kl}$ are the elements of the 3 x 3 orthogonal matrix $R$ that specifies this rotation.

The corresponding rotation of the state kets is carried out by the operator $\mathcal{D}(R) = \exp\left(-\frac{i\,\mathbf{S}\cdot\hat{\mathbf{n}}\,\phi}{\hbar}\right)$, for a rotation by angle $\phi$ about the axis $\hat{\mathbf{n}}$.

Therefore, $\langle S_k \rangle = \frac{\hbar}{2}\,\chi^\dagger\sigma_k\chi$ transforms like a vector, and hence so does $\chi^\dagger\sigma_k\chi$.
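As a concrete check (my addition, not part of Sakurai's argument), write $\chi = \begin{pmatrix} c_+ \\ c_- \end{pmatrix}$ and take $k = 3$:

$$\chi^\dagger\sigma_3\chi = \begin{pmatrix} c_+^* & c_-^* \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} c_+ \\ c_- \end{pmatrix} = |c_+|^2 - |c_-|^2,$$

which is the difference between the probabilities of finding spin up and spin down, exactly what one expects of (twice over $\hbar$) the $z$-component of the spin “vector”.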

Thanks to David Angelaszek for a long, patient and extremely useful discussion, which led to the resolution of this argument. And of course, thanks to JJ Sakurai for forcing us to read between the lines :)

## Coulomb’s Law in (1+2) dimensions

The idea for this post came from an unsolved problem in Tony Zee’s book on QFT. We begin with the ‘mutual’ energy (dropping the overall factor of the two charges):

$E(\mathbf{x}_1 - \mathbf{x}_2) = -\int \frac{d^2k}{(2\pi)^2}\, \frac{e^{i\mathbf{k}\cdot(\mathbf{x}_1-\mathbf{x}_2)}}{\mathbf{k}^2 + m^2}$

Define $r = |\mathbf{x}_1 - \mathbf{x}_2|$. Transforming to plane polar coordinates $(k, \theta)$ in momentum space, this is

$E(r) = -\frac{1}{(2\pi)^2}\int_0^\infty dk \int_0^{2\pi} d\theta\, \frac{k\, e^{ikr\cos\theta}}{k^2+m^2}$

The $\theta$ integral produces a Bessel function of order 0, i.e. $\int_0^{2\pi} e^{ikr\cos\theta}\, d\theta = 2\pi J_0(kr)$, so that

$E(r) = -\frac{1}{2\pi}\int_0^\infty \frac{k\, J_0(kr)}{k^2+m^2}\, dk$

I’m lazy, so I used Mathematica to get the final result, which is

$E(r) = -\frac{1}{2\pi}\, K_0(mr)$

where $K_0$ denotes the modified Bessel function of the second kind, of order 0. A plot of $E(r)$ as a function of $r$ is shown below.

The graph in red is that of the usual 1/r potential. So, in 2 spatial dimensions, the potential energy actually falls off faster than 1/r.
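The behavior in the plot can be read off from the standard asymptotics of $K_0$ (these limits are standard; see, e.g., Abramowitz and Stegun):

$$K_0(x) \approx -\ln\frac{x}{2} - \gamma \quad (x \ll 1), \qquad K_0(x) \approx \sqrt{\frac{\pi}{2x}}\, e^{-x} \quad (x \gg 1).$$

So at short distances the potential is logarithmic (which is what a genuinely massless field gives in 2 spatial dimensions), while at large distances the mass makes it die off exponentially, much faster than $1/r$.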

## Scalar Propagator

In this post, I want to discuss various aspects of the scalar field propagator. This is the first propagator one encounters in quantum field theory. In canonical quantization, quite a bit of spadework goes into deriving the expression. However, as the more astute student will recognize, the expression is encountered well before one actually writes it down. Those familiar with the theory of Fourier transforms will recognize that the propagator, as normally written, is just the Green’s function of the wave equation, expressed as an integral over momentum space, i.e. as the inverse Fourier transform of the momentum-space propagator. So, one should recognize straight away that a propagator is just a Green’s function. This doesn’t undermine the importance of the propagator, of course (what would one do without it in field theory?), but is merely meant to emphasize the crucial role played by Green’s functions in physics. Capturing the essence of the propagator, even if it means doing a lot of similar-looking calculations repeatedly, is important in understanding both Feynman diagrams and cluster expansions.

Nevertheless, my reason for writing this post is that most of the ‘obvious’ calculational steps leading to the important expressions are often left to the reader’s imagination in books, and sometimes it’s the steps, not (so much) the final result, that give insight into the physics.

So, without further ado, let us begin with the expression for the propagator:

$D(x - y) = \int \frac{d^4p}{(2\pi)^4}\, \frac{e^{ip\cdot(x-y)}}{p^2 - m^2 + i\epsilon}$

The $i\epsilon$ has been added to make the expression sensible at $p^2 = m^2$ (which is the “on-shell” condition, if you’re familiar with special relativity). Note that we have not imposed this condition in this integral, so $p$ can actually run free over all possible values. Remember that the restriction comes into place only if we put in a “hand-made” delta function inside, with its argument equal to $p^2 - m^2$. Actually the “$i\epsilon$ prescription” (as it is referred to in most QFT books) goes a bit deeper than this: it is required for the path integral of the scalar field to converge at large values of the scalar field. That the same $i\epsilon$ should keep the denominator from blowing up on shell is not unexpected.

For simplicity, let me consider the case $y = 0$. In any event, I can always recover $D(x-y)$ given a general expression for $D(x)$.

So,

$D(x) = \int \frac{d^4p}{(2\pi)^4}\, \frac{e^{ip\cdot x}}{p^2 - m^2 + i\epsilon}$

which I write as

$D(x) = \int \frac{d^3p}{(2\pi)^3}\, e^{-i\mathbf{p}\cdot\mathbf{x}} \int_{-\infty}^{\infty} \frac{dp^0}{2\pi}\, \frac{e^{ip^0 t}}{(p^0)^2 - \mathbf{p}^2 - m^2 + i\epsilon}$

Also define $\omega_{\mathbf{p}}^2 \equiv \mathbf{p}^2 + m^2$, so the $p^0$ denominator is $(p^0)^2 - \omega_{\mathbf{p}}^2 + i\epsilon$.

The poles are located at

$p^0 = \pm\sqrt{\omega_{\mathbf{p}}^2 - i\epsilon}$

where, to first order in $\epsilon$,

$\sqrt{\omega_{\mathbf{p}}^2 - i\epsilon} \approx \omega_{\mathbf{p}} - \frac{i\epsilon}{2\omega_{\mathbf{p}}} \equiv \omega_{\mathbf{p}} - i\delta$

So, in the $\epsilon \to 0^+$ limit, the poles are at

$p^0 = +\,\omega_{\mathbf{p}} - i\delta \quad \text{and} \quad p^0 = -\,\omega_{\mathbf{p}} + i\delta, \qquad \delta \to 0^+$

So the poles are in the second and fourth quadrant. This is important (why? If you loved the joke about why the mathematician named his dog Cauchy, you probably know why).

The integral

$I(t) = \int_{-\infty}^{\infty} \frac{dp^0}{2\pi}\, \frac{e^{ip^0 t}}{(p^0)^2 - \omega_{\mathbf{p}}^2 + i\epsilon}$

can be easily computed using the Residue Theorem. There are two cases to consider, depending on whether $t > 0$ or $t < 0$.

**Case I**: $t > 0$

In this case, we must have Im($p^0$) > 0 on the closing arc of the contour of integration (i.e., we close the contour in the upper half plane), for the integrand to be bounded (a necessary condition for the integral to converge). This is most easily seen if one expresses $p^0$ as a complex number:

$p^0 = \mathrm{Re}(p^0) + i\,\mathrm{Im}(p^0)$

Now, $e^{ip^0 t} = e^{i\,\mathrm{Re}(p^0)\,t}\, e^{-\mathrm{Im}(p^0)\,t}$, so $|e^{ip^0 t}| = e^{-\mathrm{Im}(p^0)\,t}$. So, if $t > 0$, we must have $\mathrm{Im}(p^0) > 0$ or else the integrand will blow up at large $|p^0|$. I know that this is just the nifty trick known to contour integrators as Jordan’s lemma, and I probably don’t need to go into it in so much detail here, but the reason I do is that it is easy to make sign errors subconsciously. So at least for the first few attempts, one must go through each and every step in detail before becoming an “expert propagator” ;)
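For reference, the version of Jordan’s lemma being invoked here can be stated as follows (this is the standard statement found in complex analysis texts):

$$\lim_{R\to\infty} \int_{C_R} f(z)\, e^{iaz}\, dz = 0 \qquad (a > 0),$$

where $C_R$ is the semicircular arc $z = Re^{i\theta}$, $0 \le \theta \le \pi$, in the upper half plane, provided $\max_{z \in C_R} |f(z)| \to 0$ as $R \to \infty$. For $a < 0$, the same holds with the arc taken in the lower half plane.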

Anyway, so much for that. The upper half plane contribution comes from the pole in the second quadrant, at $p^0 = -\omega_{\mathbf{p}} + i\delta$, and the residue is simple to compute. It turns out to be

$\mathrm{Res} = \frac{e^{ip^0 t}}{2p^0}\bigg|_{p^0 = -\omega_{\mathbf{p}}} = -\,\frac{e^{-i\omega_{\mathbf{p}} t}}{2\omega_{\mathbf{p}}}$

where $\omega_{\mathbf{p}} = \sqrt{\mathbf{p}^2 + m^2}$.

So,

$I(t) = 2\pi i \times \frac{1}{2\pi} \times \left(-\frac{e^{-i\omega_{\mathbf{p}} t}}{2\omega_{\mathbf{p}}}\right) = -\,\frac{i\, e^{-i\omega_{\mathbf{p}} t}}{2\omega_{\mathbf{p}}} \qquad (t > 0)$

**Case II**: $t < 0$

Obviously, I have to pick out the contribution from the pole in the 4th quadrant now, at $p^0 = \omega_{\mathbf{p}} - i\delta$, closing the contour in the lower half plane (and picking up a minus sign for the clockwise orientation). The residue is

$\mathrm{Res} = \frac{e^{ip^0 t}}{2p^0}\bigg|_{p^0 = \omega_{\mathbf{p}}} = \frac{e^{i\omega_{\mathbf{p}} t}}{2\omega_{\mathbf{p}}}$

So that

$I(t) = -2\pi i \times \frac{1}{2\pi} \times \frac{e^{i\omega_{\mathbf{p}} t}}{2\omega_{\mathbf{p}}} = -\,\frac{i\, e^{i\omega_{\mathbf{p}} t}}{2\omega_{\mathbf{p}}} \qquad (t < 0)$

Plugging the results back into the original expression for $D(x)$, and noting that the two cases combine into $e^{-i\omega_{\mathbf{p}} |t|}$, one gets

$D(x) = -i \int \frac{d^3p}{(2\pi)^3\, 2\omega_{\mathbf{p}}} \left[\theta(t)\, e^{-i\mathbf{p}\cdot\mathbf{x} - i\omega_{\mathbf{p}} t} + \theta(-t)\, e^{-i\mathbf{p}\cdot\mathbf{x} + i\omega_{\mathbf{p}} t}\right]$

Now, the volume integral over the “3-momentum” variables is invariant under inversion in the 3-momentum space: $\mathbf{p} \to -\mathbf{p}$. So, I can write this as

$D(x) = -i \int \frac{d^3p}{(2\pi)^3\, 2\omega_{\mathbf{p}}} \left[\theta(t)\, e^{i\mathbf{p}\cdot\mathbf{x} - i\omega_{\mathbf{p}} t} + \theta(-t)\, e^{-i\mathbf{p}\cdot\mathbf{x} + i\omega_{\mathbf{p}} t}\right]$

where I’ve performed the invariance operation of inversion in the first term. This is just

$D(x) = -i \int \frac{d^3p}{(2\pi)^3\, 2\omega_{\mathbf{p}}} \left[\theta(t)\, e^{-ip\cdot x} + \theta(-t)\, e^{ip\cdot x}\right]_{p^0 = \omega_{\mathbf{p}}}$

with both exponents now written in 4-vector notation.

**Under Construction**