Least Action

Nontrivializing triviality… and vice versa.

Archive for October 2010

Thermal Noise Engines

leave a comment »

I just stumbled upon an interesting paper today on arXiv, from a researcher at the Department of Electrical Engineering at Texas A&M University. I am copying the abstract entry on the pre-print archive below.

Thermal noise engines

Authors: Laszlo B. Kish
(Submitted on 29 Sep 2010 (v1), last revised 20 Oct 2010 (this version, v5))

Electrical heat engines driven by the Johnson-Nyquist noise of resistors are introduced. They utilize Coulomb’s law and the fluctuation-dissipation theorem of statistical physics that is the reverse phenomenon of heat dissipation in a resistor. No steams, gases, liquids, photons, combustion, phase transition, or exhaust/pollution are present here. In these engines, instead of heat reservoirs, cylinders, pistons and valves, resistors, capacitors and switches are the building elements. For the best performance, a large number of parallel engines must be integrated to run in a synchronized fashion and the characteristic size of the elementary engine must be at the 10 nanometers scale. At room temperature, in the most idealistic case, a two-dimensional ensemble of engines of 25 nanometer characteristic size integrated on a 2.5×2.5cm silicon wafer with 12 Celsius temperature difference between the warm-source and the cold-sink would produce a specific power of about 0.4 Watt. Regular and coherent (correlated-cylinder states) versions are shown and both of them can work in either four-stroke or two-stroke modes. The coherent engines have properties that correspond to coherent quantum heat engines without the presence of quantum coherence. In the idealistic case, all these engines have Carnot efficiency, which is the highest possible efficiency of any heat engine, without violating the second law of thermodynamics.

Direct Link: http://arxiv.org/abs/1009.5942

This is a very interesting paper. Who knows what the future has in store for us…quantum thermal power stations?


Written by Vivek

October 23, 2010 at 00:20

Coulomb’s Law in (n+1) dimensions

with 3 comments

The pairwise interaction energy in n spatial dimensions is given by

E = -\int\frac{d^n k}{(2\pi)^{n}}\frac{e^{i\vec{k}\cdot\vec{r}}}{k^2+m^2}

To evaluate this n-dimensional Fourier integral, we must employ hyperspherical coordinates. The volume element is

dV = k^{n-1}\sin^{n-2}\phi_1\sin^{n-3}\phi_2\ldots\sin\phi_{n-2}\,dk d\phi_1\ldots d\phi_{n-1}

The angles \phi_{1}, \ldots, \phi_{n-2} range from 0 to \pi and \phi_{n-1} ranges from 0 to 2\pi. Writing \vec{k}\cdot\vec{r} = kr\cos\phi_1, the integral can be written as

E = -\frac{(2\pi)^{d} C}{(2\pi)^{n}}\int_{0}^{\infty}\frac{dk\,k^{n-1}}{k^2+m^2}\int_{0}^{\pi}d\phi_1\,e^{ikr\cos\phi_1}\sin^{n-2}\phi_1

The constant C is equal to the product of integrals of the form \int_{0}^{\pi} \sin^{n-k}\phi\,d\phi. The number of such integrals depends on the dimension n. In two and three dimensions, C = 1. In more than 3 dimensions, there are n-3 such terms, for k = 2, 3, \ldots, n-2 (the integral over \phi_{n-1} produces a 2\pi, which we’ve already factored out). The value of d is 0 for 2 dimensions, and is 1 for 3 and higher dimensions.
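As a quick sanity check on this bookkeeping (my own addition, using SciPy, not part of the original post), one can verify that for n = 5 the constant C equals \pi, and that 2\pi \cdot C times the \phi_1 integral reproduces the total solid angle 2\pi^{n/2}/\Gamma(n/2) of the unit 4-sphere:

```python
# Sanity check (my addition): for n = 5, C is the product of the angular
# integrals over phi_2 and phi_3, and multiplying by the leftover 2*pi and
# the phi_1 integral should give the surface area of the unit 4-sphere.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

n = 5
# C = product over j = 2 .. n-2 of  ∫_0^pi sin^(n-1-j)(phi) dphi
C = np.prod([quad(lambda p, e=n - 1 - j: np.sin(p) ** e, 0, np.pi)[0]
             for j in range(2, n - 1)])
phi1 = quad(lambda p: np.sin(p) ** (n - 2), 0, np.pi)[0]  # the phi_1 factor
total = 2 * np.pi * C * phi1                              # full solid angle
exact = 2 * np.pi ** (n / 2) / gamma(n / 2)
print(C, total, exact)  # C ≈ pi; total ≈ exact
```

For n = 5 the two leftover integrals are \int_0^\pi \sin^2 = \pi/2 and \int_0^\pi \sin = 2, so C = \pi, consistent with the counting above.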

The integral over \phi_1 produces a generalized hypergeometric function _{2}F_3(a_1,a_2;b_1,b_2,b_3;\alpha). Specifically, for n \geq 2, the integral over \phi_1 produces

\frac{\sqrt{\pi}e^{-ikr}\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\left[_{2}F_{3}\left(\frac{n-1}{4},\frac{n+1}{4};\frac{1}{2},\frac{n-1}{2},\frac{n}{2};-k^2 r^2\right) + (ikr) _{2}F_{3}\left(\frac{n+1}{4},\frac{n+3}{4};\frac{3}{2},\frac{n+1}{2},\frac{n}{2};-k^2 r^2\right)\right]

(As a check, for n = 2, this becomes \pi J_{0}(kr), which is what we obtained for 2 spatial dimensions in a previous post.)
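Here is a quick numerical confirmation of that check (my own sketch; the value of kr is arbitrary). The real part of \int_{0}^{\pi} e^{ikr\cos\phi}\,d\phi should equal \pi J_0(kr), and the imaginary part should vanish by symmetry about \phi = \pi/2:

```python
# Numerical check (my addition): ∫_0^pi exp(i*kr*cos(phi)) dphi = pi*J0(kr)
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

kr = 2.7  # arbitrary test value of k*r
re = quad(lambda p: np.cos(kr * np.cos(p)), 0, np.pi)[0]
im = quad(lambda p: np.sin(kr * np.cos(p)), 0, np.pi)[0]
print(re, im, np.pi * j0(kr))  # re ≈ pi*J0(kr), im ≈ 0
```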

The resulting integration over k is rather tricky. At this point, I don’t know if it’s possible to do it by hand, so I am using Mathematica to perform it.

Edit: a few hours later…

So, it seems this integral does not have a closed-form representation, or at least not one that Mathematica can find.
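Even without a closed form for general n, one can at least confirm the setup numerically in a case where the answer is known. For n = 3 the integral should reproduce the Yukawa form e^{-mr}/(4\pi r). The sketch below is mine, not from the post; after doing the angular integrals, the magnitude of the energy reduces to \frac{1}{2\pi^2 r}\int_0^\infty \frac{k\sin(kr)}{k^2+m^2}\,dk, which SciPy’s oscillatory QUADPACK routine handles directly:

```python
# Cross-check (my addition): for n = 3 the k-integral is doable, and |E|
# should equal the Yukawa form exp(-m*r)/(4*pi*r).  quad with weight='sin'
# and an infinite upper limit uses the Fourier-accelerated QAWF routine.
import numpy as np
from scipy.integrate import quad

m, r = 1.0, 1.5
# |E| = (1/(2 pi^2 r)) * ∫_0^∞ k sin(k r)/(k^2 + m^2) dk
tail = quad(lambda k: k / (k ** 2 + m ** 2), 0, np.inf,
            weight='sin', wvar=r)[0]
E = tail / (2 * np.pi ** 2 * r)
print(E, np.exp(-m * r) / (4 * np.pi * r))  # both ≈ 0.01184
```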

Written by Vivek

October 21, 2010 at 18:39

Coulomb’s Law in (1+2) dimensions

with 2 comments

The idea for this post came from an unsolved problem in Tony Zee’s book on QFT. We begin with the ‘mutual’ energy:

E = -\frac{1}{2}\int \frac{d^2 k}{(2\pi)^2}\frac{e^{i\vec{k}\cdot(\vec{x}_1-\vec{x}_2)}}{k^2 + m^2}

Define \boldsymbol{r} = \boldsymbol{x}_1-\boldsymbol{x}_2. Transforming to cylindrical polar coordinates, this is

E = -\frac{1}{2(2\pi)^2}\int_{0}^{\infty}\frac{k\,dk}{k^2 + m^2}\int_{0}^{2\pi}d\theta\,e^{ikr\cos\theta}

The \theta integral produces a Bessel function of order 0, i.e. 2\pi J_{0}(kr), so that

E = -\frac{1}{4\pi}\int_{0}^{\infty}\frac{k J_{0}(kr)}{k^2 + m^2}dk

I’m lazy, so I used Mathematica to get the final result, which is

E = -\frac{1}{4\pi}K_{0}(mr)

where K_{0} denotes the modified Bessel function of the second kind, of order 0. A plot of K_{0}(mr) as a function of mr is shown below.

The graph in red is that of the usual 1/r potential. Since K_{0}(mr) \sim \sqrt{\pi/(2mr)}\,e^{-mr} at large mr, the potential energy in 2 spatial dimensions actually falls off faster than 1/r at large distances.
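Here is a quick numerical check (my addition) that the k-integral really does produce K_{0}(mr). The cutoff at k = 400 is my own assumption that the oscillatory tail is negligible by then:

```python
# Numerical check (my addition): ∫_0^∞ k J0(k r)/(k^2 + m^2) dk = K0(m r)
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, k0

m, r = 1.0, 1.0
# truncate at k = 400, where the J0 envelope makes the tail negligible
val = quad(lambda k: k * j0(k * r) / (k ** 2 + m ** 2),
           0, 400, limit=2000)[0]
print(val, k0(m * r))  # both ≈ 0.4210
```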

Written by Vivek

October 18, 2010 at 23:29

Posted in Uncategorized

Feynman Diagrams using Adobe Illustrator

leave a comment »

I recently became aware that Professor Daniel Schroeder used Adobe Illustrator to make the Feynman diagrams in Chapter 9 of his Thermal Physics book. A Google search yielded two videos by a YouTube user named AjabberWok. I’m taking the liberty of embedding them here. If you would rather view them in a new window, just click on the videos to be taken to the appropriate YouTube link.

In a previous post, I briefly outlined the installation of FeynArts, an open source Mathematica package for creating Feynman diagrams. There are also other tools, such as the open source JaxoDraw program and the axodraw \LaTeX style file, which JaxoDraw actually uses to export diagrams as \LaTeX line art. I personally endorse the use of open source programs such as FeynArts and JaxoDraw, but if you already have access to Adobe Illustrator, you might find these videos quite useful.

Written by Vivek

October 17, 2010 at 11:53

Scalar Propagator

leave a comment »

In this post, I want to discuss various aspects of the scalar field propagator, the first propagator one encounters in quantum field theory. In canonical quantization, quite a bit of spadework goes into deriving the expression. However, as the more astute student will recognize, the expression is encountered well before one actually writes it down. Those familiar with the theory of Fourier transforms will recognize that the propagator, as normally written, is just the Green’s function of the wave equation, expressed as an integral over momentum space; in other words, it is the inverse Fourier transform of the momentum-space propagator. So, one should recognize straight away that a propagator is just a Green’s function. This doesn’t undermine the importance of the propagator, of course (what would one do without it in field theory?), but is merely meant to emphasize the crucial role played by Green’s functions in physics. Capturing the essence of the propagator, even if it means doing a lot of similar-looking calculations repeatedly, is important for understanding both Feynman diagrams and cluster expansions.

Nevertheless, my reason for writing this post is that most of the ‘obvious’ calculational steps leading to the important expressions are often left to the reader’s imagination in books, and sometimes it’s the steps, not (so much) the final result, which give insight into the physics.

So, without further ado, let us begin with the expression for the propagator:

D(x-y) = \int \frac{d^{4}p}{(2\pi)^4}\frac{e^{ip\cdot(x-y)}}{p^2-m^2 + i\epsilon}

The i\epsilon has been added to make the expression sensible at p^2 = m^2 (the “on-shell” condition, if you’re familiar with special relativity). Note that we have not imposed this condition in the integral, so p actually runs free over all possible values. Remember that the restriction would come into play only if we inserted a “hand-made” delta function with argument p^2 - m^2. Actually, the “i\epsilon prescription” (as it is referred to in most QFT books) goes a bit deeper than this: it is required for the path integral of the scalar field to converge at large values of the field. That the same \epsilon should keep the denominator from blowing up on shell is not unexpected.

For simplicity, let me consider the case y = 0. In any event, I can always obtain D(x-y) given a general expression for the propagator.

So,

D(x) = \int \frac{d^{4}p}{(2\pi)^4}\frac{e^{i(p^0x^0-\vec{p}\cdot\vec{x})}}{p^2-m^2 + i\epsilon}

which I write as

D(x) = \int \frac{d^{3}p}{(2\pi)^3}e^{-i\vec{p}\cdot\vec{x}}\frac{1}{2\pi}\int dp^0\frac{e^{i p^0x^0}}{p^2-m^2 + i\epsilon}

Also p^2 = (p^0)^2-\vec{p}^2, so

p^2-m^2 + i\epsilon = (p^0)^2-\vec{p}^2-m^2 + i\epsilon

The poles are located at

\pm \sqrt{\vec{p}^2+m^2-i\epsilon} = \pm\sqrt{\omega_{p}^2-i\epsilon} where \omega_{p} = \sqrt{\vec{p}^2+m^2}

So, in the \epsilon \rightarrow 0 limit, the poles are at

\omega_{p} - i\epsilon
-\omega_{p} + i\epsilon

So the poles are in the second and fourth quadrant. This is important (why? If you loved the joke about why the mathematician named his dog Cauchy, you probably know why).

The integral

I = \int dp^0\frac{e^{i p^0x^0}}{p^2-m^2 + i\epsilon} = \int dp^0\frac{e^{i p^0x^0}}{(p^0-(\omega_p - i\epsilon))(p^0-(-\omega_p+i\epsilon))}

can be easily computed using the Residue Theorem. There are two cases to consider, depending on whether x^0 > 0 or x^0 < 0.

Case I: x^0 > 0

In this case, we must close the contour in the upper half of the complex p^0 plane, where Im(p^0) > 0, so that the integrand stays bounded on the closing arc (a necessary condition for the arc’s contribution to vanish). This is most easily seen if one expresses p^0 as a complex number:

p^0 = a + ib

Now, ip^0 = ia - b, so e^{ip^0 x^0} = e^{-bx^0} e^{ia x^0}. So, if x^0 > 0, we must have b > 0 or else the integrand will blow up at large p^0. I know that this is just the nifty trick known to contour integrators as Jordan’s lemma, and I probably don’t need to go into it in so much detail here, but the reason I do it is that it is easy to make sign errors subconsciously. So at least for the first few attempts, one must go through each and every step in detail before becoming an “expert propagator” 😉

Anyway, so much for that. The upper half plane contribution comes from the pole in the second quadrant, and the residue is simple to compute. It turns out to be

\frac{e^{i(-\omega_{p} + i\epsilon)x^0}}{(-\omega_{p} + i\epsilon)-(\omega_{p} - i\epsilon)} = \frac{e^{-i\omega_p t}}{-2\omega_p}

where t = x^0.

So,

I = 2\pi i \frac{e^{-i\omega_p t}}{-2\omega_p}
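As a sanity check on the signs (my own sketch, not part of the original derivation), one can evaluate the p^0 integral numerically with a small but finite \epsilon and compare it against the residue result; the cutoff and the value of \epsilon below are arbitrary choices:

```python
# Numerical check of Case I (my addition): with small finite epsilon the
# real-axis p0 integral should approach 2*pi*i * e^{-i w t}/(-2 w).
import numpy as np
from scipy.integrate import quad

w, t, eps = 1.0, 1.0, 1e-3
f = lambda p0: np.exp(1j * p0 * t) / (p0 ** 2 - w ** 2 + 1j * eps)
# integrate real and imaginary parts separately; breakpoints at the
# near-poles p0 = ±w help the adaptive routine resolve the sharp peaks
re = quad(lambda p0: f(p0).real, -400, 400, points=[-w, w], limit=4000)[0]
im = quad(lambda p0: f(p0).imag, -400, 400, points=[-w, w], limit=4000)[0]
I_num = re + 1j * im
I_res = 2j * np.pi * np.exp(-1j * w * t) / (-2 * w)
print(I_num, I_res)  # agree up to O(eps) and tail-cutoff corrections
```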

Case II: x^0 < 0

Obviously, I have to pick out the contribution from the pole in the 4th quadrant now, closing the contour in the lower half plane. Since this contour is traversed clockwise, the Residue Theorem contributes -2\pi i times the residue, which is

\frac{e^{i(\omega_{p} - i\epsilon)x^0}}{(\omega_{p} - i\epsilon)-(-\omega_{p} + i\epsilon)} = \frac{e^{i\omega_p t}}{2\omega_p}

So that

I = -2\pi i \frac{e^{i\omega_p t}}{2\omega_p}

Plugging the results back into the original expression for D(x), one gets

D(x) = \int \frac{d^3 p}{(2\pi)^3} e^{-i\vec{p}\cdot\vec{x}}\left[-i\frac{e^{-i\omega_p t}}{2\omega_p}\theta(x^0) - i\frac{e^{i\omega_p t}}{2\omega_p}\theta(-x^0)\right]

Note that the step functions carry the argument x^0 (not p^0), since the two cases were distinguished by the sign of x^0. Now, the volume integral over the “3-momentum” variables is invariant under inversion in the 3-momentum space: \vec{p} \rightarrow -\vec{p}. So, I can write this as

D(x) = \int \frac{d^3 p}{(2\pi)^3} \left[-i\frac{e^{-i\omega_p t}e^{i\vec{p}\cdot\vec{x}}}{2\omega_p}\theta(x^0) - i\frac{e^{i\omega_p t}e^{-i\vec{p}\cdot\vec{x}}}{2\omega_p}\theta(-x^0)\right]

where I’ve performed the invariance operation of inversion in the first term. This is just

D(x) = -i\int \frac{d^3 p}{(2\pi)^3\,2\omega_p} \left[e^{-i(\omega_p t-\vec{p}\cdot\vec{x})}\theta(x^0) + e^{i(\omega_p t-\vec{p}\cdot\vec{x})}\theta(-x^0)\right]

Under Construction

Written by Vivek

October 16, 2010 at 20:01

Posted in Uncategorized

FeynArts on Mathematica in Windows

with 2 comments

As much as I hate to admit it, some of us are constrained to use Mathematica on Windows for some reason or the other. If you happen to be part of this ‘some of us’ subset (at least temporarily, in my case) and are eager to get your hands on FeynArts, a nifty package for Mathematica which lets you make and play with Feynman diagrams, you’re going to have some trouble installing it on the current version of Mathematica. If you are using Windows, I suggest downloading the tarball and unzipping its contents to:

C:\Users\<your Windows user name>\AppData\Roaming\Mathematica\Applications\

The important thing to note is that the tarball actually unpacks into a new directory (FeynArts-3.5 at the time of writing). You need to ensure that the contents of this directory are placed directly in the Applications folder above, so that FeynArts-3.5 does not end up as a subfolder of it.

The next step is to fire up Mathematica and select “Install” from the File Menu.  Select “Package” under the “Type of item to Install” tab and navigate to the above directory using the “File” option under the Source tab. Select FeynArts35.m and type “FeynArts” in the Install Name field. Finally, under “Default Installation Directory”, check “Your user Mathematica base directory”.

This last step is important: if you check the “System-wide Mathematica base directory” option, you’ll end up with the $FeynArtsDir variable pointing to “C:\Program Files\Wolfram Research\Mathematica\7.0\AddOns\Applications”. In this case, FeynArts will work only if you dump the contents of “C:\Users\<your Windows user name>\AppData\Roaming\Mathematica\Applications\” into “C:\Program Files\Wolfram Research\Mathematica\7.0\AddOns\Applications”, something you shouldn’t do as this makes the Application folder messy.

A workaround would be to change the Directory[] variable in Setup.m for FeynArts, to point to the folder of your choice. I tried this, but ran into other path-related problems. Anyway, so long as it works, who cares!

Written by Vivek

October 15, 2010 at 19:08

Gaussian Integrals and Wick Contractions

leave a comment »

Theorem 1: For positive real a,

\langle x^{2n}\rangle = \frac{\int_{-\infty}^{\infty}dx\,e^{-\frac{1}{2}ax^2}x^{2n}}{\int_{-\infty}^{\infty}dx\,e^{-\frac{1}{2}ax^2}} = \frac{1}{a^{n}}(2n-1)(2n-3)\cdots 5\cdot 3\cdot 1
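A quick numerical check of Theorem 1 (my addition): for a = 2 and n = 2, the ratio of integrals should equal (2n-1)(2n-3)\cdots 1/a^n = 3/4.

```python
# Numerical check (my addition) of Theorem 1 for a = 2, n = 2:
# <x^4> = (2n-1)!! / a^n = 3/4
import numpy as np
from scipy.integrate import quad

a, n = 2.0, 2
num = quad(lambda x: x ** (2 * n) * np.exp(-0.5 * a * x ** 2),
           -np.inf, np.inf)[0]
den = quad(lambda x: np.exp(-0.5 * a * x ** 2), -np.inf, np.inf)[0]
print(num / den)  # ≈ 0.75
```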

Theorem 2: For a real symmetric positive-definite N\times N matrix A, a real N \times 1 vector x and a real N\times 1 vector J, we have

\int_{-\infty}^{\infty}\dotsi\int_{-\infty}^{\infty}\,dx_1\cdots dx_N\, e^{-\frac{1}{2}x^{T}Ax + J^{T}x} = \left(\frac{(2\pi)^N}{\det(A)}\right)^{1/2}e^{\frac{1}{2}J^{T}A^{-1}J}

Proof:

Define O to be the real orthogonal matrix which diagonalizes A to its diagonal form D. That is,

A = O^{T} D O

where O^{T}O = OO^{T} = I. Further, let us choose O to be a special orthogonal matrix, so that \det(O) = +1. Also, define Y = OX, where X denotes the column vector (x_1, \ldots, x_N)^{T}. Now,

dy_1 \cdots dy_N = |\det(O)|\, dx_1 \cdots dx_N = dx_1 \cdots dx_N

The argument of the exponential is

-\frac{1}{2}X^{T}AX + J^{T}X = -\frac{1}{2}Y^{T}O(O^{-1}DO)(O^{T}Y) + J^{T}O^{T} Y

which equals

-\frac{1}{2}Y^{T}DY + J^{T}O^{T}Y

The first term is

-\frac{1}{2}Y^{T}DY = -\frac{1}{2}\sum_{i}y_{i}D_{ii}y_{i}

Let C^{T} = J^{T}O^{T} (that is, C = OJ). So the second term is

J^{T}O^{T}Y = C^{T}Y

So the argument of the exponential becomes

-\frac{1}{2}\sum_{i}y_{i}D_{ii}y_{i} + \sum_{i}C_{i}y_{i}

which can be written as

-\frac{1}{2}\sum_{i}\left[D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2 - \frac{C_i^2}{D_{ii}}\right]

The integrand becomes

e^{-\frac{1}{2}\sum_{i}\left[D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2 - \frac{C_i^2}{D_{ii}}\right]} = \prod_{i}e^{-\frac{1}{2}D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2}e^{\frac{1}{2}C_{i}D_{ii}^{-1}C_{i}}

Hence the quantity to be evaluated is

\left\{\int_{-\infty}^{\infty}\dotsi\int_{-\infty}^{\infty}dy_1 \cdots dy_N \prod_{i}e^{-\frac{1}{2}D_{ii}\left(y_{i}-\frac{C_i}{D_{ii}}\right)^2}\right\}e^{\frac{1}{2}\sum_{i}C_{i}D_{ii}^{-1}C_{i}}

The quantity in curly brackets involves N definite Gaussian integrals, the i^{th} term of which is

\sqrt{\frac{2\pi}{D_{ii}}}

using the identity \int_{-\infty}^{\infty} dt\,e^{-\frac{1}{2}at^2} = \sqrt{\frac{2\pi}{a}}.

So the continued product is

\left(\frac{(2\pi)^N}{\det(D)}\right)^{1/2} = \left(\frac{(2\pi)^N}{\det(A)}\right)^{1/2}

whereas the exponential factor sticking outside is

e^{\frac{1}{2}\sum_{i}C_{i}D_{ii}^{-1}C_{i}} = e^{\frac{1}{2}C^{T}D^{-1}C} = e^{\frac{1}{2}(J^{T}O^{T}OA^{-1}O^{-1}OJ)} = e^{\frac{1}{2}J^{T}A^{-1}J}

This completes the proof.
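The identity is easy to verify numerically for a small concrete case (my addition; the matrix A and source J below are arbitrary choices):

```python
# Numerical verification (my addition) of Theorem 2 for a 2x2 case,
# comparing a direct double integral against the closed-form RHS.
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric, positive definite
J = np.array([0.3, -0.2])

def integrand(x2, x1):
    x = np.array([x1, x2])
    return np.exp(-0.5 * x @ A @ x + J @ x)

lhs = dblquad(integrand, -np.inf, np.inf, -np.inf, np.inf)[0]
rhs = np.sqrt((2 * np.pi) ** 2 / np.linalg.det(A)) * \
      np.exp(0.5 * J @ np.linalg.inv(A) @ J)
print(lhs, rhs)  # agree to quadrature accuracy
```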

Theorem 3: \langle x_{i} x_{j} \cdots x_{k}x_{l}\rangle = \sum_{\text{Wick}}(A^{-1})_{ab}\cdots(A^{-1})_{cd}, where the sum runs over all possible ways of pairing up the indices i, j, \ldots, k, l (the Wick contractions).

Proof: If m is the number of factors in \langle x_{i} x_{j} \cdots x_{k}x_{l}\rangle, differentiate the identity proved in Theorem 2 above once with respect to each of J_i, J_j, \ldots, J_k, J_l (m derivatives in all), and then set J = 0.

As an example, differentiate the RHS of the identity in Theorem 2 with respect to J_{i}. The derivative is

\frac{d}{dJ_{i}}\exp\left\{\frac{1}{2}\sum_{m,n}J_{m}(A^{-1})_{mn}J_{n}\right\} = \frac{1}{2}\left(\sum_{m,n}(\delta_{mi}(A^{-1})_{mn}J_{n} + J_{m}(A^{-1})_{mn}\delta_{ni})\right)\exp\left\{\frac{1}{2}\sum_{m,n}J_{m}(A^{-1})_{mn}J_{n}\right\}

the prefactor of the exponential is

\frac{1}{2}(\sum_{n}(A^{-1})_{in}J_{n} + \sum_{m}J_{m}(A^{-1})_{mi})

which becomes

\sum_{j}(A^{-1})_{ij}J_{j}

using the fact that A (and hence A^{-1}) is symmetric. Repeating this exercise, we see that bringing down x_{i} amounts to differentiating with respect to J_{i}, which effectively introduces a matrix element of A^{-1}. Once all m derivatives are taken and J is set to zero, the only surviving terms are those in which the indices have been completely paired up. That is, the sum must run over every possible pairing (a, b), \ldots, (c, d) of the indices i, j, \ldots, k, l. This completes the “proof”.
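Theorem 3 can also be checked by brute force (my addition): sample x from the Gaussian weight e^{-\frac{1}{2}x^{T}Ax}, i.e. a normal distribution with covariance A^{-1}, and compare the Monte Carlo fourth moment with the Wick sum. The 2\times 2 matrix below is an arbitrary choice:

```python
# Monte Carlo check (my addition) of Theorem 3 for <x1 x1 x2 x2> in N = 2:
# the Wick sum of the three pairings is (A^-1)_11 (A^-1)_22 + 2 (A^-1)_12^2.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.0]])
Ainv = np.linalg.inv(A)                    # plays the role of the propagator
x = rng.multivariate_normal([0, 0], Ainv, size=2_000_000)
mc = np.mean(x[:, 0] ** 2 * x[:, 1] ** 2)  # Monte Carlo <x1^2 x2^2>
wick = Ainv[0, 0] * Ainv[1, 1] + 2 * Ainv[0, 1] ** 2
print(mc, wick)  # agree to Monte Carlo accuracy (~0.1%)
```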

Written by Vivek

October 9, 2010 at 21:45