
Monte Carlo Techniques

$$\int p(v)\, dv = 1. \qquad (11.1)$$

$$\int p(\mathbf{v})\, d^3v = 1. \qquad (11.2)$$

$$p(\mathbf{v}) = f(\mathbf{v})\Big/\!\int f(\mathbf{v})\, d^3v = f(\mathbf{v})/n, \qquad (11.3)$$

$$P(v) = \int_{-\infty}^{v} p(v')\, dv'. \qquad (11.4)$$

$$P(\mathbf{v}) = P(v_x,v_y,v_z) = \int_{-\infty}^{v_x}\!\int_{-\infty}^{v_y}\!\int_{-\infty}^{v_z} p(\mathbf{v}')\, d^3v'. \qquad (11.5)$$

$$\mu_N = \frac{1}{N}\sum_{i=1}^{N} v_i. \qquad (11.6)$$

$$S_N^2 = \frac{1}{N-1}\sum_{i=1}^{N}(v_i-\mu_N)^2. \qquad (11.7)$$

$$\mu = \int v\, p(v)\, dv \qquad (11.8)$$

$$S^2 = \int (v-\mu)^2\, p(v)\, dv. \qquad (11.9)$$

$$p_u(u)\, du = p_v(v)\, dv \quad\Rightarrow\quad p_v(v) = p_u(u)\left|\frac{du}{dv}\right| = \left|\frac{du}{dv}\right|. \qquad (11.10)$$

$$u = P_v(v), \quad\text{for which}\quad \frac{du}{dv} = p_v(v). \qquad (11.11)$$

$$v = P_v^{-1}(u). \qquad (11.12)$$
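As a concrete illustration of equations (11.11) and (11.12) (our own example, not part of the original text): for $p_v(v) = 2v$ on $0 \le v \le 1$,

$$u = P_v(v) = \int_0^v 2v'\, dv' = v^2, \qquad v = P_v^{-1}(u) = \sqrt{u},$$

so taking the square root of a uniform deviate yields a variable distributed with density $2v$.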

Figure 11.1: To obtain numerically a random variable $v$ with a specified probability distribution $p_v$ (not to scale), calculate a table of the function $P_v(v)$ by integration. Draw a uniform random deviate $u$. Find, by interpolation, the $v$ for which $P_v(v)=u$. That is the random $v$.
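The procedure described in Fig. 11.1 can be sketched as follows (a minimal sketch; the Gaussian target and all names are our own choices, not from the text):

```python
import numpy as np

# Target distribution: p_v(v) proportional to exp(-v^2/2), tabulated on a
# grid wide enough to contain essentially all of the probability.
Nt = 1000
v_tab = np.linspace(-5.0, 5.0, Nt)
p = np.exp(-0.5 * v_tab**2)
P = np.cumsum(p)                      # crude numerical integral P_v(v)
P = (P - P[0]) / (P[-1] - P[0])       # normalize so P_v runs from 0 to 1

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)         # uniform deviates
v = np.interp(u, P, v_tab)            # solve P_v(v) = u by interpolation

# The resulting sample should have mean ~0 and variance ~1.
```

Because `np.interp` requires a monotonically increasing abscissa, the monotonicity of $P_v$ is exactly what makes this table lookup valid.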

Since ${P}_{v}$ is monotonic, for any $u$ between 0 and 1, there is a
single root $v$ of the equation ${P}_{v}(v)-u=0$. Provided we can find that
root quickly, then given $u$ we can find $v$. One way to make the root
finding quick is to generate a table of ${N}_{t}$ values of $v$ and $u={P}_{v}(v)$,
equally spaced.

Figure 11.2: The rejection method chooses a $v$ value randomly from a simple distribution (e.g. a constant) whose integral is invertible. A second random number then decides whether that value is rejected or accepted. The fraction accepted at $v$ is equal to the ratio of $p_v(v)$ to the simple invertible distribution. $p_v(v)$ must be scaled by a constant factor to be everywhere less than the simple distribution (1 here).

In effect this means picking points uniformly distributed below the first scaled distribution (in the illustrated case of a rectangular distribution, uniformly within the rectangle) and accepting only those that also lie below $p_v(v)$ (suitably scaled to be everywhere less than 1). Some inefficiency is therefore inevitable. If the area under $p_v(v)$ is, say, half the total, then twice as many trial choices are needed, and each requires two random numbers, giving four times as many random numbers per accepted point. The rejection inefficiency can be reduced by using a simply invertible function that fits $p_v(v)$ more closely. Even so, this method will be slower than the tabulated-function method, unless the random number generator has very small cost.
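The rejection method of Fig. 11.2 can be sketched as follows (illustrative only; the target $p_v(v) \propto 1 - v^2$ on $[-1,1]$ and all names are our own):

```python
import numpy as np

# The simple bounding distribution is the constant 1; the scaled target
# 1 - v^2 lies everywhere below it, as required.
rng = np.random.default_rng(1)

def rejection_sample(n):
    accepted = []
    while len(accepted) < n:
        v = rng.uniform(-1.0, 1.0)   # first random number: trial v
        y = rng.uniform(0.0, 1.0)    # second random number: accept/reject
        if y < 1.0 - v**2:           # accept in proportion to p_v(v)
            accepted.append(v)
    return np.array(accepted)

v = rejection_sample(50_000)
# The acceptance fraction is the area ratio (4/3)/2 = 2/3, so on average
# three trials (six random numbers) are spent per two accepted points.
```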

Figure 11.3: Simulating over a volume that is embedded in a wider external region, we need to be able to decide how to inject particles from the exterior into the simulation volume so as to represent statistically the exterior distribution.

Suppose the volume is a cuboid, as shown in Fig. 11.3. It has 6 faces, each of which is normal to one of the coordinate axes and located at $\pm L_x$, $\pm L_y$ or $\pm L_z$. We'll consider the face perpendicular to $x$ which is at $-L_x$, so that positive velocity $v_x$ corresponds to moving into the volume. The flux density of entering particles across this face is

$$\Gamma_x(\mathbf{x}) = \int\!\!\int\!\!\int_{v_x=0}^{\infty} v_x\, f(\mathbf{v},\mathbf{x})\, dv_x\, dv_y\, dv_z, \qquad (11.13)$$

and the total flux of particles through the face is

$$F_{-L_x} = \int_{-L_y}^{L_y}\!\int_{-L_z}^{L_z} \Gamma_x(-L_x, y, z)\; dy\, dz. \qquad (11.14)$$

$$p_n = \exp(-r)\, r^n/n!\,. \qquad (11.15)$$
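Equation (11.15) is the Poisson distribution with mean $r$. A count $n$ distributed this way can be drawn by stepping through the cumulative probabilities until they exceed a uniform deviate (a minimal sketch; names are our own):

```python
import math
import random

def poisson_sample(r, rng):
    """Draw n from p_n = exp(-r) r^n / n!  (Eq. 11.15) by accumulating
    the cumulative distribution until it exceeds a uniform deviate u."""
    u = rng.random()
    n = 0
    term = math.exp(-r)     # p_0
    cum = term
    while cum < u:
        n += 1
        term *= r / n       # p_n = p_{n-1} * r / n
        cum += term
    return n

rng = random.Random(2)
samples = [poisson_sample(3.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)   # should be close to r = 3
```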

$$\begin{aligned} F(v_x) &= \int_0^{v_x} v_x'\, \frac{\partial}{\partial v_x'} P(v_x', v_{y\max}, v_{z\max})\; dv_x' \qquad\qquad (11.16)\\ &= v_x\, P(v_x, v_{y\max}, v_{z\max}) - \int_0^{v_x} P(v_x', v_{y\max}, v_{z\max})\; dv_x'. \end{aligned}$$

Afterwards, we can normalize $F(v_x)$ by dividing by $F(v_{x\max})$, arriving at the cumulative flux-weighted probability for $v_x$. We then proceed as follows.

1. Choose a random $v_x$ from its cumulative flux-weighted probability $F(v_x)$.

2. Choose a random $v_y$ from its cumulative probability for the already chosen $v_x$, namely $P(v_x, v_y, v_{z\max})$ regarded as a function only of $v_y$.

3. Choose a random $v_z$ from its cumulative probability for the already chosen $v_x$ and $v_y$, namely $P(v_x, v_y, v_z)$ regarded as a function only of $v_z$.

Naturally, for the other faces ($y$ and $z$) one has to start with the corresponding velocity component and cycle the indices round. For steady external conditions all the cumulative velocity probabilities need to be calculated only once, and can then be stored for subsequent time steps.
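The three-step procedure above simplifies considerably in the special case of a drift-free Maxwellian, for which the flux-weighted $v_x$ cumulative inverts analytically, so no tables are needed (an illustrative sketch; the thermal speed `vt` and all names are our own assumptions):

```python
import numpy as np

# For a drift-free Maxwellian with thermal speed vt = sqrt(kT/m), the
# flux-weighted distribution of the inward component is
# p(v_x) ∝ v_x exp(-v_x^2 / 2 vt^2) for v_x > 0, whose cumulative
# F(v_x) = 1 - exp(-v_x^2 / 2 vt^2) inverts in closed form.
rng = np.random.default_rng(3)
vt = 1.0
N = 100_000

u = rng.uniform(size=N)
vx = vt * np.sqrt(-2.0 * np.log(1.0 - u))   # step 1: F(v_x) = u inverted

# Steps 2 and 3: for a Maxwellian, v_y and v_z are independent of v_x and
# simply Gaussian, so their conditional cumulatives invert to:
vy = rng.normal(0.0, vt, size=N)
vz = rng.normal(0.0, vt, size=N)

# The mean inward speed of injected particles should be vt * sqrt(pi/2).
```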

A four-dimensional sphere of radius 1 consists of all those points for which ${r}^{2}={x}_{1}^{2}+{x}_{2}^{2}+{x}_{3}^{2}+{x}_{4}^{2}\le 1$. Its volume is known analytically; it is ${\pi}^{2}/2$. Let us evaluate the volume numerically by examining the unit hypercube $0\le {x}_{i}\le 1$, $i=1,\dots ,4$. This is $1/{2}^{4}=1/16$th of the hypercube $-1\le {x}_{i}\le 1$, inside which the hypersphere just fits; so the part of the hypersphere that lies within the unit hypercube $0\le {x}_{i}\le 1$ is $1/16$th of its total volume, namely ${\pi}^{2}/32$.

We calculate this volume numerically by discrete integration as follows. A deterministic (non-random) integration consists of constructing an equally spaced lattice of points at the centers of cells that fill the unit cube. If there are $M$ points per edge, then the lattice positions in dimension $i$ ($i=1,\dots ,4$) of the cell centers are ${x}_{i,{k}_{i}}=({k}_{i}-0.5)/M$, where ${k}_{i}=1,\dots ,M$ is the (dimension-$i$) position index. We integrate the volume of the sphere by collecting a value from every lattice point throughout the unit hypercube: the value is unity if the point lies within the hypersphere (${r}^{2}\le 1$) and zero otherwise. Summing these values over all lattice points gives an integer $S$, the number of lattice points inside the hypersphere. The total number of lattice points, ${M}^{4}$, corresponds to the total volume of the hypercube, which is 1. Therefore the discrete estimate of the volume of $1/16$th of the hypersphere is $S/{M}^{4}$. We can compare this numerical integration with the analytic value, expressing the fractional error as the numerical value divided by the analytic value, minus one:

$$\text{Fractional Error} = \left|\frac{S/M^4}{\pi^2/32} - 1\right|.$$

Monte Carlo integration works in essentially the same way, except that the points we choose are not a regular lattice; they are random. Each one is found by taking four new uniform-variate values (between $0$ and $1$) for the four coordinate values ${x}_{i}$. The point contributes unity if it has ${r}^{2}\le 1$ and zero otherwise. We obtain a different count ${S}_{r}$. We'll choose a total number $N$ of random point positions exactly equal to the number of lattice points, $N={M}^{4}$, although we could have made $N$ any integer we like. The Monte Carlo integration estimate of the volume is ${S}_{r}/N$. I wrote a computer code to carry out these simple procedures and compare the fractional errors for values of $M$ ranging from 1 to 100. The results are shown in Fig. 11.4.
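The two procedures can be sketched for a single value of $M$ as follows (our own reconstruction of the experiment just described, not the author's code):

```python
import numpy as np

M, d = 20, 4
exact = np.pi**2 / 32                  # 1/16th of the 4-sphere volume

# Lattice integration: cell-center points (k - 0.5)/M along each axis.
x = (np.arange(1, M + 1) - 0.5) / M
grids = np.meshgrid(*([x] * d), indexing="ij")
r2 = sum(g**2 for g in grids)
S = np.count_nonzero(r2 <= 1.0)        # lattice points inside the sphere
lattice_err = abs((S / M**d) / exact - 1.0)

# Monte Carlo integration with N = M^d random points in the unit cube.
rng = np.random.default_rng(4)
pts = rng.uniform(size=(M**d, d))
Sr = np.count_nonzero((pts**2).sum(axis=1) <= 1.0)
mc_err = abs((Sr / M**d) / exact - 1.0)
```

Sweeping `M` from 1 to 100 and plotting both errors against $N = M^4$ reproduces the comparison of Fig. 11.4.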

Figure 11.4: Comparing error in the volume of a hypersphere found
numerically using lattice and Monte Carlo integration. It turns out
that Monte Carlo integration actually does *not*
converge significantly faster than lattice integration, contrary to
common wisdom. They both converge approximately like
$1/\sqrt{N}$ (logarithmic slope $=-\frac{1}{2}$). What's more, if one uses a "bad" random number generator (the Monte Carlo Bad line), the random integration may cease converging beyond a certain $N$, because the generator delivers only a finite-length sequence of independent random numbers, which in this case is exhausted at roughly a million.

Four-dimensional lattice integration
does as well as Monte Carlo for this sphere. Lattice
integration is not as bad as the dubious assumption of fractional
uncertainty $\frac{1}{M}={N}^{-1/d}$ suggests; it is more like
${N}^{-2/d}$ for $d>1$. Only at higher dimensionality than $d=4$ do tests show the
advantages of Monte Carlo integration beginning to be significant.
As a bonus, this integration experiment detects poor random number
generators.${}^{77}$

1. A random variable is required, distributed on the interval $0\le x<\infty$ with probability distribution $p(x)=k\exp(-kx)$, with $k$ a constant. A library routine is available that returns a uniform random variate $y$ (i.e. with uniform probability on $0\le y\le 1$). Give formulas and an algorithm to obtain the required randomly distributed $x$ value from the returned $y$ value.

2. Particles that have a Maxwellian distribution

$$f(\mathbf{v}) = n\left(\frac{m}{2\pi kT}\right)^{3/2}\exp\!\left(-\frac{m v^2}{2kT}\right) \qquad (11.17)$$

3. Write a code to perform a Monte Carlo integration of the area under the curve $y=(1-{x}^{2})^{0.3}$ over the interval $-1\le x\le 1$. Experiment with different total numbers of sample points; determine the area accurate to 0.2%, and approximately how many sample points are needed for that accuracy.
