Let me now turn to a foundational result
in the theory of random walks
known as the central limit theorem.
So let's consider a random walk in
arbitrary spatial dimension in which
each single step is a displacement x,
and the probability distribution of
these single-step displacements is p(x).
Let us assume that these x's are iid
variables, iid being the standard
acronym for independent, identically
distributed. That is, the steps are
statistically independent, and each is
drawn from the same distribution p(x).
Let us further assume that the mean
displacement in a single step, ⟨x⟩, is
finite, a very mild restriction; note
that we no longer restrict ourselves
to a symmetric random walk. And let us
also assume that the mean-square
displacement in a single step, ⟨x²⟩,
is finite.
Under these conditions, the probability
distribution P_n(X) of the total
displacement after n steps, where
X = x_1 + x_2 + ... + x_n is the sum of
the n single-step displacements, is

P_n(X) = 1/√(2πnσ²) exp[−(X − n⟨x⟩)² / (2nσ²)],

where σ² = ⟨x²⟩ − ⟨x⟩² is the variance
of the single-step distribution. This
statement is known as the central limit
theorem, and it holds in the limit
n → ∞.
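As a minimal numerical sketch of this statement (assuming
numpy is available; the step probabilities below are
hypothetical), we can simulate many n-step walks with an
asymmetric single-step distribution and compare the
empirical mean and variance of the total displacement with
the predictions n⟨x⟩ and nσ²:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical asymmetric single-step distribution:
# step +1 with probability 0.7, step -1 with probability 0.3.
p_right = 0.7
mean_step = 2 * p_right - 1      # <x> = 0.4
var_step = 1.0 - mean_step ** 2  # sigma^2 = <x^2> - <x>^2 = 0.84

n = 1_000        # steps per walk
walks = 10_000   # number of independent walks

steps = rng.choice([1, -1], size=(walks, n), p=[p_right, 1 - p_right])
X = steps.sum(axis=1)  # total displacement of each walk

# Central limit theorem predictions for the n-step distribution:
print(X.mean())  # close to n <x>     = 400
print(X.var())   # close to n sigma^2 = 840
```

With these (arbitrarily chosen) sizes, the sample mean and
variance land close to the predicted values of 400 and 840,
and a histogram of X approaches the Gaussian form above.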
This is a very powerful and general
result, because only the first two
moments of the single-step distribution
matter in determining the long-distance,
long-time properties of a random walk.
It is an example of a universal
statement, in which the short-range
details of the microscopic process,
namely the details of the single steps,
do not matter in the long-time limit.
It doesn't matter whether we are talking
about a nearest-neighbor random walk in
one dimension or some other type of
hopping distribution; as long as the
distribution obeys these two conditions,
that the first and second moments are
finite, the long-time distribution has
this universal Gaussian form.
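To illustrate this universality, here is a small sketch
(assuming numpy) comparing two very different single-step
distributions; after shifting by n⟨x⟩ and rescaling by
√(nσ²), both total displacements converge to the same unit
Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)
n, walks = 1_000, 10_000

def standardized_endpoints(sample_steps, mean_step, var_step):
    """Return (X - n<x>) / sqrt(n sigma^2) for many independent walks."""
    X = sample_steps((walks, n)).sum(axis=1)
    return (X - n * mean_step) / np.sqrt(n * var_step)

# Nearest-neighbor walk in one dimension: steps +/-1, equal probability.
z1 = standardized_endpoints(lambda shape: rng.choice([1, -1], size=shape),
                            mean_step=0.0, var_step=1.0)
# A completely different hopping distribution: uniform steps on [0, 1).
z2 = standardized_endpoints(lambda shape: rng.random(shape),
                            mean_step=0.5, var_step=1.0 / 12.0)

# Both standardized distributions approach the same unit Gaussian,
# whose quartiles are approximately -0.674, 0, +0.674.
print(np.quantile(z1, [0.25, 0.5, 0.75]))
print(np.quantile(z2, [0.25, 0.5, 0.75]))
```

The two printed quantile triples agree to within sampling
noise, even though one walk is discrete and the other
continuous: only the first two moments survive at long
times.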
Notice also that from this result, the
mean displacement after n steps of the
random walk is

⟨X⟩ = ∫ X P_n(X) dX = n⟨x⟩,

that is, n times the mean displacement
in a single step. Similarly, the
variance after n steps,

⟨X²⟩ − ⟨X⟩² = ∫ X² P_n(X) dX − ⟨X⟩²,

is simply equal to nσ². So once we know
the first and second moments of the
single-step distribution, we know that
the distribution of a random walk of
n steps is given by this Gaussian form,
and from there we can compute any moment
we want of the random walk itself.
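As one illustration, higher moments follow directly from
the Gaussian form: a zero-mean Gaussian with variance nσ²
has fourth moment 3(nσ²)². A small sketch (assuming numpy)
checks this for the nearest-neighbor walk, where ⟨x⟩ = 0
and σ² = 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n, walks = 500, 20_000

# Nearest-neighbor walk in one dimension: <x> = 0, sigma^2 = 1.
steps = rng.choice([1, -1], size=(walks, n))
X = steps.sum(axis=1).astype(float)

# A zero-mean Gaussian with variance n sigma^2 has fourth
# moment 3 (n sigma^2)^2, so <X^4> ~ 3 n^2 here.
print(np.mean(X ** 4))  # close to 3 * n**2 = 750_000
print(3 * n ** 2)
```

The small residual discrepancy is the finite-n correction
(the exact fourth moment of this walk is 3n² − 2n), which
vanishes relative to 3n² as n → ∞.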