In computational complexity, we typically
count algorithms that run in a polynomial
number of steps as efficient, even though
this definition doesn't always match
efficiency in practice. For example,
a polynomial like n^100 or even 1 billion
times n^2 can hardly be seen as efficient.
But it turns out that the distinction
between polynomial and exponential is
incredibly useful in practice, essentially
because polynomials don't grow that badly
as n grows. A constant multiple of n grows
linearly; the geometric analogy is a line.
n^2 grows like a square. n^3 grows like a
cube. In all of these cases, we can easily
draw the geometric figure corresponding
to the growth rate. But exponentials
explode. If you put 1 grain of rice on
the 1st square of a chess board, and
then on every subsequent square, you
double the number of grains of rice, this
may seem like an innocuous task, but by
the time you get to the 3rd row of the
chess board, you'll have a million grains
of rice. By the time you get to the 4th
row, you'll have a billion, and by the
time you get to the end of the chess board
you'll have 18 quintillion grains of rice.
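The doubling in the rice story is easy to check directly. Here's a small Python sketch (the helper names are just for illustration); each row of the board has 8 squares, so row 3 covers squares 17 through 24:

```python
def grains_on_square(k):
    """Grains placed on square k (1-indexed): square k holds 2**(k-1)."""
    return 2 ** (k - 1)

def total_grains(k):
    """Total grains on squares 1 through k: 2**k - 1."""
    return 2 ** k - 1

# A single square in the 3rd row already holds over a million grains,
# a square in the 4th row holds over a billion, and the full 64-square
# board totals about 18 quintillion.
print(grains_on_square(21))  # 1048576 (square 21 is in row 3)
print(grains_on_square(31))  # 1073741824 (square 31 is in row 4)
print(total_grains(64))      # 18446744073709551615
```

Python's integers are arbitrary-precision, so there's no overflow to worry about even at 2^64.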
You might also wonder: what happens when
computers get faster every year? Can't we
solve bigger problems then? It turns out
that the growth rate, exponential or
polynomial, affects this as well. Say you
can do T steps in a week on your current
computer. According to Moore's Law, next
year, this will essentially be 2T. Moore's
Law may be ending, but let's pretend that
it will go on forever. If you have a
problem whose running time is O of n^2, so
some constant times n^2, then doubling T
(i.e. what your computer will be able to
do next year) multiplies the size of the
problem you can solve by the square root
of 2, or around 1.4. This means that
simply by waiting for computers to get
faster, you could solve a problem that is
40% larger than the problem you could
solve today. But if the running time is
exponential (a constant times 2^n),
doubling T changes n to n + 1. You can
solve problems that are 1 bigger next year
than you could this year. This has real
implications, so let's plot some
polynomials, 2^n and n!, on a log plot.
The y axis here is the logarithm of the
number of microseconds that it takes for
an algorithm to run. If you have an
algorithm that runs in time n or n^2 or
n^3, you see that you get very similar
looking lines here. By the time n is 90 or
100, you can still solve these problems in
under a minute. But for a problem that
takes 2^n time, by the time you have
inputs of size 25 or so, you've surpassed a
minute. By the time you get up to inputs
of size 80, the amount of time it takes
for this algorithm to finish is larger
than the age of the universe. For n! this
is even worse; the age of the universe is
reached by n! when n is only around 20. So
for exponential algorithms or worse, there
really is a sense in which they're
effectively impossible to use on inputs of
any reasonably large size. Note also that
because this is a log plot, constants
barely matter. If an algorithm is O of n^3,
meaning it takes at most C times n^3 steps,
the constant C just shifts the n^3 curve up
by a fixed amount without changing its
shape. And for 2^n, dividing the running
time by a constant only shifts the curve
down by a fixed amount, which raises the
input size at which you hit the age of the
universe, but ONLY by an additive constant.
In
practice, it turns out that once you have
a polynomial-time algorithm, the constants
often come down soon thereafter, turning
it into an actually effective algorithm.
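As a rough sanity check of the plot described above, the thresholds can be computed directly. This sketch assumes one step per microsecond and an age of the universe of about 13.8 billion years; the helper name is made up, and the exact crossover points depend on those assumptions and on the hidden constants:

```python
import math

MINUTE = 60 * 10 ** 6                               # microseconds in a minute
UNIVERSE = int(13.8e9 * 365.25 * 24 * 3600 * 1e6)   # ~age of universe in microseconds

def first_n_exceeding(time_fn, budget, n_max=500):
    """Smallest n (up to n_max) where time_fn(n) microseconds exceeds budget."""
    for n in range(1, n_max + 1):
        if time_fn(n) > budget:
            return n
    return None

print(first_n_exceeding(lambda n: 2 ** n, MINUTE))    # 26: 2^n passes a minute
print(first_n_exceeding(lambda n: 2 ** n, UNIVERSE))  # 79: 2^n passes the age of the universe
print(first_n_exceeding(math.factorial, UNIVERSE))    # 24: n! gets there much sooner
print(first_n_exceeding(lambda n: n ** 3, MINUTE))    # 392: n^3 stays under a minute far longer
```

The exact numbers shift a little with the assumptions, but the qualitative picture matches the plot: the polynomial curves stay usable well past n = 100, while 2^n and n! become hopeless at double-digit input sizes.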