In computational complexity, we typically count algorithms that run in a polynomial number of steps as efficient, even though this doesn't necessarily match up with efficiency in practice. For example, a polynomial like n^100, or even 1 billion times n^2, can hardly be seen as efficient.
But it turns out that the distinction between polynomial and exponential is incredibly useful in practice, essentially because polynomials don't grow that badly as n grows. A linear polynomial, a scalar multiple of n, grows linearly; here we use the geometric analogy of a line. n^2 grows like a square, and n^3 grows like a cube. In all of these cases, we can easily draw the geometric figure corresponding to the growth rate. But exponentials explode. If you put one grain of rice on the first square of a chessboard, and then double the number of grains on every subsequent square, this may seem like an innocuous task. But by the time you get to the third row of the chessboard, you'll have a million grains of rice. By the time you get to the fourth row, you'll have a billion, and by the time you get to the end of the chessboard, you'll have 18 quintillion grains of rice.
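The chessboard arithmetic is easy to check directly. A minimal sketch (the square numbering and helper names are mine, not from the lecture):

```python
def grains_on_square(k: int) -> int:
    """Grains on square k (1-indexed): 1, 2, 4, ... = 2^(k-1)."""
    return 2 ** (k - 1)

def total_grains(up_to: int) -> int:
    """Cumulative grains on squares 1..up_to, which sums to 2^up_to - 1."""
    return 2 ** up_to - 1

# Middle of the 3rd row (square 21): about a million grains.
print(grains_on_square(21))   # 1048576
# Square 31, in the 4th row: about a billion.
print(grains_on_square(31))   # 1073741824
# The whole 64-square board: about 18 quintillion grains in total.
print(total_grains(64))       # 18446744073709551615
```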
You might also wonder: computers get faster every year, so can't we just solve bigger problems then? It turns out that the growth rate, exponential or polynomial, affects this as well. Say you can do T steps in a week on your current computer. According to Moore's Law, next year this will essentially be 2T. (Moore's Law may be ending, but let's pretend it's going to go on forever.) If you have a problem whose running time is O(n^2), that is, some constant times n^2, then doubling T (what your computer will be able to do next year) multiplies the size of the problem you can solve by the square root of 2, or around 1.4. This means that simply by waiting for computers to get faster, you could solve a problem that is 40% larger than the problem you could solve today. But if the running time is exponential, a constant times 2^n, doubling T only changes n to n + 1: you can solve problems that are just 1 bigger next year than you could this year. This has real implications, so let's plot some polynomials, along with 2^n and n!, on a log plot.
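The doubling arithmetic above can be sketched directly: solve c·n^2 = T or c·2^n = T for n and compare budgets T and 2T (the budget value below is an assumption for illustration):

```python
import math

def max_size_quadratic(T: float, c: float = 1.0) -> float:
    """Largest problem size n with c * n^2 <= T steps."""
    return math.sqrt(T / c)

def max_size_exponential(T: float, c: float = 1.0) -> float:
    """Largest problem size n with c * 2^n <= T steps."""
    return math.log2(T / c)

T = 1e12  # steps per week on this year's machine (an assumed budget)
# Quadratic algorithm: next year's solvable size is sqrt(2) ~ 1.41x this year's.
print(max_size_quadratic(2 * T) / max_size_quadratic(T))
# Exponential algorithm: next year's solvable size is just this year's plus 1.
print(max_size_exponential(2 * T) - max_size_exponential(T))
```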
The y axis here is the logarithm of the number of microseconds that it takes for an algorithm to run. If you have an algorithm that runs in time n or n^2 or n^3, you see that you get very similar-looking lines. By the time n is 90 or 100, you can still solve these problems in under a minute. But for a problem that takes 2^n time, by the time you have inputs of size 25 or so, you've surpassed a minute. By the time you get up to inputs of size 80, the amount of time it takes for this algorithm to finish is larger than the age of the universe. For n! this is even worse; the age of the universe is reached when n is only around 20. So for exponential algorithms or worse, there really is a sense in which they're effectively impossible to use on inputs of any reasonably large size.
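These thresholds are easy to verify numerically, assuming one step per microsecond as in the plot (the age-of-universe constant is my own back-of-the-envelope figure):

```python
AGE_OF_UNIVERSE_US = 13.8e9 * 365.25 * 24 * 3600 * 1e6  # ~4.4e23 microseconds

def runtime_seconds(steps: int) -> float:
    """Runtime in seconds at one step per microsecond."""
    return steps / 1e6

# Polynomial rates stay tame around n = 100:
print(runtime_seconds(100 ** 3))     # 1.0 second for n^3 at n = 100
# 2^n passes a minute by inputs of size 26 or so:
print(runtime_seconds(2 ** 26))      # ~67 seconds
# ...and dwarfs the age of the universe by n = 80:
print(2 ** 80 > AGE_OF_UNIVERSE_US)  # True
```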
Note also that because this is a log plot, something that's O(n^3), meaning at most C times n^3, simply shifts the n^3 curve up by a constant but doesn't change its shape. For 2^n, on the other hand, even if you divide it by a constant, all this does is increase the size of the input it takes before you reach the age of the universe, and only by an additive constant. In practice, it turns out that once you have a polynomial-time algorithm, the constants often come down soon thereafter, turning it into an actually effective algorithm.
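The point about constants on a log plot can be sketched as well: multiplying n^3 by C is a fixed vertical shift of log(C), while dividing 2^n by C only moves the crossover input sideways by log2(C) (C here is an arbitrary example constant):

```python
import math

C = 1000.0  # an arbitrary example constant factor

# Vertical shift: log(C * n^3) - log(n^3) = log(C) for every n, so on a
# log plot the n^3 curve keeps exactly the same shape.
for n in (10, 100, 1000):
    assert math.isclose(math.log(C * n ** 3) - math.log(n ** 3), math.log(C))

# Horizontal shift: C * 2^n == 2^(n + log2(C)), so dividing 2^n by C only
# delays any fixed threshold (like the age of the universe) by log2(C) inputs.
print(math.log2(C))  # ~9.97 extra input sizes, and only ever that constant
```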