So, what is complexity?
What I'm going to try and do
is section the concept:
to look at it from a multitude
of different angles,
which I think is appropriate
for any area of inquiry.
The way we'll do this is we'll look at
complexity as a discipline,
the way we might look at physics
as a discipline.
We'll look at the domain that
complexity is investigating.
We'll also look at its methods,
its epistemology,
the kinds of mathematics that you
use in complexity science,
and then relate it to one of the more
fundamental concepts
related to complexity,
and that's emergence.
So, complexity and the disciplines -
if I asked you to define biology
or geography, how would you do it?
It's extremely hard to do, and it's no
easier for complexity science.
So the way to do this,
I think,
is to break up a discipline into
its constituent parts,
or its preferred methods and approaches.
And we're going to do that
for a number of disciplines,
and then finally do it for complexity.
So here are some disciplinary traits:
how quantitative is a field?
Is it obsessed with
measurement and calculation,
as the natural sciences have
tended to be,
or is it more satisfied with a
qualitative narrative type of account?
How reductionist is it?
Now, reductionist relates to
explanations that feel fulfilling
by virtue of presenting
the parts of a system.
So, you go down levels
to account for the level of interest -
that's what we mean by reductionism.
So, a physicist might say,
"to understand gravity, I need to
understand gravitons,
or to understand electricity, electrons,"
and so forth.
Another question which gets confounded
with reduction is compression.
Which is - are the fields amenable
to compressive mathematical description?
Can you write down short equations
that capture the essence
of the phenomenon?
That's not the same thing as reductionism.
It's a different kind of reduction -
a reduction to short,
compressed equations.
And finally, how historicist
is the field?
How important is history
in accounting for phenomena?
How far back do you have to go?
And of course in biology,
we feel you have to go way back.
To understand traits and organisms,
we have phylogenetic explanations,
less so in physics.
And so that - if you like,
quartet of characteristics
helps us to define a discipline.
So let's consider physics.
So physics is obviously a very
quantitative field.
Everything is presented in terms of
numbers and measurements and so forth.
It's also very reductionist,
because the search for
grand unified theories
or fundamental theories,
typically consists in looking for
elementary constituents.
For example, the standard model
in physics:
the minimum number
of particles and fields.
And having done that,
presenting that in mathematical terms,
using very elegant, very compressed
mathematical formula:
F=ma, Maxwell's Laws, and so on.
So, physics has that characteristic
that's sometimes described as sort
of back-of-the-envelope-like calculations:
short calculations based on
fundamental constituents
that are highly quantitative.
Okay, so systems biology
is highly quantitative.
It's also very reductionist.
You try to understand traits in terms,
for example, of their genetic
or epigenetic factors.
And it's also very historicist.
You understand things in a taxonomic
or phylogenetic framework.
But it's not very compressive.
It doesn't present regularities
in terms of very short, elegant
mathematical equalities.
Then there's something like
biological anthropology,
or biological linguistics,
and here, they're slightly
less quantitative.
There's less data than there would
be, for example, in genomics.
They tend to be reductionist,
trying to understand things, again,
in terms of biological factors
that contribute to behavior.
Very historicist, phylogenetic,
and also somewhat compressive,
that is using fundamental evolutionary
theories, like kin selection,
to understand behavior.
And having explained all of those,
to try and illustrate that
all these fields should be understood
in terms of how much they weight
different factors or traits,
where does complexity science fit?
Well, complexity science is
very quantitative, by and large,
it's very historical; we're studying,
you know, adaptive agents,
and it's very compressive
because we're looking for
mathematical theories that capture
essential regularities.
But what we are not is reductionist.
Like, we are not looking down levels
to explain the level of interest.
And that's one of its defining features
that we'll come back to at some point
when we talk about emergence.
So, having talked a little bit about
what complexity might mean
as a discipline,
let's talk about what
complexity science studies;
that is the domain, the territory,
of inquiry, that establishes
the way it looks and feels.
So if we look at classical mechanics,
physics,
this is the study of
very ordered processes.
And, you can write down equations
that describe the orbits of the planets
and the stars
in a very compressive, compact form.
That means low complexity.
In this case, complexity relates to,
in some sense,
the number of pages of equations
required to describe
the regularity of interest.
So, in that sense, Newton's Laws
are very compressed.
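This "number of pages" notion of complexity can be made concrete with a small compression sketch (my own illustration, not from the lecture, using Python's standard zlib): an ordered sequence admits a very short description, while a random one barely compresses at all.

```python
import random
import zlib

random.seed(0)

# Complexity as description length: an ordered sequence compresses
# to a handful of bytes; a random one stays close to its full size.
ordered = ("AB" * 5000).encode()
noisy = bytes(random.randrange(256) for _ in range(10_000))

print(len(zlib.compress(ordered)))  # short description: low complexity
print(len(zlib.compress(noisy)))    # long description: high complexity
```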
When you get to quantum mechanics,
that introduces more stochasticity,
more randomness, and the equations,
though still quite compressed,
are correspondingly slightly
more complicated.
Interestingly, if you introduce
lots of randomness,
you can also write down a very
compressed description
in terms of statistical mechanics
and thermodynamics.
So these two limits are, in some sense,
the limits of the physical world,
and that's why physics has been so
effective at theorizing about phenomena.
But if you now look at the domain where
noise and regularity compete -
the complex domain - what happens?
We don't really know. We need entirely
new kinds of theories
to describe this intersection where
frozen accidents dominate.
That is the world of nature,
or of culture.
And so here are some examples.
On the left, classical mechanics;
with just a little bit more randomness,
the wave equation
in quantum mechanics;
on the far right, where you get
a lot of randomness,
the description of the entropy
of a system;
and again in the middle,
where that C is written,
some new mathematics,
some new description is required
that respects the complex domain.
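The high-randomness limit really does admit a short description: Shannon entropy, S = -Σ p log₂ p, summarizes an entire distribution in a single number. A minimal sketch (my own, not from the slides):

```python
import math

def entropy(probs):
    """Shannon entropy in bits; the p*log(p) terms vanish as p -> 0."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(entropy([1.0, 0.0]))   # perfectly ordered: 0.0 bits
print(entropy([0.5, 0.5]))   # maximally random bit: 1.0 bit
print(entropy([0.25] * 4))   # uniform over four states: 2.0 bits
```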
Now, what's happened in the 21st century
is that two very distinct approaches
have evolved to deal with complexity.
On the one hand, you have
machine learning AI
that encodes whole libraries
of big data sets
with billions of parameters that produce,
within a very circumscribed range,
highly predictive solutions.
On the other hand, you have
complexity science,
which tries to do something closer
to what physics was trying to do.
That is, a smaller number of essential
processes and equations
which describe regularities
but don't predict.
So it looks as if we've reached this
point of bifurcation
where you have to make a decision.
I can either go down the
path of prediction
and lose understanding
and comprehensibility,
or go down the path of mechanism
and understanding,
and lose prediction.
And I think the open question
that we're all dealing with
is could we reconcile these two different
approaches to the complex domain?
Here are some of the methods
and frameworks that complexity science
has invented to deal with
the complex domain -
let me give three examples:
Scaling theory, that is, what patterns
of regularities span multiple different
orders of magnitude in space
and time;
agent based models, which takes seriously
the idea of agency or reflexivity,
that is, the things that we study
in the complex domain
have teleology, they have purpose,
they have function,
and that's not true in physics;
and network theory, that takes seriously
the collective dynamics
of complex systems.
And, of course, one of the interesting
things about these three
is that they find application.
So, in scaling theory, we can explain
how long organisms live,
how many species we typically find
in a unit area,
we can even apply scaling theory
to social phenomena,
where we're interested in how
patent production, for example,
scales as a function of city size.
Network theory is used ubiquitously,
in this particular case, to study
political polarization,
or the spread of disease.
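As a sketch of the disease-spread application (an illustrative toy of my own, with made-up parameters, not any fitted model), here is an SIR-style epidemic run on a random graph:

```python
import random

random.seed(0)

# Toy SIR epidemic on a random (Erdos-Renyi-style) graph.
# Parameters are arbitrary, chosen only for illustration.
N, p_edge, p_transmit = 200, 0.03, 0.4
neighbors = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p_edge:
            neighbors[i].add(j)
            neighbors[j].add(i)

susceptible = set(range(1, N))
infected = {0}          # patient zero
recovered = set()

while infected:
    newly = set()
    for i in infected:
        for j in neighbors[i] & susceptible:
            if random.random() < p_transmit:
                newly.add(j)
    susceptible -= newly
    recovered |= infected   # each node is infectious for one time step
    infected = newly

print(f"final outbreak size: {len(recovered)} of {N}")
```

The collective dynamics - how far the outbreak spreads - depends on the network's structure, not just on any single node.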
And agent based models are the preferred
computational tools for looking at
things like swarming, flocking,
and congestion in cities.
So, of interest here, is that
even though the complex domain
doesn't yield to these highly
compressive formalisms,
they prove to be extremely useful
in studying real-world problems.
So when we talked about
the complex domain,
what we were talking about was
the structure of reality,
what we call ontology.
But then there's the question of
how we understand that reality,
how we describe it, how we
mathematize it:
the structure of knowledge itself,
and we call that epistemology.
And complexity science has a very
interesting epistemology.
In 1960, a prominent physicist
working in quantum mechanics,
Eugene Wigner, wrote a paper
called "The Unreasonable Effectiveness
of Mathematics
in the Natural Sciences."
And Wigner was very interested
in this perplexing observation
that you can invent mathematics,
freely, through your imagination,
and yet somehow that imagination,
that imaginary object,
can predict regular patterns
in the natural world
that have nothing to do with you.
So how is it that mathematics
is so effective
at explaining and predicting
the real world?
And we can place that in a
slightly more mathematical framing
by saying that what amazed Wigner
is that models with very few parameters,
that is, highly compressed,
very parsimonious models,
could predict phenomena
very precisely,
and that's what's represented
on these two axes here:
the x-axis showing
the number of parameters,
and the axis coming out towards you,
how predictive that is,
and at the top there is an example
of what he was amazed by:
Maxwell's Equations.
Here's another example, from the
founder of the Santa Fe Institute,
working in particle physics,
Murray Gell-Mann,
and Murray wrote down the algebra,
mathematical formalism,
to explain symmetries,
in this case, eight-fold symmetries
captured by something called SU(3).
And by manipulating these Lie groups,
he was able to predict particles
that had never been observed before.
So, exactly to Wigner's point,
the mathematics generated a solution
that didn't seem to be present
in the mathematics to begin with.
Another example is the work
of Paul Dirac,
this is Dirac's equation,
it's a relativistic wave equation.
It takes quantum mechanics
and special relativity and merges them,
and he solved this equation
and discovered negative energy states,
and he used those solutions
to infer the existence of antimatter.
So, no one had seen antimatter;
these equations were derived to describe
the ordinary world that we can measure
and observe,
and yet, they predicted
something extraordinary.
So that's the world, if you like,
the epistemological world,
that's made possible by the simple
domains of physics, right?
The two edges of that graph
I showed earlier
which are either perfectly regular
or perfectly random.
That's what they allow us to do.
But most of the world we care about,
the social world, the biological world,
and so on,
isn't like that.
It has a different ontology, right?
It combines noise and order,
and so what do we do now?
It's not the unreasonable
effectiveness of mathematics,
it's the unreasonable
ineffectiveness of mathematics,
in dealing with the complex domain.
One response is to coarse-grain -
to average over microscopic detail.
The key to effective coarse-graining
is that you don't lose
predictive efficacy by losing
degrees of freedom,
by losing parameters.
So, there are special domains
where averaging is actually permitted,
and one very good example of that is
the domain of scaling.
So in scaling theory, you get equations
that look a bit like
equations from physics.
This looks a bit like F=ma:
basal metabolic rate
scales as mass raised to the
three-quarter power, B ∝ M^(3/4),
and the three and four in that exponent
are actually the dimensions of space,
three,
divided by those dimensions
plus one fractal dimension, four,
so it's very physical.
And you can derive these equations
through mathematics
of perturbation theory,
that would be very familiar
to the world of physics.
And here's an example of
what that looks like.
And so scaling theory gives us
insights into the complex domain
by using the concept of
coarse-graining very effectively.
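To make that concrete, here is a small sketch (using synthetic, made-up data; the constant and noise level are arbitrary) of recovering the three-quarter exponent of a Kleiber-style scaling law by least squares in log-log space:

```python
import math
import random

random.seed(0)

# Synthetic Kleiber-style data: B = c * M**(3/4), with lognormal noise.
c, exponent = 4.1, 0.75
masses = [10 ** random.uniform(0, 6) for _ in range(500)]     # arbitrary units
rates = [c * m ** exponent * math.exp(random.gauss(0, 0.1)) for m in masses]

# A power law is a straight line in log-log space, so ordinary
# least squares on the logs recovers the exponent as the slope.
xs = [math.log(m) for m in masses]
ys = [math.log(b) for b in rates]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 2))  # close to 3/4
```

Averaging over the noise is what makes the exponent visible: this is coarse-graining doing its job.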
But again, as I said, for many phenomena
that's not an option.
And - so, much of complexity science
does something different.
Instead of trying to find
parsimonious models,
like F=ma, or B = M raised to the
three-quarter power,
it asks, "what gives rise
to those structures in the first place?
What allows for the possibility in the
complex domain of coarse-graining?
Or what doesn't? What produces the
structure that we want to theorize about?"
So let me make that quite explicit now
with an example.
If you think about machine learning
and the performance of algorithms
like AlphaGo, AlphaZero,
underlying all those lines of code,
and all those hundreds of millions,
if not billions, of parameters,
is a very simple idea:
the idea of reinforcement learning.
And that can be written down
in just a few lines of code;
that's just a few lines of mathematics.
So in that sense,
this is highly compressive.
It's not the particular model
that finally is instantiated,
but how the parameters are tuned.
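Here is a sketch of that compressiveness (a toy of my own, not the AlphaZero algorithm itself, which combines deep networks with tree search): tabular reinforcement learning on a five-state corridor, where the entire "theory" is a one-line update rule.

```python
import random

random.seed(0)

# Toy corridor: states 0..4, move left or right, reward 1 for
# reaching the right end. All parameters here are illustrative.
n_states, actions = 5, (-1, +1)
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

for _ in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # The reinforcement-learning rule: a single line of mathematics
        # that tunes every parameter in the table.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every state.
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)])
```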
And the same thing goes for biology.
People sometimes say,
"evolutionary theory is not predictive."
Well, it's not predictive in the sense
that you could predict
a giraffe, or a flea, or a bacterium,
but all of them were subject to the same
optimization principle
in their local environment:
natural selection and drift.
And so, what we're looking for
is a parsimonious description
of the algorithm, or the process,
that produces the object.
Physical science theorizes about
the object parsimoniously,
we're theorizing, in some sense, about
a process that gives rise to an object:
a process that gives rise to a theory.
So in that sense,
complexity science is metatheoretic.
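That selection-and-drift point can be sketched in code (a toy of my own devising: the "environment" is an arbitrary target string and all parameters are made up). The parsimonious object is the few-line selection-mutation loop; the organisms it produces depend entirely on the environment it runs in:

```python
import random

random.seed(0)

def evolve(target, pop_size=100, mu=0.02, generations=200):
    """Truncation selection plus per-character mutation on strings."""
    alphabet = "abcdefghijklmnopqrstuvwxyz "

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    pop = ["".join(random.choice(alphabet) for _ in target)
           for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half unchanged (elitism),
        # then fill the population with mutated copies of survivors
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            "".join(c if random.random() > mu else random.choice(alphabet)
                    for c in random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)

print(evolve("natural selection"))
```

The same few lines would "predict" a different string given a different target, which is the sense in which the theory describes the process, not the object.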
One of the concepts that one
hears a lot about,
when talking about complexity
is emergence.
It's its nearest relative,
in a certain sense.
And just like complexity,
it generates a lot of perplexity.
So I want to explain
what emergence is now.
One very simple way of defining
emergence is,
you're dealing with an
emergent phenomenon,
when there's no need to look
under the hood.
And I use that in the following sense:
if your car stops,
you're not quite sure why,
and the most natural thing to do
is to check whether you've run out of gas.
So there's a kind of reductionism there,
because you're saying to understand why a
car stops, you need to look at its parts.
And where a phenomenon is strongly emergent,
you don't need to look under the hood,
and let me give an example.
This equation is the so-called
Fermat Conjecture,
and it took hundreds of years
to be solved,
and this is Andrew Wiles,
who finally solved it,
between 1993 and 95,
in fact the first solution
had an error in it,
which freaked him out, and then
he was able to correct it.
But you can ask, you know,
how did he do it?
How did he solve this theorem?
And let me show you quickly
some pages from his proof.
Here's one page, where he
establishes the relationship
between the Fermat Conjecture
and elliptic curves.
He goes through a whole series of
ingenious deductive steps,
recruiting unexpected areas
of mathematics
until he finally arrives
at the conclusion,
which is the proof of the
Fermat Conjecture.
Now, this proof is presented to us
only in terms of mathematics.
The language of mathematics is sufficient
to establish the credibility of the result;
you don't have to look under the hood
of Andrew Wiles to determine
whether or not this proof
is right or not.
For example, we don't need to do
brain science on Andrew Wiles,
we don't have to say, you know,
"the reason that the proof is correct
is because he was expressing a lot
of serotonin or dopamine," or,
"this particular neural circuit
was being recruited."
That would be interesting; that would be
something that you might want to know
but it has nothing to do
with the correctness of the proof.
Similarly for Wiles's economic circumstances,
or the particular market that he's
working in, or the university,
Princeton, that's paying his salary.
None of this is relevant
to the correctness of the proof,
and neither is his nationality
or his ideology.
So, here's an example where correctness
operates entirely
at the level of mathematics,
and moving below mathematics, for example,
doing sort of particle physics
on Andrew Wiles's brain,
might be interesting,
but it is not illuminating with respect
to whether the theorem
has been proved or not.