So, what have we learned from this investigation? Our goal here was to give you a sense of how coarse-graining the data leads to transformations of the model, and in fact we saw a couple of things.
One is that when you simplify the data, when you coarse-grain, you don't necessarily get a simpler model. Sometimes the relationship between the original model and the transformed model, once we do the coarse-graining, can be somewhat hard to see. If you remember, when we took that three-state Markov chain that we began with, coarse-grained the data, and asked what the best-fit model would be, that model actually had more non-zero transitions, for example.
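As a sketch of that first point, assuming (as elsewhere in this module) that one step of coarse-graining means keeping every other observation, so that the best-fit chain for the decimated data has transition matrix T squared: here is a hypothetical three-state chain (made-up numbers, not the course's exact example) whose coarse-grained version has strictly more non-zero transitions.

```python
# Hypothetical 3-state chain: rows are "from" states, entries are
# transition probabilities; 3 of the 9 possible transitions are
# forbidden (zero) in the fine-grained model.
T = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5]]

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Keeping every other observation packs two fine-grained steps into
# each coarse-grained step, so the coarse-grained chain is T @ T.
T2 = matmul(T, T)

def nonzero(M):
    """Count the allowed (non-zero) transitions in a chain."""
    return sum(1 for row in M for x in row if x > 0)

print(nonzero(T), nonzero(T2))   # the coarse-grained chain allows
                                 # more transitions than the original
```

Transitions that were forbidden in one step become possible in two steps, which is why the simplified data can demand a less sparse model.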
One of the big things that you learned, or encountered for the first time, was the idea of fixed points. A fixed point is a model such that when you coarse-grain the data and ask how the model transforms, you get back the original model. If you simplify the data, the model doesn't change. It kind of has a fractal feel to it.
And in fact, in the case of the Markov chains, we found a continuum of fixed points: all of the Markov chains in which every state makes the same transitions with the same probabilities, so that the probability of entering a state Q, for example, is the same no matter which state you are coming from.
Markov chains of that particular form lie on a lower-dimensional manifold in the space of all possible Markov chain models with that number of states, and those models act as fixed points. Not only do they act as fixed points, they act as attractive fixed points, meaning that as you continue to coarse-grain the data and continue to transform the model, it may wander around for a while, but in the end it's going to end up somewhere on that lower-dimensional plane. In the case of the two-state model, that lower-dimensional plane is actually just a line.
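A minimal numerical sketch of both claims, again under the assumption that one coarse-graining step maps T to T squared: a chain whose rows are identical is left unchanged by the transformation, and a generic two-state chain, pushed through repeated coarse-graining steps, converges onto that line of identical-row matrices, with each row approaching the stationary distribution.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A fixed point: both rows identical, so every state enters state 0
# with probability 0.3 and state 1 with probability 0.7.
F = [[0.3, 0.7],
     [0.3, 0.7]]
F2 = matmul(F, F)
# Coarse-graining leaves the model unchanged: F @ F == F.
print(all(abs(F2[i][j] - F[i][j]) < 1e-12
          for i in range(2) for j in range(2)))

# Attraction: start from a generic two-state chain and apply the
# coarse-graining map T -> T @ T ten times (i.e., T to the 1024th power).
T = [[0.9, 0.1],
     [0.2, 0.8]]
for _ in range(10):
    T = matmul(T, T)

# Both rows converge to the stationary distribution (2/3, 1/3):
# a single point on the one-dimensional line of identical-row chains.
print([[round(x, 6) for x in row] for row in T])
```

The second eigenvalue of the chain gets squared at every step, so anything with magnitude below one is driven to zero and only the rank-one, identical-row structure survives.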
The final thing we got was actually a fun little point, which is that it's hard to take the square root of a stochastic matrix. What that means for us is that sometimes you have a model and you ask what fine-grained theory it could have been coarse-grained from. Where does that model come from? What's the more fine-grained, more detailed story that I could use to describe, or to explain, where the coarse-grained model came from?
And if we go all the way back to the beginning of this module, we talked about the relationship between microeconomics and macroeconomics. Here, when we ask what the square root of T is, what we're asking is: what's the microeconomic story that corresponds to the macroeconomic pattern? In that case, the micro story is not just a different model, but an entirely different model class. You have to give that model greater sophistication than you had in the coarse-grained version.
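To make the square-root difficulty concrete, here is a small sketch with made-up numbers, not the course's example. Squaring any two-state chain produces a matrix whose second eigenvalue is a square, hence non-negative; a chain that flips state more often than not has a negative second eigenvalue, so no two-state chain can square to it. A brute-force search over candidate fine-grained chains shows the best fit stuck far away for the flip-heavy target, while a persistent target is matched almost exactly.

```python
def mat2(p, q):
    """Two-state chain: stay in state 0 with prob p; jump 1 -> 0 with prob q."""
    return [[p, 1 - p], [q, 1 - q]]

def matmul(A, B):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def best_sqrt_error(T, steps=200):
    """Grid-search all two-state chains M; report how close M @ M gets to T."""
    best = float("inf")
    for i in range(steps + 1):
        for j in range(steps + 1):
            M = mat2(i / steps, j / steps)
            M2 = matmul(M, M)
            err = max(abs(M2[r][c] - T[r][c])
                      for r in range(2) for c in range(2))
            best = min(best, err)
    return best

flippy     = [[0.1, 0.9], [0.9, 0.1]]   # switches state 90% of the time
persistent = [[0.9, 0.1], [0.1, 0.9]]   # stays put 90% of the time

print(best_sqrt_error(flippy))       # stuck near 0.4: no stochastic square root
print(best_sqrt_error(persistent))   # near 0: a fine-grained chain exists
```

The flip-heavy chain does have a matrix square root, but only with complex entries: it cannot be read as transition probabilities, which is exactly why explaining it demands a richer model class than the plain Markov chains we started with.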
That's a wrinkle that we'll see over and over again: sometimes when you do this you stay within the same model space, but other times things can go, well, not necessarily wrong, but things can get interesting. And in fact, in the next modules, what we're going to see is the opposite case: you're going to take some data, coarse-grain it, and the new model you get out the other side is going to belong to a different class, a more complicated, richer class, than the one you began with.
Here, when we went from A to T, we simplified the world. It turned out that by coarse-graining we could forget things, we could forget details; the story we had to tell about the system got simpler. In other cases, when we coarse-grain, what we're going to find is that we have to enlarge our model space, that we have to make our models more sophisticated. In those cases, as we'll see, simplifying the data makes the model harder to work with.