Most things that we encounter in our lives follow some sort of predictable pattern. Microwaving something for 26 seconds instead of 25 makes it just a little bit hotter, though we might not be able to really tell the difference. Depress the accelerator of your car just a little more and you go just a little faster. Move your mouse to the left and the little cursor on the screen goes ... left.

Imagine a world in which things routinely were not at all predictable. Where microwaving a frozen burrito for 23 seconds sort of thaws it out, for 24 seconds causes the cheese to boil, and for 25 seconds seems to make it colder than just frozen. Where pressing the accelerator sometimes makes you go faster, sometimes slows you down, and sometimes changes the radio station. Where moving your mouse left makes the cursor go left, except for the times when it goes right, up, down, or clicks the "Buy Now" button.

We'd rightly call living in such a world "*chaos*."

In mathematics, we revel in situations where things behave predictably. Calculus is built upon *continuous* functions. A function \(f\) is continuous if, when \(x\) and \(y\) are "close" in value to each other, \(f(x)\) and \(f(y)\) are "close" in value to each other. Since 25 and 26 are "close" in value, the temperatures of frozen burritos after being microwaved for 25 or 26 seconds are about the same.

How does all of this relate to the animated gif above?

There is a famous algorithm called Newton's Method that accurately approximates solutions to equations. Given a function \(y=f(x)\), Newton's Method allows us to approximate a solution to an equation like \(f(x)=0\). One starts with an initial guess \(x_0\), and Newton's Method returns an approximation \(\hat x\) that is usually a great one; that is, \(f(\hat x)\approx 0\).
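In code, the method is just a few lines: repeatedly replace the guess \(x\) with \(x - f(x)/f'(x)\). Here is a minimal sketch (the stopping tolerance and step cap are our choices, not part of the method itself):

```python
def newton(f, fprime, x0, tol=1e-12, max_steps=50):
    """Repeatedly replace x with x - f(x)/f'(x) until f(x) is near zero."""
    x = x0
    for _ in range(max_steps):
        x = x - f(x) / fprime(x)
        if abs(f(x)) < tol:
            break
    return x

# Approximate the positive solution of x^2 - 2 = 0, starting from x0 = 1.
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
print(root)  # close to 1.41421356...
```

For well-behaved functions like this one, a handful of steps is enough; the interesting behavior below comes from functions and starting points that are less cooperative.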

We can think of Newton's Method as a function \(N\) itself, where \(N(x_0) = \hat x\). Returning to the "predictability" theme of this post, we would expect two things to be true of \(N\):

- If \(x_0\approx y_0\), then \(N(x_0) \approx N(y_0)\). That is, initial guesses that are close to each other should return approximations that are also close to each other. In other words, initial guesses of 3 and 3.001 should return essentially the same approximate solution.
- If \(\overline x\) is a solution to \(f(x)=0\), that is, \(f(\overline x)=0\), and if \(x_0\) is close to \(\overline x\), then we'd expect \(N(x_0)\) to be really close to \( \overline x\). Without all the fancy notation: suppose 5 is a solution to \(f(x)=0\); that is, \(f(5)=0\). We would expect that the initial guess of 5.1 would return something really, *really* close to 5.

Those are reasonable expectations. And they can fail spectacularly.

Consider the complex plane, where we plot the complex number \(a+bi\) as the point \((a,b)\) on the familiar Cartesian plane, and consider the function \(f(z) = z^5-1\). Since \(f\) is a polynomial of degree 5, we know \(f(z)=0\) has 5 solutions. One of them is real, \(z=1\), and the other 4 are complex.

Apply Newton's Method to thousands of points in the complex plane and color each point according to the solution of \(f(z)=0\) Newton's Method returns. If Newton's Method returns a solution near \(z=1\), color the point red. If it returns one of the other 4 complex solutions, color the point purple, green, yellow or blue.
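Here is a sketch of how such a coloring could be computed, assuming a fixed number of Newton steps and coloring by whichever fifth root of unity the iteration lands nearest; the actual images surely use something more polished:

```python
import cmath

def root_index(z0, steps=50):
    """Run Newton's Method for f(z) = z^5 - 1 from the starting point z0
    and report which of the five roots of unity the result is nearest to
    (index 0 is the real root z = 1); each index would get its own color."""
    roots = [cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]
    z = z0
    for _ in range(steps):
        if z == 0:            # the derivative 5z^4 vanishes here; give up
            return None
        z = z - (z**5 - 1) / (5 * z**4)
    return min(range(5), key=lambda k: abs(z - roots[k]))

print(root_index(1.5))  # 0: a real starting guess near 1 lands on z = 1
```

Running `root_index` over a fine grid of starting points and mapping each index to a color produces pictures like the ones in this post.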

The animated gif above starts with a view of the complex plane with corners at \(-2-2i\) and \(2+2i\). Note how large regions of the plane behave nicely. The big patch of red to the right means lots of points near each other all give the solution \(z=1\). (This means Expectation #2 is holding up: initial guesses near the solution \(z=1\) return an approximation of \(z=1\).)

But the crazy part is that there are lots of places where the points very close to each other lead to very different solutions. Between the big regions of red and purple there are regions of blue, yellow and green.

And then we zoom in. Over and over we see that the borders between "large" regions of solid color are actually smaller regions of solid color. This goes on forever - no matter how far you zoom in, you will always find that the border between "large" regions of color is made up of a similar pattern of smaller regions of solid color.

The upshot is this: applying Newton's Method to two points that are really close to each other can lead to completely different solutions. That's *chaos*.
The following gif shows the first frame of the gif above in a different way. Newton's Method is an algorithm, a repeated set of steps, and we stop repeating once we get close enough to a solution. In the gifs above and below, we indicate how many steps it takes to *converge* to a solution by the brightness of the color: the darker the color, the more steps it takes. Note how some regions stay black; these do not converge after 50 steps. They might converge with more steps, but we stopped at that number in the picture below. Note also how some points converge very quickly, even points that are not close to the solution they converge to.

There are lots of ways to illustrate chaotic behavior. We picked a common one above: showing convergence regions of Newton's Method. Another popular one is shown below.

Start with a quadratic function of a complex variable. We chose \(f(z) = z^2-0.8+0.157i\). For every point in the complex plane, apply this function over and over again. For instance, if we start with \(z_0=1\), applying \(f\) gives \(z_1=f(1) = 0.2+0.157i\). Apply \(f\) again: \(z_2 = f(z_1) = -0.785+0.2198i\). Keep doing this until the results start to get "big." For instance, if you start with \(z_0 = 10\), then \(z_1=f(z_0) = 99.2+0.157i\) and \(z_2 = 9839.82+31.3058i\). Clearly these numbers are getting big. Fast.
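These few applications of \(f\) are easy to reproduce; a quick sketch:

```python
def f(z):
    # the quadratic map from this post: f(z) = z^2 - 0.8 + 0.157i
    return z * z + (-0.8 + 0.157j)

z1 = f(1)      # 0.2 + 0.157i
z2 = f(z1)     # about -0.785 + 0.2198i
w1 = f(10)     # 99.2 + 0.157i
w2 = f(w1)     # about 9839.82 + 31.3058i
```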

Starting with \(z_0=1\) is a different story. After applying \(f\) 250 times, we find \(z_{250} = -1.33+0.575i\), hardly big. Apply \(f\) just a few more times, though, and the result *is* big: \(z_{255} \approx 1587-2822i\). And once the result is "big," it never gets small again.

Below, we color points in the complex plane according to how many applications of \(f\) it takes to make the result "big." Dark spots get big fast, bright points get big slowly. Pure white spots haven't gotten "big" after 150 iterations of \(f\).
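A minimal sketch of this coloring scheme, with an assumed cutoff of \(|z| > 1000\) for "big" (the post never pins down a number) and a cap of 150 iterations:

```python
def escape_count(z0, threshold=1000.0, max_iters=150):
    """Count applications of f(z) = z^2 - 0.8 + 0.157i before |z| exceeds
    the threshold; return max_iters if it never does. The threshold for
    "big" is our assumption, not a value given in the post."""
    z = z0
    for n in range(max_iters):
        if abs(z) > threshold:
            return n
        z = z * z + (-0.8 + 0.157j)
    return max_iters

# A coarse grid over the square from -1.6 - 1.6i to 1.6 + 1.6i; mapping each
# count to a brightness (150, "never got big", becoming white) gives the image.
counts = [[escape_count(complex(-1.6 + 3.2 * c / 79, -1.6 + 3.2 * r / 79))
           for c in range(80)] for r in range(80)]
```

With this cutoff, \(z_0=10\) escapes after just 2 applications, while \(z_0=1\) is still small after 150.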

The initial picture is interesting enough, but zooming in tells more of the story. We see that points that get big fast are always located near points that take much longer to get big, or never do.

*Chaos.* And like our "zooming-in" gif at the top of this post, we see that as we zoom in, similar shapes are repeated over and over, smaller and smaller.

Below we show the plane as it gets colored in. We see that points that get big fast are scattered throughout the region, as are points that don't get big after 150 iterations.

In both of our examples of chaos, there exist points that never converge. In our Newton's Method example, there are points that never converge to a solution of \(f(z)=0\), and in the current example there are points that never get "big." Never. These points form what is called a filled Julia Set. Tweaking some things gives the famous Mandelbrot Set.

So while mathematics brings structure to so much of our lives, it also brings chaos. It shows us behavior that, at present, is beyond our ability to predict and fully understand. And that's *awesome.*

Consider following us on Twitter; we'll tweet only when a new post is up.
