It’s no surprise that people who have never taken physics have heard of quantum mechanics (indeed, have heard phrases like “the [or Heisenberg’s] uncertainty principle” and “wave-particle duality”), that people who’ve never written a line of computer code have heard of, or even read books about, artificial intelligence, and that, in general, the scientific topics many non-scientists are aware of are not at all representative of their fields, let alone of the sciences. More interesting, at least to me, is the number of people who don’t know what calculus is about, let alone what differential equations are, who have nonetheless heard of things like fractals, strange attractors, and other topics from “chaos theory.”
It is much less surprising that many of the popular conceptions about such topics are wrong, right down to the basic nomenclature. Had the popular-science author James Gleick written his book at a different time and/or chosen different terminology for his extremely popular Chaos, I think people would be referring to “catastrophe theory”, “dynamical systems”, “nonlinear science”, or some other term scientists have used to describe complex systems/complexity. True, “chaos theory” is much less misleading than “catastrophe theory” would probably be, but it is misleading nonetheless.
Things like chaotic systems and fractals are interesting because they are NOT chaotic, in the sense that they aren’t characterized by randomness and often not even by unpredictability. In fact, fractals are about as far from randomness, chaos, and unpredictability as we can get. Although no universally accepted definition of fractals exists, there are some properties generally agreed to characterize them, one of the most fundamental being self-similarity. However, the relationships between fractals and “chaotic systems” are not straightforward, so I will take “chaos theory” here to refer to chaotic systems like the weather.
If fractals can be at least largely characterized by self-similarity, is there any comparable property that characterizes chaotic systems? Sure: order. That’s right, the single most important property of chaotic systems is non-randomness: regularity, order, structure, and other terms that are almost opposite in meaning to “chaos”. When you think about it a bit, this makes perfect sense. We generally think of chaos as describing things that have no order, that are random, that lack organization. But systems that are truly random are really, really boring. You can simulate the dynamics of many such systems very easily by flipping a coin (WARNING! Do NOT try this at home- go to a friend’s house or do it while driving for safety reasons). A (reasonably) fair coin behaves chaotically in the more usual sense: the outcome of each flip is entirely independent of the previous ones, and each outcome is entirely random.
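To make the coin’s “dynamics” concrete, here is a minimal Python sketch of a sequence of such flips (the function name and parameters are my own, purely for illustration):

```python
import random

def flip_coin(n, p=0.5, seed=42):
    """Simulate n independent flips of a coin with P(heads) = p."""
    rng = random.Random(seed)
    return ["H" if rng.random() < p else "T" for _ in range(n)]

flips = flip_coin(10)
# Each outcome is independent of all the others: knowing the first
# nine flips tells you nothing at all about the tenth.
```

That one probability `p` is the entire “theory” of the system; nothing else about its history matters.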
Let’s unpack that a bit. Another way of saying both of the above together is that the only “rule” determining the system’s outcomes is probabilistic (more specifically, every flip is a Bernoulli trial). However, I broke this statement into two for a reason. There are all kinds of ways in which things can be called “random” (and the word means different things even within probability theory, let alone the sciences). One kind of random process is something called a Markov process, which possesses something called the Markov property. Simply put, this describes a system in which the outcome states are basically like coin flips:
1) The system is discrete (the “states”, like coin flips, are like “units” or clearly individual outcomes, as opposed to the state of continuous systems, like an arrow shot into the air or a coin dropped from a building, which are constantly changing over time),
2) The system is random (I’m not even going to attempt a satisfactory definition of “random” here), and
3) Given the system’s state at a particular time (or step, trial, outcome, whatever), its next state doesn’t depend upon the system’s past states.
The difference between a system that “acts” like coin flips (Bernoulli trials) and stochastic systems with the Markov property all comes down to adding an “s”: a system with dynamics like those of coin tosses has future states that are independent of any other state, while one with the Markov property has future states that depend on the present state. This distinction is so important and has such extraordinary consequences that it is hard to overestimate. For example, the only thing you need to understand any system that acts like our flipped coin is a probability distribution (the Bernoulli distribution or, in more general cases, the binomial distribution). For the seemingly innocent system with the Markov property, we need to introduce things like probability vectors, stochastic matrices, difference equations, and possibly more (e.g., finite-state machines). All because the next state of the system is conditional ONLY upon the present, rather than independent of present and past states (like the flipped coin) or dependent upon past states and initial conditions (like most systems, including “chaotic” ones).
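To see why the Markov property drags in probability vectors and stochastic matrices, here is a small sketch. The two-state “weather” chain and its transition probabilities are invented for illustration only:

```python
# Rows of P sum to 1 (a "stochastic matrix"): entry P[i][j] is the
# probability of moving from state i to state j in one step.
P = [[0.9, 0.1],   # from "sunny": 90% stay sunny, 10% turn rainy
     [0.5, 0.5]]   # from "rainy": 50% turn sunny, 50% stay rainy

def step(v, P):
    """Advance a probability vector one step: v' = v P."""
    n = len(v)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

v = [1.0, 0.0]            # start certainly "sunny"
for _ in range(50):
    v = step(v, P)
# v has converged to the chain's stationary distribution,
# [5/6, 1/6] for this particular matrix.
```

Where the coin needed a single number, even this tiny two-state chain needs a whole matrix plus a vector recurrence, which is exactly the qualitative jump in machinery described above.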
Notice that this surprising qualitative change in complexity comes from making a random process not “purely” random. A single condition is added, and we get an entirely different picture. Of course, it is possible to have boring systems that are characterized by order, structure, predictability, etc., too. Computers, for example, are really boring as systems. We built them to do exactly what we want them to, so we compartmentalized their hardware by function, made the electrodynamics of the physics of bits purely an engineering problem, and relied on a formal (mathematical) system/language that was around long before computers to build them (indeed, we had models of computers before computers existed). Interestingly, even though a computer is about as perfect an exemplar of a deterministic, reductionist system as there is, we can create programs that use deterministic rules to simulate/model systems that behave “chaotically” and unpredictably.
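The standard textbook illustration of this point (my example, not one from the text above) is the logistic map: a one-line deterministic rule whose output nonetheless looks random:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x): fully deterministic."""
    return r * x * (1.0 - x)

x = 0.2
orbit = []
for _ in range(20):
    x = logistic(x)
    orbit.append(x)
# Every value follows exactly from the previous one by the same
# formula, yet the sequence looks as irregular as noise.
```

Run it twice from the same starting value and you get the identical orbit; there is no randomness anywhere in the program.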
And back we come to “chaotic systems.” Simply put, “chaotic systems” are systems which behave in almost orderly ways. For example, a simple “chaotic system” might behave in a simple, regular way, like cinderblocks piled on a metal support beam, until some critical point at which the system radically shifts (in our example, the beam buckles and everything crashes to the ground). More complicated systems may forever oscillate within some interval or circle around some values, continuously tracing out similar, but never identical, paths. Others can do this and, in addition, may suddenly switch to a different kind of almost-identically-repeating cycle. So what makes these systems interesting, not “chaotic”, and different from boring non-“chaotic” systems? A lot, but simplistically the reasons all relate to how these systems seem to exhibit order, yet the kind of order they exhibit is both extremely sensitive to (initial) conditions and only “almost” ordered/regular. By “almost”, I mean that these systems can 1) actually behave just like “boring” orderly systems for a while, until some extremely tiny change results in a completely different state or behavior, and/or 2) exhibit patterns of behavior that, when graphed, seem very structured and ordered because they tend to fluctuate around certain values or within certain intervals, yet predicting what the precise value will be at a given time is difficult or impossible.
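Point 1), the extreme sensitivity to initial conditions, can be demonstrated with a deterministic iterated rule. Below, the map x -> 4x(1-x) (a standard example I am supplying, not one from the text) is run from two starting points that differ by one part in ten billion:

```python
def f(x):
    """A deterministic iterated map known to be chaotic on [0, 1]."""
    return 4.0 * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-10   # two almost identical initial conditions
diffs = []
for _ in range(60):
    a, b = f(a), f(b)
    diffs.append(abs(a - b))
# The gap roughly doubles each step until the two trajectories
# bear no resemblance to one another.
```

An error far below any realistic measurement precision is enough to destroy long-term prediction, which is exactly why weather forecasts degrade so quickly.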
To wrap up, let me give an example of a “chaotic system” that is interesting only because it should be boring: a swinging pendulum. This is what is called a 1-body problem, because in classical physics its position and momentum at any time t are determined using only Newton’s law and knowledge of the system’s mass and acceleration (in 2-body, 3-body, …, n-body problems one is dealing with interacting “parts” or bodies; in the classic example involving the motion of the Moon around the Earth as the Earth moves around the Sun, the interaction is the gravitational attraction between these “bodies”). It’s boring. The thing just starts at some position and swings. It’s hard to even imagine what COULD be “chaotic” about it. Yet if we take Newton’s law F = ma and adapt it to describe a pendulum (namely, accounting for arc length and motion), we wind up with a model we can’t solve. With additional information or numerical methods we can certainly find the state of a particular system with particular initial conditions at particular times, but even though this simple system reduces to a single equation that (were it solved) could tell us everything we could wish to know about it, we can’t solve that equation in terms of elementary functions.
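To make “numerical methods can find answers” concrete: the pendulum’s equation of motion, θ'' = -(g/L) sin θ, can be stepped forward in time even though it has no elementary closed-form solution. A minimal sketch, with step size and parameters chosen purely for illustration:

```python
import math

def pendulum_step(theta, omega, dt, g=9.81, L=1.0):
    """One semi-implicit Euler step for theta'' = -(g/L)*sin(theta)."""
    omega += -(g / L) * math.sin(theta) * dt   # update angular velocity
    theta += omega * dt                        # then update angle
    return theta, omega

theta, omega = math.radians(30.0), 0.0   # released from 30 degrees, at rest
for _ in range(10000):                   # integrate 10 seconds of motion
    theta, omega = pendulum_step(theta, omega, dt=0.001)
# We never "solved" the equation, yet we know the state at t = 10 s.
```

This is the trade the text describes: we give up a formula valid for all times and all initial conditions, and in exchange get the answer for one particular system at particular times.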