subjective randomness

People are extremely good at finding structure embedded in noise. This sensitivity to patterns and regularities is at the heart of many of the inductive leaps characteristic of human cognition, such as identifying the words in a stream of sounds, or discovering the presence of a common cause underlying a set of events. These acts of everyday induction are quite different from the kind of inferences normally considered in machine learning and statistics: human cognition usually involves reaching strong conclusions on the basis of limited data, while many statistical analyses focus on the asymptotics of large samples. The ability to detect structure embedded in noise has a paradoxical character: while it is an excellent example of the kind of inference at which people excel but machines fail, it also seems to be the source of errors in tasks at which machines regularly succeed. For example, a common demonstration conducted in introductory psychology classes involves presenting students with two binary sequences of the same length, such as HHTHTHTT and HHHHHHHH, and asking them to judge which one seems more random. When students select the former, they are told that their judgments are irrational: the two sequences are equally random, since they have the same probability of being produced by a fair coin. In the real world, the sense that some random sequences seem more structured than others can lead people to a variety of erroneous inferences, whether in a casino or thinking about patterns of births and deaths in a hospital.

- Griffiths & Tenenbaum - From Algorithmic to Subjective Randomness

We perceive the more orderly pattern HHHHHHHH as a less likely outcome of the coin-tossing experiment, even though in reality it is exactly as likely as the other pattern, HHTHTHTT.
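
To make the equality concrete, here is a minimal Python sketch (my own illustration, not taken from the paper) that computes the probability of each exact sequence under independent fair-coin tosses:

```python
def sequence_probability(seq, p_heads=0.5):
    """Probability of observing this exact H/T sequence from independent tosses."""
    prob = 1.0
    for toss in seq:
        prob *= p_heads if toss == "H" else (1.0 - p_heads)
    return prob

for seq in ("HHHHHHHH", "HHTHTHTT"):
    print(seq, sequence_probability(seq))  # both print 0.00390625, i.e. 1/256
```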

Why do we expect all uniform random processes (i.e. experiments with uniform probability distributions) to generate visually disordered outcomes? Recall that the second law of thermodynamics dictates that the entropy of an isolated system tends to increase over time. In other words, an isolated system constantly evolves towards its more likely states. Such states are usually the more disorderly-looking ones, simply because far more configurations look disordered than look ordered, so it is not surprising that we developed this expectation. Most of what is perceived to be random (i.e. entropy-driven) in nature does indeed result in visual disorder.
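
The counting step can be made explicit. In the sketch below (my own simplification), the number of heads in eight tosses stands in for a macrostate: every individual sequence (microstate) is equally likely, but the mixed macrostates contain far more sequences than the all-heads one, which is why a "typical" outcome looks disordered:

```python
from math import comb

n = 8
for k in range(n + 1):
    # comb(n, k) distinct sequences (microstates) share the macrostate "k heads";
    # each individual sequence still has probability (1/2)**n = 1/256.
    print(f"{k} heads: {comb(n, k):3d} sequences, "
          f"total probability {comb(n, k) / 2**n:.4f}")
```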

The paper (from which I extracted the above quotation) suggests that our subjective interpretation of randomness is more in line with what is called algorithmic complexity (i.e. greater complexity is equated with greater randomness). This observation is not surprising either. Why? Because the more disorderly-looking patterns *tend* to have higher algorithmic complexity. (I put "tend" in italics because a pattern may be algorithmically simple but nevertheless look disordered.)
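
Algorithmic (Kolmogorov) complexity is not computable, but compressed length is a common crude upper-bound proxy for it. The sketch below (my own illustration, not the model used in the paper) compresses a very regular sequence, a periodic one, and a pseudo-randomly generated one; the disordered-looking sequence compresses the worst, i.e. gets the highest proxy complexity:

```python
import random
import zlib

def compressed_length(s: str) -> int:
    """zlib-compressed size in bytes: a crude proxy for algorithmic complexity."""
    return len(zlib.compress(s.encode("ascii"), level=9))

ordered  = "H" * 256                                         # all heads
periodic = "HT" * 128                                        # regular alternation
shuffled = "".join(random.choice("HT") for _ in range(256))  # disordered-looking

for name, s in (("ordered", ordered), ("periodic", periodic), ("shuffled", shuffled)):
    print(f"{name:8s} -> {compressed_length(s)} bytes")
```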

There is a small caveat though. In some rare cases, the more likely states that an isolated system evolves towards may not look disorderly at all. In fact, the final equilibrium state may have a lot of visual structure. Here is a nice example:

Individual particles such as atoms often arrange into a crystal because their mutual attraction lowers their total energy. In contrast, entropy usually favors a disordered arrangement, like that of molecules in a liquid. But researchers long ago found, in simulations and experiments, that spheres without any attraction also crystallize when they are packed densely enough. This entropy-driven crystallization occurs because the crystal leaves each sphere with some space to rattle around. In contrast, a random arrangement becomes "jammed" into a rigid network of spheres in contact with their neighbors. The entropy of the few "rattlers" that are still free to move can't make up for the rigidity of the rest of the spheres.

Read this for further details.