Flip a fair coin four times. Consider the flips, if any, that come immediately *after* a heads. Since you’re flipping a fair coin—you reason—you should see no pattern in the flips after a heads, because flips are independent. However, you have heard of “hot streaks” and of random events becoming “due,” so you record your results.

You flip H **T** H **H** and record “tails” and “heads” (the flips after a heads are bolded). You flip T T H **T** and record “tails” from the 4th position. And so on.

For each of these result sets, compute the percentage of heads flips.

In Scala,

```
import scala.util.Random.nextBoolean
// Consider true = Heads, false = Tails.
val numberOfFlips = 4
def flipCoins() = Seq.fill(numberOfFlips)(nextBoolean)
// Consider the results _after_ a heads is flipped
def resultsAfterAHeads(l: Seq[Boolean]) = l.sliding(2).collect {
  case Seq(true, n) => n
}.toSeq
// Determine the % of flips that are heads in a non-empty list
def percentHeads(l: Seq[Boolean]) = {
  val (heads, tails) = l.partition(identity)
  heads.length.toDouble / (heads.length + tails.length)
}
// Flip the coins, collect the results after a heads, compute % heads
def trial = resultsAfterAHeads(flipCoins()) match {
  case Seq() => None
  case nonEmpty => Some(percentHeads(nonEmpty))
}
```

Run this trial 1,000,000 times, and compute the mean of the percentage of heads.

```
def runTrials = Stream.continually(trial).flatten
def mean(s: Seq[Double]) = s.sum / s.length
mean(runTrials.take(1000000))
// res0: Double = 0.4050016666669707
```

How could this be? After a heads is flipped, another heads follows only about 41% of the time?

Perhaps four flips per set isn’t enough? You try 10.

```
val numberOfFlips = 10
// ...
mean(runTrials.take(1000000))
// res1: Double = 0.44557722142809525
```

With two million samples of empirical evidence that you can exploit randomness, you head to the race tracks…

Obviously there’s no bias in a fair coin. What’s actually happening is that we’re treating the 4-flip trial as the unit of analysis, which over-represents tails in several of the cases. Consider the trials in which exactly two heads are flipped. Writing *p(H|H)* for the observed proportion of heads given a previous heads, the possible outcomes are: TTHH (1), THTH (0), THHT (½), HTTH (0), HTHT (0), HHTT (½), for an average *p(H|H)* of ⅓. The one-heads cases are worse: their average *p(H|H)* is 0. Averaging each trial equally, regardless of how many after-a-heads flips it contains, is what drags the mean below 50%.
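This can be checked exhaustively rather than by simulation. A minimal sketch (reusing the `resultsAfterAHeads` logic from the code above) that enumerates all 2⁴ = 16 equally likely sequences and averages *p(H|H)* over the sequences where it is defined:

```scala
// Enumerate all 2^4 = 16 equally likely flip sequences.
val sequences = Seq.fill(4)(Seq(true, false))
  .foldLeft(Seq(Seq.empty[Boolean])) { (acc, flips) =>
    for (s <- acc; f <- flips) yield s :+ f
  }
// The flips that immediately follow a heads, as before.
def resultsAfterAHeads(l: Seq[Boolean]) = l.sliding(2).collect {
  case Seq(true, n) => n
}.toSeq
// Average p(H|H) over the 14 sequences where it is defined
// (TTTT and TTTH produce no flip after a heads).
val ps = sequences.map(resultsAfterAHeads).collect {
  case obs if obs.nonEmpty => obs.count(identity).toDouble / obs.length
}
val expected = ps.sum / ps.length
// expected = 17/42 ≈ 0.4048, matching the simulated 0.405.
```

The exact value 17/42 is why the million-trial simulation hovers around 0.405.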

Still not convinced? Read more about this “effect” and its implications for the Gambler’s Fallacy in *Surprised by the Gambler’s and Hot Hand Fallacies? A Truth in the Law of Small Numbers* by Joshua Benjamin Miller & Adam Sanjurjo.
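As a final check that the coin itself is unbiased, change the unit of analysis: pool every flip-after-a-heads from all trials into one collection before computing the percentage. A sketch using the same definitions as above (the ≈ 0.5 figure is the expectation for a fair coin, not a number from the original experiment):

```scala
import scala.util.Random.nextBoolean

val numberOfFlips = 4
def flipCoins() = Seq.fill(numberOfFlips)(nextBoolean)
def resultsAfterAHeads(l: Seq[Boolean]) = l.sliding(2).collect {
  case Seq(true, n) => n
}.toSeq

// Pool the after-a-heads flips from every trial into one collection,
// weighting each observed flip (not each trial) equally.
val pooled = Seq.fill(1000000)(resultsAfterAHeads(flipCoins())).flatten
val pooledPercent = pooled.count(identity).toDouble / pooled.length
// ≈ 0.5: each flip after a heads is itself a fair, independent flip.
```

Weighting observations instead of trials removes the small-sample bias entirely.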