Some of the most dependable algorithms we have rely on coin flips, which is an odd fact about computer science that isn't discussed enough outside the field. These are not symbolic methods; the machine is fed actual randomness as fuel. In the 1940s, the first program ever run on ENIAC simulated what would happen inside a hydrogen bomb using random numbers. That wasn't a peculiarity of the era. It established a model that has never really gone away.
The peculiar thing is how well it works. Take primality testing. The brute-force method, trying every potential divisor, would take longer than the age of the universe to determine whether a 300-digit number is prime.
| Topic | Key Information |
|---|---|
| Concept | Randomized algorithms in theoretical computer science |
| Origin | Mid-20th century, Manhattan Project simulations on ENIAC |
| Foundational Idea | Pierre de Fermat’s Little Theorem (17th century) |
| Famous Application | Miller-Rabin primality test (1976–1980) |
| Field of Study | Computational complexity theory, derandomization |
| Key Open Question | Whether P = BPP, i.e., whether every randomized algorithm can be replaced by a deterministic one of comparable speed |
| Notable Researchers | Michael Rabin, Avi Wigderson, Russell Impagliazzo, Eric Blais |
| Modern Use Cases | Cryptography, machine learning, network routing, Monte Carlo simulations in physics and finance |
| Unresolved Tension | Randomness works beautifully in practice but theorists suspect it shouldn’t be necessary |
| Status | Active research area, decades old, no resolution in sight |
Yet in the late 1970s, Michael Rabin and Gary Miller developed an algorithm, built on an observation Fermat made in the 17th century, that settles the question in seconds. The catch is that the algorithm never actually determines the answer. It guesses. It picks a random number, runs a check, and if the check fails, declares the input unquestionably composite. If the check passes, the input is most likely prime. Run it forty times, and your odds of being wrong are smaller than the chance of an asteroid hitting Earth before you finish reading this sentence.
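It may help to see what the test actually does. Here is a minimal Python sketch, assuming nothing beyond the standard library; real implementations (in OpenSSL or SymPy, say) add deterministic witness sets for small inputs and far more careful validation:

```python
# A minimal sketch of the Miller-Rabin test. Illustrative only,
# not production code.
import random

def is_probably_prime(n: int, rounds: int = 40) -> bool:
    """Return False if n is definitely composite, True if n is
    probably prime (error at most 4**-rounds for composite n)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):          # handle tiny inputs directly
        if n % p == 0:
            return n == p
    # Write n - 1 as 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # the coin flip: a random base
        x = pow(a, d, n)                 # Fermat-style exponentiation mod n
        if x in (1, n - 1):
            continue                     # this base sees nothing suspicious
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # base a witnessed compositeness
    return True                          # survived every round
```

The modular exponentiation `pow(a, d, n)` runs in time polynomial in the number of digits, which is why a 300-digit input takes milliseconds rather than eons. Rabin's analysis bounds each round's error by 1/4, so forty independent rounds leave at most 4⁻⁴⁰, on the order of 10⁻²⁴.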
Which is fine, except that it annoys people. Theorists especially. Complexity researchers believe randomness shouldn't be necessary: with enough cleverness, a deterministic algorithm should be able to do anything a randomized one can. This question, known as P versus BPP, has sat unanswered on whiteboards in Princeton, Tel Aviv, and Berkeley for the better part of forty years. Most experts believe the two classes are equal. No one has been able to prove it.
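For readers who want the question stated precisely, the standard textbook definition runs as follows; the 2/3 and 1/3 thresholds are conventions rather than essentials, since repetition amplifies any fixed gap:

```latex
% A language L is in BPP when some polynomial-time randomized
% machine M decides it with two-sided error bounded away from 1/2.
L \in \mathrm{BPP} \iff \exists\, M \text{ (poly-time, randomized)}:
\begin{cases}
x \in L    & \implies \Pr[M(x)\ \text{accepts}] \geq \tfrac{2}{3}, \\
x \notin L & \implies \Pr[M(x)\ \text{accepts}] \leq \tfrac{1}{3}.
\end{cases}
```

The containment P ⊆ BPP is immediate, since a deterministic machine is just a randomized one that ignores its coins; P = BPP would mean the coins can always be stripped out at only polynomial cost.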

Years ago, I read a paper by Avi Wigderson in which he described the "derandomization" project almost wistfully, as if someone were dismantling a beautiful machine just to prove they could build a duller one that did the same job. The truth is that no one really understands why randomness helps so much. In the words of Rahul Santhanam at Oxford, "pure randomness is somehow giving you a handle on hidden structure." You're not probing the problem. You're not even looking at it. And yet your darts keep hitting the target.
Meanwhile, the applications have multiplied. Weather simulation, online polling, drug-trial assignment, credit-card encryption, and packet-routing protocols all lean on random or pseudorandom numbers in one way or another. Most of these don't use true randomness, of course. They use deterministic algorithms called pseudorandom generators, designed to produce outputs that look random enough to fool the next stage of the computation. That some serious systems now harvest entropy from quantum processes, or from the radio static between stations, feels like a small victory for physics over arithmetic.
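To make "deterministic but random-looking" concrete, here is a toy generator in Python. The linear congruential recurrence and its constants are the classic ones from Numerical Recipes; the class name is mine, and nothing about this is cryptographically safe (real systems use generators like ChaCha20, and Python's own `random` module uses the Mersenne Twister):

```python
# A toy pseudorandom generator: pure arithmetic, no randomness at all.
class ToyPRG:
    """Linear congruential generator. Same seed, same 'random' stream."""
    MODULUS = 2**32
    MULTIPLIER = 1664525
    INCREMENT = 1013904223

    def __init__(self, seed: int):
        self.state = seed % self.MODULUS

    def next_value(self) -> int:
        # One deterministic step; the output stream nonetheless
        # passes simple statistical tests of "randomness".
        self.state = (self.MULTIPLIER * self.state + self.INCREMENT) % self.MODULUS
        return self.state

gen = ToyPRG(seed=42)
print([gen.next_value() % 100 for _ in range(5)])  # looks random, isn't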
The deeper question hasn't budged, though. The hay is still hiding in the haystack: computer scientists know that almost any random choice will work for these algorithms, yet they cannot point to a particular one that does. Watching the field circle this problem for decades, it's hard not to feel the universe is keeping a small joke to itself. Maybe randomness isn't really necessary. Maybe it is. The proof, if it ever comes, will probably arrive the way these things usually do: quietly, in a paper nobody expected, written by someone who happened to look at the question from an angle nobody had tried.

