Research

The Randomised Algorithm That Changed Computer Science — and the Decades-Long Quest to Replace It With Something Deterministic

By Brenda Rodriguez · May 10, 2026

Some of the most dependable algorithms we have rely on coin flips, an odd fact about computer science that isn't discussed enough outside the field. Not symbolic randomness: actual randomness, fed to the machine as fuel. In the 1940s, one of the first programs ever to run on ENIAC used random numbers to simulate what would happen inside a hydrogen bomb. That wasn't a peculiarity of the era. It established a model that has never truly disappeared.

The peculiar thing is how well it works. Take primality testing. The brute-force method, trying every potential divisor, would take longer than the age of the universe to determine whether a 300-digit number is prime.
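To make the scale concrete, here is a minimal sketch of that brute-force approach, trial division. The function name and structure are illustrative, not from the article; the point is that the loop runs on the order of the square root of n times, which for a 300-digit number is roughly 10^150 iterations.

```python
def is_prime_trial(n: int) -> bool:
    """Naive primality test: try every candidate divisor up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # found a divisor, so n is composite
        d += 1
    return True
```

Fine for small inputs; hopeless at cryptographic sizes.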

Concept: Randomised algorithms in theoretical computer science
Origin: Mid-20th century, Manhattan Project simulations on ENIAC
Foundational idea: Fermat's little theorem (17th century)
Famous application: Miller-Rabin primality test (1976–1980)
Field of study: Computational complexity theory, derandomization
Key open question: Whether P = BPP, i.e. whether every randomised algorithm can be replaced with a deterministic one of comparable speed
Notable researchers: Michael Rabin, Avi Wigderson, Russell Impagliazzo, Eric Blais
Modern use cases: Cryptography, machine learning, network routing, Monte Carlo simulations in physics and finance
Unresolved tension: Randomness works beautifully in practice, but theorists suspect it shouldn't be necessary
Status: Active research area, decades old, no resolution in sight

In the late 1970s, however, Michael Rabin and Gary Miller developed an algorithm, built on an observation Fermat made in the 17th century, that solves the problem in seconds. The catch is that the algorithm cannot determine the answer outright. It guesses. It picks a random number, performs a check, and if the check fails, the input is unquestionably composite. If it passes, the input is most likely prime. Run it forty times, and the chance of being wrong is smaller than the likelihood of an asteroid hitting Earth before you finish reading this sentence.
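The guess-and-check loop described above can be sketched as follows. This is a standard Miller-Rabin implementation, not code from the article; each round either proves the input composite or leaves it "probably prime", and the error probability shrinks by at least a factor of four per round.

```python
import random

def is_probably_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin: False means definitely composite,
    True means prime with error probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)  # the random "coin flip"
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

With `rounds=40` the error bound is 4^-40, which is where the asteroid comparison comes from.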

Which is acceptable, except that it annoys people. Particularly theorists. Complexity researchers believe randomness shouldn't be required: with enough cleverness, a deterministic algorithm should be able to accomplish anything a randomised one can. For the better part of forty years, this well-known P versus BPP question has sat unanswered on whiteboards in Princeton, Tel Aviv, and Berkeley. Most experts believe the two classes are equal. No one has been able to prove it.


Years ago, I read a paper by Avi Wigderson in which he described the "derandomization" project almost wistfully, as if someone were trying to disassemble a beautiful machine just to prove they could build a duller one that did the same job. The truth is that no one really understands why randomness is so effective. As Rahul Santhanam at Oxford puts it, "pure randomness is somehow giving you a handle on hidden structure." You're not probing the problem. You're not even looking at it. And yet your darts keep hitting the target.

Meanwhile, the applications have multiplied. Weather simulation, online polling, drug trial randomisation, credit card encryption, and packet routing protocols all rely on random or pseudorandom numbers in one way or another. Most of these, of course, don't employ true randomness. They use deterministic algorithms called pseudorandom generators, designed to produce outputs that look random enough to fool the next step of the computation. That some serious systems now extract entropy from quantum processes, or from radio static between stations, feels like a minor triumph of physics over arithmetic.
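The defining property of a pseudorandom generator, determinism behind a random-looking surface, is easy to demonstrate. The sketch below uses Python's built-in Mersenne Twister purely as an illustration; it is a pseudorandom generator in this sense but is not cryptographically secure.

```python
import random

# Two generators seeded identically produce byte-for-byte the same
# "random" stream: the output only looks random, it is fully determined
# by the seed.
g1 = random.Random(42)
g2 = random.Random(42)

seq1 = [g1.randrange(100) for _ in range(5)]
seq2 = [g2.randrange(100) for _ in range(5)]

print(seq1 == seq2)  # prints True
```

Systems that need unpredictability rather than mere statistical randomness, such as key generation, instead draw on OS-gathered entropy (in Python, `os.urandom` or the `secrets` module).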

The deeper question hasn't changed, though. The hay is still in the haystack. Computer scientists know that almost any random choice will work for these algorithms, yet they cannot say in advance which particular one will. Watching the field circle this issue for decades, it's hard not to feel that the universe is keeping a little joke to itself. Perhaps randomness isn't really necessary. Perhaps it is. The proof, if it ever comes, will probably arrive the way these things usually do: quietly, in a paper nobody expected, written by someone who happened to look at the question from an angle nobody had tried.

Brenda Rodriguez

Brenda Rodriguez is a doctoral research student in computer science at Stanford University with a passion for mathematics and computing. She studies the intricate relationship between theory, algorithms, and applied mathematics, regularly working through recent scholarly articles and deconstructing difficult concepts with accuracy and clarity. As Senior Editor at cheraghchi.info, Brenda covers the latest advancements in computing and mathematics research, making cutting-edge concepts accessible to inquisitive minds worldwide. When she's not buried in research papers or conducting experiments, she recharges outdoors, whether hiking trails or simply taking in the fresh air.

About

The research published here sits at the boundary of theoretical computer science, coding theory, information theory, and cryptography. The central questions driving this work are mathematical in nature: what are the fundamental limits of reliable communication over noisy channels? How much information can be protected against adversarial tampering? How can high-dimensional sparse signals be recovered from few measurements? How does randomness help — or hinder — efficient computation?
These questions matter both as deep mathematical problems and as foundations for practical systems in data storage, communications, privacy, and security.

Disclaimer

This website makes research papers, preprints, and manuscripts accessible for scholarly and instructional purposes. Although every attempt is made to ensure accuracy, research findings are subject to revision, correction, and peer review. The final published versions of preprints and manuscripts may differ from those posted here; for reference and citation purposes, readers should consult the official published versions. A paper's appearance on this website does not constitute endorsement by any journal, conference, or publisher.

No Expert Guidance
This website does not provide legal, financial, investment, medical, or other professional advice. Its discussions of applications in communications, cryptography, data security, and computer systems are theoretical and scholarly. They should not be used as a guide for operational, financial, or commercial decisions. Readers who need professional advice should consult a qualified professional.

Financial Disclosure

Research carried out and published through this website received partial funding from the US National Science Foundation under grants NSF CCF-2107345 and NSF CCF-2006455. This funding supports academic research activities only and does not constitute a financial stake in any commercial product, business, or technology.
This website doesn’t accept sponsored content, run advertisements, or get paid for highlighting, endorsing, or linking to any goods, services, or businesses. Any external links are not endorsements or commercial relationships; they are only included for academic reference and convenience.
The author has no financial stake in, and receives no compensation from, any business or product that may be discussed or cited in research published on this website. Any significant change to this will be made publicly known.

© 2026 ThemeSphere. Designed by ThemeSphere.
