A tornado rips through Oklahoma a few weeks after a butterfly flaps its wings somewhere over the Amazon. By now the image is so worn it reads like a coffee-mug cliché. Spend time with the people who actually study these systems, though, and the metaphor stops feeling cute. It starts to feel like a warning.
Scientists have been running into chaos for more than fifty years. In the early 1960s, sitting at his computer, Edward Lorenz discovered that the smallest rounding error in a weather simulation could produce a drastically different forecast a few days later. That accident reshaped meteorology, biology, and finance. The textbook lesson stuck: chaotic systems are deterministic in theory but practically impossible to predict. Talk to mathematicians today and they seem to have made their peace with this. Mostly.
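For readers who want to see the effect rather than take it on faith, here is a minimal sketch in Python. It uses the standard textbook Lorenz-63 equations and a crude Euler integrator, neither of which comes from this article; the only point is that two runs of the same deterministic system, separated by a rounding-sized nudge, stop agreeing after a short while.

```python
# Minimal sketch of Lorenz's observation: two runs of one deterministic
# system, differing only by a tiny perturbation, drift apart quickly.
# The Lorenz-63 equations and parameters below are the standard textbook
# choices, not something specified in the article.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one step with simple Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])        # "true" initial condition
b = a + np.array([1e-6, 0.0, 0.0])   # same state, rounded slightly differently

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.6f}")
```

The separation starts at one part in a million and grows roughly exponentially until it is as large as the attractor itself, which is the butterfly effect in three lines of arithmetic.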
| Item | Detail |
|---|---|
| Concept | Chaos Theory & Predictive Algorithms |
| Origin | Edward Lorenz, 1960s — the “butterfly effect” |
| Recent Study | RIT team paper in Nature’s Scientific Reports, 2025 |
| Lead Researchers | Adam Giammarese, Kamal Rana, Nishant Malik |
| Method | Tree-based machine learning (decision trees) |
| Comparison Method | Neural network reservoir computing |
| Key Advantage | Fewer parameters, smaller datasets, more interpretable |
| Adjacent Frontier | Undecidability in physical systems |
| Notable Voice | Toby Cubitt, University College London |
| Applications | Climate, weather, finance, neuroscience, encryption |
| Co-author (late) | Erik Bollt, Clarkson University |
What has changed recently is the toolkit. Working under associate professor Nishant Malik at Rochester Institute of Technology, Adam Giammarese and Kamal Rana argued, in a paper published in Nature’s Scientific Reports last year, that you do not need a large neural network to forecast a chaotic system. A decision tree will do: the kind of algorithm a graduate student could write on a Tuesday afternoon. In an interview tied to the paper, Giammarese said, “Forecasting chaos is an impossible problem,” before adding the crucial part: “but a good enough forecast, built on a transparent, lightweight model, might be the more useful thing anyway.”
This is counterintuitive enough to be hard to ignore. For the past few years the machine learning narrative has been dominated by scale: more parameters, more GPUs, more electricity. Decision trees are the opposite. They are old. They are explainable. They run on a laptop. And despite all the satellites overhead, weather and climate datasets are actually quite small compared with what neural networks typically consume. Sometimes the older tool is simply the right fit for the problem.
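To make the idea concrete, here is a toy sketch of tree-based next-step forecasting, not a reconstruction of the RIT pipeline (whose exact features, data, and training setup are not described here). It generates a chaotic series from the logistic map as a stand-in system, delay-embeds it, fits scikit-learn’s DecisionTreeRegressor, and then lets the tree run freely on its own predictions.

```python
# Toy illustration of tree-based forecasting of a chaotic series.
# The logistic map, the lag count, and the tree depth are illustrative
# choices, not the RIT team's actual setup.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Chaotic data from the logistic map x_{n+1} = r * x_n * (1 - x_n), r = 3.9.
r, n_points = 3.9, 2000
series = np.empty(n_points)
series[0] = 0.5
for i in range(1, n_points):
    series[i] = r * series[i - 1] * (1 - series[i - 1])

# Delay embedding: predict the next value from the previous `lags` values.
lags = 3
X = np.column_stack([series[i : n_points - lags + i] for i in range(lags)])
y = series[lags:]

split = 1500
tree = DecisionTreeRegressor(max_depth=8).fit(X[:split], y[:split])

# Free-running forecast: feed the tree its own predictions and watch it drift.
window = list(series[split : split + lags])
for step in range(1, 21):
    pred = tree.predict(np.array(window[-lags:]).reshape(1, -1))[0]
    actual = series[split + lags + step - 1]
    window.append(pred)
    print(f"step {step:2d}  predicted {pred:.4f}  actual {actual:.4f}  error {abs(pred - actual):.4f}")
```

The forecast tracks the true series for a handful of steps and then diverges, which is the “good enough, short horizon” trade-off in miniature; and the fitted tree can be inspected branch by branch, which is the interpretability argument in miniature too.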

The ground gets stranger, though, the deeper you go. In an 1814 essay, Pierre-Simon Laplace imagined a demon that could predict the entire future if it had full knowledge of every particle in the universe. Quantum mechanics delivered the first blow to that dream. Chaos theory bruised what was left. Undecidability is the third blow, and physicists are still absorbing it. According to Toby Cubitt of University College London, there are questions about a system’s future that are simply unanswerable, no matter how godlike your view of the system. Not “hard to answer.” Not “we need a bigger computer.” Unanswerable. Catalan mathematician Eva Miranda, who works on similar terrain, calls it next-level chaos.
That distinction matters more than it may seem. Chaos says you cannot predict because you cannot measure precisely enough. Undecidability says the answer has no computable form at all. It is the difference between a locked door and a door that was never built.
This is what makes the RIT work, and research on the related topic known as chaotic learning, feel quietly significant. If long-horizon forecasting is off the table, the next best thing is a model that is honest about what it can and cannot see, and small enough that a working scientist can examine its internal workings. AI investors will tell you that bigger is always better. Climate modelers, who have spent the last sixty years battling butterflies, know otherwise.
Watching this unfold, I get the sense that the field is rediscovering something old. Brute force is not always the answer. Humility gets rewarded: a smaller model, a more transparent assumption, a willingness to say that the forecast holds for ten days rather than a hundred. The butterfly keeps flapping. We are finally learning to listen on shorter timescales.

