Life’s unpredictability is constant. The future remains a mystery, and even our grasp on the past and present outside our direct experience is limited. Uncertainty has been described as the ‘conscious awareness of ignorance’—whether it’s about tomorrow’s weather, future sports champions, the climate in 2100, or the details of our distant ancestors.
In everyday conversations, we often describe uncertainty using terms such as “could,” “might,” or “is likely to” occur. But these expressions can be misleading. In 1961, when John F. Kennedy, the newly inaugurated U.S. president, learned of a CIA-initiated plan to invade communist Cuba, he sought an assessment from his military leaders. They reported a 30% probability of success—effectively a 70% likelihood of failure—yet this was communicated to Kennedy as “a fair chance.” The operation proceeded as the Bay of Pigs invasion and ended disastrously. Today there are standards for translating such verbal expressions of uncertainty into more precise figures. In the UK intelligence community, for instance, the term ‘likely’ corresponds to a probability between 55% and 75% (refer to go.nature.com/3vhu5zc).
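Such a convention amounts to a simple lookup table between words and numeric ranges. The sketch below is illustrative only: the 55–75% range for ‘likely’ comes from the article, while the other entries follow the published UK ‘probability yardstick’ and should be treated as approximate assumptions.

```python
# Illustrative mapping of verbal uncertainty terms to probability ranges.
# Only the 'likely' range (55-75%) is taken from the article; the other
# rows are assumed from the UK PHIA probability yardstick and may differ
# slightly from any given edition of that standard.
YARDSTICK = {
    "remote chance": (0.00, 0.05),
    "highly unlikely": (0.10, 0.20),
    "unlikely": (0.25, 0.35),
    "realistic possibility": (0.40, 0.50),
    "likely": (0.55, 0.75),
    "highly likely": (0.80, 0.90),
    "almost certain": (0.95, 1.00),
}

def describe(p):
    """Return the verbal term whose range contains probability p, if any."""
    for term, (lo, hi) in YARDSTICK.items():
        if lo <= p <= hi:
            return term
    return "no standard term"

print(describe(0.70))  # → 'likely': the Bay of Pigs failure estimate
```

On such a scale, the generals’ 70% probability of failure would have been reported as ‘likely’ rather than the far vaguer “a fair chance.”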
Placing numerical values on uncertainty introduces us to the domain of probability, a mathematical concept widely applied across various fields today. In any scientific journal, you’re likely to encounter terms like P values, confidence intervals, and perhaps Bayesian posterior distributions, all rooted in probabilistic calculations.
Yet, the numerical probabilities used in scientific papers, weather forecasts, sports predictions, or health risk assessments are not objective truths but constructs based on personal or collective judgments and often questionable assumptions. Moreover, they seldom estimate an actual ‘true’ quantity. In reality, probabilities hardly ‘exist’ in most contexts.
The Latecomer of Chance
Probability made a relatively late entry into mathematics. Despite the ancient practice of gambling with knucklebones and dice, it wasn’t until the 1650s, when French mathematicians Blaise Pascal and Pierre de Fermat began exchanging letters, that a thorough analysis of ‘chance’ events was initiated. Since then, probability has permeated numerous fields such as finance, astronomy, and law, not to mention gambling.
To understand the elusive nature of probability, consider its use in modern weather forecasting. Meteorologists predict metrics like temperature, wind speed, and rainfall, and often assign a probability to precipitation—say, a 70% chance at a specific time and location. The first three metrics can be verified against reality; you can go out and measure them. But there’s no real-world counterpart for the 70% probability—it either rains, or it doesn’t.
As philosopher Ian Hacking highlighted, probability is “Janus-faced”: it addresses both chance and ignorance. If you’re asked the likelihood of a coin landing heads and you respond “50-50,” that’s one thing. But if the coin is flipped and hidden, and you’re asked again, the answer remains “50-50” despite the outcome already being determined—now it’s not about chance, but your lack of knowledge. Both scenarios use numerical probabilities.
This brings us to another point: even with a statistical model, there’s always a layer of subjective assumptions—for a coin toss, it’s the belief in two equally probable outcomes. I sometimes demonstrate this with a two-headed coin to audiences, illustrating that initial trust can be misguided.
Subjectivity in Science
My viewpoint is that any practical application of probability involves subjective judgments. This doesn’t mean I can whimsically assign any probability to my beliefs—I wouldn’t last long as a probability assessor if I claimed a 99.9% chance of being able to fly by jumping off my roof. Probabilities and their underlying assumptions are ultimately tested against reality. However, this doesn’t render the probabilities themselves objective.
Some assumptions in probability are more defensible than others. For instance, if I’ve examined a coin thoroughly before it’s flipped onto a hard, chaotic surface, I’d feel more confident in a 50-50 prediction than if a dubious figure flicked a coin casually. But this scrutiny applies universally, even in scientific settings where we might be tempted to view probabilities as more objective.
Consider how probability figures in a real-world scientific context. During the COVID-19 pandemic, the RECOVERY trials in the UK tested treatments on hospitalized patients. In one study, more than 6,000 participants were randomly given either standard care or standard care plus dexamethasone, a low-cost steroid. Among those on ventilators, patients who received dexamethasone had a 29% lower daily risk of death than those on standard care alone, with a 95% confidence interval of 19–49% and a P value of 0.0001, or 0.01%.
This analysis is standard, but the exact confidence interval and P value depend on more than just the null hypothesis. They also hinge on all of the model’s assumptions, such as independence of observations—that no external factors cause similar outcomes in patients treated in the same place or at the same time. Yet many such factors exist, from the treating hospital to evolving care protocols. Similarly, the model assumes that each participant in a given group had the same underlying probability of surviving 28 days, when in fact that probability will vary for numerous reasons.
These assumptions don’t automatically invalidate the analysis. In this case, the clear results meant that even a model accounting for varying underlying risks would likely not alter the overall conclusions much. However, if the results were less definitive, a thorough analysis of the model’s sensitivity to different assumptions would be appropriate.
This underscores the often-quoted saying, “All models are wrong, but some are useful.” The dexamethasone study was particularly valuable because its conclusions led to changes in clinical practices that saved countless lives. However, the probabilities on which these conclusions were based were not ‘true’ but rather the product of subjective, albeit reasonable, assumptions and judgments.
Exploring the Depths
So, are these numbers merely our subjective, potentially flawed estimates of some underlying ‘true’ probability, an objective feature of reality?
Here, I must clarify that I’m not referring to the quantum realm. In the subatomic domain, mathematics suggests that events can occur spontaneously with certain probabilities (though some interpretations argue that these probabilities represent relationships with other objects or observers, rather than intrinsic properties of quantum entities). However, this has little impact on observable events in our larger, everyday world.
I’ll also sidestep age-old debates on whether the non-quantum world is fundamentally deterministic and whether we possess free will to alter events. Regardless of the answers, defining an objective probability remains a challenge.
Various attempts to define this concept over the years have often seemed either flawed or limited. These include the frequentist approach, which defines probability as the theoretical frequency of an event occurring in an infinite sequence of identical situations—like repeatedly conducting the same clinical trial under identical conditions. This is somewhat unrealistic. UK statistician Ronald Fisher proposed thinking of a unique dataset as a sample from a hypothetical infinite population, but this is more a mental exercise than a reflection of reality. Another concept is the notion of propensity, which suggests a true underlying tendency for a specific event to occur in a given context, like experiencing a heart attack within the next decade. However, verifying such a propensity is practically impossible.
There are some highly controlled, repeatable scenarios with immense complexity that, although essentially deterministic, fit the frequentist model because they exhibit a predictable probability distribution over time. These include conventional randomizing devices like roulette wheels, shuffled cards, spun coins, thrown dice, and lottery balls, as well as pseudo-random number generators that employ nonlinear, chaotic algorithms to produce sequences that pass tests for randomness.
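For such devices, the frequentist picture can be made concrete: repeat the experiment many times and watch the empirical frequency settle down. A minimal sketch, using Python’s pseudo-random number generator as a stand-in for a fair die (an assumption of this illustration, not a claim from the article):

```python
import random

# Frequentist sketch: for a repeatable randomizing device, the empirical
# frequency of an event stabilizes as trials accumulate. Here a
# pseudo-random generator stands in for a fair six-sided die.
random.seed(42)

def empirical_freq(n_trials):
    """Fraction of n_trials simulated die rolls that come up six."""
    return sum(random.randint(1, 6) == 6 for _ in range(n_trials)) / n_trials

for n in (100, 10_000, 1_000_000):
    print(n, round(empirical_freq(n), 4))
# The frequencies drift toward 1/6 ≈ 0.1667 -- but only because this
# device admits, in principle, unlimited identical repetitions.
```

The frequentist definition works here precisely because the ‘infinite sequence of identical situations’ is a reasonable idealization; for a one-off clinical trial, it is not.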
In the natural realm, we might include the behavior of large groups of gas molecules, which, even under Newtonian physics, adhere to statistical mechanics laws; and genetics, where the vast complexity of chromosomal selection and recombination leads to stable inheritance rates. In these specific cases, it might be reasonable to assume a pseudo-objective probability—’the’ probability, rather than ‘a’ (subjective) probability.
In virtually all other instances where probabilities are employed—from broad scientific fields to sports, economics, weather, climate, risk analysis, catastrophe modeling, and beyond—it’s not practical to regard our judgments as estimations of ‘true’ probabilities. Instead, these are scenarios where we attempt to quantify our personal or collective uncertainty through probabilities, based on our knowledge and judgment.
Questions of Judgment
This discussion inevitably raises more questions. How do we define subjective probability? And why are the laws of probability considered reasonable if they’re based on constructs we essentially invent? This topic has been debated in academic circles for nearly a century, yet no consensus has been reached.
One of the earliest attempts to tackle this question came in 1926 from Frank Ramsey, a mathematician at the University of Cambridge, UK, who remains one of the historical figures I’d most like to meet. Ramsey was a brilliant thinker whose contributions to probability, mathematics, and economics are still regarded as foundational. He worked primarily in the mornings, spending his afternoons with his wife and lover, playing tennis, drinking, and hosting lively parties where he laughed “like a hippopotamus” (he was a large man, weighing 108 kilograms). Tragically, he died in 1930 at the age of 26, likely from leptospirosis contracted after swimming in the River Cam, according to his biographer Cheryl Misak.
Ramsey demonstrated that all probability laws could be derived from expressed preferences for specific gambles. Outcomes are assigned utilities, and the value of a gamble is summarized by its expected utility, which in turn is governed by subjective numbers expressing partial belief—our personal probabilities. However, this interpretation also requires specifying these utility values. More recently, it has been shown that probability laws can be derived by acting to maximize expected performance when using a proper scoring rule, as illustrated in the quiz “How ignorant am I?”
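The scoring-rule idea can be shown in a few lines. Under the quadratic (Brier) score, a standard example of a proper scoring rule, your expected penalty is minimized by reporting your genuine degree of belief rather than any other number. The sketch below checks this numerically for a belief of 0.7 (the specific numbers are illustrative):

```python
# Why a "proper scoring rule" elicits honest probabilities: under the
# quadratic (Brier) penalty, if your true belief that an event will occur
# is p, your expected penalty is minimized by reporting q = p.

def expected_brier_penalty(q, p):
    """Expected squared-error penalty for reporting q when, in your
    judgment, the event occurs with probability p."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p = 0.7  # your genuine degree of belief
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=lambda q: expected_brier_penalty(q, p))
print(best)  # → 0.7: honest reporting minimizes the expected penalty
```

A short calculation confirms the grid search: the expected penalty is a quadratic in q whose minimum sits exactly at q = p, which is what makes the rule ‘proper.’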
Defining probability often remains ambiguous. In his 1941–42 paper “The Applications of Probability to Cryptography,” for example, Alan Turing provided a working definition stating that “the probability of an event based on certain evidence is the proportion of cases in which that event may be expected to happen given that evidence.” This acknowledges that practical probabilities are based on expectations—human judgments. But by “cases,” does Turing mean instances of the same observation, or of the same judgments?
The latter has some similarities with the frequentist definition of objective probability, just replacing the class of repeated similar observations with a class of repeated similar subjective judgments. In this view, if the probability of rain is judged to be 70%, this places it in the set of occasions on which the forecaster assigns a 70% probability. The event itself is expected to occur in 70% of such occasions. This is probably my preferred definition. But the ambiguity of probability is starkly illustrated by the fact that, after nearly four centuries, many people still disagree with this interpretation.
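This reading can be tested empirically: gather every occasion on which a forecaster announced a given probability and check how often the event actually happened. The simulation below (the forecaster and data are invented for illustration) shows what a well-calibrated forecaster looks like under this definition:

```python
import random

# Calibration sketch: a simulated, perfectly calibrated forecaster.
# Whenever it announces probability p, the event really does occur with
# long-run frequency close to p. All data here are synthetic.
random.seed(1)

levels = [0.3, 0.5, 0.7]
forecasts = []
for _ in range(30_000):
    p = random.choice(levels)                  # the stated probability
    forecasts.append((p, random.random() < p)) # did the event occur?

def observed_rate(level):
    """How often events occurred on occasions assigned this probability."""
    hits = [occurred for stated, occurred in forecasts if stated == level]
    return sum(hits) / len(hits)

for level in levels:
    print(level, round(observed_rate(level), 2))
# Each observed rate lands near the stated probability.
```

On this view, a ‘70% chance of rain’ is vindicated not by any single day’s weather but by the forecaster being right on roughly 70% of all the occasions they said ‘70%.’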
A Practical Outlook
When I was a student in the 1970s, my mentor, statistician Adrian Smith, was translating Italian actuary Bruno de Finetti’s “Theory of Probability.” De Finetti, who developed ideas of subjective probability around the same time as Ramsey but independently, was quite a different character. In contrast to Ramsey’s committed socialism, de Finetti was an enthusiastic supporter of Italian dictator Benito Mussolini’s style of fascism in his youth, though he later changed his views. That book starts with the provocative statement: “probability does not exist,” a notion that has deeply influenced my thinking for over fifty years.
In practical terms, however, we may not need to conclude whether objective ‘chances’ truly exist in the everyday non-quantum world. We might instead adopt a pragmatic approach. Somewhat ironically, de Finetti himself provided the most compelling argument for this method in his 1931 work on ‘exchangeability,’ which led to a famous theorem named after him. A sequence of events is considered exchangeable if our subjective probability for each sequence remains unaffected by the order of our observations. De Finetti ingeniously proved that this assumption is mathematically equivalent to behaving as if the events are independent, each with some true underlying ‘chance’ of occurring, and that our uncertainty about that unknown chance is expressed by a subjective, epistemic probability distribution. This is remarkable: it shows that, starting from a specific, but purely subjective, expression of convictions, we should act as if events were driven by objective chances.
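De Finetti’s equivalence can be illustrated directly. Mix independent coin flips over an unknown chance p, with (as an assumption of this sketch) a uniform prior on p: the probability of any particular sequence then depends only on its counts of heads and tails, never on their order, which is exactly exchangeability.

```python
from math import gamma

# De Finetti sketch: i.i.d. flips with unknown chance p, p given a
# uniform (Beta(1,1)) prior. The marginal probability of a 0/1 sequence
# depends only on its counts -- so the sequence is exchangeable.

def beta_fn(a, b):
    """The Beta function B(a, b) = Γ(a)Γ(b)/Γ(a+b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def seq_probability(seq):
    """P(sequence) = ∫₀¹ p^heads (1-p)^tails dp = B(heads+1, tails+1)."""
    heads = sum(seq)
    tails = len(seq) - heads
    return beta_fn(heads + 1, tails + 1)

print(seq_probability([1, 1, 0]))  # two heads, one tail
print(seq_probability([0, 1, 1]))  # same counts, different order: equal
```

Both orderings come out at 1/12: the subjective judgment of exchangeability behaves exactly as if an objective chance were at work behind the scenes.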
It’s astonishing that such a significant body of work, underpinning all of statistical science and much other scientific and economic activity, has emerged from such an elusive concept. Thus, I’ll conclude with my own aphorism. In our everyday world, probability probably does not exist—but it’s often useful to act as if it does.
This article is reproduced with permission and was first published on December 16, 2024.
Cameron Aldridge combines a scientific mind with a knack for storytelling. Passionate about discoveries and breakthroughs, Cameron unravels complex scientific advancements in a way that’s both informative and entertaining.