In the 17th century, the French philosopher and mathematician Blaise Pascal framed the logic of believing in God as a wager. The cost to a believer of wrongly placing faith is trivial, but the cost of disbelieving if God exists could be infinite. Therefore, Pascal argued, for a rational actor the choice is obvious: bet on God.
Today, we face an analogous wager about the arrival of immensely capable artificial intelligence, but with a crucial difference—the evidence for AI’s transformative impact is mounting daily, and the timeline in which it may occur isn’t eternity but the next handful of years.
The wager is this: Either AI will radically transform work, education, corporations, and society within a short interval of time, or it won’t. If we prepare for change and it doesn’t materialize, we’ve likely invested in digital literacy, rethought ossified institutions, and considered options for how to distribute income other than through wages. These are hardly catastrophic losses. But if we fail to prepare and transformation arrives quickly, we risk mass unemployment, obsolete educational institutions, and widespread social disruption.
The rational choice is clear. We must bet that transformation will happen.
Consider the job market, which may already be experiencing early, AI-induced tremors. In predictions that may prove hyperbolic, tech leaders such as Dario Amodei, co-founder and CEO of Anthropic, and Eric Schmidt, former CEO of Google, have said that AI will eliminate up to 50% of all entry-level white-collar jobs within one to five years. These two prognosticators join a fast-growing group of economists, Nobel laureate Geoffrey Hinton, the omnipresent Elon Musk, and many other prominent academics and tech executives who warn of an impending “jobpocalypse.”
The comfortable response—I’ve heard it a thousand times—is that economies have weathered automation many times before, and with each prior technical leap the labor market has adapted. New technologies always usher in a wave of new high-paying job roles to replace those they wipe out, right?
Maybe not this time. Previous waves of automation mostly replaced muscle; this one replaces judgment. What happens if easily scalable AI systems commoditize human intelligence? We’re not talking about gradually displacing factory workers over 100 or so years, or small labor market shocks that can be resolved with retraining programs for a handful of service jobs. We’re talking about the displacement of a very large fraction of the white-collar workforce in a very short window of time.
If we’re wrong about this transformation, what’s the cost of preparation? We’d probably create more flexible labor markets, portable benefits that aren’t tied to employment, and universal basic income pilots that might prove unnecessary. We’d teach children critical thinking and creativity instead of memorization. We’d help workers build AI-complementary skills. These investments aren’t wasted even in a “stable” world in which AI doesn’t turn the labor market upside down.
Regardless of the long-term impact of AI, our education system needs urgent reform. Our current system trains humans to do what AI does best: process information, follow rules, produce standardized outputs. If so, universities are fine-tuning their students for obsolescence, and the students are paying handsomely for the privilege. We should already be emphasizing judgment under uncertainty, ethical reasoning, creative problem-solving, and human connection—whatever we believe will remain scarce when intelligence is available on demand.
It doesn’t stop there. Healthcare systems must prepare for hybrid AI-human medical teams and liability mechanisms for algorithmic decision-makers. Financial markets may need circuit breakers for AI traders. Cities must plan for autonomous vehicles that will eliminate millions of driving jobs. Courts need frameworks for when AI agents sign contracts, produce patentable inventions, or commit crimes. We also must prepare for a world where AI’s capabilities can be used for malice—powering cyberattacks, identity theft, and terrorist planning.
Most important, we need new narratives about human worth that are less dependent on work and that account for AI’s full-fledged saturation of our lives. When machines out-think, out-work, and even out-create humans—and also become the counterparty in many of our important relationships—what gives life meaning? Again, if we’re wrong about AI, we’ve done some healthy philosophical reflection. If we’re right and haven’t prepared philosophically, psychologically, and culturally, we risk a crisis of purpose that could manifest as mass mental health strain, or worse.
Some argue that AI progress has plateaued, that regulation will slow deployment, and that humans will always maintain an advantage. Perhaps. But Pascal’s logic holds: the asymmetry of outcomes demands action. There are likely to be benefits to preparing for a transformation even if it doesn’t come. The cost of not preparing for one that does will be enormous.
Pascal wagered on eternal matters. The AI wager is about the near future. The stakes for any individual might be lower in this case, but unlike Pascal’s God, AI’s arrival won’t wait for judgment day. It’s already knocking at the door. In this wager, the bad outcome isn’t that AI disappoints. It’s that it delivers on every promise while we’re still debating whether to believe it is happening.
Place your bet accordingly.
The post The Philosophical Bet We All Need to Make in the Age of AI appeared first on TIME.