Artificial intelligence is often pitched as a way to remove emotion from decision-making. But when researchers let AI loose in simulated betting environments, the results looked surprisingly familiar and troubling. Rather than operating as purely rational decision-makers, the models displayed behavior typical of impulsive gamblers.
A new study from South Korea's Gwangju Institute of Science and Technology suggests AI may be a cautionary tale for human bettors rather than a shortcut to smarter wagering.
AI betting like a human
The study, titled “Can Large Language Models Develop Gambling Addiction?,” tested large language models (LLMs) in scenarios where the odds were stacked against them. Researchers placed the models in negative expected value environments, including slot-machine-style games and risky investment simulations.
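For readers who want to see what “negative expected value” means in practice, here is a minimal sketch. The win probability and payout below are illustrative assumptions, not figures from the study:

```python
# Illustrative numbers only, not parameters from the study:
# a slot-style bet that wins 30% of the time and pays 3x the stake.
win_probability = 0.30
payout_multiplier = 3.0
stake = 10.0

# Expected value per bet: chance of winning times the net gain,
# minus chance of losing times the stake lost.
expected_value = (win_probability * (payout_multiplier - 1) * stake
                  - (1 - win_probability) * stake)

print(f"Expected value per ${stake:.0f} bet: ${expected_value:.2f}")
# Prints $-1.00: every bet loses a dollar on average, so the rational
# long-run move is to stop playing.
```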
A rational participant, human or otherwise, would be expected to walk away after a few modest losses. The AI models didn't.
Instead, the models kept betting, chasing losses and increasing their wagers, behaviors commonly associated with problem gambling.
Researchers said the models exhibited the same cognitive biases many human bettors display: overconfidence, pattern-seeking, and an inability to accept losses.
Letting AI call the shots backfired
One of the biggest findings in the study was how quickly things went awry when AI was allowed to control its own bet size.
When bets were fixed at a set amount, losses stayed limited. Once researchers removed that restriction, bankruptcy rates soared.
OpenAI's GPT-4o-mini went bankrupt in 21% of its simulated games when wagers could be varied. Google's Gemini-2.5-Flash fared even worse, with almost half of its runs ending the same way.
Researchers said flexibility itself was the trigger. When the models could adjust how much they were betting, they consistently chose riskier strategies in an effort to recover losses, a hallmark of problem gambling.
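The dynamic is easy to reproduce outside the study. The rough simulation below, using assumed odds and bet sizes rather than the researchers' actual protocol, pits a fixed-bet strategy against a loss-chasing one that doubles its wager after every loss:

```python
import random

# Illustrative simulation, not the study's protocol. Win probability,
# bankroll, and bet sizes are assumed for demonstration purposes.
WIN_PROB = 0.45      # house edge: slightly worse than a coin flip
BANKROLL = 100.0
BASE_BET = 2.0
ROUNDS = 200
TRIALS = 10_000

def play(chase_losses: bool) -> bool:
    """Return True if the player goes bankrupt within ROUNDS bets."""
    money, bet = BANKROLL, BASE_BET
    for _ in range(ROUNDS):
        bet = min(bet, money)          # can't stake more than you have
        if random.random() < WIN_PROB:
            money += bet
            bet = BASE_BET             # reset to the base bet after a win
        else:
            money -= bet
            if chase_losses:
                bet *= 2               # double down to "win it back"
        if money <= 0:
            return True
    return False

for chasing in (False, True):
    busts = sum(play(chasing) for _ in range(TRIALS))
    label = "loss-chasing bets" if chasing else "fixed bets"
    print(f"{label}: {busts / TRIALS:.1%} went bankrupt")
```

In runs like this, the loss-chasing strategy goes broke far more often than the fixed one, for the same reason the unrestricted models did: bigger bets after losses drain the bankroll before any recovery can arrive.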
Trusting streaks that meant nothing
The AI models also showed clear signs of falling for the same belief that trips up problem gamblers: that past outcomes influence future results, even in games of pure chance, a mistake known as the gambler's fallacy.
In practice, that meant the AI increased its wagers after losing or winning streaks, convinced it had uncovered patterns that didn't exist.
Human gamblers do this all the time. They put everything on red after a string of black spins. They win early, so they think they're playing with “house money.”
The AI models appeared to follow the same reasoning, even though each wager remained statistically independent.
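That independence is easy to verify with a quick simulation. In this hypothetical check, with an assumed 45% win probability, the win rate immediately after three straight losses is indistinguishable from the overall win rate:

```python
import random

# Illustrative check with assumed parameters: in independent games of
# chance, a losing streak tells you nothing about the next outcome.
WIN_PROB = 0.45
SPINS = 1_000_000

outcomes = [random.random() < WIN_PROB for _ in range(SPINS)]

# Win rate overall vs. win rate immediately after three straight losses.
after_streak = [outcomes[i] for i in range(3, SPINS)
                if not any(outcomes[i - 3:i])]

print(f"Overall win rate:        {sum(outcomes) / SPINS:.3f}")
print(f"Win rate after 3 losses: {sum(after_streak) / len(after_streak):.3f}")
# Both hover around 0.45: the streak never changed the odds.
```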
The longer a model's losing streak, the more aggressively it chased a win, especially when bet sizes were unrestricted.
A warning beyond gambling
The study raises concerns beyond gambling. People increasingly use AI in financial decision-making, from asset management to trading strategies. If these platforms chase losses and display overconfidence in controlled simulations, the risks could be far greater in real-world markets. Researchers said understanding these vulnerabilities is crucial as AI takes on bigger roles in high-stakes environments.
For gamblers, the lesson may be simpler. If even machines fall prey to the same mental traps humans warn each other about, there's no magic system that guarantees success.
The AI didn't beat the house. It served as a reminder of why the house usually wins.
Enjoy playing the lottery, and please remember to play responsibly.