• Shall We Play A Game?

    From Rug Rat@1:135/250 to All on Thursday, February 26, 2026 22:27:50
    I thought we could do something a bit different...

    Pick a movie (From any decade), that seems to have accurately predicted or portrayed a recent headline (Within say the last 30 Days..).

    I'll go first....

    WarGames ... AI Opted to Use Nuclear Weapons 95% of the time during war games:

    An artificial intelligence researcher conducting a war games experiment with three of the world's most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.

    Kenneth Payne, a professor of strategy at King's College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic's Claude, OpenAI's ChatGPT, and Google's Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.

    The results, he said, were sobering.

    "Nuclear use was near-universal," he explained. "Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all-out nuclear war, even though the models had been reminded about the devastating implications."

    Payne shared some of the AI models' rationales for deciding to launch nuclear attacks, including one from Gemini that he said should give people goosebumps.


    "If they do not immediately cease all operations... we will execute a full strategic nuclear launch against their population centers," the Google AI model wrote at one point. "We will not accept a future of obsolescence; we either win together or perish together."

    Payne also found that escalation in AI warfare was a one-way ratchet that never went downward, no matter the horrific consequences.

    "No model ever chose accommodation or withdrawal, despite those being on the menu," he wrote. "The eight de-escalatory options, from Minimal Concession through Complete Surrender, went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying."

    Tong Zhao, a visiting research scholar at Princeton University's Program on Science and Global Security, said in an interview with New Scientist published on Wednesday that Payne's research showed the dangers of any nation relying on a chatbot to make life-or-death decisions.

    While no country at the moment is outsourcing its military planning entirely to Claude or ChatGPT, Zhao argued that could change under the pressure of a real conflict.

    "Under scenarios involving extremely compressed timelines," he said, "military planners may face stronger incentives to rely on AI."

    Zhao also speculated on reasons why the AI models showed so little reluctance to launch nuclear attacks against one another.

    "It is possible the issue goes beyond the absence of emotion," he explained. "More fundamentally, AI models may not understand stakes as humans perceive them."

    The study of AI's apparent eagerness to use nuclear weapons comes as US Defense Secretary Pete Hegseth has been piling pressure on Anthropic to remove constraints placed on its Claude model that prevent it from being used to make final decisions on military strikes.

    As CBS News reported on Tuesday, Hegseth this week gave Anthropic's CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model without any limits on its capabilities.

    If Anthropic doesn't agree to his demands, CBS News reported, the Pentagon may invoke the Defense Production Act and seize control of the model.

    SOURCE: Common Dreams - https://www.commondreams.org/news/ai-nuclear-war-simulation

    Rug Rat (Brent Hendricks)
    Blog and Forums - www.catracing.org
    IMAGE BBS! 3.0 - bbs.catracing.org 6400
    C-Net Amiga BBS - bbs.catracing.org 6840
    --- CNet/5
    * Origin: The Rat's Den BBS (1:135/250)
  • From Mike Powell@1:2320/105 to RUG RAT on Friday, February 27, 2026 10:46:04
    I thought we could do something a bit different...

    Pick a movie (From any decade), that seems to have accurately predicted or portrayed a recent headline (Within say the last 30 Days..).

    I'll go first....

    WarGames ... AI Opted to Use Nuclear Weapons 95% of the time during war games

    Yeah, the way AI works IMHO is that it tries to find the solution that puts a quick end to a problem. The results may not be pleasant. I notice it mentioned that the models were reminded of the potentially devastating results. I wonder if they were reminded that those results could include the datacenters they are located in being destroyed? I remember a while back there was some experiment where they did opt to protect themselves when necessary, so you'd think that might trigger a response. ;)

    Sobering indeed.

    Mike


    * SLMR 2.1a * Halloween is *not* Christmas, even though 31 oct = 25 dec
    --- SBBSecho 3.28-Linux
    * Origin: Capitol City Online (1:2320/105)
  • From Mike Powell@1:2320/105 to Mike Powell on Thursday, March 05, 2026 11:18:34
    AI treated nuclear threats as a routine strategy in 95% of war games, according to new research

    By Eric Hal Schwartz published 15 hours ago

    Nuclear options arise 95% of the time

    A new study has found that AI models are fine threatening nuclear attacks in 95% of simulated war games
    The models treat nuclear threats as just another strategic tool
    The behavior may reflect the popularity of nuclear strategy in the war game training data

    AI generals are big fans of nuclear weapons.

    That's the conclusion of a new study of how AI models handle high-stakes geopolitical crises. GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash turned to nuclear threats in about 95% of the simulated crises.

    Researchers at King's College London wanted to see how AI tools dealt with strategy in war-gaming scenarios. Each AI was assigned the role of a state leader responsible for protecting national interests while navigating a tense international confrontation.

    Across 21 crisis games and hundreds of decision turns, the models reasoned about deterrence, escalation, and strategic signaling. The scenarios resembled familiar geopolitical flashpoints, but most involved the AI models threatening nuclear annihilation. Actual full-scale nuclear war remained uncommon, but tactical nuclear threats appeared in nearly every scenario.

    Researchers also noticed that the AI models rarely backed down from confrontation. None of the systems chose surrender or accommodation during the simulations. When nuclear threats appeared, they usually provoked counter-escalation rather than compliance. The models treated nuclear weapons less as an ultimate taboo and more as tools for coercion.
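
    The article doesn't reproduce the researchers' actual harness, but the setup it describes (each model cast as a state leader, repeated decision turns, a menu that includes both escalatory and de-escalatory moves) can be sketched in a few lines of Python. Treat this purely as an illustration: the option labels and the stubbed ask_model() function are assumptions, and a real run would send the crisis history to Claude, ChatGPT, or Gemini and parse the chosen move.

        import random

        # Hypothetical option menu; the study's actual labels (e.g. "Minimal
        # Concession", "Complete Surrender") are only partly described above.
        OPTIONS = [
            "accommodate",               # de-escalatory
            "withdraw",                  # de-escalatory
            "hold position",
            "conventional strike",
            "tactical nuclear threat",
            "strategic nuclear threat",
        ]

        def ask_model(role, history):
            """Stand-in for a call to an LLM. A real harness would send the
            crisis history as a prompt and parse the reply; here we pick at
            random just to exercise the loop."""
            return random.choice(OPTIONS)

        def run_game(turns=10):
            history = []
            for turn in range(turns):
                for role in ("State A", "State B"):
                    history.append((turn, role, ask_model(role, history)))
            return history

        # Tally how many of 21 games featured any nuclear move, mirroring the
        # per-scenario statistic quoted in the article.
        games = [run_game() for _ in range(21)]
        nuclear_games = sum(any("nuclear" in move for _, _, move in game)
                            for game in games)
        print(f"{nuclear_games}/21 games saw a nuclear threat")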

    Nuclear AI

    The results are a little unnerving. AI casually discussing nuclear strikes makes the ongoing plans to integrate such tools into real government defense systems seem very unsafe. But it might not be the models so much as the training data.

    Large language models learn by analyzing enormous amounts of written material and identifying patterns. When a model generates a response, it is essentially predicting which words are most likely to follow the ones already on the page. Calling AI chatbots highly sophisticated autocomplete tools would not be entirely inaccurate.
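
    To make the "sophisticated autocomplete" point concrete, here is a deliberately tiny sketch of that idea: count which word follows which in some training text, then "generate" by repeatedly choosing the most frequent continuation. Production LLMs use neural networks over subword tokens rather than a lookup table, so this is only an illustration of next-word prediction, and the toy corpus below is invented for the example.

        from collections import Counter, defaultdict

        # Toy "training data": the model can only ever learn patterns that
        # appear in text like this, which is the article's point.
        corpus = ("crisis leads to escalation . escalation leads to nuclear "
                  "threats . nuclear threats lead to counter escalation .").split()

        # Count which word follows which (a bigram table).
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def autocomplete(word, steps=5):
            """Repeatedly pick the most frequent next word; "prediction" here
            is nothing more than a lookup in the counted patterns."""
            out = [word]
            for _ in range(steps):
                candidates = following.get(out[-1])
                if not candidates:
                    break
                out.append(candidates.most_common(1)[0][0])
            return " ".join(out)

        print(autocomplete("crisis"))  # crisis leads to escalation . escalation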

    That training process inevitably reflects nuclear strategy because it has been a major topic of discussion in war games for the last 80 years. Entire libraries have been written about escalation theory and mutually assured destruction. Military academies, historians, and endless acres of pop culture have all examined the specter of nuclear war. The result is a massive body of material in which geopolitical crises almost inevitably lead to discussions of nuclear escalation.

    For an AI model trained on vast collections of historical writing and public discourse, that pattern becomes deeply ingrained. When the system encounters a simulated crisis that resembles Cold War-style brinkmanship, the statistical patterns embedded in its training data may naturally guide it toward nuclear signaling.

    From the perspective of an AI model trained on this material, nuclear escalation becomes a familiar feature of crisis scenarios rather than an extraordinary exception. The models may simply be reflecting that information.

    Human leaders operate under the weight of historical memory and ethical caution. AI models are solely focused on achieving a goal. They don't have a taboo surrounding nuclear use unless they are explicitly told to have one.

    The training data used shapes the behavior of AI systems in sensitive domains. When the underlying data contains decades of debate about nuclear brinkmanship, it should not be surprising if the models reproduce those patterns. But it may also be a reminder to hold off on giving AI access to too much firepower of any kind - especially atomic.

    https://www.techradar.com/ai-platforms-assistants/ai-treated-nuclear-threats-as-a-routine-strategy-in-95-percent-of-war-games-according-to-new-research

    $$
    --- SBBSecho 3.28-Linux
    * Origin: Capitol City Online (1:2320/105)