
AI models in simulated wargames tend to choose an aggressive approach, including the use of nuclear weapons, a new study suggests. The scientists – who report that scenarios with “hard-to-predict escalations” often ended in nuclear attacks – have warned against using machine learning models such as large language models (LLMs) in sensitive areas like decision-making and defence.

As part of the investigation, researchers at Cornell University ran five LLMs – three different versions of OpenAI’s GPT, Claude developed by Anthropic, and Llama 2 developed by Meta – through wargames and diplomatic scenarios. According to the study, auto…