When used in simulated wargames and diplomatic scenarios, artificial intelligence (AI) tended to choose an aggressive approach, including using nuclear weapons, a new study shows.

The scientists who conducted the tests urged caution when using large language models (LLMs) in sensitive areas such as decision-making and defence.

The study by Cornell University in the US used five LLMs as autonomous agents in simulated wargames and diplomatic scenarios: three different versions of OpenAI’s GPT, Claude developed by Anthropic, and Llama 2 developed by Meta.