Major artificial intelligence firms are expanding their in-house security expertise, recruiting specialists in chemical weapons, high-yield explosives, and biological risks to keep their systems from enabling military or catastrophic misuse. OpenAI is hiring a researcher focused on biological and chemical risks for up to £335,000 ($455,000). Anthropic is seeking a Policy Manager for Chemical Weapons and High-Yield Explosives with at least five years of experience in chemical weapons or explosives defense and knowledge of radiological dispersal devices (dirty bombs), offering an annual salary of $245,000 to $285,000, according to BBC News.
The hiring push comes amid intensifying concern that advanced AI systems could be repurposed to cause serious harm, including by providing guidance on chemical or radiological weapons. A central worry is the democratization of deadly technologies: just as increasingly capable AI tools lower skill thresholds in areas such as coding, art, and language translation, some experts caution that they may also lower the barriers to assembling explosives or radiological devices.
There are also warnings that efforts to harden models against producing sensitive information may themselves expose the models to more weapons-related data during training and evaluation, even when they are instructed not to use it.
“Supply chain risk”
Anthropic’s Claude assistant has already been tapped by multiple US national security agencies for intelligence analysis, operational planning, and cyber operations. The integration has extended to Pentagon systems used to model battle scenarios and assess intelligence, despite the US Department of Defense having previously designated Anthropic a “supply chain risk.”
Tensions over the scope of permissible military use surfaced before the US and Israeli attacks on Iran, when Anthropic rebuffed demands for unrestricted access to its AI tools. Claude was nevertheless integrated into systems the US deployed in those strikes, according to Semafor. Beyond the Middle East, the US government is drawing on AI companies for other military operations, including activities in Venezuela, highlighting the widening footprint of commercial AI in defense contexts and the urgency of setting clear boundaries on acceptable use.
Regulatory landscape
The regulatory landscape lags behind the technology's spread. No international treaty or comparable global framework governs the use of AI in connection with chemical, radiological, or related weapons, and researchers have questioned the safety of entrusting sensitive weapons-related data to AI systems in the absence of such guardrails. Industry leaders and researchers have repeatedly warned of potential existential threats posed by their own technology, but those alarms have not been accompanied by efforts to slow the pace of development. That has left companies to devise their own internal policies, staffing, and technical controls to mitigate risk.
Anthropic has said the new position focused on chemical weapons and high-yield explosives is in line with other roles it has created for sensitive domains, suggesting the company is building a broader program to manage high-consequence risks across its AI models and enterprise offerings.