Tensions have been rising between AI safety initiatives and the practical application of AI in military contexts. The United States-based artificial intelligence (AI) firm Anthropic has posted a vacancy on LinkedIn for a Policy Manager covering chemical weapons and high-yield explosives, a role focused on managing how AI handles sensitive and dangerous information.
The role involves designing evaluations to assess AI capabilities in synthesizing chemical agents and energetic materials. Anthropic is not the only firm hiring for such a position: OpenAI earlier posted a similar vacancy, with the San Francisco-based company planning to recruit a researcher specializing in frontier biological and chemical risks.
OpenAI also previously launched a “Preparedness” team to identify and mitigate catastrophic risks, including those related to biological and chemical threats.
While these roles aim to build safeguards, experts warn of an inherent tension: training models to recognize weapons-related data requires exposing them to that information, which could potentially be misused.
In this regard, Anthropic co-founder and CEO Dario Amodei has stated that current AI technology is not ready for warfare and should not be used for such purposes. Anthropic has also faced legal friction with the US government over supply chain risk definitions and has fought to keep its technology out of autonomous weapons and mass surveillance.
Despite the company’s reservations, reports indicate that Anthropic’s AI model, Claude, is integrated into systems used by defense tech firms such as Palantir and is allegedly being utilized in active international conflicts.