Anthropic bolsters ‘responsible AI’ guardrails against chemical and explosive threat risks: Here’s why
The company says the manager's primary role is to design and implement evaluation methodologies for assessing AI model capabilities
Tensions have been rising between AI safety initiatives and the practical application of AI in military contexts. The United States-based artificial intelligence (AI) firm Anthropic has posted a vacancy on LinkedIn for a Policy Manager covering chemical weapons and high-yield explosives, a role focused on managing how AI handles sensitive and dangerous information.
The role involves designing evaluations to assess AI capabilities in synthesizing chemical agents and energetic materials. Anthropic is not the only firm hiring for such a position. Earlier, OpenAI, the San Francisco-based company, posted a similar vacancy seeking a researcher specializing in frontier biological and chemical risks.
OpenAI previously launched a similar “Preparedness” team to identify and mitigate catastrophic risks related to biological and chemical threats.
While these roles aim to build safeguards, experts warn that training models to recognize weapons data inherently exposes the AI to that information, which could potentially be misused.
In this regard, Anthropic co-founder and CEO Dario Amodei has stated that current AI technology is not ready for warfare and should not be used for such purposes. Anthropic has faced legal friction with the US government over supply chain risk definitions and has fought to keep its technology out of autonomous weapons and mass surveillance.
Despite the company’s reservations, reports indicate that Anthropic’s AI (Claude) is integrated into systems used by defense tech firms like Palantir and is allegedly utilized in active international conflicts.