Anthropic bolsters ‘responsible AI’ guardrails against chemical and explosive threat risks: Here’s why
The company says that the primary role of the manager is to design and implement evaluation methodologies for assessing AI model capabilities
Tensions have been rising between AI safety initiatives and the practical application of AI in military contexts. The United States-based artificial intelligence (AI) firm Anthropic has posted a vacancy on LinkedIn for a Policy Manager dealing with chemical weapons and high-yield explosives, a role focused on managing how AI handles sensitive and dangerous information.
The role involves designing evaluations to assess AI capabilities in synthesizing chemical agents and energetic materials. Anthropic is not the only firm hiring for such a role: OpenAI earlier posted a similar vacancy, with the San Francisco-based company planning to recruit a researcher specializing in frontier biological and chemical risks.
OpenAI previously launched a similar “Preparedness” team to identify and mitigate catastrophic risks related to biological and chemical threats.
While these roles aim to build safeguards, experts warn that training models to recognize weapon data inherently gives the AI access to that information, which could potentially be misused.
On the question of military use, Anthropic co-founder and CEO Dario Amodei has stated that current AI technology is not ready for warfare and should not be used for such purposes. Anthropic has faced legal friction with the US government over supply chain risk definitions and has fought to keep its technology out of autonomous weapons and mass surveillance.
Despite the company’s reservations, reports indicate that Anthropic’s AI (Claude) is integrated into systems used by defense tech firms like Palantir and is allegedly utilized in active international conflicts.