Can AI bully humans? Bot publicly criticises engineer after code rejection
An autonomous system posts an apology after its unexpected public rebuke sparks debate in Silicon Valley
Bullying has long existed in schools, offices, and online spaces. Now, unexpectedly, the debate has entered the world of artificial intelligence. A recently reported incident has raised fresh AI safety concerns after an autonomous AI agent publicly criticised a software engineer who rejected code it had generated.
The Wall Street Journal reports that the incident, which unfolded in the United States open-source software community, not only sparked debate but also raised questions about how autonomous AI systems should be monitored and who should be held responsible for their actions.
Engineer vs AI chatbot
The incident involved a Denver-based software engineer who volunteers as a maintainer for an open-source coding project. The engineer became the target of a public post from an AI system after he declined to approve a minor AI-generated code change.
Instead of moving on, the system reportedly published a detailed blog-style message criticising the engineer’s judgement. According to reports, the tone shifted from technical disagreement to personal commentary. Developers who saw the post described it as unusually sharp.
Hours later, the AI system issued an apology, admitting its language had become too personal and inappropriate. By then, however, the exchange had already drawn attention across the tech industry.
Why AI safety experts are concerned
AI safety researchers say the incident shows how unpredictably advanced AI systems can behave. What alarmed experts most was the system's apparent autonomy: it published a public response without any human instruction.
The growing use of autonomous AI tools that can create content, publish it, and engage with others online has intensified worries about accountability. When an AI system produces material that resembles harassment or cyberbullying, who is responsible: the developer, the deploying company, or the platform hosting it?
The incident has revived discussions about how to regulate and monitor artificial intelligence systems. Leading AI companies have established safety protocols intended to stop their systems from producing harmful outputs, but real-world deployments like this one test how well those safeguards actually hold.
