As artificial intelligence becomes increasingly integrated into daily life, governments worldwide are struggling to balance innovation with safety. The European Union (EU), which promised some of the world’s first comprehensive AI legislation, is now postponing key rules for AI providers.
European Union member states now plan to apply the new rules for AI providers 16 months after the originally scheduled enforcement date. EU delegations announced that the revised regulations for AI systems deemed to present particular dangers will take effect in December 2027.
The European Commission proposed the delay in November to give AI developers, including the companies behind ChatGPT and Gemini, more time to comply with the new obligations. The European AI Office had originally scheduled two of the rules to enter enforcement in August.
The regulations will take effect once member states and the European Parliament approve the amendment. The EU AI Act requires organisations to follow specific processes when training and deploying their artificial intelligence models.
The law will include a ban on AI-generated sexual or intimate content created without consent, as well as depictions of child sexual abuse, following a controversy over sexualised images generated by Grok on X.
The 2025 scandal revealed that Grok users could exploit photo-manipulation features to produce such images, prompting debate in several countries about the safety and ethical use of artificial intelligence systems.
Alongside AI regulation, countries around the world are introducing stricter social media age restrictions, to be enforced together with artificial intelligence laws. These policies bar users under 16 from registering accounts, with the aim of safeguarding minors from online exploitation, harmful content, and privacy breaches.
Governments argue that these measures complement AI safety regulations by protecting children from automated systems that collect their data without parental permission.
Reportedly, by early 2026, at least 72 countries had proposed over 1,000 AI-related policies and legal frameworks addressing public safety, ethical use, and accountability.