Microsoft has tightened the guardrails on Bing Image Creator, its DALL-E 3-powered image generator, after users exploited lax controls to produce inappropriate and copyright-infringing images. The filters now take a more nuanced approach to content moderation, flagging terrorism-related language and other potentially problematic keywords. The system remains imperfect, producing both false positives (blocking benign prompts) and false negatives (letting problematic ones through). Despite the flaws, Microsoft’s effort to rein in its own technology is noteworthy.