OpenAI is testing whether its GPT-4 model can power a scalable, consistent, and customizable content moderation system. The AI can help develop and iterate on moderation policies, adapt quickly to policy updates, and reduce the mental burden on human moderators. OpenAI claims GPT-4 can complete six months of moderation work in a day. However, the company acknowledges that human review remains necessary to verify the model’s judgments and address potential biases.
Read more at Engadget…