OpenAI has developed a “text watermarking method” to identify AI-generated text, aiming to address concerns about students using tools like ChatGPT to cheat. This method, along with another based on cryptographically signed metadata, is reported to be highly effective at flagging AI-generated content, even resisting localized tampering such as paraphrasing. OpenAI has also explored classifiers, similar to those used in email spam filters, to automatically identify AI-generated essays.

Despite these advances, the release of these tools is on hold. The watermarking technique struggles against global tampering, such as running the text through a translation service or rewording it with another AI tool. There is also concern about a disproportionate impact on non-native English speakers, who may rely on AI as a writing aid, and about stigmatizing legitimate educational use of these tools. These challenges highlight the delicate balance between leveraging AI for educational support and preserving academic integrity.
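OpenAI has not published the details of its scheme, but a common approach in the research literature is “green list” watermarking: generation is biased toward a pseudorandomly chosen subset of the vocabulary (derived from the previous token), and a detector checks whether tokens land in that subset far more often than chance. A minimal, purely illustrative sketch of that idea, not OpenAI's actual method, with a toy `w0…w49` vocabulary:

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Deterministically partition the vocabulary by hashing each token
    # together with the previous token; the first `fraction` is "green".
    ranked = sorted(
        vocab,
        key=lambda tok: hashlib.sha256((prev_token + "|" + tok).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Fraction of tokens that fall in their predecessor's green list:
    # ~0.5 for ordinary text, close to 1.0 for green-biased generation.
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)

# Toy "watermarked" generation: always pick a token from the green list,
# so the detector sees a green fraction of exactly 1.0.
vocab = [f"w{i}" for i in range(50)]
tokens = ["w0"]
for _ in range(30):
    tokens.append(min(green_list(tokens[-1], vocab)))
assert green_fraction(tokens, vocab) == 1.0
```

This also shows why paraphrasing or translation defeats the check: rewording replaces most tokens, pushing the green fraction back toward the chance level of 0.5.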
Read more at Tom’s Guide…