Then each researcher wrote an additional text in Bosnian, Czech, German, Latvian, Slovak, Spanish, or Swedish. Those texts were passed through either the AI translation tool DeepL or Google Translate to translate them into English. The team then used ChatGPT to generate two additional texts each, which they slightly tweaked in an effort to hide that they'd been AI-generated. One set was edited manually by the researchers, who reordered sentences and exchanged words, while another was rewritten using an AI paraphrasing tool called Quillbot. In the end, they had 54 documents to test the detection tools on.

They found that while the tools were good at identifying text written by a human (with 96% accuracy, on average), they fared more poorly when it came to spotting AI-generated text, especially when it had been edited. Although the tools identified ChatGPT text with 74% accuracy, this fell to 42% when the ChatGPT-generated text had been tweaked slightly.

These kinds of studies also highlight how outdated universities' current methods for assessing student work are, says Vitomir Kovanović, a senior lecturer who builds machine-learning and AI models at the University of South Australia, who was not involved in the project.

Daphne Ippolito, a senior research scientist at Google specializing in natural-language generation, who also did not work on the project, raises another concern.