This is the conclusion reached by the authors of a study whose results were published on arXiv, Cornell University's preprint server. Debora Weber-Wulff, a professor at the Berlin University of Applied Sciences (HTW Berlin), worked with a group of researchers to evaluate how well 14 detection tools could identify text written with OpenAI's ChatGPT.
The team found that all the tools they tested had a hard time identifying AI text that had been slightly altered by humans. As a result, all students needed to do was slightly adapt the essay generated by the neural network.
During the experiment, the researchers also found that the identification tools were excellent at recognizing human-written text (with an average accuracy of 96%). However, they performed much worse when it came to identifying AI content, especially if it was slightly edited.
While the tools identified ChatGPT text with 74% accuracy, that rate dropped to 42% when the text generated by ChatGPT was slightly modified.
If automated detection systems are going to be used in educational settings, it’s important to understand the false-positive rate, says Daphne Ippolito, a senior researcher at Google who specializes in natural language generation. She worries about students being falsely accused of using neural networks. “If too many AI-generated texts are being passed off as human-written, then the detection system is useless,” she says.
Compilatio, which makes one of the tools tested by the researchers, says that
such systems are only one part of the learning process; they simply flag suspicious passages in the text
According to Compilatio, responsibility for verifying the authorship of papers lies with the educational institutions and teachers who check them. Turnitin's chief product officer, Annie Chechitelli, added that the system simply alerts the user to the presence of AI text, highlighting areas where further discussion of the work may be needed.
OpenAI's website warns that systems designed to detect neural network-generated content are "far from foolproof." But such claims haven't stopped companies from rushing out products that promise to do the job, says Tom Goldstein, an associate professor at the University of Maryland. Experts say
the very idea of identifying text written using neural networks is meaningless
"Don't try to detect AI - make its use unproblematic," the study authors emphasized.
WORLDWIDE IDEA
The issue of labeling AI-generated content is being actively discussed around the world. In Russia, State Duma deputy Anton Nemkin put forward such an initiative in May. The Russian Technological University has also proposed introducing labeling: its staff advised preparing a program to protect critical infrastructure from possible cyberattacks involving such systems. The initiative stems from the fact that
the widespread use of neural networks, even for entertainment purposes, poses a danger to the safety of users' personal data
Thus, on June 5, deputies began developing the concept of a law on labeling neural network content, reported Anton Gorelkin, Deputy Chairman of the State Duma Committee on Information Policy, Information Technology and Communications. The purpose of the bill is to reduce the risks of using products created with AI technologies. The technical details of implementing the idea have not yet been announced, and Gorelkin had not responded to RSpectr's request at the time of writing.
In early June, it became known that the European Commission intends to oblige technology companies to label content created by neural networks. European Commission Vice President Věra Jourová stated that new AI technologies can be useful, but they have “dark sides, with new risks and negative consequences for society.”