Academics can’t distinguish fake peer review articles from real ones, study finds

Robots can write peer reviews of academic journal papers that pass for human work, because academics can't reliably tell the two apart.

Inside Higher Ed reports:

Using automatic text generation software, computer scientists at Italy’s University of Trieste created a series of fake peer reviews of genuine journal papers and asked academics of different levels of seniority to say whether they agreed with their recommendations to accept for publication or not.

In a quarter of cases, academics said they agreed with the fake review’s conclusions, even though they were entirely made up of computer-generated gobbledygook — or, rather, sentences picked at random from a selection of peer reviews taken from subjects as diverse as brain science, ecology and ornithology.

The new study – humorously titled “Your Paper has been Accepted, Rejected, or Whatever” – says

the actors involved in the publishing process are often driven by incentives which may, and increasingly do, undermine the quality of published work, especially in the presence of unethical conduits. …  While a tool of this kind cannot possibly deceive any rigorous editorial procedure, it could nevertheless find a role in several questionable scenarios and magnify the scale of scholarly frauds.

The key to fooling someone with a fake review is combining “sentences which are not too specific, but credible,” Trieste Prof. Eric Medvet said.

One such sentence, Medvet said: “It would be good if you can also talk about the importance of establishing some good shared benchmarks.”
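To make the trick concrete, here is a minimal, purely illustrative sketch of the random-assembly idea the study describes: drawing generic, non-specific sentences from a pool of real review excerpts and stitching them into a plausible-sounding "review." The sentence pool, function name, and verdict choices below are invented for this example and are not taken from the researchers' actual software.

```python
import random

# Hypothetical pool of generic review sentences (the first is Medvet's example;
# the rest are invented for illustration).
GENERIC_REVIEW_SENTENCES = [
    "It would be good if you can also talk about the importance of "
    "establishing some good shared benchmarks.",
    "The related work section could be expanded to better position the contribution.",
    "The experimental evaluation is interesting, but more datasets would strengthen the claims.",
    "Some notation is introduced without definition and should be clarified.",
]


def assemble_fake_review(n_sentences: int = 3, seed: int = None) -> str:
    """Pick a few generic sentences at random and append a random verdict."""
    rng = random.Random(seed)
    k = min(n_sentences, len(GENERIC_REVIEW_SENTENCES))
    body = rng.sample(GENERIC_REVIEW_SENTENCES, k=k)
    verdict = rng.choice(
        ["Accept with minor revisions.", "Reject.", "Major revision required."]
    )
    return " ".join(body) + " " + verdict


if __name__ == "__main__":
    print(assemble_fake_review(seed=42))
```

Because every sentence is credible but uncommitted to any particular paper, a hurried or inattentive reader can mistake the output for a genuine, if shallow, review.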

Read the article.
