
Science Journals To Use AI Software To Detect Image Manipulation; Here’s How

Science will deploy the Proofig platform as part of its image screening processes.
Cover Image Source: Pexels | Photo by Tara Winstead

Science, the research publisher, has announced that all of its journals will soon use commercial artificial-intelligence software to automate the detection of improperly manipulated images in authors' submissions. In an editorial, the group's editor-in-chief, Holden Thorp, confirmed the development and said that Science will deploy the Proofig platform as part of its image-screening processes. The platform claims to use AI to detect image reuse and duplication.


Thorp confirmed that the technology was trialed for several months and that it can clearly detect problematic figures, including imagery manipulated to mislead readers, before papers are published. The platform will work alongside the text plagiarism-detection software the group already uses. With this, Science joins journal publishers such as Cell Press (owned by Elsevier) in using software to vet submitted images. Until now, the journal's staff had been checking and vetting images manually before publication.

As per a Cosmos Magazine report, Science will use Proofig to identify items of concern and issue a 'please explain' notice to alert authors. During the software's trial, authors who received such notices "generally provided a satisfactory response." However, some submissions simply stopped progressing through the editorial process.

The Proofig platform uses AI to identify the images within a research manuscript's PDF. According to an Ars Technica report, the software then scans them all for overlapping features, even when images have been cropped or rotated.
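Proofig's actual algorithm is proprietary, and the report above only describes its behavior. As a much-simplified illustration of the general idea of flagging near-identical image content, the Python sketch below compares two images via an average perceptual hash; real tools additionally handle cropping, rotation, and partial overlap, which this toy example does not.

```python
# Toy sketch of duplicate-image flagging via average hashing.
# NOTE: this is NOT Proofig's method; it only illustrates the basic
# concept of detecting near-identical image content.

def average_hash(pixels):
    """One bit per pixel: set when the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Count the bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_duplicated(img_a, img_b, threshold=2):
    """Flag two same-sized images as suspiciously similar when their
    hashes differ in at most `threshold` bits."""
    return hamming_distance(average_hash(img_a), average_hash(img_b)) <= threshold

# Two 4x4 grayscale "images": the second is the first with minor noise.
original = [[10, 200, 10, 200],
            [200, 10, 200, 10],
            [10, 200, 10, 200],
            [200, 10, 200, 10]]
near_copy = [[12, 198, 11, 201],
             [199, 12, 202, 9],
             [11, 203, 10, 198],
             [201, 9, 197, 12]]
unrelated = [[5, 5, 5, 5],
             [5, 5, 250, 250],
             [5, 5, 250, 250],
             [5, 5, 5, 5]]

print(looks_duplicated(original, near_copy))   # True
print(looks_duplicated(original, unrelated))   # False
```

Because the hash depends only on brightness relative to the image's own mean, small noise does not change it, which is the same robustness idea, in miniature, that screening tools rely on.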


In Science's process, the final output, a report highlighting potential similarities between figures and showing the extent of any overlap, is sent to the journal's editors. Any decision about what to do with the findings is left to them.

Image manipulation has been a concern in academic circles for a long time. In the early 2000s, Mike Rossner, then managing editor of the Journal of Cell Biology, implemented an image-vetting policy amid a rising number of digital submissions and the growing accessibility of image-editing software, per the Cosmos report. In 2004, he and his colleague Ken Yamada published guidelines for image vetting.

Further, about 10 years ago, Enrico Bucci, now an adjunct professor at Temple University in the US, ran a software analysis of more than 1,300 open-access papers. About 5.7% of them contained at least one instance of suspected image manipulation, per the report.

AI features are regularly being used (representational image) | Getty Images | Photo by Justin Sullivan

The risk of image manipulation in research has grown in recent years. In 2016, leading scientific-integrity consultant Elisabeth Bik reported that around 4% of papers in a sample of 20,000 contained manipulated images. In a 2022 opinion piece for The New York Times, she warned that, beyond its use in accelerating review processes, AI could also be put to malicious ends. She told Cosmos that she reports at least one example of image manipulation every day.

While Science's move is a significant first step, the Proofig software has limitations. It can catch cases of image manipulation, but fraudsters could evade detection if they work out how the software operates, a caveat acknowledged on the platform's own website. Still, much like antivirus software, such systems are continually updated and are expected to only get better.