May 25, 2024

Medical Trend

Medical News and Medical Resources

ChatGPT’s Role in Retracted SCI Paper Reveals AI Academic Misconduct Surge


If you copy someone’s answers during an exam, the scary part is not getting an answer wrong but copying their name too. This seemingly comical situation has actually occurred, and it happened in an SCI paper.

Recently, the academic journal Physica Scripta retracted a newly published SCI paper [1].

This paper, published in Physica Scripta on August 9, 2023, underwent peer review and presented a new solution to a complex mathematical equation. At first glance, nothing seemed amiss.

 


 

However, astute readers discovered a peculiar and out-of-place phrase in the paper – “Regenerate response.”

 


 

In fact, this phrase is the label of a button in the ChatGPT interface; clicking it prompts ChatGPT to generate a new response.

ChatGPT, released by OpenAI on November 30, 2022, is a conversational AI model driven by a large language model. It learns from human language and can carry on contextual, human-like conversations. Since its launch, ChatGPT’s capabilities have attracted significant attention, and published studies have shown that it can generate highly realistic fraudulent scientific papers, raising serious concerns about the integrity of scientific research and the credibility of published work.

After readers pointed out the issue, Kim Eggleton, the peer review and research integrity officer at IOP Publishing, the publisher of Physica Scripta, stated that the paper’s authors admitted to using ChatGPT to assist in drafting the manuscript. However, this anomaly was not detected during the two-month peer review and typesetting process.

Now, the publisher has decided to retract the paper because the authors did not declare their use of ChatGPT for paper writing during submission, violating the journal’s ethical policies.

 

 

This is just the tip of the iceberg

The retracted paper is not the only one that used ChatGPT without proper disclosure and passed peer review.

Since April of this year, the prominent academic watchdog website PubPeer has flagged more than a dozen papers containing phrases such as “Regenerate response” or “As an AI language model, I…” – distinctive ChatGPT markers. For example, a paper published on August 3 in the journal Resources Policy included the sentence: “Please note that as an AI language model, I am unable to generate specific tables or conduct tests…”; its authors were affiliated with universities in China [2].

In fact, many academic publishers, including Elsevier and Springer, do not prohibit the use of ChatGPT. They have stated that authors may use ChatGPT and other large language models (LLMs) to assist in paper writing, as long as proper disclosures are made.

However, these papers with ChatGPT-specific phrases are just the tip of the iceberg, revealing only those papers that failed to remove traces. The actual number of peer-reviewed papers using ChatGPT for assistance in writing without disclosure may be much higher than currently known.
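The marker phrases quoted above lend themselves to a trivial automated check. As an illustrative sketch (this script is not part of the original reporting, and the phrase list is an assumption based on the examples cited in this article), a scan for tell-tale ChatGPT interface text might look like:

```python
# Minimal sketch: flag text containing known ChatGPT interface phrases.
# The marker list below is illustrative, not exhaustive.
MARKERS = [
    "regenerate response",
    "as an ai language model",
]

def flag_ai_markers(text: str) -> list[str]:
    """Return the marker phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [m for m in MARKERS if m in lowered]

sample = "...which completes the proof. Regenerate response. 4. Conclusion"
print(flag_ai_markers(sample))  # → ['regenerate response']
```

Of course, such a scan catches only papers whose authors forgot to remove the template text, which is exactly the "tip of the iceberg" problem described above.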

 

The Shocking Realism of AI Writing:

Papers fully or partially authored by AI without proper disclosure are not a new phenomenon. Such papers, however, typically contain subtle but detectable traces, such as characteristic language patterns or translation errors, that set them apart from human-authored work. With advanced AI tools like ChatGPT, which generate seamless text, a paper whose template phrases have been removed becomes much harder to detect.

Recently, the Journal of Medical Internet Research published a paper titled: “Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora’s Box Has Been Opened” [3].

This paper demonstrates that AI language models like ChatGPT can generate highly convincing fraudulent scientific papers.

 


 

 

The paper’s authors aimed to investigate ChatGPT’s ability to generate high-quality deceptive medical papers. They attempted to use ChatGPT, a popular AI chatbot based on the GPT-3 language model developed by OpenAI, to generate a completely fictional paper in the field of neurosurgery.

The results of this proof-of-concept study were striking: ChatGPT successfully generated a convincing fake paper. Its vocabulary, sentence structure, and overall composition closely resembled those of a genuine scientific paper, complete with the standard abstract, introduction, methods, results, and discussion sections, as well as tables and other data. Remarkably, creating the entire paper took only about one hour and required no special training of the human user.

Renowned academic watchdog Elisabeth Bik expressed concerns that the rapid rise of ChatGPT and other generative AI tools would empower paper mills and exacerbate academic misconduct. She stated, “I am very concerned that there are now many papers out there that we cannot discern.”

 

 

Underlying Issues:

The problem of papers authored by AI without proper disclosure highlights a deeper issue – overburdened peer reviewers often lack the time to thoroughly scrutinize papers for problems. The entire academic ecosystem now operates under a “publish or perish” mentality, with an increasing number of papers and insufficient numbers of peer reviewers.

One telltale flaw in papers produced by generative AI tools like ChatGPT is fabricated references: citations to works that do not exist. These can serve as a signal for reviewers, since a paper that cites non-existent sources is very likely AI-generated. For example, Retraction Watch has reported a preprint authored by ChatGPT whose references were entirely fabricated.

As ChatGPT and other generative AI tools become more widespread, researchers suggest that future paper reviewers may need to examine references more closely.
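As an illustrative sketch (not from the article; the regex and helper name are my own assumptions), one starting point for such reference checks is extracting the DOIs from a paper’s reference list, since each extracted DOI can then be tested against the public https://doi.org resolver, and a DOI that fails to resolve is a red flag for a fabricated citation:

```python
import re

# Minimal sketch: pull DOI strings out of a reference section so each
# can later be checked against a registry (e.g. https://doi.org/<doi>).
# The pattern matches the common "10.<registrant>/<suffix>" DOI shape.
DOI_RE = re.compile(r'10\.\d{4,9}/[^\s"<>]+')

def extract_dois(references: str) -> list[str]:
    """Return all DOI-shaped strings found in `references`."""
    return DOI_RE.findall(references)

refs = """
[1] Example study. https://doi.org/10.1016/j.resourpol.2023.103980
[2] A suspicious citation with no DOI at all.
"""
print(extract_dois(refs))  # → ['10.1016/j.resourpol.2023.103980']
```

A reference with no resolvable DOI is not proof of fabrication (many legitimate sources lack DOIs), but it marks the citation for closer manual scrutiny.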

 

 

 

 

References:
1. https://iopscience.iop.org/article/10.1088/1402-4896/aceb40
2. https://doi.org/10.1016/j.resourpol.2023.103980
3. https://www.jmir.org/2023/1/e46924
4. https://www.nature.com/articles/d41586-023-02477-w



