ChatGPT's fingerprints have been found in some very serious scientific studies!

The study shows that the vocabulary scientists use in their papers has changed since the arrival of language models. In biology and medicine, authors are using more and more words that are "typical" of AI, which suggests they relied on these technologies for their writing.

As we know, generative AI is increasingly used by students in secondary schools and universities. But this kind of use seems to be spreading elsewhere too. A new study published in Science Advances indicates that work published in the biomedical sciences displays vocabulary that raises doubts.

Scientists at the University of Tübingen in Germany analyzed more than 15 million abstracts (the short summaries that present a study, written by its authors) published between 2010 and 2024, looking for anomalies.

A hidden AI?

Their results show that since the arrival of generative tools such as ChatGPT, certain words have been appearing much more often. Among them: delves, showcasing, underscores, potential, findings, and crucial.


Nothing very spectacular so far, but the scientists noticed a few telling details. First, these seemingly ordinary words are, surprisingly, among the favorites of language models, which use them indiscriminately. Moreover, while it is normal for vocabulary to evolve over time, this shift is drastic: even the Covid crisis did not disturb scientists' writing habits to this extent. Finally, the words in question are primarily stylistic and do not refer to any specific content that would justify their use.

These clues lead the authors to estimate that around 13.5% of the abstracts published in 2024 may have benefited from the help of ChatGPT or one of its competitors, even though none of the studies acknowledges any such assistance.

"We have shown that the effect of language model usage on scientific writing is unprecedented (…)," the authors write in their study. "This effect will certainly grow stronger in the future, and we will be able to analyze more publications. At the same time, however, the use of language models may become better concealed and harder to detect, which means that the actual proportion of abstracts concerned is undoubtedly underestimated."

Yes to abstracts, but not to the rest

The scientific community itself is divided over this use. Some scientists admit to using it; others fear plagiarism and ethical problems. In a survey of some 5,000 researchers conducted in May 2025 by the journal Nature, 90% said they were in favor of using AI to edit or translate articles.


As for drafting abstracts, 23% saw no problem with it, and 45% could tolerate it provided the use was disclosed directly in the study. These percentages drop sharply for other parts of an article, such as the description of methods, the results, and the conclusion.

Opinions are far more clear-cut, on the other hand, when it comes to peer review, the mandatory stage of research in which experts reread the work of their colleagues. Here, respondents are mostly hostile to any AI intervention.

