Revista Facultad de Ingeniería -redin-, Universidad de Antioquia, No.111, pp. 7-8, Apr-Jun 2024
EDITORIAL
Artificial Intelligence (AI) and Scientific Publications
Artificial Intelligence (AI) emerged shortly after World War II with the development of the Turing test. The term was coined by mathematician John McCarthy in 1955. It is a field of science concerned with computers and machines that can simulate intelligent behavior, including reasoning, learning, and acting in contexts that usually require humans or involve data whose scale exceeds what humans can analyze. In recent years, AI has experienced exponential growth: the number of scientific publications on the subject rose more than sevenfold, from 6,851 in 2000 to 51,085 in 2018. Since 2005, China has held the lead in the number of publications in the area; Chinese researchers produced 25% of all publications in the field in the 2014–2018 period, followed by researchers from the USA, who accounted for 15%. India has held third position since 2013. AI has proven very useful in different areas of science, such as diagnosis, imaging, histopathology, and surgery, as well as in writing and scientific publications [1, 2].
The introduction of artificial intelligence, in particular Large Language Models (LLMs) such as ChatGPT, Bard, and Bing, has changed how research papers are written, created, and produced, triggering an irreversible revolution in the academic publishing world. The adoption of AI raises critical ethical questions about the responsibility, obligations, and transparency of authors. AI tools are already being used in academic publishing, for example, in pre-peer-review checks (language quality, confirmation that a submission is within the scope of the journal, etc.). The use of AI is not inherently unethical and can be helpful, for example, for authors who do not write in English as their first language. Although the use of AI tools for translation may raise separate copyright issues, it will be up to humans to take responsibility for ensuring compliance with regulations because, ultimately, the responsible application of technology requires human oversight, controls, and monitoring [3].
Using AI tools such as ChatGPT (Chat Generative Pre-Trained Transformer), Google Bard, or other tools trained to write, translate, review, and edit academic manuscripts poses ethical challenges for researchers and journals. For this reason, some scholarly journals, such as Science and Nature, among many others, have prohibited the use of LLM applications in submitted articles. For example, AI tools cannot meet authorship requirements, as they cannot take responsibility for the submitted work because they are not “persons,” nor do they have legal status. They also cannot assert the presence or absence of conflicts of interest or manage licensing and copyright agreements. For these and other reasons, the world of ethics is turning firmly against the idea of AI authorship, and it is easy to see why.
AI is a field of study that seeks to develop systems and algorithms capable of performing tasks that require human intelligence. Among its various applications, ChatGPT stands out as a next-generation language model that uses machine-learning techniques to understand and generate text coherently and naturally. ChatGPT is an open-access artificial intelligence (AI) chatbot released in 2022 by OpenAI, which has revolutionized text generation by storing large volumes of information and capturing complex linguistic patterns. The model can thus answer questions, complete sentences, edit images, and generate text based on the context and the information provided [4]. It has more than 175 billion parameters trained to perform language-related tasks, from translation to text generation, and it continues to be refined through supervised learning and reinforcement learning.
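To illustrate the request-response pattern behind these capabilities, the following is a minimal sketch in Python. It assumes the official OpenAI client library (version 1.x or later) and an API key in the OPENAI_API_KEY environment variable; the model name and prompts are merely illustrative, not a description of any journal's workflow.

```python
# Minimal sketch: asking a chat-oriented LLM to assist with academic prose.
# Assumes the "openai" package (>= 1.0) is installed and OPENAI_API_KEY is set;
# the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model would serve here
    messages=[
        {"role": "system", "content": "You are an academic writing assistant."},
        {"role": "user", "content": "Rewrite in formal academic English: "
                                    "'the results was very good'"},
    ],
)
print(response.choices[0].message.content)
```

The same pattern underlies the translation, summarization, and text-generation uses discussed below.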
The most surprising thing about ChatGPT is that it can give accurate, complete answers and express itself naturally and precisely, which makes it difficult to distinguish whether a text has been generated by an AI. This is revolutionizing the editorial process, because these programs use advanced machine-learning and natural language processing techniques [5]. Journals, editors, and referees are concerned because machine-produced fake articles pose a real threat that could stifle the scientific process. At the moment, AI cannot generate new ideas; still, it can organize and develop those provided to it, which is a starting point for producing “human”-style texts that, in the not-too-distant future, could replace knowledge, creativity, and scientific thinking [6]. AI can already participate in writing drafts, summaries, and translations; in data collection and analysis; in preparing bibliographies; and even in recomposing texts to fit the required size and formatting, rewriting the language to make it more intelligible, or quickly offering suggestions on structure, eventually helping to write the complete work [7]. Such a tool could also bridge the language gap by facilitating the publication of research conducted and written in other languages.
Given these capabilities of AI, one question remains: how can we recognize whether a text has been generated by AI? Such texts often lack nuance, style, and originality, and AI
detectors or expert reviewers can also be used to flag them. Unfortunately, however, many similar defects can be found in texts written by humans (copy-pasting from previous works, errors in translating from languages other than the writer's native one), so plagiarism-detection programs can be wrong [8]. For this reason, publishers and journals should equip themselves with AI detectors as part of the editorial process to protect themselves.
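To make the notion of an AI detector concrete, below is a minimal sketch of one heuristic that some detectors build on: statistical perplexity under a reference language model (machine-generated text tends to score lower). It assumes the Hugging Face transformers library with the public gpt2 model; the threshold is illustrative, not a validated cutoff, and production detectors combine many stronger signals.

```python
# Minimal sketch: a perplexity-based screen for possibly machine-generated text.
# Assumes "torch" and "transformers" are installed; "gpt2" is a public model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score the text with the language model; the loss is the average
    # negative log-likelihood per token, and exp(loss) is perplexity.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

THRESHOLD = 40.0  # illustrative value only, not a validated cutoff
sample = "The results demonstrate a significant improvement over the baseline."
print("possibly machine-generated:", perplexity(sample) < THRESHOLD)
```

Because human- and machine-written text overlap heavily on signals like this, a flag from such a tool should prompt human review rather than an automatic decision, consistent with the caveats above.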
The recommendations of scientific communities and journal editors regarding the publication of scientific works are [3]: i) only humans can be authors; ii) authors must acknowledge the sources of their materials; iii) authors must assume public responsibility for their work; iv) authors should ensure that all cited material is appropriately attributed, including full citations, and that cited sources support the chatbot’s statements (it is not unusual for chatbots to generate references to works that do not exist); v) any use of chatbots in the evaluation of the manuscript and the generation of revisions and correspondence must be expressly indicated; vi) publishers need appropriate digital tools to deal with the effects of chatbots on publishing; vii) when an AI tool is used to perform or generate analytical work, help report results (for example, generating tables or figures), or write computer code, this should be indicated in the body of the article, in both the abstract and the methods section, to allow scientific scrutiny, including replication and identification; and viii) publishers need appropriate tools to help them detect content generated or altered by AI. Such tools should be available to publishers regardless of their ability to pay for them. This is even more significant for editors of medical journals, where the adverse consequences of misinformation include potential harm to individuals.
References
[1] I. N. de Ledo, “La inteligencia artificial y los artículos científicos,” Revista Venezolana de Oncología, vol. 36, no. 1, 2024. [Online]. Available: https://www.redalyc.org/journal/3756/375675852002/html/
[2] (2020, May) Publicaciones científicas sobre inteligencia artificial. [Online]. Available: http://www.forbes.com/sites/andyellwood/2013/02/11/plagiarism-isnt-the-sincerest-form-of-flattery/
[3] E. Spinak. (2023, Aug. 30) Inteligencia artificial y comunicación de investigaciones. [Online]. Available: https://blog.scielo.org/es/2023/08/30/inteligencia-artificial-y-comunicacion-de-investigaciones/
[4] J. F. Castañeda-López and T. Martínez-Villegas. (2022) La inteligencia
artificial en las publicaciones científicas. [Online]. Available: https:
//latinjournal.org/index.php/rcmm/article/view/1653/1342
[5] B. Gordijn and H. Ten-Have, “ChatGPT: evolution or revolution?” Medicine, Health Care and Philosophy, vol. 26, Jan. 19 2023. [Online]. Available: https://doi.org/10.1007/s11019-023-10136-0
[6] M. Salvagno, F. S. Taccone, and A. G. Gerli, “Can artificial intelligence
help for scientific writing?” Critical Care, vol. 27, Feb. 25 2023.
[Online]. Available: https://doi.org/10.1186/s13054-023-04380-2
[7] C. A. Gao, F. M. Howard, N. S. Markov, E. C. Dyer, S. Ramesh, et al., “Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers,” npj Digital Medicine, vol. 6, no. 75, Apr. 26 2023. [Online]. Available: https://doi.org/10.1038/s41746-023-00819-6
[8] R. F. Samos-Gutiérrez, “La inteligencia artificial en la redacción y
autoría de publicaciones científicas,” Angiología, vol. 75, no. 5, Jun.
13 2023. [Online]. Available: https://dx.doi.org/10.20960/angiologia.
00512
Maryory Astrid Gómez Botero
Editor-in-Chief
Revista Facultad de Ingeniería -redin-
Professor-Universidad de Antioquia
https://orcid.org/0000-0001-9685-3080
http://www.redalyc.org/autor.oa?id=8587
https://scholar.google.com/citations?user=U_2Xx_cAAAAJ&hl=es