Balel Y.
EUROPEAN JOURNAL OF THERAPEUTICS, vol.29, no.4, pp.984-985, 2023 (ESCI)
Publication Type: Article / Full Article
Volume: 29
Issue: 4
Publication Date: 2023
DOI: 10.58600/eurjther1691
Journal Name: EUROPEAN JOURNAL OF THERAPEUTICS
Journal Indexes: Emerging Sources Citation Index (ESCI), TR DİZİN (ULAKBİM)
Pages: pp.984-985
Sivas Cumhuriyet University Affiliated: No
Abstract
Dear Editors,
I read your editorial content with great interest [1]. As a young academic in the spring of my career, I would like to share my views, suggestions, and experiences regarding the use of artificial intelligence in academic papers. Like many members of Generation Y, I grew up watching the adventures of the Jetsons family. The talking service robot, the automated production lines, the flying cars and, most importantly for us now, the robot doctors were all products of artificial intelligence, although I did not know the term back then. My interest in artificial intelligence, and in researching its applicability in healthcare, may well be traced to these early experiences, but who knows for sure? In any case, I believe that is where my first encounter with artificial intelligence began.
Since the COVID-19 pandemic, artificial intelligence technologies have developed rapidly. Whether the timing was purely coincidental or influenced by the quarantines and lockdowns, we do not know. ChatGPT seems to have become the best known of these advances, among academics and the general public alike. This chatbot talks with us, answers our questions, conducts research on our behalf, and even writes articles [2]. But can ChatGPT really be used to write academic papers?
In my experience, using ChatGPT to write academic papers is quite risky. In a very short time it can generate a draft that an academic might spend weeks or even months writing, and this is undoubtedly enticing. However, caution must be exercised when using it. The data on which ChatGPT was trained include not only academic sources but also content from virtually any website, and you never know which of that information it is drawing on to generate the text. When you ask it to provide references for the sentences it generates, it may produce fake DOI numbers or give you the DOI of an unrelated article. The only way to verify the accuracy of the generated information is for authors to fact-check it themselves.
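One practical way to spot-check generated references is to query each DOI against a public bibliographic registry such as Crossref. The short Python sketch below is only an illustration of that idea, not part of the original letter; the second DOI in the list is deliberately hypothetical, and a title returned by the registry still needs to be compared against the citation by hand.

```python
# Minimal sketch: spot-check DOIs against the public Crossref REST API.
# Assumes the `requests` package is installed; the second DOI below is hypothetical.
import requests

def check_doi(doi: str) -> None:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 200:
        # Crossref returns the record under "message"; "title" is a list of strings.
        title = resp.json()["message"].get("title", ["<no title>"])[0]
        print(f"{doi}: found -> {title}")
    else:
        print(f"{doi}: not found in Crossref (possibly a fabricated reference)")

for doi in ["10.58600/eurjther1691", "10.1234/not-a-real-doi"]:
    check_doi(doi)
```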
High-impact scientific journals and publishers such as Springer Nature and Science currently do not accept ChatGPT as a co-author [3,4]. Taylor & Francis journals have indicated that they will review the situation, while many Elsevier journals have already included ChatGPT as a co-author [5]. The journals' underlying concern is who takes responsibility for the information in these articles. In addition, the fact that ChatGPT does not have a fully independent thought process and generates its output from web-based sources raises plagiarism concerns.
So, is ChatGPT the only chatbot that can be used in the medical field? In fact, there are chatbots that can generate better medical information than ChatGPT. These include BioLinkBERT, DRAGON, Galactica, PubMed GPT (now known as BioMedLM), and the upcoming Med-PaLM 2. However, running these models requires at least some coding knowledge. According to Google's claims, Med-PaLM 2 achieved an 86.5% score on the United States Medical Licensing Examination (USMLE), while its closest competitor, PubMed GPT, achieved only 50.3% [6]. Med-PaLM 2 could become an important chatbot, or, more precisely, large language model (LLM), for the medical field, but we will have to wait a little longer to see it in action.
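For readers wondering what "some coding knowledge" means in practice, the sketch below shows roughly how one of these open models might be loaded and queried with the Hugging Face transformers library. The model identifier stanford-crfm/BioMedLM is my assumption of where BioMedLM is published, and the hardware requirements and exact prompt format should be checked before use; this is a sketch, not a recommended workflow.

```python
# Rough sketch of running an open biomedical LLM locally with Hugging Face transformers.
# Assumes the `transformers` and `torch` packages are installed and that the model is
# published under the id "stanford-crfm/BioMedLM" (an assumption worth verifying);
# a GPU with sufficient memory is needed for a model of this size.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stanford-crfm/BioMedLM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Question: What is the first-line treatment for type 2 diabetes?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```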
Given the current situation, how can we benefit from these LLMs in academic paper writing? My recommendation is to use them to refine texts you have already written rather than having them write the entire text from scratch. This way, the main context of your sentences stays the same and the overall accuracy of the information does not change significantly. ChatGPT is also a valuable tool for translating your original text into other languages or for grammar correction. While professional language editing services can cost between $100 and $500, ChatGPT is a free and faster alternative. However, it is important to read and check the translated or grammar-corrected text after using the chatbot, because it sometimes generates sentences unrelated to your original ones. If you point this out, it will correct its responses, or you can simply open a new conversation and start again from scratch; I recommend the latter. Another useful feature of ChatGPT for article writing is generating abstracts: journals often impose strict word limits and abstract structures, and ChatGPT can help meet these requirements.
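As an illustration of this "refine, do not draft" workflow, the following sketch sends an author's own paragraph to a chat-style LLM API and asks only for grammar correction, leaving the scientific content untouched. The openai client, the model name, and the sample sentence are assumptions for the sake of the example; any comparable API or chat interface would serve the same purpose, and the returned text must still be read and checked by the author.

```python
# Minimal sketch of using a chat-style LLM API to grammar-check an author's own paragraph.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set in the
# environment; the model name and the draft sentence below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

draft = (
    "Postoperative swelling were evaluated in the third day and compared "
    "between both of the groups."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Correct only grammar and spelling. Do not change the meaning, "
                    "add information, or remove content."},
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)  # corrected paragraph; verify before use
```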
In conclusion, whether it is ChatGPT or another LLM, I believe these tools are currently not entirely suitable for writing academic papers from scratch or for being listed as co-authors. We need to follow developments in this field closely. Only when an LLM is created that relies solely on academic databases and provides genuine references for every sentence it generates could it be used to write academic papers from scratch or be listed as a co-author, and even then plagiarism issues would have to be carefully examined and discussed. We should not be prejudiced against LLMs; we should explore new ways of using them while awaiting technological advances.
Yours sincerely,