Since its introduction in November 2022, OpenAI's artificial intelligence chatbot, ChatGPT, has sparked public debate about its capabilities and potential risks. People increasingly rely on it for a significant share of their work, with reports indicating that some individuals use it for up to 80% of their job-related tasks. Lawyers, for instance, have found that the chatbot gives reasonably accurate answers to legal questions, though its responses sometimes contain inaccuracies or omit technical legal requirements.
Backed by Microsoft (MSFT), OpenAI's ChatGPT is being put to a wide range of uses: identifying promising stocks, hunting for the most favorable airline deals, and even helping college students write essays or find answers for tests.
Educators, however, have discovered that ChatGPT is unreliable as a plagiarism detector. Dr. Jared Mumm, a professor at Texas A&M University-Commerce, entered his students' final assignments of the semester into ChatGPT and asked whether it had written them. The chatbot claimed authorship of every passage entered into its system, and diplomas for half the students in the class were temporarily withheld. The issue was resolved when the students provided Google Docs timestamps as evidence of their own work; in the end, no students failed the course or were prevented from graduating.
These incidents highlight that ChatGPT, despite its capabilities, is not foolproof. A large language model reportedly built on 175 billion parameters, ChatGPT generates human-like text based on user prompts and context. It has passed some exams, such as the United States Medical Licensing Examination, but failed others. According to the Feinstein Institutes for Medical Research, ChatGPT scored 65.1% and 62.4% on the 2021 and 2022 multiple-choice self-assessment tests of the American College of Gastroenterology, respectively, falling short of the 70% required to pass.
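To make the prompt-and-context mechanism concrete: programs query a chat model by sending a list of role-tagged messages and receiving generated text in return. The sketch below only assembles such a request payload; the model name "gpt-3.5-turbo" and the OpenAI Python SDK call shown in the closing comment are assumptions about a typical setup, not details drawn from this article.

```python
# Minimal sketch of how a chat-model request is structured.
# The model name and the SDK call in the trailing comment are
# illustrative assumptions, not details from the article above.

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the role-tagged message payload a chat model expects."""
    return {
        "model": model,
        "messages": [
            # The system message sets the assistant's behavior; the user
            # message carries the actual question or task.
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_chat_request("Summarize the holding of Marbury v. Madison.")

# With the OpenAI Python SDK (>= 1.0) and an API key configured, the
# request could be sent like this (requires network access):
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(**request)
#   print(reply.choices[0].message.content)
```

The model's answer to any such request is generated text, not a lookup against a record of past outputs, which is why it cannot reliably confirm whether it authored a given passage.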
As a consequence, the Feinstein Institutes advise against using ChatGPT for medical education in gastroenterology at present. The lack of research regarding its effectiveness in this field, along with its reliance on potentially outdated or non-medical sources, suggests that ChatGPT requires further development and verification before being implemented in healthcare.
It is important to note that ChatGPT has no intrinsic understanding of the topics it discusses. Shortcomings such as its lack of access to paywalled medical journals and its reliance on questionable sources may explain its failures in certain scenarios. More extensive research is necessary to establish its reliability and accuracy.
In conclusion, OpenAI’s ChatGPT has garnered attention due to its wide-ranging applications and potential risks. While it has demonstrated value in various domains, caution must be exercised to account for its limitations. Society must continue to explore and refine AI technologies like ChatGPT to harness their potential effectively and mitigate any associated risks.