19 J Gandhara Med Dent Sci April-June 2025
ORIGINAL ARTICLE: POSTGRADUATE STUDENTS' PERCEPTION OF USING CHATGPT IN CLINICAL MANAGEMENT AND RESEARCH: A QUALITATIVE EXPLORATORY STUDY
Muhammad Shah 1, Shimee Shahzadi 2, Shehzad Akbar Khan 3
How to cite this article: Shah M, Shahzadi S, Khan SA. Postgraduate Students' Perception of Using ChatGPT in Clinical Management and Research: A Qualitative Exploratory Study. J Gandhara Med Dent Sci. 2025;12(2):19-25. http://doi.org/10.37762/jgmds.12-2.679
Date of Submission: 27-01-2025
Date Revised: 12-02-2025
Date of Acceptance: 02-03-2025
1 Associate Professor, Department of Surgery, Hayatabad Medical Complex, Peshawar
3 Professor, Department of Surgery, Hayatabad Medical Complex, Peshawar
Correspondence
2 Shimee Shahzadi, Assistant Professor, Department of Anatomy, Khyber Girls Medical College, Peshawar
+92-333-5852983
drshimmishehzadi@yahoo.com
ABSTRACT
OBJECTIVES
This study investigated how postgraduate residents perceive ChatGPT's role in clinical and research settings.
METHODOLOGY
This qualitative exploratory study was conducted from May 1, 2024, to September 30, 2024. Twelve postgraduate residents from three tertiary care institutions in Peshawar participated. Data were gathered through semi-structured interviews, and a thematic analysis was performed to identify the main themes.
RESULTS
Participants reported that ChatGPT saved time in research work but also raised concerns about data privacy and information accuracy. Experienced users found the tool easier to use, although comfort levels varied. While its capabilities were promising, most participants considered it risky to rely on AI for clinical judgments.
CONCLUSION
ChatGPT can be a helpful adjunct to research tasks, but, like any technology, it may be misused in clinical settings. This calls for better training and clear standards for its use in medical practice.
KEYWORDS: ChatGPT, Qualitative, Postgraduate, Medical, Clinical
INTRODUCTION
The use of algorithms, chatbots, speech recognition, and similar techniques in administration, diagnosis, and medical education is growing. Among these, ChatGPT, an Artificial Intelligence (AI) chatbot developed by OpenAI, has been proposed as a possible educational and clinical management tool. Because it can produce natural language text based on large datasets, it is capable of assisting with systematic literature reviews, manuscript drafts, and clinical decision-making.¹ ChatGPT and similar language models serve as educational aids in medical education by providing quick searches of the medical literature, virtual patient encounters, and support for analytical reasoning.2,3 These tools can help alleviate clinicians' workload in a time-constrained day by aggregating data and suggesting treatments.⁴ This trend has also encouraged medical education programs to adopt these technologies for the benefit of future health practitioners.⁵ However, new technologies raise questions about reliability and ethical implications. The privacy and security of data remain pressing issues, especially for patient data management.6 Further, while ChatGPT-like models are skilled at producing text, they are also prone to producing erroneous or incoherent information, which poses a problem when clinical decisions rely on such output.7,8 Recent criticism has accordingly drawn attention to the need for human supervision when operating these technologies in high-stakes clinical environments.9 Research into the application of these technologies, particularly in this region, is minimal, especially regarding the experiences of postgraduate residents with ChatGPT-like technology.
To close this gap, this study investigates how postgraduate residents in Peshawar view the value of these tools in clinical management and research.
METHODOLOGY
This study used a qualitative exploratory design to gather postgraduate residents' insights on using ChatGPT in clinical management and research. A qualitative approach was chosen because it enabled an in-depth exploration of the topic and could capture the depth of participants' experiences. Semi-structured interviews, the primary data collection method, promoted active engagement with study participants while allowing flexibility for an
in-depth exploration of topics relevant to the research questions. This study was conducted at Hayatabad Medical Complex (MTI-HMC), Khyber Teaching Hospital (MTI-KTH), and Lady Reading Hospital Peshawar (MTI-LRH). These hospitals were included because they represent diverse postgraduate residency training programs, allowing access to multiple specialities in which ChatGPT may be used in clinical and research contexts. The study period was from May 1, 2024, to September 30, 2024 (6 months). This time was sufficient for recruiting participants, conducting interviews, transcribing, and analyzing the data. Ethical approval was obtained from the institutional review boards of MTI-HMC, MTI-KTH, and MTI-LRH before the study began. Eligible participants were postgraduate residents in residency training at MTI-HMC, MTI-KTH, and MTI-LRH who had used ChatGPT for clinical management or research for at least six months. Subjects who provided informed consent to participate in the study and in a one-on-one interview were included. Residents who had never heard of ChatGPT or had never used it in a clinical or research setting were excluded. The sampling technique used in this investigation was purposive sampling, a non-probability method in which the researcher selects a sample based on predefined criteria. The intention was to capture a wide diversity of experiences and viewpoints, so the sample was selected according to the following:
- Mode of practice (such as surgery or internal medicine)
- Level of residency training (from 1st to final-year residents)
- Representation of both sexes (male and female subjects)
A total of 12 postgraduate residents were enrolled. The sample size was determined by saturation, i.e., the point at which no new themes emerged from the interviews.
Semi-structured interviews were the data collection method for this study. An interview guide was formulated from previous literature and adapted to the Technology Acceptance Model (TAM) framework to focus on the residents' views of the utility, ease of use, and overall impact of ChatGPT on their clinical and research work. Nine open-ended questions were included so that participants could recount their experiences with ChatGPT and provide expansive responses.
Participant Recruitment: Potential participants were identified and contacted through email, where they were briefed about the study's aims. All participants provided written consent before the interviews.
Interviews: Depending on the participants' preferences and availability, they could be interviewed in person or over the Internet. All interviews were conducted in private rooms to ensure participants' confidentiality, and all interviews were recorded with the participants' permission. Each interview lasted approximately 45-60 minutes. The interviews followed a semi-structured protocol but were flexible enough to pursue emerging themes as needed.
Data Transcription: Audio recordings were transcribed verbatim, and all personally identifiable information was deleted from the data sets for confidentiality purposes.
Semi-structured interview data were analyzed with a thematic approach, as outlined by Braun and Clarke (2006). The analysis was done in six steps:
1. Familiarization with the Data: The first step was reading the transcripts. During this process, patterns and concepts were noted down.
2. Generating Initial Codes: The data were structured through thematic analysis using key terms. Codes were created with reference to the research objectives and the participants' responses.
3. Identifying Themes: After coding, the codes were consolidated into larger groups. The researcher sought patterns within the data that could form themes connected to the students' perceptions of ChatGPT.
4.
Reviewing Themes: The initial themes were compared against the transcripts to check that they accurately reflected the data. This ensured the themes were robust and accurately captured the participants' views.
5. Defining and Naming Themes: Every theme was explicitly defined, and sub-themes were identified where relevant to provide a better understanding of the data. Theme names were chosen to represent the essence of participants' responses.
6. Writing the Report: The last step comprised writing an elaborate description of the findings, incorporating the themes in the order they had been identified. This description formed the results section of the research article and contained participant quotations.
This study was conducted with the utmost ethical consideration to guarantee participants' safety and confidentiality. Participants were made aware of their right to withdraw at any stage of the study without penalty. Throughout the study, anonymity was upheld, and participants' real names were replaced with pseudonyms in all reports and transcripts. The information was kept on password-protected devices; only the research team could access the stored data.
RESULTS
The findings from this qualitative exploratory research are structured around four major themes which emerged from the participants' collective responses. Twelve postgraduate students took part in this research, and their impressions of ChatGPT in clinical effort and
scope are condensed into the following themes: (1) User Experience with ChatGPT, (2) Challenges and Concerns, (3) Perceived Benefits, and (4) Impact on Clinical Practice and Research. Demographics and other characteristics of the study participants are presented in Table 1.
Table 1: Demographics and Other Characteristics
Demographic Variables        Number of Participants (n=12)
Specialty
  Surgery                    6
  Medicine                   6
Gender
  Male                       7
  Female                     5
Year of Residency
  1st Year                   2
  2nd Year                   2
  3rd Year                   3
  4th Year                   3
  5th Year                   2
Theme 1: User Experience with ChatGPT
Sub-theme 1.1: Comfort Levels with ChatGPT
Respondents expressed different perspectives on using ChatGPT's features, especially for research work. Six respondents noted that their experience was quite effortless, claiming that ChatGPT made it simple to conduct literature reviews and draft manuscripts. For instance, one participant (P3) said: "I can easily use the program because it is intuitive, especially when I am out of time for my research tasks." At the same time, four participants revealed different comfort levels, remarking that the program had a learning curve that must be traversed before becoming proficient with the tool. One participant (P6) noted: "I feel that it's helpful once you are good with your queries, but it takes a bit of time to get used to it so you aren't banging your head against the desk." The remaining two participants did not feel comfortable adopting ChatGPT within their clinical workflow to make clinical decisions. Their trust in AI remained limited overall, and they deemed artificial intelligence insufficient for clinical tasks without human assistance.
Sub-theme 1.2: Learning Curve and Initial Challenges
Most of the participants described an adjustment period when utilizing ChatGPT.
Most residents reported a steep learning curve in their first few attempts with the tool, especially in learning how to phrase questions well enough to receive accurate results. However, several participants mentioned that the tool became more manageable after several weeks of consistent use. A first-year resident (P8) stated: "I was surprised how difficult it was to learn it, but once I started getting the hang of it, it was simple to read literature with it." Some junior residents, however, described struggling with ChatGPT's advanced functionalities for a longer time, suggesting that they need additional support.
Theme 2: Challenges and Concerns
Sub-theme 2.1: Concerns Regarding Data Privacy and Security
Every participant brought up the possibility of privacy breaches of patient data in ChatGPT as a primary concern. Seven participants expressed hesitance to fully incorporate ChatGPT into their clinical practice due to uncertainties about the protection of sensitive patient data. As one participant (P2) put it: "Although I find ChatGPT helpful, I have concerns about violating patient confidentiality." These concerns were less pertinent for trainees trained in a clinical reasoning framework that involves interpreting results from specific patients and a variety of other data.
Sub-theme 2.2: Information Accuracy and Reliability
Eight respondents mentioned problems with the accuracy and reliability of the information given by ChatGPT. Although many users liked the fast information extraction, some commented that the tool's output was doubtful, as it was inaccurate in a few clinical settings. As one resident (P9) wrote: "The information we are exposed to can somewhat deviate from the truth, which is very concerning, particularly when making clinical decisions."
Ideally, anything generated by AI, particularly from a clinical perspective, should be compared against a trusted standard reference, and this need for validation left participants apprehensive about practice without it.
Sub-theme 2.3: Risks of Over-Reliance on AI
Five respondents expressed worry about clinical practice relying too much on AI tools like ChatGPT. They feared that their ability to think deeply and make sound choices would lessen as AI is used more to address routine issues. One resident (P7) noted: "Something that concerns me is the potential that we use AI too much for tasks, which will change how we think critically or make decisions." While ChatGPT was helpful, these participants considered that it should not substitute for more traditional forms of clinical and research-based decision-making.
Theme 3: Perceived Benefits
Sub-theme 3.1: Efficiency in Research Work
As pointed out by participants, a key benefit of ChatGPT was the substantial time saved when carrying out literature reviews and writing research papers. Eight participants detailed that ChatGPT helped them expedite the processing of significant medical literature and increased their focus on analysis and interpretation instead of spending hours searching for relevant studies. One participant (P10) explained: "It speeds up the literature search process significantly, which allows me to focus more on analysis and writing." This benefit stood out particularly for the residents who were combining clinical and academic work at the same time.
Sub-theme 3.2: Enhanced Learning Experiences
Seven participants claimed that ChatGPT aided in gaining and retaining knowledge by providing clear explanations of complex medical concepts. They reported the tool as a great aid to self-study outside their lectures, enabling them to grasp challenging topics much faster. One resident (P8) elaborated: "It provides quick information that helps in learning, especially when I'm stuck on a particular topic."
Sub-theme 3.3: Support in Clinical Decision-Making
Six residents mentioned ChatGPT's contribution in supporting some aspects of clinical decision-making, such as formulating possible differential diagnoses. However, participants appeared mindful and cautious about using artificial intelligence as a patient's sole clinical decision-maker, saying that ChatGPT was best used as a secondary source for diagnosis. One resident (P3) said: "ChatGPT has helped me consider differential diagnoses I hadn't thought of, which has been quite helpful."
Theme 4: Impact on Clinical Practice and Research
Sub-theme 4.1: Influence on Diagnostic Processes
Residents expressed divided views on whether ChatGPT can effectively be integrated into the sensitive world of clinical diagnostics. Notably, four residents found that ChatGPT proved helpful in providing clarifications during the diagnostic stage, frequently drawing their attention to certain anomalies. One resident (P3) said: "I regard it as beneficial, but I do not depend on it for paramount decisions. Rather, I choose to use it as my secondary option." Three residents reported concern regarding the tool's ability to work autonomously on sophisticated issues without substantial human intervention, noting that its influence on diagnostics was modest at best.
Sub-theme 4.2: Integration into Daily Workflow
Five participants reported barriers preventing ChatGPT from fully integrating into their daily clinical workflow. These participants considered ChatGPT helpful for research and case evaluations but employed it sparingly in their routines because it was challenging to fit AI-generated content into clinical practice. As one resident (P5) put it: "I haven't used it much in my routine yet, but I believe it can be an effective tool."
Sub-theme 4.3: Impact on Research Activities
Nine participants reported that ChatGPT enhanced research productivity, especially during the literature search and review process. Participants appreciated the rich information they could retrieve and how quickly they could synthesize it, since it left more time to analyze the results and write the manuscripts. As one resident (P10) explained: "It speeds up the literature search process significantly, which lets me spend more time on analysis and writing." Seven participants noted that ChatGPT helped them immensely with drafting research papers, especially when outlining the initial draft.
They claimed that the AI tool was beneficial in overcoming writer's block by providing an outline that could be further worked on. Nonetheless, four participants stated that the information given by ChatGPT was not rich in substance, requiring them to conduct additional searches to complete their research rigorously.
DISCUSSION
This qualitative exploratory study examined the use of ChatGPT by postgraduate residents from the clinical management and research standpoint. The results indicate a positive disposition toward ChatGPT's performance, especially in research activities like literature reviews and drafting manuscripts. Data protection, privacy, accuracy, and overtrust in AI tools in clinical judgment were all significant issues which emerged during the discussion. These issues imply an urgent need for training and explicit policies on AI's safe and practical application in the healthcare environment. Comfort levels varied among the residents utilizing ChatGPT, although users with more experience reported feeling more at ease and satisfied with its usage. This is consistent with prior studies on the ease of use of AI tools, especially for research purposes.¹ On the other hand, junior residents reported greater difficulty in learning, which highlights the challenge of using AI in clinical settings for beginners.² With the passage of time and more exposure to AI, user experience improves, as noted in other studies, suggesting the importance of training and exposure to AI in healthcare settings.¹⁰ Data privacy and security were among the more serious concerns in this research.
Participants were reluctant to use ChatGPT for any clinical functions requiring patient data because of data confidentiality concerns. This concern is not new and stems from other studies which cover the risks associated with sensitive medical data and AI.¹¹ It has recently been documented that while AI tools might revolutionize clinical practice, they must be guarded by privacy control mechanisms, such as those mandated by the General Data Protection Regulation (GDPR), to protect sensitive information.¹² The problem does not lie solely with ChatGPT but rather encompasses the broad spectrum of AI tools that work within a "black box" framework, where the design logic is obscure to users.13 Considering the ethical dimension of AI applications in health care, several authors have called for well-defined ethical principles and organizational policies that govern the proper usage of AI tools while ensuring adherence to confidentiality standards.14 These steps are significant for addressing the worries of health practitioners and providing them with assurance about the effective use of AI in clinical work. Validation of the information produced by ChatGPT, and its level of accuracy, was an equally essential issue raised by the study participants. While the tool was commended for its ability to procure and summarize medical literature, multiple participants pointed out that the information served was sometimes inaccurate or incomplete, making it questionable when used for clinical decisions. In one study, investigators noted that AI-based tools, including ChatGPT and other large language models, performed as expected on standardized tests but did so while providing wrong or contradictory answers.15 This serves as a reminder of the need for human intervention when using AI tools in clinical settings.
Medical professionals must apply AI-produced information with considerable caution, since errors in clinical decision-making can represent a significant risk to patient care.16 Some participants raised concerns about over-dependence on AI tools like ChatGPT, believing it could adversely affect their critical thinking and clinical judgment skills. This makes sense, because emerging literature warns of the potential negative impacts of over-reliance on AI tools within the medical field.17 AI can assist decision-making but is not a substitute for a trained professional; AI tools work best as supplements that aid human decision-making rather than eliminate it. Further research on the influence of AI tools in clinical teaching has also stressed the need to balance AI-assisted training with conventional training methods. Evidence indicates that advanced AI tools can be beneficial in medical education, but never at the expense of the critical thinking and problem-solving abilities without which competent clinical practice is impossible.18 Despite these concerns, respondents reported significant advantages of using ChatGPT, especially academically. Postgraduate residents particularly appreciated the tool's ability to rapidly sift through vast amounts of data and even assist in preparing research manuscripts. Another positive aspect emphasized by participants was ChatGPT's ability to improve the overall learning experience among medical residents. With the tool's help, residents could obtain important information and more readily understand complicated medical terms. The literature has also noted that AI technologies, including ChatGPT, can be useful in self-directed learning because of their capacity to deliver instant feedback on difficult subject matters.
19,20 The results of this study highlight the importance of establishing broad-based AI literacy education for healthcare professionals, in which they are taught how to optimally leverage tools such as ChatGPT and to understand the limits of what such tools can do. Along with good practical and ethical practices for using AI tools, this literacy should encompass a thorough review of ethical dilemmas, privacy concerns, and the critical dissection of AI-generated output. In light of the privacy and data accuracy issues, it should be incumbent on healthcare organizations to establish protocols and policies regarding the use of AI in clinical practice. These should address the critical issues of personal data protection, the limits of AI use in clinical decisions, and standards for validating AI outputs. Institutions, too, must consider the ethical and legal implications of deploying these technologies, especially in the event of an error. It is necessary to insist that AI be used alongside, not in place of, conventional clinical judgement and clinical inquiry. Health practitioners need to be encouraged to use AI with the understanding that it is a complementary resource, not a substitute, for their reasoning and clinical skills. This balanced approach will help ensure that the advantages of such technology are obtained without losing quality in patient care. This research has some limitations, such as a small sample size and the selection of only three hospitals in Peshawar, which limit the generalizability of the study. Being qualitative, this study depends on self-reporting, posing a risk of bias. There was no independent evaluation of the clinical accuracy of ChatGPT, and the conclusions drawn here may become obsolete with the advancement of AI. Participants' varying levels of experience with ChatGPT might have influenced how they experienced the tool.
Also, concerns were raised regarding the ethics and privacy of patient data, but these were not rigorously examined. More quantitative research with a larger and more representative sample would be beneficial in
investigating the potential of ChatGPT in the context of medical education and facilitating clinical practice.
LIMITATIONS
This study has several limitations. The small sample size of twelve postgraduate residents from three tertiary care institutions in Peshawar limits the generalizability of the findings. Additionally, qualitative analysis relies on researchers' interpretations, which may introduce bias. The study captures perceptions at a single point in time, leaving long-term effects unexplored. Participants' prior exposure to AI tools may have influenced their responses, and concerns about data privacy and accuracy were noted but not deeply examined. Future research should address these limitations by including a larger, more diverse sample and exploring long-term implications.
CONCLUSIONS
This study provides important information about how postgraduate residents view the use of ChatGPT in clinical management and research. The tool's contribution to research productivity and learning is lauded. However, issues surrounding privacy, accuracy of information, and excessive dependence on AI underscore the need for guidelines and adequate training. Further studies should investigate the implications of using AI in clinical practice and its effect on practitioners' critical thinking skills. By solving these problems and using the opportunities offered by AI, the medical community will be able to mitigate the potential risks associated with using ChatGPT and comprehensively enhance the quality of care given to patients.
CONFLICT OF INTEREST: None
FUNDING SOURCES: None
REFERENCES
1. Thomae AV, Witt CM, Barth J. Integration of ChatGPT into a course for medical students: explorative study on teaching scenarios, students' perception, and applications. JMIR Med Educ. 2024 Aug 22;10:e50545. doi: 10.2196/50545.
2. Gilson A, Safranek CW, Huang T, et al.
How does ChatGPT perform on the United States Medical Licensing Examination (USMLE)? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ. 2023 Feb 8;9:e45312. doi: 10.2196/45312.
3. Thurzo A, Strunga M, Urban R, Surovková J, Afrashtehfar KI. Impact of artificial intelligence on dental education: a review and guide for curriculum update. Educ Sci. 2023;13(2):101-15. doi: 10.3390/educsci13020150.
4. Lee H. The rise of ChatGPT: exploring its potential in medical education. Anat Sci Educ. 2024;17(5):926-931. doi: 10.1002/ase.2270.
5. Bonsu E, Baffour-Koduah D. Determining students' perception and intention to use ChatGPT in Ghanaian higher education. J Educ Soc Multicult. 2023;4(1):1-29. doi: 10.2478/jesm-2023-0001.
6. Arif T, Munaf U, Ul-Haque I. The future of medical education and research: is ChatGPT a blessing or blight in disguise? Med Educ Online. 2023;28(1):2181052. doi: 10.1080/10872981.2023.2181052.
7. Kasneci E, Sessler K, Küchemann S, et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individ Differ. 2023 Apr;103:102274. doi: 10.1016/j.lindif.2023.102274.
8. Bonsu E, Baffour-Koduah D. From the consumers' side: determining students' perception and intention to use ChatGPT in Ghanaian higher education. J Educ Soc Multicult. 2023;4(1):1-29. doi: 10.2478/jesm-2023-0001.
9. Abd-Alrazaq A, AlSaad R, Alhuwail D, et al. Large language models in medical education: opportunities, challenges, and future directions. JMIR Med Educ. 2023 Jun 1;9:e48291. doi: 10.2196/48291.
10. Mijwil MM, Aljanabi M, Ali AA. ChatGPT: exploring the role of cybersecurity in the protection of medical information. Mesopotamian J Cybersecurity. 2023;5(1):30-45. doi: 10.58496/mjcs/2023/004.
11. Arif T, Munaf U, Ul-Haque I. The future of medical education and research: is ChatGPT a blessing or blight in disguise? Med Educ Online. 2023;28(1):2181052.
doi: 10.1080/10872981.2023.2181052.
12. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Lancet. 2019;393(10173):85-91. doi: 10.1016/S0140-6736(18)30173-0.
13. Weidener L, Fischer M. Artificial intelligence teaching as part of medical education: qualitative analysis of expert interviews. JMIR Med Educ. 2023 Apr 24;9:e46428. doi: 10.2196/46428.
14. Weidener L, Fischer M. Artificial intelligence in medicine: cross-sectional study among medical students on application, education, and ethical aspects. JMIR Med Educ. 2024 Jan 5;10:e51247. doi: 10.2196/51247.
15. Pinto Dos Santos D, Giese D, Brodehl S, et al. Medical students' attitude towards artificial intelligence: a multicentre survey. Eur Radiol. 2019 Apr;29(4):1640-1646. doi: 10.1007/s00330-018-5601-1.
16. Holderried F, Stegemann-Philipps C, Herschbach L, et al. A generative pretrained transformer (GPT)-powered chatbot as a simulated patient to practice history taking: prospective, mixed methods study. JMIR Med Educ. 2024 Jan 16;10:e53961. doi: 10.2196/53961.
17. Ayers JW, Poliak A, Dredze M, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. 2023 Jun 1;183(6):589-596. doi: 10.1001/jamainternmed.2023.1838.
18. Topol EJ. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York: Basic Books; 2019. Available at: https://psnet.ahrq.gov/issue/deep-medicine-how-artificial-intelligence-can-make-healthcare-human-again
19. Hopkins AM, Logan JM, Kichenadasse G, Sorich MJ. Artificial intelligence chatbots will revolutionize how cancer patients access information: ChatGPT represents a paradigm shift. JNCI Cancer Spectr. 2023 Mar 1;7(2):pkad010. doi: 10.1093/jncics/pkad010.
20. Jairoun AA, Al-Hemyari SS, Jairoun M, El-Dahiyat F.
Readability, accuracy and comprehensibility of patient information leaflets: the missing pieces to the puzzle of problem-solving related to safety, efficacy and quality of medication use. Res Social Adm Pharm. 2022 Apr;18(4):2557-2558. doi: 10.1016/j.sapharm.2021.10.005.
LICENSE: JGMDS publishes its articles under a Creative Commons Attribution Non-Commercial Share-Alike license (CC-BY-NC-SA 4.0).
COPYRIGHTS: Authors retain the rights, without any restrictions, to freely download, print, share, and disseminate the article for any lawful purpose. This includes scholarly networks such as ResearchGate, Google Scholar, LinkedIn, Academia.edu, Twitter, and other academic or professional networking sites.
CONTRIBUTORS
1. Muhammad Shah - Concept & Design; Data Acquisition; Data Analysis/Interpretation; Drafting Manuscript; Critical Revision; Supervision; Final Approval
2. Shimee Shahzadi - Data Acquisition; Data Analysis/Interpretation
3. Shehzad Akbar Khan - Data Analysis/Interpretation