ChatGPT-fabricated Abstracts in Gynecologic Oncology with Gabriel Levin and Behrouz Zand

In this episode of the IJGC podcast, Editor-in-Chief Dr. Pedro Ramirez is joined by Drs. Gabriel Levin and Behrouz Zand to discuss ChatGPT-fabricated abstracts in gynecologic oncology. Dr. Gabriel Levin is a gynecologic oncology fellow at McGill University. His research encompasses population database studies with clinical implications as well as innovations in medical education and health care, and he has published more than 180 peer-reviewed original articles. Dr. Behrouz Zand is a gynecologic oncologist at Houston Methodist Hospital's Neal Cancer Center and Department of Obstetrics and Gynecology, and an assistant professor at Weill Cornell College at Houston Methodist Academic Institute. Specializing in innovative cancer care and clinical trials, he is passionate about integrating AI into medicine and is a recent alumnus of MIT's physician program on AI integration in healthcare. Dr. Zand combines cutting-edge research with compassionate patient care to advance the field.

Highlights:

Reviewers had difficulty discriminating ChatGPT-written abstracts from human-written ones. They correctly identified only 46.3% of ChatGPT-generated abstracts; correct identification of human-written abstracts was only slightly higher, at 53.7%.

Senior reviewers and those familiar with AI had higher correct identification rates: 60% for senior reviewers versus 45% for juniors and residents. Experience and familiarity with AI were each independently associated with higher correct identification rates.

ChatGPT assists researchers by generating reviews and summaries and by enhancing writing clarity, but it raises ethical concerns and could diminish human expertise. For authors who are not native English speakers, it improves writing quality and clarity. In scientific writing more broadly, it sharpens clarity, summarizes concisely, helps brainstorm ideas, assists with terminology, and offers data interpretation, augmenting rather than replacing human expertise.

At the same time, ChatGPT and AI in scientific writing can lead to ethical issues and factual inaccuracies, and may eventually diminish human expertise and critical thinking.
