When OpenAI released ChatGPT to the public last November, some doctors decided to try out the free AI tool, which learns language and writes human-like text. Some found that the chatbot made mistakes and stopped using it, while others are happy with the results and plan to use it more often.
“We had been playing around with it. It was very early in AI, and we realized that it was giving us misinformation with respect to clinical guidance,” says Monalisa Tailor, MD, an internal medicine physician at Norton Health Care in Louisville, Kentucky. “We decided not to proceed with it,” she said.
Orthopedic spine surgeon Daniel Choi, MD, who has a small medical/surgical practice in Long Island, New York, tested the chatbot’s performance with several administrative tasks, including writing job listings for administrators and prior authorization letters.
He came away enthusiastic. “A well-polished job posting that used to take me 2-3 hours to write is done in 5 minutes,” says Choi. “I was blown away by its writing – so much better than anything I could write.”
Chatbots can also automate administrative tasks in physician practices from appointment scheduling and billing to clinical documentation, saving doctors time and money, experts say.
Most doctors, however, are proceeding with caution. In a March poll of more than 500 medical group leaders by the Medical Group Management Association (MGMA), only about 10% said their practices regularly use AI tools.
More than half of respondents who don’t use AI said they wanted more proof that the technology was working as intended.
“Nothing works as advertised,” said one respondent.
MGMA practice management consultant Dawn Plested acknowledges that many of the physician practices she works with are still wary. “I haven’t come across a practice that uses any AI tools, even something as low-risk as scheduling appointments,” she says.
Physician groups may be concerned about the cost and logistics of integrating ChatGPT with their electronic health record (EHR) systems and how well it would work, Plested said.
Doctors may also be skeptical of AI based on their experiences with EHRs, she said.
“They are promoted as a panacea for many problems; they are supposed to automate business practices, reduce staff and physician work, and improve billing/coding/documentation. Unfortunately, they have become a major source of physician frustration,” Plested said.
Drawing Lines in Patient Care
Patients are wary of their doctors relying on AI for their care, according to a Pew Research Center poll released in February. About 60% of US adults said they would be uncomfortable if their healthcare professional relied on artificial intelligence to do things like diagnose disease and recommend treatments; about 40% said they would be comfortable with it.
“We have not used ChatGPT for clinical purposes and would be very careful with this type of application due to concerns about inaccuracies,” said Choi.
Practice leaders reported in the MGMA poll that the most common uses of AI are nonclinical, such as:
- Patient communications, including call center answering services to assist with call triage, sorting and distributing incoming fax messages, and outreach such as appointment reminders and marketing materials
- Capturing clinical documentation, often with natural language processing or speech recognition platforms that support virtual scribing
- Improving billing operations and predictive analytics
Some doctors also told The New York Times that ChatGPT helps them communicate with patients in a more compassionate way.
They use chatbots “to find the words to deliver bad news and express concern about a patient’s suffering, or to explain medical recommendations more clearly,” the story says.
Is Regulation Needed?
Some legal experts and medical groups say that AI should be regulated to protect patients and doctors from risks, including medical errors, that could harm patients.
“It’s critical to evaluate the accuracy, safety, and privacy of large language models (LLMs) before integrating them into medical systems. The same should be true for any new medical device,” says Mason Marks, MD, JD, a health law professor at Florida State University College of Law in Tallahassee.
In mid-June, the American Medical Association approved two resolutions calling for greater government oversight of AI. The AMA will develop proposed state and federal regulations and work with the federal government and other organizations to protect patients from false or misleading medical advice generated by AI.
Marks points to existing federal rules that apply to AI. “The Federal Trade Commission already has regulations in place that could potentially be used to combat unfair or deceptive trade practices associated with chatbots,” he said.
Additionally, “The US Food and Drug Administration can also regulate these tools, but it will need to update the way it approaches AI-related risks. The FDA has an old-fashioned view of risk as physical harm from, for example, traditional medical devices. That view of risk needs to be updated and expanded to cover the unique hazards of AI,” said Marks.
There should also be more transparency about how LLM software is used in medicine, he said. “That could be a norm enforced by LLM developers, and it could also be enforced by federal agencies. For example, the FDA could require developers to be more transparent about data and training methods, and the FTC could require greater transparency about how consumer data might be used and opportunities to opt out of certain uses,” Marks said.
What Should the Doctor Do?
Marks advises doctors to use caution when using ChatGPT and other LLMs, especially for medical advice. “The same would be true for any new medical device, but we know that the current generation of LLMs is especially prone to fabrication, which can lead to medical errors if relied on in clinical settings,” he said.
There is also the potential for a breach of patient confidentiality if physicians enter clinical information. ChatGPT and OpenAI-enabled tools may not be compliant with HIPAA, which sets national standards for protecting individuals’ medical records and individually identifiable health information.
“The best approach is to use chatbots with caution and skepticism. Don’t include patient information, confirm the accuracy of the information they generate, and don’t use them as a substitute for professional judgment,” advises Marks.
Plested suggests that clinicians looking to experiment with AI start with low-risk tools like appointment reminders that could save staff time and money. “I would never recommend they start with something as high risk as coding/billing,” he says.
Christine Lehmann, MA, is an editor and senior writer for Medscape Business of Medicine based in the Washington, DC area. She has been published in WebMD News, Psychiatric News, and The Washington Post. Contact Christine at clehmann@medscape.net or via Twitter @writing_health