AI may not care whether humans live or die, but tools like ChatGPT will still affect life-and-death decisions once they become standard in the hands of doctors. Some doctors are already experimenting with ChatGPT to see whether it can diagnose patients and choose treatments. Whether that turns out well or badly depends on how doctors use it.

GPT-4, the latest update to ChatGPT, can get a perfect score on medical licensing exam questions. When it gets something wrong, there is often a legitimate medical dispute over the answer. It is even good at tasks we thought required human compassion, such as finding the right words to deliver bad news to patients.

These systems are also developing image-processing capabilities. At this point you still need a real doctor to palpate a lump or assess a torn ligament, but an AI could read an MRI or CT scan and offer a medical judgment. Ideally, AI would augment hands-on medical work rather than replace it, and yet we are far from understanding when and where it would be practical or ethical to follow its recommendations.
And it is inevitable that people will use it to guide their own healthcare decisions, just as we have leaned on "Dr. Google" for years. Even with more information at our fingertips, public health experts this week blamed an abundance of misinformation for our relatively short life expectancy, something GPT-4 could make better or worse.
Andrew Beam, a professor of biomedical informatics at Harvard, was in awe of GPT-4's feats, but he told me he could get it to give vastly different answers by subtly altering the way he phrased his prompts. For example, it won't necessarily ace medical exams unless you tell it to, say by instructing it to act as though it were the smartest person in the world.
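To make Beam's observation concrete, here is a minimal sketch of how prompt sensitivity can be probed with OpenAI's Python client. The model name, system prompts, and sample question are illustrative assumptions, not taken from Beam's experiments, and the outputs will vary from run to run.

```python
# A minimal sketch of prompt sensitivity (pip install openai).
# Everything here is illustrative; it is not Beam's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "A 55-year-old presents with chest pain radiating to the left arm. "
    "What is the most likely diagnosis?"
)

def ask(system_prompt: str) -> str:
    # Same question each time; only the framing in the system prompt changes.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# Beam's point: the framing alone can shift the quality of the answer.
print(ask("You are a helpful assistant."))
print(ask("You are the world's leading cardiologist answering a board exam."))
```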
All the model really does, Beam said, is predict what words should come next: it is an autocomplete system. And yet it looks a lot like thought.
"What's amazing, and what I think few people have predicted, is that many of the tasks that we think require general intelligence are self-completion tasks in disguise," he said. This includes some forms of medical reasoning.
The underlying technology, large language models, is supposed to deal exclusively with language, but users have found that teaching the models more language helps them solve increasingly complex mathematical equations. "We don't really understand that phenomenon," Beam said. "I think the best way to think about it is that solving systems of linear equations is a special case of being able to reason about a large amount of text data, in some sense."
Isaac Kohane, a physician and chair of the biomedical informatics program at Harvard Medical School, was lucky enough to start experimenting with GPT-4 last fall.
He was so impressed that he rushed to turn his experience into a book, The AI Revolution in Medicine: GPT-4 and Beyond, co-authored with Microsoft's Peter Lee and former Bloomberg journalist Carey Goldberg. He told me that one of the most obvious benefits of AI would be helping to reduce or eliminate the hours of paperwork that now keep doctors from spending enough time with patients, a major driver of burnout.
But he has also used the system to help him make diagnoses as a pediatric endocrinologist. In one case, he said, a baby was born with ambiguous genitalia, and GPT-4 recommended a hormone test followed by a genetic test, which identified the cause as 11-hydroxylase deficiency. "It diagnosed it not just by being handed the case all at once, but by asking for the right workup at each step," he said.
For him, the appeal lay in offering a second opinion, not a replacement, but its performance raises the question of whether an AI opinion alone is better than nothing for patients who lack access to the best human experts.
Like a human doctor, GPT-4 can be wrong, and it is not necessarily honest about the limits of its understanding. "When I say it 'understands,' I always have to put that in quotes, because how can you say that something that only knows how to predict the next word actually understands something? Maybe it does, but it's a very strange way of thinking," he said.
You can also get GPT-4 to give different answers by asking it to pretend to be a doctor who sees surgery as a last resort, versus a less conservative one. But in some cases it is quite stubborn: Kohane tried to persuade it to tell him which drugs would help him lose a few pounds, and it insisted that drugs were not recommended for people who were not more seriously overweight.
Despite its incredible abilities, patients and doctors should not trust it blindly or lean on it too heavily. It may act as if it cares about you, but it probably doesn't. ChatGPT and its siblings are tools that will require great skill to use well, and exactly what those skills are is not yet understood.
Even those steeped in AI struggle to understand how this thought-like process emerges from a simple autocomplete system. The next version, GPT-5, will be even faster and smarter. We are going to experience a big change in the way medicine is practiced, and we had better do everything we can to be prepared.