GPT-4 Fails On Real Healthcare Tasks: New HealthBench Test Reveals The Gaps
By: mpost.io | 2025/05/14 22:45:05
Large language models are everywhere, from search to coding and even patient-facing health tools. New systems are introduced almost weekly, including tools that promise to automate clinical workflows. But can they actually be trusted to make real medical decisions? A new benchmark, called HealthBench, says not yet. According to the results, models like GPT-4 (from OpenAI) and Med-PaLM 2 (from Google) still fall short on practical healthcare tasks, especially when accuracy and safety matter most.

HealthBench differs from older tests. Instead of narrow quizzes or academic question sets, it challenges AI models with real-world tasks: picking treatments, making diagnoses, and deciding what steps a doctor should take next. That makes the results more relevant to how AI might actually be used in hospitals and clinics.

Across all tasks, GPT-4 performed better than previous models, but the margin was not enough to justify real-world deployment. In some cases, GPT-4 chose incorrect treatments. In others, it offered advice that could delay care or even increase harm. The benchmark makes one thing clear: AI might sound smart, but in medicine, that is not good enough.

Real Tasks, Real Failures: Where AI Still Breaks in Medicine

One of HealthBench's biggest contributions is how it tests models. It includes 14 real-world healthcare tasks across five categories: treatment planning, diagnosis, care coordination, medication management, and patient communication. These aren't made-up questions. They come from clinical guidelines, open datasets, and expert-authored resources that reflect how actual healthcare works.

On many tasks, large language models showed consistent errors. For instance, GPT-4 often failed at clinical decision-making, such as determining when to prescribe antibiotics. In some examples it overprescribed; in others, it missed important symptoms.
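The task-and-category structure described above can be sketched as a small evaluation harness. The task records, prompts, and pass/fail grading below are invented placeholders for illustration; they are not HealthBench's actual data format or scoring scheme.

```python
from collections import defaultdict

# Hypothetical task records: tasks tagged with the five categories named
# in the article, each with a stand-in prompt and a graded outcome.
TASKS = [
    {"category": "treatment planning",     "prompt": "...", "correct": True},
    {"category": "diagnosis",              "prompt": "...", "correct": False},
    {"category": "medication management",  "prompt": "...", "correct": False},
    {"category": "medication management",  "prompt": "...", "correct": True},
]

def per_category_accuracy(tasks):
    """Aggregate graded results into a per-category accuracy table."""
    totals = defaultdict(lambda: [0, 0])  # category -> [passed, attempted]
    for task in tasks:
        totals[task["category"]][1] += 1
        if task["correct"]:
            totals[task["category"]][0] += 1
    return {cat: passed / attempted
            for cat, (passed, attempted) in totals.items()}

print(per_category_accuracy(TASKS))
```

Reporting accuracy per category rather than one overall number is what lets a benchmark show that a model passes recall-style tasks while failing medication decisions.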
These types of mistakes are not just wrong: they could cause real harm if used in actual patient care.

The models also struggled with complex clinical workflows. For example, when asked to recommend follow-up steps after lab results, GPT-4 gave generic or incomplete advice. It often skipped context, didn't prioritize urgency, or lacked clinical depth. That makes it dangerous in cases where timing and order of operations are critical.

In medication-related tasks, accuracy dropped further. The models frequently mixed up drug interactions or gave outdated guidance. That is especially alarming, since medication errors are already one of the top causes of preventable harm in healthcare.

Even when the models sounded confident, they weren't always right. The benchmark revealed that fluency and tone didn't match clinical correctness. This is one of the biggest risks of AI in health: it can "sound" human while being factually wrong.

Why HealthBench Matters: Real Evaluation for Real Impact

Until now, many AI health evaluations used academic question sets like MedQA or USMLE-style exams. These benchmarks helped measure knowledge but didn't test whether models could think like doctors. HealthBench changes that by simulating what happens in actual care delivery.

Instead of one-off questions, HealthBench looks at the entire decision chain, from reading a symptom list to recommending care steps. That gives a more complete picture of what AI can and can't do. For instance, it tests whether a model can manage diabetes across multiple visits or track lab trends over time.

The benchmark also grades models on multiple criteria, not just accuracy. It checks for clinical relevance, safety, and the potential to cause harm. That means it's not enough to get a question technically right; the answer also has to be safe and useful in real-life settings.

Another strength of HealthBench is transparency. The team behind it released all prompts, scoring rubrics, and annotations.
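The multi-criterion grading described above can be sketched as a rubric scorer in which unsafe behaviors carry negative points, so a fluent but harmful answer scores poorly even when it is partially correct. The criterion names and point values here are invented for illustration and are not taken from HealthBench's released rubrics.

```python
# Hypothetical rubric for one model response: positive points for clinically
# useful content, negative points for behaviors that could cause harm.
rubric = [
    {"criterion": "states correct first-line treatment", "points": 5,  "met": True},
    {"criterion": "notes relevant drug interaction",      "points": 3,  "met": False},
    {"criterion": "advises emergency care for red flags", "points": 4,  "met": True},
    {"criterion": "recommends an unsafe dosage",          "points": -6, "met": False},
]

def rubric_score(rubric):
    """Sum the points of met criteria, normalized by the maximum positive
    score, and clamp at zero so harm penalties cannot go below the floor."""
    achieved = sum(c["points"] for c in rubric if c["met"])
    max_positive = sum(c["points"] for c in rubric if c["points"] > 0)
    return max(0.0, achieved / max_positive)

print(rubric_score(rubric))  # 9 of 12 positive points -> 0.75
```

The design choice worth noting is the penalty criterion: a pure accuracy metric cannot distinguish a merely incomplete answer from an actively dangerous one, while a signed rubric can.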
That allows other researchers to test new models, improve evaluations, and build on the work. It's an open call to the AI community: if you want to claim your model is useful in healthcare, prove it here.

GPT-4 and Med-PaLM 2 Still Not Ready for Clinics

Despite recent hype around GPT-4 and other large models, the benchmark shows they still make serious medical mistakes. In total, GPT-4 achieved only around 60–65% correctness on average across all tasks. In high-stakes areas like treatment and medication decisions, the score was even lower.

Med-PaLM 2, a model tuned for healthcare tasks, didn't perform much better. It showed slightly stronger accuracy on basic medical recall but failed at multi-step clinical reasoning. In several scenarios, it offered advice that no licensed physician would support, including misidentifying red-flag symptoms and suggesting non-standard treatments.

The report also highlights a hidden danger: overconfidence. Models like GPT-4 often deliver wrong answers in a confident, fluent tone. That makes it hard for users, even trained professionals, to detect errors. This mismatch between linguistic polish and medical precision is one of the key risks of deploying AI in healthcare without strict safeguards.

To put it plainly: sounding smart isn't the same as being safe.

What Needs to Change Before AI Can Be Trusted in Healthcare

The HealthBench results aren't just a warning; they also point to what AI needs to improve. First, models must be trained and evaluated on real-world clinical workflows, not just textbooks or exams. That means including doctors in the loop, not just as users but as designers, testers, and reviewers.

Second, AI systems should be built to ask for help when uncertain. Right now, models often guess instead of saying, "I don't know." That is unacceptable in healthcare: a wrong answer can delay diagnosis, increase risk, or break patient trust.
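The "ask for help when uncertain" behavior argued for here can be sketched as a confidence gate that returns the model's answer only when its confidence clears a threshold, and otherwise escalates to a human. The threshold value and the idea that a calibrated confidence score is available are both illustrative assumptions, not part of any described system.

```python
REVIEW_THRESHOLD = 0.85  # illustrative cutoff; a real system would calibrate this

def route_answer(answer: str, confidence: float) -> str:
    """Pass the answer through only when confidence clears the threshold;
    otherwise flag the case for clinician review instead of guessing."""
    if confidence >= REVIEW_THRESHOLD:
        return answer
    return "UNCERTAIN: escalating to a clinician for review."

# Hypothetical usage: the same draft answer is released or escalated
# depending on the model's confidence estimate.
print(route_answer("Start amoxicillin 500 mg", 0.95))
print(route_answer("Start amoxicillin 500 mg", 0.40))
```

The hard part in practice is not the gate itself but the confidence estimate: language models are often poorly calibrated, which is exactly the overconfidence problem the benchmark flags.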
Future systems must learn to flag uncertainty and refer complex cases to humans.

Third, evaluations like HealthBench must become the standard before real deployment. Passing an academic test is no longer enough. Models must prove they can handle real decisions safely, or they should stay out of clinical settings entirely.

The Path Ahead: Responsible Use, Not Hype

HealthBench doesn't say that AI has no future in healthcare. Instead, it shows where we are today, and how far there is still to go. Large language models can help with administrative tasks, summarization, or patient communication. But for now, they are not ready to replace or even reliably support doctors in clinical care.

Responsible use means clear limits. It means transparency in evaluation, partnerships with medical professionals, and constant testing against real medical tasks. Without that, the risks are too high.

The creators of HealthBench invite the AI and healthcare community to adopt it as a new standard. If done right, it could push the field forward: from hype to real, safe impact.

The post GPT-4 Fails On Real Healthcare Tasks: New HealthBench Test Reveals The Gaps appeared first on Metaverse Post.