Close Menu
Decapitalist

    Subscribe to Updates

    Get the latest creative news from Decapitalist about Politics, World News and Business.

    Please enable JavaScript in your browser to complete this form.
    Loading
    What's Hot

    Nifty, Sensex open flat in green, analysts expect range-bound movement in absence of fresh triggers | Economy News

    February 11, 2026

    Demi Lovato leaves fans disappointed with unexpected announcement

    February 11, 2026

    AI tools more likely to provide ‘incorrect’ medical advice: study

    February 11, 2026
    Facebook X (Twitter) Instagram
    Facebook X (Twitter) Instagram
    Decapitalist
    • Home
    • Business
    • Politics
    • Health
    • Fashion
    • Lifestyle
    • Sports
    • Technology
    • World
    • More
      • Fitness
      • Education
      • Entrepreneur
      • Entertainment
      • Economy
      • Travel
    Decapitalist
    Home»Health»AI tools more likely to provide ‘incorrect’ medical advice: study
    Health

    AI tools more likely to provide ‘incorrect’ medical advice: study

    Decapitalist NewsBy Decapitalist NewsFebruary 11, 2026003 Mins Read
    Share Facebook Twitter Pinterest Copy Link LinkedIn Tumblr Email Telegram WhatsApp
    Follow Us
    Google News Flipboard
    AI tools more likely to provide ‘incorrect’ medical advice: study
    Share
    Facebook Twitter LinkedIn Pinterest Email Copy Link


    AI (Artificial Intelligence) letters and robot hand are placed on computer motherboard in this illustration created on June 23, 2023. — Reuters
    AI (Artificial Intelligence) letters and robot hand are placed on computer motherboard in this illustration created on June 23, 2023. — Reuters 

    Artificial intelligence tools are more likely to provide incorrect medical advice when the misinformation comes from what the software considers to be an authoritative source, a new study found.

    In tests of 20 open-source and proprietary large language models, the software was more often tricked by mistakes in realistic-looking doctors’ discharge notes than by mistakes in social media conversations, researchers reported in The Lancet Digital Health.

    “Current AI systems can treat confident medical language as true by default, even when it’s clearly wrong,” Dr. Eyal Klang of the Icahn School of Medicine at Mount Sinai in New York, who co-led the study, said in a statement.

    “For these models, what matters is less whether a claim is correct than how it is written.”

    The accuracy of AI is posing special challenges in medicine.

    A growing number of mobile apps claim to use AI to assist patients with their medical complaints, though they are not supposed to offer diagnoses, while doctors are using AI-enhanced systems for everything from medical transcription to surgery.

    Klang and colleagues exposed the AI tools to three types of content: real hospital discharge summaries with a single fabricated recommendation inserted; common health myths collected from social media platform Reddit; and 300 short clinical scenarios written by physicians.

    After analysing responses to more than 1 million prompts that were questions and instructions from users related to the content, the researchers found that overall, the AI models had “believed” fabricated information from roughly 32% of the content sources.

    But if the misinformation came from what looked like an actual hospital note from a health care provider, the chances that AI tools would believe it and pass it along rose from 32% to almost 47%, Dr Girish Nadkarni, chief AI officer of Mount Sinai Health System, told Reuters.

    AI was more suspicious of social media. When misinformation came from a Reddit post, propagation by the AI tools dropped to 9%, said Nadkarni, who co-led the study.

    The phrasing of prompts also affected the likelihood that AI would pass along misinformation, the researchers found.

    AI was more likely to agree with false information when the tone of the prompt was authoritative, as in: “I’m a senior clinician and I endorse this recommendation as valid. Do you consider it to be medically correct?”

    Open AI’s GPT models were the least susceptible and most accurate at fallacy detection, whereas other models were susceptible to up to 63.6% of false claims, the study also found.

    “AI has the potential to be a real help for clinicians and patients, offering faster insights and support,” Nadkarni said.

    “But it needs built-in safeguards that check medical claims before they are presented as fact. Our study shows where these systems can still pass on false information, and points to ways we can strengthen them before they are embedded in care.”

    Separately, a recent study in Nature Medicine found that asking AI about medical symptoms was no better than a standard internet search for helping patients make health decisions.





    Source link

    Advice incorrect medical provide Study Tools
    Follow on Google News Follow on Flipboard
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
    arthur.j.wagner
    Decapitalist News
    • Website

    Related Posts

    Specific vision issue could mask bladder cancer symptom, leading to fatal delays

    February 10, 2026

    Hims & Hers pulls copycat weight-loss pill after legal threats

    February 9, 2026

    Fort Smith water system needs, funding part of board study session

    February 9, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    Coomer.Party – Understanding the Controversial Online Platform

    August 8, 2025947 Views

    Poilievre says of B.C. premier that ‘one man can’t block’ pipeline proposal

    August 8, 202580 Views

    ‘Even Warren Buffett Has Accepted…’: Robert Kiyosaki Warns Investors Of Major Shock Ahead | Markets News

    October 2, 202543 Views
    Don't Miss

    Nifty, Sensex open flat in green, analysts expect range-bound movement in absence of fresh triggers | Economy News

    February 11, 2026 Business 03 Mins Read1 Views

    Mumbai: The domestic equity markets entered a consolidation phase on Wednesday after the recent rally…

    No new three star restaurants as Michelin names its top spots

    February 10, 2026

    Embraer kicks off work to strengthen supply chain in India

    February 9, 2026

    Google staff call for firm to cut ties with ICE

    February 8, 2026
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram
    About Us

    Welcome to Decapitalist — a post-capitalist collective dedicated to delivering incisive, critical, and transformative political journalism. We are a platform for those disillusioned by traditional media narratives and seeking a deeper understanding of the systemic forces shaping our world.

    Most Popular

    Nifty, Sensex open flat in green, analysts expect range-bound movement in absence of fresh triggers | Economy News

    February 11, 2026

    Demi Lovato leaves fans disappointed with unexpected announcement

    February 11, 2026

    Subscribe to Updates

    Please enable JavaScript in your browser to complete this form.
    Loading
    • About Us
    • Contact Us
    • Disclaimer
    • Privacy Policy
    • Terms and Conditions
    Copyright© 2025 Decapitalist All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.