Decapitalist
    Technology

    Why opinion on AI is so divided

    By Decapitalist News · April 14, 2026 · 4 min read


    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

    In an industry that doesn’t stand still, Stanford’s AI Index, an annual roundup of key results and trends, is a chance to take a breath. (It’s a marathon, not a sprint, after all.)

    This year’s report, which dropped today, is full of striking stats. A lot of the value comes from having numbers to back up gut feelings you might already have, such as the sense that the US is gunning harder for AI than everyone else: It hosts 5,427 data centers (and counting). That’s more than 10 times as many as any other country.  

    There’s also a reminder that the hardware supply chain the AI industry relies on has some major choke points. Here’s perhaps the most remarkable fact: “A single company, TSMC, fabricates almost every leading AI chip, making the global AI hardware supply chain dependent on one foundry in Taiwan.” One foundry! That’s just wild.

    But the main takeaway I have from the 2026 AI Index is that the state of AI right now is shot through with inconsistencies. As my colleague Michelle Kim put it today in her piece about the report: “If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock.” (The Stanford report notes that Google DeepMind’s top reasoning model, Gemini Deep Think, scored a gold medal in the International Math Olympiad but is unable to read analog clocks half the time.)

    Michelle does a great job covering the report’s highlights. But I wanted to dwell on a question that I can’t shake. Why is it so hard to know exactly what’s going on in AI right now?  

    The widest gap seems to be between experts and non-experts. “AI experts and the general public view the technology’s trajectory very differently,” the authors of the AI Index write. “Assessing AI’s impact on jobs, 73% of U.S. experts are positive, compared with only 23% of the public, a 50 percentage point gap. Similar divides emerge with respect to the economy and medical care.”

    That’s a huge gap. What’s going on? What do experts know that the public doesn’t? (“Experts” here means US-based researchers who took part in AI conferences in 2023 and 2024.)

    I suspect part of what’s going on is that experts and non-experts base their views on very different experiences. “The degree to which you are awed by AI is perfectly correlated with how much you use AI to code,” a software developer posted on X the other day. Maybe that’s tongue-in-cheek, but there’s definitely something to it.

    The latest models from the top labs are now better than ever at producing code. Because technical tasks like coding have right or wrong results, it is easier to train models to do them, compared with tasks that are more open-ended. What’s more, models that can code are proving to be profitable, so model makers are throwing resources at improving them.
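The point about right or wrong results can be made concrete. For a coding task, a trainer can grade a model's output automatically by running it against test cases, which yields an unambiguous reward signal; open-ended tasks like wedding planning have no such check. The sketch below is a minimal, hypothetical illustration of that idea (the `solve`/`reward` names are mine, not from any lab's actual training stack):

```python
# Minimal sketch of a "verifiable reward" for a coding task: the model's
# answer is graded by executing it against test cases, so the training
# signal is a clean pass/fail. All names here are illustrative.

def reward(candidate_code: str, tests: list[tuple[int, int]]) -> float:
    """Return 1.0 if the candidate passes every test case, else 0.0."""
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)  # define the function under test
        solve = namespace["solve"]
        ok = all(solve(x) == expected for x, expected in tests)
    except Exception:
        ok = False  # code that crashes scores zero, too
    return 1.0 if ok else 0.0

# Two hypothetical model answers to "write solve(n) returning n squared":
good = "def solve(n):\n    return n * n"
bad = "def solve(n):\n    return n + n"

tests = [(2, 4), (3, 9), (10, 100)]
print(reward(good, tests))  # 1.0
print(reward(bad, tests))   # 0.0
```

Nothing comparable exists for "plan a good wedding," which is part of why progress is so uneven across use cases.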

    This means that people who use those tools for coding or other technical work are experiencing this technology at its best. Outside of those use cases, you get more of a mixed bag. LLMs still make dumb mistakes. This phenomenon has become known as the “jagged frontier”: Models are very good at doing some things and less good at others.

    The influential AI researcher Andrej Karpathy also had some thoughts. “Judging by my [timeline] there is a growing gap in understanding of AI capability,” he wrote in reply to that X post. He noted that power users (read: people who use LLMs for coding, math, or research) not only keep up to date with the latest models but will often pay $200 a month for the best versions. “The recent improvements in these domains as of this year have been nothing short of staggering,” he continued.

    Because LLMs are still improving fast, someone who pays to use Claude Code will in effect be using a different technology from someone who tried using the free version of Claude to plan a wedding six months ago. Those two groups are speaking past each other.

    Where does that leave us? I think there are two realities. Yes, AI is far better than a lot of people realize. And yes, it is still pretty bad at a lot of stuff that a lot of people care about (and it may stay that way). Anyone making bets about the future on either side should bear that in mind.
