
highplainsdem

(60,939 posts)
Wed Feb 11, 2026, 07:22 PM 7 hrs ago

Some reality about Matt Shumer's "Something Big Is Happening" AI hypefest some DUers are overreacting to

From Gary Marcus, always a voice of sanity in response to pro-AI hype and lies:

https://garymarcus.substack.com/p/about-that-matt-shumer-post-that

About that Matt Shumer post that has nearly 50 million views
Something big is allegedly happening

Gary Marcus
Feb 11, 2026

All morning people have been asking me about a blog post by Matt Shumer that has gone viral, with nearly 50 million views on X.

It’s a masterpiece of hype, written in the style of the old direct marketing campaigns, with bold-faced call outs like “I know this is real because it happened to me first” and “I am no longer needed for the actual technical work of my job”. It’s chock full of singularity vibes:

-snip-

As I told a journalist who asked me about the post, I wouldn’t take it so seriously:

Shumer’s blog post is weaponized hype that tells people what they want to hear, but stumbles on the facts, especially with respect to reliability. He gives no actual data to support his claim that the latest coding systems can write whole complex apps without making errors. Similarly, when he describes how AIs are doing longer and longer tasks on METR’s famous task-time benchmark, he neglects to say the criterion on that benchmark is 50% correct, not 100%, and that the benchmark is only about coding and not tasks in general. No AI system can reliably do every five-hour-long task humans can do without error, or even close, but you wouldn’t know that reading Shumer’s blog, which largely ignores all the hallucination and boneheaded errors that are so common in everyday experience. And of course Shumer didn’t cite the new Caltech/Stanford article that reviews a wide range of reasoning errors in so-called reasoning models [or the Apple reasoning paper or the ASU mirage paper, etc]. The picture he sells just isn’t realistic, however much people might wish it were true. I should add that Shumer is the guy who was once famous for apparently exaggerated claims about a big model of his that didn’t replicate and that many people saw as a fraud; he likes to sell big. But that doesn’t mean we should take him seriously.


-snip-


Gary then goes into detail on why Shumer's hysterical post shouldn't be taken seriously. AI models can write code. It's still all too often flawed code that is a security risk.

Gary links to this article:

https://www.itpro.com/software/development/ai-generated-code-is-fast-becoming-the-biggest-enterprise-security-risk-as-teams-struggle-with-the-illusion-of-correctness

AI-generated code is fast becoming the biggest enterprise security risk as teams struggle with the ‘illusion of correctness’
Security teams are scrambling to catch AI-generated flaws that appear correct before disaster strikes

By Emma Woollacott
published 5 February 2026


AI has overtaken all other factors in reshaping security priorities, with teams now forced to deal with AI-generated code that appears correct, professional, and production-ready – but that quietly introduces security risks.

-snip-

Aikido found that AI-generated code is now the cause of one-in-five breaches, with 69% of security leaders, engineers, and developers on both sides of the Atlantic having found serious vulnerabilities.

These risk factors are further exacerbated by the fact many developers are placing too much faith in the technology when coding. A separate survey from Sonar found nearly half of devs fail to check AI-generated code, placing their organization at huge risk.


Emphasis added.
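
To make that "illusion of correctness" concrete, here is a minimal, purely hypothetical sketch in Python (my own illustration, not code from the article or the surveys it cites): both functions below look equally clean and production-ready, but the first builds its SQL query by string interpolation and is wide open to injection, while the second uses a parameterized query.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Looks tidy and "correct", but a username like "x' OR '1'='1"
    # turns the WHERE clause into a tautology and returns every row.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver escapes the value, closing the hole.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)", [("alice",), ("bob",)])
    malicious = "x' OR '1'='1"
    print("unsafe:", find_user_unsafe(conn, malicious))  # leaks both rows
    print("safe:  ", find_user_safe(conn, malicious))    # returns nothing

A reviewer skimming the first function for style would likely wave it through, which is exactly the failure mode those surveys describe.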


Some posts from Bluesky about Shumer below...and also see reply 3 for background on other hype Shumer has been ridiculed for, and for the info that the AI company he's CEO of offers "cutting edge AI tools like 'Team Member Praise Generator' and 'AI Sympathy Message Generator by HyperWrite for Heartfelt Cards'".

Surely, the author of the “AI is an existential threat to the economy and your wellbeing, and the only solution is to use more AI immediately, please” has no financial incentive in making people believe that. Now to take a big sip of my coffee and check out his bio…

Luis Paez-Pumar (@lpp.bsky.social) 2026-02-11T13:47:42.108Z


I shouldn’t even bother engaging with this idiotic article but I always find it funny when AI guys are like “you better become an early adopter or you’ll be left behind!” If the tech is as good as they claim, and can do what they claim, any idiot should be able to use it with no practice.

Luis Paez-Pumar (@lpp.bsky.social) 2026-02-11T13:51:03.109Z


But of course, the goal isn’t to change the world or whatever, it’s just to take everyone’s money. I don’t think that writing this alarmist crap with the lesson of “buy a Claude subscription or you’ll ruin your life” should work, but given the reaction, it just might.

Luis Paez-Pumar (@lpp.bsky.social) 2026-02-11T13:53:10.821Z


reads like an ai generated article to me

cavan (@cfos.bsky.social) 2026-02-11T15:59:00.764Z


He admitted it was

Luis Paez-Pumar (@lpp.bsky.social) 2026-02-11T19:33:23.108Z


Everyone reading ‘Something Big is Happening’ and losing their minds.

In reality Matt Shumer has consistently overhyped AI.

In 2024 he was accused of fraud when researchers were unable to replicate the supposed top performance of a new LLM he released called Reflection 70B.

Steve Strickland (@stevestrickland6.bsky.social) 2026-02-11T21:24:31.445Z


Some reality about Matt Shumer's "Something Big Is Happening" AI hypefest some DUers are overreacting to (Original Post) highplainsdem 7 hrs ago OP
Tyvm for clarifying! SheltieLover 7 hrs ago #1
Yvw, Sheltie! highplainsdem 7 hrs ago #5
😊👍 SheltieLover 7 hrs ago #8
Thanks! Appreciate your effort in calling out some of the hyperbole, and attendant hubris. - - - - -(nt)- stopdiggin 7 hrs ago #2
You're welcome! I should have done more checking and posted this earlier, but had a lot of other highplainsdem 7 hrs ago #7
More on Matt Shumer here, starting with a Bluesky post from a journalist working for PC Gamer: highplainsdem 7 hrs ago #3
i regularly correspond with Gary Marcus Metaphorical 7 hrs ago #4
I've had some great exchanges with Gary, too. I'd have trusted him more than Matt Shumer in any highplainsdem 7 hrs ago #10
I've brought this up before in your posts, but as a developer with 25 years experience, it really IS scary. FascismIsDeath 7 hrs ago #6
this also ... A well thought, and absolutely solid 'take' on the overall picture. Thanks! stopdiggin 7 hrs ago #11
I'm sorry that AI's coding ability is becoming such a threat to developers' jobs. But part of the reason highplainsdem 6 hrs ago #12
The "junior level" developers who prompt and review this code are just fall guys. hunter 3 hrs ago #18
Pascal was the first language I really learned. FascismIsDeath 1 hr ago #19
Thanks for putting this together. Good stuff. WhiskeyGrinder 7 hrs ago #9
You're welcome! I wish I'd read about Shumer hours earlier. highplainsdem 6 hrs ago #13
See reply 14. Shumer needed AI to write that crap. He's really AI-addled. highplainsdem 5 hrs ago #15
Yeah it was clear from reading it. WhiskeyGrinder 5 hrs ago #16
That twit Shumer needed AI to help him write that piece of weaponized hype (as Gary Marcus called it): highplainsdem 5 hrs ago #14
Kick highplainsdem 4 hrs ago #17

highplainsdem

(60,939 posts)
7. You're welcome! I should have done more checking and posted this earlier, but had a lot of other
Wed Feb 11, 2026, 07:54 PM
7 hrs ago

stuff requiring attention first.

highplainsdem

(60,939 posts)
3. More on Matt Shumer here, starting with a Bluesky post from a journalist working for PC Gamer:
Wed Feb 11, 2026, 07:43 PM
7 hrs ago


https://www.pcgamer.com/gaming-industry/tech-investor-declares-ai-games-are-going-to-be-amazing-posts-an-ai-generated-demo-of-a-god-awful-shooter-as-proof/

Tech investor declares 'AI games are going to be amazing,' posts an AI-generated 'demo' of a god-awful shooter as proof

By Lincoln Carpenter published October 24, 2025


On an almost daily basis, a terminally tech-brained individual insists that AI will determine the future of human creativity if we simply believe hard enough. Today, that soothsayer is Matt Shumer, an investor who offered a demonstration of his vision for AI's videogame development potential—and was quickly blasted for it, because the vision sucks to look at.

Shumer—CEO of HyperWrite, a company offering cutting edge AI tools like "Team Member Praise Generator" and "AI Sympathy Message Generator by HyperWrite for Heartfelt Cards"—posted a video on X yesterday with the caption "AI games are going to be amazing." Judging from the boastful "(sound on)," he seemed convinced the video would illustrate the inevitability of our magnificent AI future.

In actuality, what he uploaded was an embarrassingly incoherent AI-generated mockup of imitation shooter gameplay that feels like it could trigger new and exciting forms of psychosis if you watch it long enough. It's a moving kaleidoscope of AI sludge that's only amazing in how clearly it communicates that AI's biggest pushers are operating with a different set of standards.

-snip-

"They're not finished products… they're glimpses of what's coming," Shumer said. "Is this an AAA game today? Of course not. Will AI-powered games be incredible in 5 years? Definitely."

-snip-

Metaphorical

(2,606 posts)
4. i regularly correspond with Gary Marcus
Wed Feb 11, 2026, 07:50 PM
7 hrs ago

I would trust his word FAR more than I would Matt Shumer, who's a paid shill.

Yes, automation (by which I include various kinds of AIs, as well as software as a service and a great deal of what would be considered algorithmic software) is eroding the jobs that COULD at least partially be replaced, but here's some of the reality:

* For the most part, consumers are not adopting AI at noticeable levels, and in many cases are actively moving away from it, seeking out alternatives.
* OpenAI in particular will never be profitable. Not now, not in ten years, not in fifty (assuming it even survives to ten years old). Investors are hitting pause, and a lot of the money that's been committed is now being held back, even with the threat of contract breaches.
* Vibe coding hit its peak about a year ago with the introduction of the agentic tech. Now most of that code is being ripped out because it is unmanageable, has egregious bugs, and is a massive security risk (I work on tech that's intended to ground AIs, but even given that, I'm dubious about CodeGen beyond creating relatively simple frameworks).
* AI is being adopted in the coding field as well as asset generation, but those tend to be low hanging fruit.
* AI does not do appreciably better in detecting cancers or other diseases than your average layman, and usually fares worse than doctors do.
* Some forms of AI (not generative) are being used for biochemical and genetic research. This is not what the GenAI folk are peddling.
* Many businesses that made a splash by firing their development teams to boost their stock prices have quietly been trying to rehire them, not always successfully.
* The regulatory world is finally catching up to the Tech Bros, and questions are being raised around the circular financing that looks an awful lot like Lehman Brothers, et al, in 2008.
* Most of the big tech stocks have seen their share prices drop significantly in the last six months. I don't think we're going to have a panic crash, but I do see the air leaking out of the balloon.

highplainsdem

(60,939 posts)
10. I've had some great exchanges with Gary, too. I'd have trusted him more than Matt Shumer in any
Wed Feb 11, 2026, 07:59 PM
7 hrs ago

case, but especially after reading more about Shumer (see reply 3 for more info about him).

FascismIsDeath

(106 posts)
6. I've brought this up before in your posts, but as a developer with 25 years experience, it really IS scary.
Wed Feb 11, 2026, 07:53 PM
7 hrs ago

I read the Marcus article and yea, Shumer is exaggerating. You aren't going to put together sound architecture and ensure secure code for large software systems by prompting some things and just "letting it rip".

But the leaps ChatGPT and some other AI platforms have taken have me worried I might end up having to find a new career before I'm eligible for retirement. And I have no clue what that career would even be in terms of making the money I make at my senior experience level.

I'm 46 years old; I have like 20 more years to go. Technology tends to evolve at an accelerated pace. I fully believe "they" will be able to replace me with AI sooner rather than later.

As it stands right now, I am still needed. I write my own damn code. And in terms of AI, someone could potentially hand me snippets of AI-generated code and I can go through and clean it up, correct it where it's wrong, add security and put it out there. But the amount of work involved in doing that, compared to writing everything myself, is substantially smaller.

The company I work for has promised us that they will not replace us with AI. And they've been a trustworthy company to work for. But if anything were to happen and I were to lose this job for whatever reason, I really don't know what happens next. Maybe some other company would hire me for far less pay to do exactly what I just described.

But then what happens when they refine these models to the point that some junior-level dev with very little knowledge of the inner workings can, as I said, put together sound architecture and ensure secure code for large software systems by prompting some things and just "letting it rip"?

I have every reason in the world to have disdain for what is happening because it's a personal threat to my ability to work and make the money I need to retire with dignity some day. And I know you obviously share that disdain. But don't let that disdain get in the way of accepting reality for what it is.

AI is not as good as the people who are making money from it want to pretend it is.

But it's also much better than AI skeptics want to admit, and it has the potential to actually live up to the hype within the next 2 to 5 years.

We don't need denial of AI's unfortunate potential. Denial helps no one. What we need are laws that will protect the many of us who could lose everything otherwise.

highplainsdem

(60,939 posts)
12. I'm sorry that AI's coding ability is becoming such a threat to developers' jobs. But part of the reason
Wed Feb 11, 2026, 08:42 PM
6 hrs ago

it's a threat is that employers are lowering their standards for security, and I'm expecting some catastrophic security failures, which I hope will impact the execs who made the dumb choice to trust AI more than they impact people who weren't involved in that foolish decision.

Coding is the one area in our society where AI is being adopted most rapidly, and where failures are most dangerous. There have been lots of warnings about AI code not being secure, but apparently they're being ignored by a lot of people. From the OP:

A separate survey from Sonar found nearly half of devs fail to check AI-generated code, placing their organization at huge risk.


I hope we have serious enough security failures to end this madness, without it hurting too many people.

As for "accepting reality" - reality is that the backlash against AI in education and the arts seems to be building, and will not go away. I'm still hoping to see AI writing, visual art and music becoming completely socially unacceptable, both because they're based on IP theft and because they're basically fraud.

I hope to see the insane experiment with AI in education ended before more students are deprived of learning and more teachers quit.

I still hope to see some AI bros in prison for IP theft.

Generative AI is a wrecking ball for our society and our natural environment, however convenient a shortcut it seems to be for coding, and it must be stopped. It's a technology almost entirely built on theft, and the techies I knew years ago when I first got online, and especially the comoderator I had who was a spokesman for an international organization focused on social responsibility, would have been appalled by what's happening.

And if you went back in time and told the science fiction writers at a Worldcon about the hype for this badly flawed tech, they'd laugh...until they learned it was based on IP theft.

No one in their right mind wants the future the AI bros and this flawed technology are pushing us toward.

hunter

(40,496 posts)
18. The "junior level" developers who prompt and review this code are just fall guys.
Thu Feb 12, 2026, 12:01 AM
3 hrs ago

When this code fails catastrophically they'll be the ones blamed, not the idiots who thought this AI stuff was a good idea and invested heavily in it.

Potential computer science majors may be realizing they are setting themselves up for careers in the digital sweatshops -- sweatshops where they'll risk getting fired if they dare complain about the fancy new sewing machines the bosses bet the futures of their companies on.

I've had two jobs in the computer industry. One was for a mainframe manufacturer that was sinking, where almost everyone had already cast off in the lifeboats, and one was writing 1802 assembly code, something that literally left me scarred for life because I was young and foolish and didn't have the good sense not to mix my personal life with whatever that income-producing activity was. It was certainly not professional. That was back when Pascal was the hot new teaching language. I still have my silver User Manual and Report. That'll tell you how long ago that was. I first signed onto the internet in the late 'seventies, before there was a World Wide Web.

For the sake of my sanity I'm glad I didn't go down that path. It turned out I was much, much happier studying evolutionary biology. Nevertheless it's a rare day I'm not up to some mischief on my computers.

It's appalling that people in our nation have to fear for their futures as they grow older -- not just their comfort and dignity, but for their very survival.

FascismIsDeath

(106 posts)
19. Pascal was the first language I really learned.
Thu Feb 12, 2026, 01:57 AM
1 hr ago

That was in the late 90s... I had messed with some BASIC before that but that was the first one I learned properly. I was 17. Of course I learned many other languages and variations of those languages since then.

But yea, you are an OG if you were working with mainframes and assembler.

I like programming but I don't love it. The things I love doing will never make me a living, lol. But I like it. I don't mind doing it for a living and I'm good at it... at least my niche of it, which is primarily web applications.
