General Discussion
Some reality about Matt Shumer's "Something Big Is Happening" AI hypefest some DUers are overreacting to
From Gary Marcus, always a voice of sanity in response to pro-AI hype and lies:
https://garymarcus.substack.com/p/about-that-matt-shumer-post-that
Something big is allegedly happening
Gary Marcus
Feb 11, 2026
All morning people have been asking me about a blog post by Matt Shumer that has gone viral, with nearly 50 million views on X.
It's a masterpiece of hype, written in the style of the old direct marketing campaigns, with bold-faced callouts like "I know this is real because it happened to me first" and "I am no longer needed for the actual technical work of my job." It's chock full of singularity vibes:
-snip-
As I told a journalist who asked me about the post, I wouldn't take it so seriously:
-snip-
Gary then goes into detail on why Shumer's hysterical post shouldn't be taken seriously. AI models can write code, but all too often it's flawed code that poses a security risk.
Gary links to this article:
https://www.itpro.com/software/development/ai-generated-code-is-fast-becoming-the-biggest-enterprise-security-risk-as-teams-struggle-with-the-illusion-of-correctness
Security teams are scrambling to catch AI-generated flaws that appear correct before disaster strikes
By Emma Woollacott
published 5 February 2026
AI has overtaken all other factors in reshaping security priorities, with teams now forced to deal with AI-generated code that appears correct, professional, and production-ready but that quietly introduces security risks.
-snip-
Aikido found that AI-generated code is now the cause of one-in-five breaches, with 69% of security leaders, engineers, and developers on both sides of the Atlantic having found serious vulnerabilities.
These risks are further exacerbated by the fact that many developers are placing too much faith in the technology when coding. A separate survey from Sonar found nearly half of devs fail to check AI-generated code, placing their organization at huge risk.
Emphasis added.
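The "illusion of correctness" the article describes can be made concrete with a small, hypothetical sketch (the table, function names, and injection payload below are all invented for the demo). A very common pattern in AI-suggested code is building SQL by string interpolation, which reads cleanly and passes a quick happy-path test but is injectable:

```python
# Hypothetical illustration of code that "appears correct, professional,
# and production-ready" but quietly introduces a security hole.
import sqlite3

def lookup_user_unsafe(conn, username):
    # Looks clean, but interpolates untrusted input straight into SQL.
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchall()

def lookup_user_safe(conn, username):
    # Parameterized query: the driver treats the value as data, not SQL.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "nobody' OR '1'='1"                   # classic injection string
print(len(lookup_user_unsafe(conn, payload)))   # every row leaks: 2
print(len(lookup_user_safe(conn, payload)))     # no match: 0
```

Both functions return identical results for ordinary inputs, which is exactly why a reviewer who doesn't check AI output would wave the unsafe one through.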
Some posts from Bluesky about Shumer below...and also see reply 3 for background on other hype Shumer has been ridiculed for, and for the info that the AI company he's CEO of offers "cutting edge" AI tools like a "Team Member Praise Generator" and an "AI Sympathy Message Generator by HyperWrite for Heartfelt Cards".
Surely, the author of the "AI is an existential threat to the economy and your wellbeing, and the only solution is to use more AI immediately, please" has no financial incentive in making people believe that. Now to take a big sip of my coffee and check out his bio…
— Luis Paez-Pumar (@lpp.bsky.social) 2026-02-11T13:47:42.108Z
I shouldn't even bother engaging with this idiotic article but I always find it funny when AI guys are like "you better become an early adopter or you'll be left behind!" If the tech is as good as they claim, and can do what they claim, any idiot should be able to use it with no practice.
— Luis Paez-Pumar (@lpp.bsky.social) 2026-02-11T13:51:03.109Z
But of course, the goal isn't to change the world or whatever, it's just to take everyone's money. I don't think that writing this alarmist crap with the lesson of "buy a Claude subscription or you'll ruin your life" should work, but given the reaction, it just might.
— Luis Paez-Pumar (@lpp.bsky.social) 2026-02-11T13:53:10.821Z
reads like an ai generated article to me
— cavan (@cfos.bsky.social) 2026-02-11T15:59:00.764Z
He admitted it was
— Luis Paez-Pumar (@lpp.bsky.social) 2026-02-11T19:33:23.108Z
Everyone reading "Something Big is Happening" and losing their minds.
— Steve Strickland (@stevestrickland6.bsky.social) 2026-02-11T21:24:31.445Z
In reality Matt Shumer has consistently overhyped AI.
In 2024 he was accused of fraud when researchers were unable to replicate the supposed top performance of a new LLM he released called Reflection 70B.
highplainsdem
(60,939 posts)https://www.pcgamer.com/gaming-industry/tech-investor-declares-ai-games-are-going-to-be-amazing-posts-an-ai-generated-demo-of-a-god-awful-shooter-as-proof/
By Lincoln Carpenter published October 24, 2025
On an almost daily basis, a terminally tech-brained individual insists that AI will determine the future of human creativity if we simply believe hard enough. Today, that soothsayer is Matt Shumer, an investor who offered a demonstration of his vision for AI's videogame development potential, and was quickly blasted for it, because the vision sucks to look at.
Shumer, CEO of HyperWrite, a company offering cutting edge AI tools like "Team Member Praise Generator" and "AI Sympathy Message Generator by HyperWrite for Heartfelt Cards", posted a video on X yesterday with the caption "AI games are going to be amazing." Judging from the boastful "(sound on)," he seemed convinced the video would illustrate the inevitability of our magnificent AI future.
In actuality, what he uploaded was an embarrassingly incoherent AI-generated mockup of imitation shooter gameplay that feels like it could trigger new and exciting forms of psychosis if you watch it long enough. It's a moving kaleidoscope of AI sludge that's only amazing in how clearly it communicates that AI's biggest pushers are operating with a different set of standards.
-snip-
"They're not finished products they're glimpses of what's coming," Shumer said. "Is this an AAA game today? Of course not. Will AI-powered games be incredible in 5 years? Definitely."
-snip-
Metaphorical
(2,606 posts)I would trust his word FAR more than I would Matt Shumer, who's a paid shill.
Yes, automation (by which I include various kinds of AIs, as well as software as a service and a great deal of what would be considered algorithmic software) is eroding what jobs could at least partially be replaced, but here's some of the reality:
* For the most part, consumers are not adopting AI at noticeable levels, and in many cases are actively moving away from it, seeking out alternatives.
* OpenAI in particular will never be profitable. Not now, not in ten years, not in fifty (assuming it even survives to ten years old). Investors are hitting pause, and a lot of the money that's been committed is now being held back, even with the threat of contract breaches.
* Vibe coding hit its peak about a year ago with the introduction of the agentic tech. Now most of that code is being ripped out because it is unmanageable, has egregious bugs, and is a massive security risk (I work on tech that's intended to ground AIs, but even given that, I'm dubious about CodeGen beyond creating relatively simple frameworks).
* AI is being adopted in the coding field as well as asset generation, but those tend to be low hanging fruit.
* AI does not do appreciably better in detecting cancers or other diseases than your average layman, and usually fares worse than doctors do.
* Some forms of AI (not generative) are being used for biochemical and genetic research. This is not what the GenAI folk are peddling.
* Many businesses that made a splash by firing their development teams to boost their stock prices have quietly been trying to rehire them, not always successfully.
* The regulatory world is finally catching up to the Tech Bros, and questions are being raised around the circular financing that looks an awful lot like Lehman Brothers, et al, in 2008.
* Most of the big tech stocks have seen their share prices drop significantly in the last six months. I don't think we're going to have a panic crash, but I do see the air leaking out of the balloon.
highplainsdem
(60,939 posts)case, but especially after reading more about Shumer (see reply 3 for more info about him).
FascismIsDeath
(106 posts)I read the Marcus article and yea, he is exaggerating. You aren't going to put together sound architecture and ensure secure code for large software systems by prompting some things just "letting it rip".
But the leaps ChatGPT and some other AI platforms have taken have me worried I might end up having to find a new career before I'm eligible for retirement. And I have no clue what that career would even be in terms of making the money I make at my senior experience level.
I'm 46 years old, I have like 20 more years to go. Technology tends to evolve at an accelerated pace. I fully believe "they" will be able to replace me with AI sooner rather than later.
As it stands right now, I am still needed. I write my own damn code. And in terms of AI, someone could potentially hand me snippets of AI-generated code and I can go through and clean it up, correct it where it's wrong, add security and put it out there. But the amount of work doing that, compared to me writing everything myself, is substantially smaller.
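That clean-up work can be sketched with a small, hypothetical example (the function names, base directory, and scenario are all invented here): an AI-generated file-serving helper that works for normal inputs but allows path traversal, next to a reviewed version that confines reads to a base directory.

```python
# Hypothetical before/after of reviewing an AI-generated snippet:
# the unreviewed helper works for "notes.txt" but lets "../" escape
# the base directory; the reviewed one rejects anything outside it.
import os

def serve_file_unreviewed(base_dir, requested):
    # Appears production-ready, but "../" in `requested` escapes base_dir.
    return os.path.join(base_dir, requested)

def serve_file_reviewed(base_dir, requested):
    # Resolve the full path and refuse anything outside base_dir.
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, requested))
    if os.path.commonpath([base, target]) != base:
        raise ValueError("path escapes base directory")
    return target
```

The point of the example is the commenter's workflow: the AI draft is a usable starting point, but the security-critical check is exactly the part a human reviewer still has to notice and add.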
The company I work for has promised us that they will not replace us with AI. And they've been a trustworthy company to work for. But if anything were to happen that I were to lose this job for whatever reason, I really don't know what happens next. Maybe some other company would hire me for far less pay to do exactly what I just described.
But then what happens when they refine these models to the point that some junior-level dev with very little knowledge of the inner workings can, as I said, put together sound architecture and ensure secure code for large software systems by prompting some things and just "letting it rip"?
I have every reason in the world to have disdain for what is happening because it's a personal threat to my ability to work and make the money I need to retire with dignity some day. And I know you obviously share that disdain. But don't let that disdain get in the way of accepting reality for what it is.
AI is not as good as the people who are making money from it want to pretend it is.
But it's also much better than AI skeptics want to admit, and has the potential to actually live up to the hype within the next 2 to 5 years.
We don't need denial of AI's unfortunate potential. Denial helps no one. What we need are laws that will protect so many of us that could lose everything otherwise.
(15,206 posts)highplainsdem
(60,939 posts)it's a threat is that employers are lowering their standards for security, and I'm expecting some catastrophic security failures, which I hope will impact the execs who made the dumb choice to trust AI more than they impact people who weren't involved in that foolish decision.
Coding is the one area in our society where AI is being adopted most rapidly, and where failures are most dangerous. There have been lots of warnings about AI code not being secure, but apparently they're being ignored by a lot of people. From the OP:
I hope we have serious enough security failures to end this madness, without it hurting too many people.
As for "accepting reality" - reality is that the backlash against AI in education and the arts seems to be building, and will not go away. I'm still hoping to see AI writing, visual art and music becoming completely socially unacceptable, both because they're based on IP theft and because they're basically fraud.
I hope to see the insane experiment with AI in education ended before more students are deprived of learning and more teachers quit.
I still hope to see some AI bros in prison for IP theft.
Generative AI is a wrecking ball for our society and our natural environment, however convenient a shortcut it seems to be for coding, and it must be stopped. It's a technology almost entirely built on theft, and the techies I knew years ago when I first got online, and especially the comoderator I had who was a spokesman for an international organization focused on social responsibility, would have been appalled by what's happening.
And if you went back in time and told the science fiction writers at a worldcon about the hype for this badly flawed tech, they'd laugh...until they learned it was based on IP theft.
No one in their right mind wants the future the AI bros and this flawed technology are pushing us toward.
hunter
(40,496 posts)When this code fails catastrophically they'll be the ones blamed, not the idiots who thought this AI stuff was a good idea and invested heavily in it.
Potential computer science majors may be realizing they are setting themselves up for careers in the digital sweatshops -- sweatshops where they'll risk getting fired if they dare complain about the fancy new sewing machines the bosses bet the futures of their companies on.
I've had two jobs in the computer industry. One for a mainframe manufacturer that was sinking, where almost everyone had already cast off in the lifeboats, and one job writing 1802 assembly code, something that literally left me scarred for life because I was young and foolish and didn't have the good sense not to mix my personal life with whatever that income producing activity was. It was certainly not professional. That was back when Pascal was the hot new teaching language. I still have my silver user manual and report. That'll tell you how long ago that was. I first signed onto the internet in the late 'seventies, before there was a World Wide Web.
For the sake of my sanity I'm glad I didn't go down that path. It turned out I was much, much happier studying evolutionary biology. Nevertheless it's a rare day I'm not up to some mischief on my computers.
It's appalling that people in our nation have to fear for their futures as they grow older -- not just their comfort and dignity, but for their very survival.
FascismIsDeath
(106 posts)That was in the late 90s... I had messed with some BASIC before that but that was the first one I learned properly. I was 17. Of course I learned many other languages and variations of those languages since then.
But yea, you are an OG if you were working with mainframes and assembler.
I like programming but I don't love it. The things I love doing will never make me a living, lol. But I like it. I don't mind doing it for a living and I'm good at it... at least my niche of it, which is primarily web applications.
