General Discussion
Reporter doing story on Epstein & Silicon Valley CEOs was told by an SV comms rep that _Grok_ said he was lying
Bluesky thread from the NYT tech reporter who did the story, followed by some of the replies about idiots trusting AI.
While reporting this, I had something happen that's never happened. A comms rep for one of the co's disputed my reporting and said what I was telling them was untrue because it was not in Grok, xAI's chatbot.
— Ryan Mac (@rmac.bsky.social) 2026-02-05T17:38:53.472Z
I was looking directly at the files. And this person was using AI to challenge the truth.
The comms rep just had no ability to comprehend that AI takes in the information that already exists in the world and repackages it. Our reporting had yet to be published and therefore wasn't out in the world and hadn't been ingested by any chatbot.
— Ryan Mac (@rmac.bsky.social) 2026-02-05T17:42:50.194Z
And they just... believed the chatbot.
Multiply that by 50 million US voters and see where that gets us.
— Ralph (@ralphhhenson.bsky.social) 2026-02-05T17:45:17.985Z
Yiiikes…
— Phillip Vander Klay (@vanderklay.bsky.social) 2026-02-05T17:47:54.927Z
I thought this would be an emerging issue in some pockets of the population, like high schoolers or people generally unused to doing any kind of research. I did not expect it to already be an issue for white-collar, information-based professionals.
it's because Silicon Valley has done its best to frame these bots as search engines, instead of what they actually are, which is larger-scale versions of the shitty autoprediction tool in your phone that constantly predicts everything incorrectly.
— five pennies in a trenchcoat (@snickettes.bsky.social) 2026-02-05T19:26:38.212Z
I can't find the skeet now, but (if I recall correctly) a few months ago @tressiemcphd.bsky.social mentioned some guy was insisting she was married because that's what Google AI Overview was telling him and was even sending her screenshots as "proof," as if she wouldn't know her own marital status.
— Rebecca Kennison (@rrkennison.bsky.social) 2026-02-05T19:39:03.948Z
Yes!
— Tressie McMillan Cottom (@tressiemcphd.bsky.social) 2026-02-05T19:44:18.050Z
AI makes people stupid. By design.
— Kelly Barnhill (@kellybarnhill.bsky.social) 2026-02-05T19:42:38.069Z
mwmisses4289
(3,557 posts)

I almost want to go onto Bluesky to ask him: Dude, where have you been for the last few months? Did you miss the major story about young lawyers around the country being ripped by judges for using AI to write their error-filled briefs? Or the companies whose teams have used AI for preliminary reports and have had to go back and correct the outright errors it made?
Is AI becoming the new "if it's on the internet it must be true" thing?
highplainsdem
(60,812 posts)

about people being gullible, where chatbots are concerned, from teachers who saw this in their students. And I have kept up and often posted about adults being gullible as well.
I still felt surprised that anyone in Silicon Valley would be quite so naive. Obviously, Ryan Mac had been surprised, too, even though as a tech journalist he would've known about all the news stories on lawyers (not always young ones) and other educated adults being foolish about AI.
Hell, I still feel some disbelief when DUers post AI Overviews from Google, or want to tell everyone what ChatGPT said, or what Gemini said (and we do have some DUers who apparently consider Grok a great and reliable source of information).
mwmisses4289
(3,557 posts)

there were people who would believe anything posted to the internet was 100% true. It was a commercial for an insurance company, of all things. I remember turning to my husband with a rather shocked look on my face and asking him if that was true. After he laughed at my shocked expression, he told me yeah, it was true.
I guess most of us are too trusting.
highplainsdem
(60,812 posts)

students refused to believe an encyclopedia article because ChatGPT disagreed.
That was appalling, but at least it was a schoolkid, ChatGPT was fairly new, and another teacher posted about the same time that almost none of the students in his classes were aware ChatGPT could be wrong.
That reporter was dealing with a comms rep for a Silicon Valley company.
There's no way that comms rep could have been unaware of the standard warnings from AI companies that genAI makes mistakes and results must always be checked.
But they still trusted the chatbot.
FHRRK
(1,404 posts)

Last month I got an AI-generated story on ASU recruiting a highly rated player. (Good so far.)
The last paragraph of the three-paragraph article stated that the recruiting rankings might go way up because other highly rated recruits might commit. The issue was that the potential recruits listed had played at ASU in the late nineties and early 2000s, went on to play in the NFL, and are currently in their 40s and retired.
highplainsdem
(60,812 posts)

FHRRK
(1,404 posts)

It popped up on a Google feed.
eppur_se_muova
(41,299 posts)

Because "perpetual beta" is about the only kind of software that tech bros are pushing anymore.
