
AZJonnie

(3,768 posts)
Mon Apr 13, 2026, 11:38 PM

As someone who must use AI daily for coding for work, when people try to convince me that AI can't "think"?

I have to shake my head a bit

Because if you use it regularly, you know AI is absolutely capable of fairly complex logical thinking. And honestly, Gemini figuring out what I was asking it below was *child's play* compared to what I've seen it do in other situations. This was a very easy question.

Also, this was just Gemini 3.1 Flash Preview, not even the 3.1 Pro Preview ("Flash" is the cheaper, less deep-thinking version vs. "Pro").

Check out my prompt below, and its reply. Here, I'm describing how our company's yearly retainer "projects" (which have a fixed number of hours for the year that we can bill to the client) are divvied up into specific tasks (like 'coding' vs 'planning' or the like), and then each task has an 'estimated hours' figure assigned for each of the next 12 months. I told it I want to build a graph for every project that looks at how many hours have been billed as 'worked' by employees across all tasks in the retainer, plus another metric that would make that particular metric useful to managers. But I did not tell the AI what that metric was; I challenged it to figure it out.

Me:

Now I'm going to challenge you a little bit to figure something out. A retainer PROJECT is
generally 1 year, with some number of quoted hours. These projects generally have a collection of 1-4
purpose-driven tasks (typically there are 2, and they are "Implementation and Testing" and "Planning and
Communication", but that doesn't HAVE to be the case, but what we DO always do is estimate all the quoted hours out
into task-based estimate records over the course of the year), with the same start and end dates as the project.
Then, we have monthly estimates per task, which LOOSELY will be about 75% of hours into I&T, and 25% into P&C, and
the total quoted hours are split across the 12 months, with small adjustments based on 'work days per month' as I
mentioned in my very first post. Whatever the math is there should be a black box to you, the important point is
we'll have estimates for each month of each task under each retainer project. Therefore, looking purely at % of
quoted hours used (as of today) is basically meaningless. There's one more, very basic metric we need to make "how many
hours have we burned" into a MEANINGFUL statistic. What do you imagine that metric might be?


AI Reply:

✦ Based on your description of how retainers are structured over a 1-year lifespan with monthly-level granularity, the
"missing link" to make those percentages meaningful is Time Elapsed (or "Project Velocity").

If we are 50% through the quoted hours, but only 20% through the project's duration (e.g., it's March), we are burning
through the budget way too fast. Conversely, if it's November and we've only burned 50%, we are significantly
under-utilizing the retainer.

The metric we need is Percentage of Time Elapsed (or Elapsed Work Days).

Why this is the "Golden Metric":
By comparing % Hours Burned vs. % Time Elapsed, we get a real-time "Health Score":
* Hours % > Time %: We are "running hot" (over-burning).
* Hours % < Time %: We are "under budget" (under-burning).
* Hours % ≈ Time %: We are perfectly on track.

How to calculate it accurately for your system:
Since you emphasized that "work days" are the source of truth rather than calendar days, the most accurate way to
calculate this would be:
1. Total Work Days in Project: Sum of all work days (minus holidays/weekends) from Project Start Date to Project End
Date.
2. Work Days Elapsed: Sum of all work days from Project Start Date to Today.
3. % Time Elapsed: (Work Days Elapsed / Total Work Days in Project) * 100.
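The three-step calculation above (plus the "Health Score" comparison) can be sketched in Python. This is a minimal illustration, not the poster's actual code: `work_days` counts only weekdays (a real system would also subtract holidays, as the reply notes), and the ±5-point "on track" tolerance band is an invented assumption.

```python
from datetime import date, timedelta

def work_days(start: date, end: date) -> int:
    """Count Mon-Fri days in [start, end); holidays are ignored for simplicity."""
    days = 0
    d = start
    while d < end:
        if d.weekday() < 5:  # weekday() 0-4 = Mon-Fri
            days += 1
        d += timedelta(days=1)
    return days

def burn_health(project_start: date, project_end: date, today: date,
                hours_burned: float, quoted_hours: float):
    """Return (% hours burned, % time elapsed, status label)."""
    pct_hours = 100.0 * hours_burned / quoted_hours
    total_wd = work_days(project_start, project_end)
    elapsed_wd = work_days(project_start, today)
    pct_time = 100.0 * elapsed_wd / total_wd
    # Compare burn vs. elapsed time; the 5-point band is an arbitrary tolerance.
    if pct_hours > pct_time + 5:
        status = "running hot"        # over-burning
    elif pct_hours < pct_time - 5:
        status = "under-burning"      # under-utilizing the retainer
    else:
        status = "on track"
    return pct_hours, pct_time, status
```

For example, a 1-year retainer that has burned 900 of 1,000 quoted hours by early February would come back as "running hot", since ~90% of the hours are gone but only a small fraction of the work days have elapsed.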

My Proposal for the UI:
Imagine the Project Card having two markers on that progress bar (or a second bar beneath it):
1. The Solid Bar showing % Hours Burned.
2. A Vertical Needle or Ghost Bar showing % Time Elapsed.

If the needle is ahead of the bar, the manager breathes easy. If the bar has passed the needle, it's an immediate
visual red flag.

Is Percentage of Work Days Elapsed the metric you were thinking of?


Now ... I invite someone to explain to me, in all seriousness, how what the AI just did here is manifestly different from the general concept of "thinking" as a human would do it.

I don't think that is "mindless prediction" based on "word patterns". That's not a "glorified search engine that gives fancy-sounding answers". That's not "hallucinations", nor did it require the consumption of petabytes of "copyrighted materials" to come up with that answer.

By any meaningful definition, AI understands what you're saying, and it's fucking CRAZY how good it is at doing so, and then at problem-solving based on what you prompted it to think about.

AI is going to be humanity's downfall, but I'll tell you right now, it won't be because AI "sucks" or is "fake/not real intelligence/not real thinking". It's going to fuck us by the exact opposite means. It's going to outdo us in "intelligence", and by so doing, make BILLIONS of people who make a living by their "ability to reason/think" basically obsolete. "Smarts/thinking" will become low-value skills, due to the TERRIFYING and completely UNREGULATED power of AI, which becomes better at "reasoning" every damn day.

So, after this, I said basically "Exactly! Now, implement that in the code (I like your vertical needle idea); make the little mini-graph on each of the Project cards on the Project Dashboard." Two minutes later the code was written, and the graphs showing our project burn-down rates over the year were on the web page in my dev environment.



The only way you can keep a job programming computers these days is leveraging AI, because it just did in 2 minutes what it would take me 2 hours to implement myself. At LEAST.

AI is scary because of how *skilled* it is, not how stupid/mindless it is. "We" underestimate it at our own peril.

Disaffected

(6,464 posts)
3. I have had similar experiences with AI
Tue Apr 14, 2026, 12:14 AM

albeit in a more limited way (writing apps in HTML based on descriptions of what I would like the app to do). It is uncannily like having a very competent programmer working for you. Methinks "computer programming" has changed/is changing forever and, as you say, so have many other "thinking" tasks, despite what the naysayers seem to believe.

I'm kinda mystified by claims from software development organizations that they have received little or no net benefit from AI.

LudwigPastorius

(14,806 posts)
4. Due to the hype around Large Language Models,...
Tue Apr 14, 2026, 01:00 AM

a lot of people conflate them with the entirety of artificial intelligence, but there is a lot more to it than that.

Also, I get a kick out of people who unequivocally state that AI can never be conscious...as if we understand what our own consciousness is and what structure it arises from.

It has become obvious to me that there will be no slowing of AI research. There are too many governments, companies, and power-hungry billionaires competing in the race toward AGI and ASI to prevent catastrophe.

When I think about what's coming, I am glad of my age (for the first time). That I am not likely to live beyond the next 10 to 20 years seems like a blessing.

WhiskeyGrinder

(27,065 posts)
6. Posts like these tell me as much about what people think about thinking as they do about AI.
Tue Apr 14, 2026, 08:05 AM
because it just did in 2 minutes what it would take me 2 hours to implement myself
Two hours to come up with the answer? Or two hours to code these formulas?

AZJonnie

(3,768 posts)
9. Code for the queries, converting the data into graphs, and getting the graphs on the page looking how I want
Tue Apr 14, 2026, 01:00 PM

I already knew the answer

highplainsdem

(62,501 posts)
7. Yes, if you cherry-pick responses, you can convince yourself genAI models are actually thinking. Especially
Tue Apr 14, 2026, 08:22 AM

if you really, really enjoy chatting with them.

This effect on humans is called the ELIZA effect, for a chatbot developed 60 years ago:

https://en.wikipedia.org/wiki/ELIZA_effect

I don't have to spend time coding to know that many developers and software engineers consider genAI models stupid much of the time, and maddeningly inconsistent. I've seen plenty of messages from those coders online, especially on Hacker News. And that goes for every genAI model, including Claude.

I've seen entirely too many examples of chatbots offering one wrong answer after another, very confidently, and apologizing for each error and then offering another wrong answer, just as confidently.

I've also read too many stories about people being led into all sorts of mistakes after they decided the mindless software they were chatting with was really intelligent.

Which is why I've continued posting news stories and studies about how flawed and harmful this tech is.

Because the AI companies stole the world's intellectual property for training data, algorithms allow for almost instant responses copying the structure of language (and code) well enough to seem persuasive.

But the warnings AI companies always include - that their AI models make mistakes and users should always check results - aren't there because those companies are worrying too much and failing to realize their bots are actually thinking, as you say you know from your personal experience. The warnings are there because those companies know damn well that their bots aren't truly intelligent and can hallucinate at any time, in any way, and they don't want to be held liable.

People often fall for the hype the AI companies peddle when they aren't trying to set up legal firewalls to protect them from responsibility for their products' flaws. They also fall for the bots' sycophancy.

AZJonnie

(3,768 posts)
11. I used to see this all the time, but with the latest models of Claude and Gemini
Tue Apr 14, 2026, 01:14 PM

They are both doing this a LOT less lately with Sonnet 4.6 and Gemini 3.1.

I've seen entirely too many examples of chatbots offering one wrong answer after another, very confidently, and apologizing for each error and then offering another wrong answer, just as confidently.


Part of my point here is they're getting better at figuring stuff out, i.e. not repeatedly giving wrong answers, promising to do better next time, then repeating the same mistake again and again. I know exactly what you're talking about with that. Less than a year ago I was still seeing it a lot, esp. on cheaper/free models. But they are moving forward from that, very rapidly with the latest, paid models.

And I'm sorry, but the question I posed required the model to do something that is not demonstrably different from "thinking". It couldn't have just derived that answer from the mess of words I posted (some of it was even redundant and unclear once I read it over again, but it was not misled by that) without understanding what I was asking. These things are thinking more and more like humans with every passing iteration. You can believe me or not, but I'm telling you, I use the shit every day. Yes, it still makes mistakes, but overall, it's getting freaking smart as hell. It's scary.

hunter

(40,752 posts)
8. Oh my! Exemplifying the horrors of Imitation Intelligence in defense of it.
Tue Apr 14, 2026, 12:49 PM

I remember when Microsoft PowerPoint was introduced and large screen televisions or projection screens became an essential fixture in conference rooms, classrooms, and meeting halls.

Any fool could make a PowerPoint presentation and many did.

Welcome to hell.

I don't reject the use of this technology entirely; a few months ago I used it to read some Polish websites. But people should be aware of the social and environmental costs of it.

A lot of problems currently associated with this technology will go away when the subsidies stop. "Synthetic Content" YouTube videos, for example, are no longer going to be profitable when producers have to pay actual dollar cost plus profit for the service. The free ride won't last forever.

Oneironaut

(6,306 posts)
10. AI does not "think." Sorry. The structure of it is composed of "neurons," but it does not think logically.
Tue Apr 14, 2026, 01:00 PM

“AI” is a buzzword now; the subject here is really LLMs. AI is not reasoning like a brain when it spits out content for you. It is trained on certain inputs to produce outputs, like an overly complex search engine. That’s all it is.

Impressive? Definitely! It is not reasoning like a brain, though.

gulliver

(14,024 posts)
12. The bright side could be pretty bright
Tue Apr 14, 2026, 01:41 PM

I've coded with AI too, and it's pretty breathtaking. Thanks for putting together this example. I have my IDE linked in real time to an AI via plug-in. After the AI generates the whole program, I can get in and edit the code manually. As I type, it autocompletes practically everything...whole new sections of logic pop up as proposals after I type about four characters. It "reads my mind." Coding is ridiculously easier. And the code's better too.

I think the bright side of AI is something we should be planning, and I don't think we're doing it very well yet. Our leaders don't seem to get it. I'm not sure they even get plain old automation and plain old software and how those affect us all. (Anyone who really understands IT or business and can really lead is probably not going to be too attracted to public service. We've made the job practically untenable.)

I think we're going to need to find a way to tax AI and ensure that it and all automation benefit all people. If we do that, things can work better than they're working now. AI is literally more intelligence in the world.

I think we may already be seeing improvements in the overall wisdom of how people behave. Call me superstitious, but I think a lot of people who needed mentoring (and who really doesn't?) are getting it now. That's usually a good thing, even though sometimes it's a terrible thing.

