
highplainsdem

(61,705 posts)
Mon Mar 16, 2026, 10:03 AM 2 hrs ago

Social news website Hacker News thread asked users, "How is AI-assisted coding going for you professionally?"

and the responses are interesting. The thread has 329 points (recs) and 529 comments (replies).

If you're not familiar with Hacker News, see https://en.wikipedia.org/wiki/Hacker_News - the article on HN starts

Hacker News (HN) is an American social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as "anything that gratifies one's intellectual curiosity."[1]


This is the thread asking about professional experiences with AI-assisted coding:

https://news.ycombinator.com/item?id=47388646

Some of the comments:

I noticed what previously would take 30 mins now takes a week. For example, we had a performance issue with a DB; previously I'd just create a GSI (global secondary index), now there is a 37-page document with explanation, mitigation, planning, steps, reviews, risks, deployment plan, obstacles and a bunch of comments, but sure, it looks cool and very professional.
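For context, the "just create a GSI" fix this commenter describes is normally a single API call on an Amazon DynamoDB table. Here is a minimal sketch of the request payload you would pass to boto3's `update_table` call; the table name, attribute name, and index name are hypothetical, and a provisioned-capacity table would also need a `ProvisionedThroughput` entry:

```python
# Sketch of the one-step fix the commenter contrasts with a 37-page plan:
# adding a global secondary index (GSI) to a DynamoDB table.
# With credentials configured, you would run:
#   boto3.client("dynamodb").update_table(**gsi_request)

gsi_request = {
    "TableName": "orders",  # hypothetical table
    "AttributeDefinitions": [
        # The attribute the new index will key on
        {"AttributeName": "customer_id", "AttributeType": "S"},
    ],
    "GlobalSecondaryIndexUpdates": [
        {
            "Create": {
                "IndexName": "customer_id-index",
                "KeySchema": [
                    {"AttributeName": "customer_id", "KeyType": "HASH"},
                ],
                # Copy all item attributes into the index
                "Projection": {"ProjectionType": "ALL"},
            }
        }
    ],
}
```

The contrast is the commenter's whole point: one well-understood call versus a week of process documents.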

It makes my work suck, sadly. Team dynamics also contributes to that, admittedly.
Last year I was working on implementing a pretty big feature in our codebase; it required a lot of focus to get the business logic right, and at the same time you had to be very creative to make this feasible to run without hogging too many resources.
When I was nearly done and working on catching bugs, team members grew tired of waiting and started taking my code from x weeks ago (I have no idea why), feeding it to Claude or whatever, and then came back with a solution. So instead of me finishing my code I had to go through their version of my code.
Each one of the proposals had one or more business requirements wrong and several huge bugs. Not one was any closer to a solution than mine was.

At work, the devs up the chain now do everything with AI – not just coding – then task me with cleaning it up. It is painful and time consuming, the code base is a mess. In one case I had to merge a feature from one team into the main code base, but the feature was AI coded so it did not obey the API design of the main project. It also included a ton of stuff you don’t need in the first pass - a ton of error checking and hand-rolled parsing, etc, that I had to spend over a week unrolling so that I could trim it down and redesign it to work in the main codebase. It was a slog, and it also made me look bad because it took me forever compared to the team who originally churned it out almost instantly. AI tools are not good at this kind of design deconflicting task, so while it’s easy to get the initial concept out the gate almost instantly, you can’t just magically fit it into the bigger codebase without facing the technical debt you’ve generated.

There is an alternative way to make the necessary point here: let it go through with comments to the effect that you cannot attest to the quality or efficacy of the code, and let the organization suffer the consequences of this foray into LLM usage. If they can't use these tools responsibly and are unwilling to listen to the people who can, then they deserve to hit the inevitable quality wall, where endless passes through the AI still can't deliver working software and their token budget goes through the ceiling attempting to make it work.
(This comment got a reply saying, "I think you're falling victim to the just-world fallacy." )

I don't use it.
I know my mind fairly well, and I know my style of laziness will result in atrophying skills. Better not to risk it.
One of my co-workers already admitted as much to me around six months ago, saying he was trying not to use AI for any code generation anymore, but that it was really difficult to stop because it was so easy to reach for. It sounded kind of like a drug addiction to me. And I had the impression he only felt comfortable admitting it to me because I don't make it a secret that I don't use it.
Another co-worker did stop using it to generate code because (if I'm remembering right) he can tell what it generates is messy for long-term maintenance, even if it does work and even though he's new to React. He still uses it often for asking questions.
A third (this one a junior) seemed to get dumber over the past year, opening merge requests that didn't solve the problem. In a couple of these cases my manager mentioned either seeing him use AI while they were pairing (and it looked good enough that the problems just slipped by) or seeing hints in the merge request of how AI names or structures code.
Social news website Hacker News thread asked users, "How is AI-assisted coding going for you professionally?" (Original Post) highplainsdem 2 hrs ago OP
Yep. Hubby's company pressures them to use it, tanyev 2 hrs ago #1
Sorry to hear that. It's been a problem with genAI all along, no matter what AI tool is used for which highplainsdem 1 hr ago #3
Huge fan of Hacker News here LearnedHand 2 hrs ago #2
You're welcome! And I agree about the UI at Hacker News. highplainsdem 20 min ago #4

tanyev

(49,165 posts)
1. Yep. Hubby's company pressures them to use it,
Mon Mar 16, 2026, 10:13 AM
2 hrs ago

and he spends more time correcting it than if he’d done the coding himself.

highplainsdem

(61,705 posts)
3. Sorry to hear that. It's been a problem with genAI all along, no matter what AI tool is used for which
Mon Mar 16, 2026, 11:38 AM
1 hr ago

reason, but I think a lot of people hoped this flawed tech with its hallucinations would somehow become perfect for coding. And I have seen people say they found it a great help. But so far, judging by what I've read, it also delivers a lot of headaches and code that isn't secure.

Flawed code that the companies using AI for coding either don't want to talk about, or want to blame on humans even though they know AI models can hallucinate - make mistakes - at any time, even though it's accurate at other times. You can never trust it not to introduce errors.

So humans become stressed fact-checkers, fixers...and scapegoats for AI.

LearnedHand

(5,381 posts)
2. Huge fan of Hacker News here
Mon Mar 16, 2026, 10:23 AM
2 hrs ago

I don’t typically read the comments because the user interface is so messy, so thanks for including these interesting ones.
