Social news website Hacker News has a thread asking users, "How is AI-assisted coding going for you professionally?"
and the responses are interesting. The thread has 329 points (recs) and 529 comments (replies).
If you're not familiar with Hacker News, see https://en.wikipedia.org/wiki/Hacker_News
This is the thread asking about professional experiences with AI-assisted coding:
https://news.ycombinator.com/item?id=47388646
Some of the comments:
I noticed that what previously would take 30 mins now takes a week. For example, we had a performance issue with a DB; previously I'd just create a GSI (global secondary index). Now there is a 37-page document with explanation, mitigation, planning, steps, reviews, risks, deployment plan, obstacles and a bunch of comments, but sure, it looks cool and very professional.
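(For readers who don't know what "just create a GSI" means: on Amazon DynamoDB, adding a global secondary index is a single API call. A minimal sketch, assuming a DynamoDB table, with hypothetical table and attribute names; this only builds the request payload for boto3's `update_table` rather than calling AWS:)

```python
# Build the UpdateTable request that adds a GSI to an existing
# DynamoDB table. Names ("orders", "customer_id", "created_at")
# are hypothetical. Assumes on-demand billing, so no
# ProvisionedThroughput block is needed for the new index.
payload = {
    "TableName": "orders",
    "AttributeDefinitions": [
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "created_at", "AttributeType": "S"},
    ],
    "GlobalSecondaryIndexUpdates": [
        {
            "Create": {
                "IndexName": "customer_id-created_at-index",
                "KeySchema": [
                    {"AttributeName": "customer_id", "KeyType": "HASH"},
                    {"AttributeName": "created_at", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
            }
        }
    ],
}

# With AWS credentials configured, the actual call would be:
#   import boto3
#   boto3.client("dynamodb").update_table(**payload)
print(payload["GlobalSecondaryIndexUpdates"][0]["Create"]["IndexName"])
```

That one call is the commenter's point: the fix itself is minutes of work, and the index backfills in the background.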
It makes my work suck, sadly. Team dynamics also contributes to that, admittedly.
Last year I was working on implementing a pretty big feature in our codebase. It required a lot of focus to get the business logic right, and at the same time you had to be very creative to make it feasible to run without hogging too many resources.
When I was nearly done and working on catching bugs, team members grew tired of waiting and started taking my code from x weeks ago (I have no idea why), feeding it to Claude or whatever, and then came back with a solution. So instead of finishing my code, I had to go through their versions of my code.
Each one of the proposals had one or more business requirements wrong and several huge bugs. Not one was any closer to a solution than mine was.
At work, the devs up the chain now do everything with AI, not just coding, then task me with cleaning it up. It is painful and time consuming, and the code base is a mess. In one case I had to merge a feature from one team into the main code base, but the feature was AI coded, so it did not obey the API design of the main project. It also included a ton of stuff you don't need in a first pass - a ton of error checking and hand-rolled parsing, etc. - that I had to spend over a week unrolling so that I could trim it down and redesign it to work in the main codebase. It was a slog, and it also made me look bad because it took me forever compared to the team who originally churned it out almost instantly. AI tools are not good at this kind of design-deconflicting task, so while it's easy to get the initial concept out the gate almost instantly, you can't just magically fit it into the bigger codebase without facing the technical debt you've generated.
There is an alternative way to make the necessary point here: let it go through with comments to the effect that you cannot attest to the quality or efficacy of the code, and let the organization suffer the consequences of this foray into LLM usage. If they can't use these tools responsibly and are unwilling to listen to the people who can, then they deserve to hit the inevitable quality wall, where endless passes through the AI still can't deliver working software and their token budget goes through the ceiling attempting to make it work.
(This comment got a reply saying, "I think you're falling victim to the just-world fallacy." )
I don't use it.
I know my mind fairly well, and I know my style of laziness will result in atrophying skills. Better not to risk it.
One of my co-workers already admitted as much to me around six months ago, and that he was trying not to use AI for any code generation anymore, but it was really difficult to stop because it was so easy to reach for. Sounded kind of like a drug addiction to me. And I had the impression he only felt comfortable admitting it to me because I don't make it a secret that I don't use it.
Another co-worker did stop using it to generate code because (if I'm remembering right) he can tell what it generates is messy for long-term maintenance, even if it does work and even though he's new to React. He still uses it often for asking questions.
A third (this one a junior) seemed to get dumber over the past year, opening merge requests that didn't solve the problem. In a couple of these cases my manager mentioned either seeing him use AI while they were pairing (and it looked good enough that the problems just slipped by) or seeing hints of AI in how the merge request names or structures the code.
tanyev (49,165 posts)
and he spends more time correcting it than if he'd done the coding himself.
highplainsdem (61,705 posts)
reason, but I think a lot of people hoped this flawed tech with its hallucinations would somehow become perfect for coding. And I have seen people say they found it a great help. But so far, judging by what I've read, it also delivers a lot of headaches and code that isn't secure.
Flawed code that the companies using AI for coding either don't want to talk about, or want to blame on humans even though they know AI models can hallucinate - make mistakes - at any time, even though it's accurate at other times. You can never trust it not to introduce errors.
So humans become stressed fact-checkers, fixers...and scapegoats for AI.
LearnedHand (5,381 posts)
I don't typically read the comments because the user interface is so messy, so thanks for including these interesting ones.