HACKER Q&A
📣 throwaway-ai-qs

Is anyone else sick of AI-splattered code?


Between code reviews and AI-generated rubbish, I've had it. Whether it's people relying on AI to write pull request descriptions (which are crap, by the way) or using it to generate tests... I'm sick of it.

Over the past year, I've been doing a tonne of consulting. In the last three months I've watched at least 8 companies embrace AI-generated slop for coding, testing, and code reviews. Honestly, the best suggestions I've seen are found by linters in CI and spell checkers. Is this what we've come to?

My question for my fellow HNers: is this what the future holds? Is this everywhere? I think I'm finally ready to get off the ride.


  👤 madamelic Accepted Answer ✓
As others have said, LLM generation of code is no excuse for developers not self-reviewing, testing, and understanding their own code.

It's a tool. I still have the expectation of people being thoughtful and 'code craftspeople'.

The only caveat is verbosity of code. It drives me up the wall how these models try to one-shot production code and put a lot of cruft in. I am starting to have the expectation of having to go in and pare down overly ambitious code to reduce complexity.
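For illustration, this is the kind of paring-down that usually means (a hypothetical Python example, not code from the post): the one-shot version is wrapped in defensive checks nobody asked for, and the pared version keeps the same behavior for dict inputs.

```python
# Typical one-shot LLM output (hypothetical): layers of defensive cruft.
def get_user_name_verbose(user: dict) -> str:
    try:
        if user is None:
            raise ValueError("user must not be None")
        name = user.get("name")
        if name is None or not isinstance(name, str):
            return "unknown"
        return name.strip()
    except Exception:
        return "unknown"

# Pared-down version: same result for dict inputs, a third of the code.
def get_user_name(user: dict) -> str:
    name = user.get("name")
    return name.strip() if isinstance(name, str) else "unknown"
```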

I adopted LLM coding fairly early on (GPT3 / GPT3.5) and the difference between then and now is very wild. It's a fast-moving technology still so I don't have the expectation that the model I use today will be the one I use in 3 months.

I have switched modalities and models pretty regularly to try to keep cutting edge and getting the best results. I think people who refuse to leverage LLMs for code generation to some degree are going to be left behind.


👤 nharada
My biggest annoyance is that people aren't transparent about when they use AI, so you're forced to review everything through the lens that it may be human-created and thus deserving of your attention and the benefit of the doubt.

When an AI generates some nonsense I have zero problem changing or deleting it, but if it's human-written I have to be aware that I may be missing context/understanding and also cognizant of the author's feelings if I just re-write the entire thing without their input.

It's a huge amount of work offloaded on me, the reviewer.


👤 barrell
I'm not convinced it's what the future holds for three main reasons:

1. I was a pretty early adopter of LLMs for coding. It got to the point where most of my code was written by an LLM. Eventually this tapered off week by week to the level it is now... which is literally 0. It's more effort to explain a problem to an LLM than it is to just think it through. I can't imagine I'm that special, just a year ahead of the curve.

2. The maintenance burden of code that has no real author is felt months or years after the code is written. Organizations then react a few months or years after that.

3. The quality is not getting better (see GPT-5) and the cost is not going down (see Claude Code, Cursor, etc.). Eventually the bills will come due, and at the very least that will reduce the amount of code generated by an LLM.

I very easily could be wrong, but I think there is hope and if anyone tells me "it's the future" I just hear "it's the present". No one knows what the future holds.

I'm looking for another technical co-founder (in addition to me) to come work on fun hard problems in a hand written Elixir codebase (frontend is clojurescript because <3 functional programming), if anyone is looking for a non-LLM-coded product! https://phrasing.app


👤 Herring
AI will keep improving

https://epoch.ai/blog/can-ai-scaling-continue-through-2030

https://epoch.ai/blog/what-will-ai-look-like-in-2030

There's a good chance that eventually reading code will become like inspecting assembly.


👤 duxup
I'm really not seeing a lot of code that I can say is bad AI code.

I and my coworkers use AI, but the incoming code seems pretty ok. But my view is just my current small employer.


👤 vegancap
Yeah, I get the feeling. I'm torn to be honest, because I quite enjoy using it, but then I sift through everything line by line, correct things, change the formatting. Alter parts it's gotten wrong. So for me, it's saving me a little bit of time manually writing it all out. My colleagues are either like me, or aren't sold on it. So I think there's a level of trust and recognition that even if we are using it, we're using it cautiously, and wouldn't just YOLO some AI generated code straight into main.

But we're a really small but mature engineering org. I can't imagine the bigger companies with hundreds of less experienced engineers just using it without care and caution; it must cause absolute chaos (or will soon).


👤 andrewstuart
No I love it.

When I see AI code I feel excited that the developer is building stuff beyond their previous limits.


👤 codingdave
I think we're going to look back on this time as "Remember when basically all new software dev spun its wheels for years while everyone tried to figure out where AI fit in?"

I'm not sick of AI. I'm just sick of people thinking that AI should be everything in our industry. I don't know how many times I can say "It is just a tool." Because it is. We're 3 years deep into LLM-based products, and people are just now starting to even ask... "Hey, what are the strengths and weaknesses of this tool, and what are the best practices for when to use it or not?"


👤 sexyman48
> I'm finally ready to get off the ride

c ya, wouldn't wanna b ya.


👤 yomismoaqui
AI coding should be better with a little professionalism thrown in. I mean, if you commit that code, you are responsible for it. Period.

And I say this as a grumpy senior who has found a lot of value in tools like Copilot and especially Claude Code.


👤 sys13
> the best suggestions I've seen are found by linters in CI, and spell checkers

I don't think this is a rational take on the utility of AI. You really are not leveraging it well.


👤 incomingpain
>I think I'm finally ready to get off the ride.

I'm sorry you feel that way. Yes, this is probably the future.

AI is a new tool, or really a huge new category of different AI tools, that will take time to gain competency with.

AI doesn't eliminate the need for developers; it's just a whole new load of baggage, and we will NEVER get to the point where that new pile of problems becomes 0.

A tool that Gemini CLI really loves is Ruff; I run it often :)


👤 codr7
Not my future.

👤 MongooseStudios
I'm sick of AI everything. Every day I hope today is the day the grift machine finally implodes.

In the short term it's going to make things suck even more, but I'm ready to rip that bandaid off.

P.S. To anyone that is about to reply to this, or downvote it, to tell me that AI is the future, you should be aware that I also hope someone places a rotting trout in your sock drawer each day.


👤 thesuperbigfrog
If you eat lots of highly processed food, don't be surprised if it makes you less healthy.

👤 twalichiewicz
I get why it feels bleak—low-effort AI output flooding workflows isn’t fun to deal with. But the dynamic isn’t new. It only feels unprecedented because we’re living through it. Think back: the loom, the printing press, the typewriter, the calculator.

When Gutenberg’s press arrived, monks likely thought: “Who would want uniform, soulless copies of the Bible when I can hand-craft one with perfect penmanship and illustrations? I’ve spent my life mastering this craft.”

But most people didn’t care. They wanted access and speed. The same trade-off shows up with mass-market books, IKEA furniture, Amazon basics. A small group still prizes the artisanal version, but the majority just wants something that works.


👤 juancn
I only use small local models like the ones in IntelliJ (under 100M parameters each), which just save you the tedium of typing common boilerplate.

But I don't prompt them, they typically just suggest a completion, usually better than what we had before from pure static analysis.

Anything more detracts. I learn nothing, and the code is believable crap, which requires mind-bogglingly boring and intense code reviews.

It's sometimes fine for prototyping throw-away code (especially if you don't intend to invest in learning the tech deeply), but I don't like what I miss by not doing the thinking myself.


👤 greenavocado
AI-generated code from Claude Sonnet, Kimi K2 0905, or GLM-4.5 is not good enough to simultaneously maintain structure and implement features in complex code without doing insane things like grossly violating every SOLID principle. If you impose too much structure on them, they fall apart, because too often they don't truly understand the long-range ramifications of their code. These assistants are best suited to generating highly testable snippets of code; pushing them to work in a large codebase pushes their capabilities too far, too often.

👤 breppp
Due to Brandolini's Law, there's an asymmetry between the time it takes to generate crap code and the time it takes to review crap code.

That's what makes it feel disrespectful: as if someone is wasting your time when they could have done better.


👤 bigstrat2003
I think people will eventually wake up and realize LLMs aren't actually good for generating code, but it might take a while. The hype train is rolling at full steam and a lot of people won't get off until they get personally burned.

👤 gerash
One downside IMHO is reimplementing the same building blocks rather than refactoring and reusing because it’s cheap to reimplement.

👤 add-sub-mul-div
I am so glad I spent 25 years in this field, made my bag, and got out right before it became the norm to stop doing the fun part of the job yourself.

👤 apple4ever
I'm sick of AI in general.

👤 throwacct
I'm using "AI" almost exclusively to scaffold projects. I spent 2 days trying to find the reason the code wasn't working the way it was supposed to. Where I work we use it in moderation, knowing that if you generate code, you must double-check everything and confirm that what you generated doesn't smell. You'll be held accountable if something breaks because you were eager to push unreviewed code.

👤 cadamsdotcom
Make your agent do TDD.

Claude struggles with writing a test that's meant to fail, but it can be coaxed into doing it on the second or third attempt. Luckily it does not struggle when I insist the failure be for the right reason (as opposed to failing because of a setup issue or a problem elsewhere in the code).

When doing TDD with Claude Code I lean heavily on asking the agent two things: “can we watch it fail” and “does it fail for the right reason”. These questions are generic enough to sleepwalk through building most features and fixing all bugs. Yes I said all bugs.
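As a minimal sketch of those two checks (pytest-style; `slugify` is a made-up example, not something from this thread): the test is written first, and the "right reason" to fail is an AssertionError from the assert itself, not a NameError or setup crash.

```python
# Step 2 (green): the minimal implementation, written only after the test
# below was seen to fail for the right reason.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

# Step 1 (red): this test came first. Before slugify existed, running it
# raised NameError -- a setup failure, the *wrong* reason. Against a stub
# like `def slugify(title): return title` it fails with AssertionError,
# which is the failure you actually want to watch.
def test_slugify():
    assert slugify("  Hello World  ") == "hello-world"
```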

Reviewing the code is very pleasant because you get both the tests and production code and you can rely on the symmetry between them to understand the code’s intent and confirm that it does what it says.

In my experience over multiple months of greenfield and brownfield work, Claude doing TDD produces code that is 100% the quality and clarity I'd have achieved had I built the thing myself, and it does it 100% of the time. A big part of that is that TDD compartmentalizes each task, making it easy to avoid any single task having too much complexity.