Throw-away account because my original one is easily identifiable.
Does anyone else feel depressed about the AI push and hype? I'm around 45 and have been happily hacking and delivering stuff for 25 years.
I use AI daily; it's a useful tool. But the gap between the marketing and the reality for many of us is hard to describe. The people, corporations, LinkedIn gurus, and podcasters declaring our obsolescence are overwhelmingly people who've never built or maintained anything complex in their lives. I'm sick of posts portraying developers as awesome managers orchestrating fleets of Codex and Claude Code instances; I don't know a single person who actually has access to unlimited quotas for that. I'm now scared to publish open source because some random AI agent might spam my repo with garbage PRs and issues. Are we really expected to deliver mediocre C compilers while emitting millions of tons of CO2 into the atmosphere, just to make a handful of rich people even richer? And suddenly we have something like Moltbook to pollute our planet even more. Where are we going with this?
Does anybody else feel something like that? I'm seriously thinking about leaving the industry to keep my mental health under control, or switching to some tech that is hard for AI.
I do think that, like all trendy hypes, it will go away after a while. And the people who are focused on the next thing now are going to be a step ahead once the AI hype gets old.
For startups specifically, I think the next big thing will be in-person social media. The AI slop will get old after a while, and someone will figure out how to make Meetup.com actually work.
In five years' time, AI will be just another tool in the toolbox and nobody will remember the names of the hypers. I agree it is depressing: there are quite a few people banging this drum, and because of that it becomes harder to be heard. They, like AI, have the advantage of quantity. There is one character right here on HN who spews out one low-effort, AI-generated garbage article after another, and it all gets upvoted as if it were profound and important. It isn't. All it shows is how incredibly bland all this stuff is.
Meanwhile, here I am, solving a real problem. I use AI as well, but mostly as a teacher, and I check each and every factoid that isn't immediately and obviously true. The degree to which that turns up hallucinations is proof enough to me that our jobs are safe, for now.
A good niche is cleaning up after failed AI projects ;)
best of luck there!
Jacques
Recommended reading: [0]
What you are seeing is that anyone can build anything with just a computer and an AI agent, and the AI boosters are selling dreams, courses, and fantasies without mentioning the risks or downsides that come with them. Most of these vibe-coded projects just have very bad architecture, and experienced humans still have to review and clean it all up.
Meanwhile, "AGI" is being promised by the big labs, but their actions say otherwise: what it really means is an IPO. After that we will see a crash, and the hype brigade and all the vibe coders will be raced to zero by local models and will move on once the grift has concluded.
You now need to know what to build and what should exist out of infinite possibilities, since you can assume someone can build it in 10 minutes with AI. Where 90% of startups used to fail, with AI it is now 98%.
We know how this all ends. Do not fall for the hype.
[0] https://blog.oak.ninja/shower-thoughts/2026/02/12/business-i...
I’ve been actively trying to apply AI to our field, but the friction is real. We require determinism, whereas AI fundamentally operates on probability.
The issue is the Pareto Principle in overdrive: AI gets you to 90% instantly, but in our environment, anything less than 100% is often a failure. Bridging that final 10% reliability gap is the real challenge.
Still, I view total replacement as inevitable. We are currently in a transition period where our job is to rigorously experiment and figure out how to safely cross that gap.
Good luck!
> The people and corporations and all those LinkedIn gurus, podcasters
You can just mute and ignore them
> I'm now scared to publish open source
If you get many PRs, that's a good problem to have; it's better than publishing something nobody reads.
> mediocre C compilers, Moltbook
It's all experiments. You could have said the same thing about cleantech 15 years ago, when companies talked about solar panels and electric cars with swappable batteries all the time. You don't have to keep track of everything people are experimenting with.
But I don't like hype or having things forced down my throat, and there's a lot of that going on.
Psychologically, the part that seems depressing is that everything just seems totally disposable now. It's hard to even see the point of learning the latest and greatest AI tools and models, because they'll be replaced in about 3 months, and it's hard to see the point in trying to build anything, with or without AI, given the deluge of AI slop it will be up against.
I like the idea of spending a bit of time to learn something, like how to use a shell, how to ride a bike, how to drive a car, or how to program in C or C++, and then using that skill for years or decades, if not a lifetime. AI seems to have taken that away: now everything is brand new and disposable, and everyone is an amateur.
Trust your eyes. You can see what it actually does; therefore, the marketing is lying to you.
But it sounds like your problem isn't knowing what to believe. Your problem is that you know the truth, and you're tired of having to wallow in the lies all day. I don't blame you; lies are bad for your mental health. Well, there's a solution: Turn off the internet. You can, you know. Or at least you can turn off the feed into your brain. Stop looking at posts about AI, even on HN. If you can't dodge them well enough, just turn off social media. Go outside, if the temperature is decent. If it isn't, go to a gym or an art museum or something. Just stop feeding this set of lies into your brain.
I do not fear that some agents will pollute my repos with their PRs. On the contrary, I expect we will end up at a point where, for each question, task, or problem, one will find many (AI-coded) solutions, making it impossible to choose a correct, solid, reliable one. I recently thought about building a database of tools per task so that a comparison would be possible, but the maintenance costs of something like that are enormous once you include benchmarks, comparisons, and other quality measures.
https://www.reddit.com/r/recruitinghell/comments/j1vm8j/gold...
You said it yourself: these are overwhelmingly people who've never built or maintained anything complex in their lives. If you're going to listen to what people on the Internet say, why not seek out people who can earn your respect?
To combat that worry, I'm trying to focus on gratitude that I have had a career where I got paid for doing fun things (programming), rather than worrying about whether my career will stop being fun. Many people never get that chance, after all, and live their entire lives working menial jobs just to put food on the table. I'm also trying to make my career less important to my own happiness by focusing on other things that are good and will not go away even if my career stops being fun (for me, that means my marriage and my faith).
It's not easy to do, at all. And it doesn't help with the worry that I might lose my job entirely because the industry abandons sense and fires people in favor of LLMs. But it does help a little, and I'm hoping that with practice the mental discipline will get easier and I can let go of some of the anxiety.
Then I decided to build something complex using Claude, and within a week I realized that whoever claims "90% of code is written by LLMs" is not being totally honest. The parts left out of such posts tell a different story: programming is going to get harder, not easier.
The project started great but turned into a large ball of spaghetti. It became really hard to extend; every feature you want to add requires Claude to rearrange large portions of the codebase. Debugging and reading logs are also very expensive tasks: if you don't have a mental model of the codebase, you have to rely on the LLM to read the logs and figure things out for you.
Overall, my impression is that we need to use this as just another tool and get proficient at it, instead of thinking it will do everything.
Also, the recent Anthropic partnership with Accenture suggests otherwise [0]. If AI could do it all, why train humans?
So please don't leave the industry. I think it will get worse before it gets better, and we need to stick around and plan for this hype period.
[0] https://www.anthropic.com/news/anthropic-accenture-partnersh...
Either way, planning for it to happen would be better than being taken by surprise if it does.
And to end the week on a good note:
AI Fails at 96% of Jobs (New Study)