Teams where I work can use Claude Code, Codex, Cursor, and Copilot CLI. Internally, Claude Code and Codex seem to be the most popular among software teams.
If you’re new to these tools, I highly recommend trying to build something with them in your free time. This space has evolved rapidly over the past few months. Anthropic is offering a special spring break promotion that doubles the limits on weeknights and weekends for any of its subscription plans until the end of March.
That's not vibe coding. Imagine if you were hiring a chef and a candidate came in who'd never used a stove. Sure, technically there are other ways to heat food, but it would be a bit odd.
Everyone is talking about vibe coding all your dependencies, and the problem is that the people who are good with these tools, and do get 50% or greater productivity gains, won’t be able to empathize with the people who are bad with them and produce all the slop.
I think AI encourages people to take side quests to solve easy problems and not focus on hard problems.
And without domain expertise, problems will compound. But I dunno, I agree that they’re here to stay.
Personally, I still believe that despite AI being moderately useful and getting better over time, it's mostly feasible only for boilerplate work. I do wonder about these people claiming to produce millions of lines of code per day with AI: what are you actually building? If it's the Nth CRUD app, then yeah, I see why... Chances are, in the grand scheme of things, we don't really need that company to exist.
In roles that require more technical or novel work, AI just doesn't make the cut in my experience. Either it falls over completely or produces such bad results that it would be quicker for a skilled dev to write it from scratch. I'd hope these types of companies are not hiring based on AI usage.
I noticed that some of these roles come from businesses that recently had layoffs and are now asking their staff to "do more with less" — not exactly places people would be eager to work at, unless they have to.
I don't know if this is the new norm, but this craziness isn't helped by the growing number of "AI influencers" pushing the hype. Unfortunately, I've been seeing a lot of this on HN recently.
E.g., nobody wants to continue working with someone who creates sound effects, a movie player, an operating system, etc.
Don’t know/care about coding with AI? You’re unhireable now. Grim.
Just because you're using an LLM doesn't mean you're "vibe coding".
I regularly use LLMs at work, but I don't "vibe-code", which is where you just say garbage to the model and blindly click accept on whatever it spits out.
I design, think about architecture, and write out all of my thoughts: expected example inputs, expected example outputs, etc. I write pretty extensive prompts that capture all of that, then ask the model for an improved prompt. I review that improved prompt to make sure it aligns with the requirements I've gathered.
I read the output like I'm doing a deep code review, and if I don't understand some code I make sure to figure it out before moving forward. I make sure that the change set is within the scope of the problem I'm trying to solve.
Excluding the pieces that augment the workflow, this is all the same stuff you would normally do. You're an engineer solving problems, and the domain you do it in happens to involve software and computers.
Writing out code has always been a means to an end. If you actually give LLMs a shot and learn to use the tools, the productivity gains are real. So yes, pretty soon most places will expect you to use them, the same way you've been expected to use a specific language, framework, or any other tool that greatly improves productivity.
A decent company wouldn't necessarily look for someone who can type faster or commit 100x more code like the vibers do, but would look at how well you understand the code.
We're not concerned about hiring for the 'skill' of using these things, but more as a culture check - we are a very AI-forward company, and we are looking for people who are excited to incorporate AI into their workflow. The best evidence for such excitement is when they have already adopted these tools.
Among the team, the expectation is that most code is produced with AI, but there is no micromanager checking how much everyone uses the AI coding tools.
My first experience with it was a year ago, and the tests it produced were so horrendously hard to maintain that I kinda gave up, but I imagine things have gotten a lot better in the last year.