This leaves many people fearing they will be next on the chopping block. The common assumption is that physical tasks will be automated later, since building, verifying, and testing humanoid robots takes far longer than deploying a virtual AI agent. Still, many believe the writing is on the wall either way, and that those whose work involves their hands or bodies will only get a few more years than the formerly employed white-collar class.
Which skills, then, or combinations of skills, do you believe will be safest for staying employed and useful if AI continues improving at the rate it has for the past few years?
Actually, there's very little debate. We get a lot of unsubstantiated hype from companies like OpenAI, Anthropic, Google, and Microsoft. So-called AI has barely made a dent in economic activity, and no company makes money from it. Tech journalism repeatedly fails to question the PR narrative (read Ed Zitron).
> Regardless of whether this will happen, or when, many people already have lost their jobs in part due to the emerging capabilities of AI models…
Consider the more likely explanation: many companies over-hired a few years ago and have since cut jobs. Focus on stock price in an uncertain economy leads to layoffs. It's easier to blame AI for layoffs than to admit C-suite incompetence. Fear of the AI boogeyman gives employers the upper hand in hiring and salary negotiations, and keeps employees in line out of fear.
Like right now, native mobile jobs are mostly unaffected by AI. Gemini, despite all the data in the community, doesn't do a decent job at it. If you ask it to build an app from scratch, the architecture will be off and it'll use an outdated tech stack from 2022. It will 'correct' perfectly good data to an older form, and if you ask it to hunt for bugs in cutting-edge tech, it might rip out the new approach and replace it with the old one. It often confuses methods whose names are shared across languages, like .contains().
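To make that concrete, here's a hypothetical sketch in Python of the kind of confusion I mean (the version list is invented for illustration):

```python
# Membership checks differ across languages, and models often blur them:
#   Java/Kotlin:  myList.contains(x)
#   Swift:        myArray.contains(x)
#   Python:       x in my_list   <- the idiomatic form

supported_versions = ["14", "15", "16"]

# What a model pattern-matching on Java/Kotlin might emit:
#   if supported_versions.contains("16"):
# ...which raises AttributeError: 'list' object has no attribute 'contains'

# The correct Python idiom:
if "16" in supported_versions:
    print("Version 16 is supported")
```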
But where very high-quality data is easily accessible, e.g. writing, digital art, voice acting, the work becomes viable for AI to clone. There's little animation data and even less oil painting data, so something like oil painting will be far more resistant than digital art. The same models are top tier at Python and yet struggle with Ren'Py.
Anthropic released results from an experiment in which Claude managed a vending machine: https://www.anthropic.com/research/project-vend-1
This is a fairly simple task for a human, and Claudius has plenty of reasoning ability and financial data. But it can't reason its way into running a vending machine because it doesn't have data on how to run vending machines.
Even as AI becomes more capable, I think roles involving coordination, trust-building, and cross-disciplinary thinking will remain resilient. These aren’t just hard to automate—they're what make many organizations function in the first place.
Right now AI is good as a manager: it can identify issues and propose solutions, but it struggles to actually implement them reliably. This is the case even with code today, despite the vast datasets, documentation, compilers, etc.
This problem gets harder the more the unconstrained physical world is involved, particularly in slow-moving legacy environments that were never designed for AI or robots. Prime examples are physical infrastructure and housing stock, along with the trades that maintain and upgrade them.
What makes this hard for AI isn't just the "thinking" part—LLMs are already impressive there. It's the judgment required to navigate situations where data is sparse, stakes are high, and the path forward involves reconciling trade-offs between humans, systems, and unpredictable environments. Think of a doctor diagnosing a rare condition while calming an anxious patient, or a civil engineer weighing community concerns during an infrastructure redesign.
Another safe haven for now is the ability to guide and control AI. People who deeply understand the limitations of AI systems—and can design governance, oversight, and fallback processes—are going to be essential as AI becomes more embedded in critical decision-making.
I’m currently working on a side project called “AI Chat Co-Pilot” that flags architectural dependencies and offers deployment guidance based on company-specific code analysis. One of the key things I’ve realized is how much human context and judgment still matter in deciding what not to automate. So maybe the safest skill isn't technical or creative alone, but knowing when to lean on AI—and when not to.
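For a flavor of what I mean by flagging architectural dependencies, here's a toy sketch in Python. None of this is the actual project code; the layer names and the rule itself are invented:

```python
import ast
from pathlib import Path

# Hypothetical rule: UI-layer modules must not import directly from the
# persistence layer. These module names are made up for illustration.
FORBIDDEN_FOR_UI = {"app.persistence", "app.db"}

def flag_layer_violations(source_dir: str) -> list[str]:
    """Scan .py files under source_dir and report UI -> persistence imports."""
    findings = []
    for path in Path(source_dir).rglob("*.py"):
        if "ui" not in path.parts:  # only check UI-layer files in this toy rule
            continue
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            # Collect imported module names from both import forms.
            if isinstance(node, ast.Import):
                modules = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                modules = [node.module or ""]
            else:
                continue
            for mod in modules:
                if any(mod == f or mod.startswith(f + ".") for f in FORBIDDEN_FOR_UI):
                    findings.append(f"{path}:{node.lineno} imports {mod}")
    return findings

if __name__ == "__main__":
    for finding in flag_layer_violations("src"):
        print("architecture warning:", finding)
```

The scan itself is trivial; the human judgment is in deciding which rules are worth encoding in the first place.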
Curious to hear what others think are the underrated "sticky" skills that AI will have a hard time swallowing.
Plumbing.
Embalming and funeral direction.
Childrearing, especially of toddlers. Therapy for complex psychological conditions/ones with complications. Anything else that requires strong emotional and interpersonal judgement and the ability to think outside the box.
Politics/charisma. Influencers. Cult leaders. Anything else involving a cult of personality.
Stand up comics/improv artists. Nobody’s going to pay to sit in a room with other people and listen to a computer tell jokes.
World class athletes.
Top tier salespeople.
TV news anchors, game show hosts, etc.
Also note that a bunch of these (and other jobs) may vanish if the vast majority of the population is unemployed and only a few handfuls of billionaires can afford to pay anyone for services.
I’d also note that a lot of jobs will stay safe for much longer than we fear if AI continues to be unable to actually reason and can only handle patterns / extrapolations of patterns it’s already seen.
More realistically, though - the areas with the best lobbying and strongest unions.
For example: conflict resolution, therapy, and coaching depend on nuance, empathy, and trust.
Skilled trades: plumbing, electrical work, HVAC repair, auto mechanics, elevator technicians.
Roles combining physical presence with knowledge: emergency responders (firefighters, EMTs), disaster relief coordinators.
The example I’m thinking of is a language tutor. If you only listened to AI hype, you’d think that tutors are going to be extinct in a year or two.
But the reality (which I have direct personal experience with) is that no one really wants to learn a language from a robot, no matter how human it seems or how sophisticated its learning algorithm is. Free or low-cost language-learning resources were already basically universal before recent AI tools.
At the end of the day, people want to have a human connection for many (but certainly not all) things. This includes the actual learning conversation, but also things like a weekly scheduled meeting, a relationship that grows over time, paid incentives to show up, and so on - things that some AI app is not going to be able to do.
If the cost of the human connection stays within reasonable limits, I don’t see it being eliminated by AI.
Masses losing their jobs to AI (or for whatever reason) will have a widespread effect on every other sector, because at the end of the day a huge part of the economy is based on people just spending their money.
That’s why, if you look at the leveling guidelines for any well-known tech company, “codez real gud” only makes a difference between junior and mid-level developers. After that it’s about “scope”, “impact” and “dealing with ambiguity”.
Yes, I realize there are still some “hard problems” that command a premium for the people who can solve them via code - that’s the other 10%, and I’m being generous.