I've tried Cursor and Claude Code and have seen them both do some impressive things, but using them really sucks the joy out of programming for me. I like the process of thinking about and implementing stuff without them. I enjoy actually typing the code out myself and feel like that helps me to hold a better mental model of how stuff works in my head. And when I have used LLMs, I've felt uncomfortable about the distance they put between me and the code, like they get in the way of deeper understanding.
So I continue to work on my projects the old-fashioned way, just me and vim, hacking stuff at my own pace. Is anyone else like this? Am I a dinosaur? And is there some trick for the mental model problem with LLMs?
Anecdotally, what we've found is that those using AI assistants show superficial productivity gains early on, but they learn at a much slower rate and their understanding of the systems stays fuzzy. That leads to lots of problems down the road. Senior folks are susceptible to these effects too, but to a lesser degree; we think that's because most of their experience comes from old-fashioned "natty" coding.
In a way, I think programmers need to do natty coding to train their brains before augmenting/amputating it with AI.
My own experience with LLM-based coding has been wasted hours reading incorrect code for junior-dev-grade tasks, despite multiple rounds of "this is syntactically incorrect, you cannot do this, please re-evaluate based on this information" followed by "Yes, you are right, I have re-evaluated it based on your feedback", only for it to do the same thing again. My time would have been better spent either 1) doing this largely boilerplate task myself, or 2) assigning and mentoring a junior dev to do it, as they would only have required maybe one round of iteration.
Based on my experience with other abstraction technologies like ORMs, I look forward to my systems being absolutely flooded with nonperformant garbage merged by people who don't understand either what they are doing, or what they are asking to be done.
I'm looking into alternatives because I have zero interest in having LLM tools dictated to me just because some MBA exec is sold on the hype.
I find it impossible to get into flow with the autocomplete constantly interrupting me, and the code these tools generate in chat mode sucks.
I lead a team building Markhub, an AI-native workspace, and we have this debate internally all the time. Our conclusion is that there are two types of "thinking" in programming:
"Architectural Thinking": This is the joy you're talking about. The deep, satisfying process of designing systems, building mental models, and solving a core problem. This is the creative work, and an AI getting in the way of this feels terrible. We agree that this part should be protected.
"Translational Thinking": This is the boring, repetitive work. Turning a clear idea into boilerplate code, writing repetitive test cases, summarizing a long thread of feedback into a list of tasks, or refactoring code. This is the work we want to delegate.
Our philosophy is that AI should not replace Architectural Thinking; it should eliminate Translational Thinking so that we have more time for the joyful, deep work.
For your mental model problem, our solution has been to use our AI, MAKi, not to write the core logic, but to summarize the context around the logic. For example, after a long discussion about a new feature, I ask MAKi to "summarize this conversation and extract the action items." The AI handles the "what," freeing me up to focus on the "how."
You are not a dinosaur. You are protecting the part of the work that matters most.
I've tried new things occasionally, and I keep going back to a text editor and a shell window to run something like Make. It's probably not the most efficient process, but it works for everything, and there's value in that. I have no interest in a tool that will generate lots of code that may or may not be correct and that I'll have to go through with a fine-tooth comb to check; I can personally generate lots of code that may or may not be correct. And when that fails, I have run some projects as copy-paste snippets from Stack Overflow until they worked. That's not my idea of a good time, but it was better than spending the time to understand the many layers of OS X when all I wanted was to get a pixel value from a point on the screen into AppleScript, and I didn't want to do any other OS X work ever (and I haven't).
I work with grad students who write a lot of code to analyze data. There is an obvious divide in comprehension between those who genuinely write their own programs vs those who use LLMs for bulk code generation. Whether that is correlation or causation is of course debatable.
In one sense, blindly copying from an LLM is just the new version of blindly copying from Stack Overflow and forum posts, and it seems to be about the same fraction of people either way. There isn't much harm in reproducing boilerplate that's already searchable online, but in that situation it puts orders of magnitude less carbon in the atmosphere to just search for it traditionally.
For the philosophical insights into ethics... we may turn to fiction =3
I agree with you, 100%. I like typing out code by hand. I like referring to the Python docs, and I like the feeling of slowly putting code together and figuring out the building blocks, one by one. In my mind, AI is about efficiency for the sake of efficiency, not for the sake of enjoyment, and I enjoy programming.
Furthermore, I think AI embodies the model of the human being as a narrowly-scoped tool, converted from creator into a replaceable component whose only job is to provide conceptual input into design. It sounds good at first ("computers do the boring stuff, humans do the creative stuff"), but, and it's a big but: as an artist too, I think it's absolutely true that the creative stuff can't be separated from the "boring" stuff, and when looked at properly, the "boring" stuff can actually become serene.
I know there's always the counterpoint: what about other automations? Well, I think there is a limit past which automations give diminishing returns and become counterproductive, and therefore we need to be aware of all automations, but AI is the first sort of automation that is categorically always past the point of diminishing returns, because it targets exactly the sort of cognitive features that we should be doing ourselves.
Most people here disagree with me, and frequently downvote me on the topic of AI. But I'll say this: in a world where efficiency and productivity have become doctrine, most people have also been converted into thinking only about the advancement of the machine, and have lost the soul to enjoy that which is beyond mere mental performance.
Sadly, people in the technical domain often find emotional satisfaction in new tools, and that is why anything beyond the technical is often derided by those in tech, much to their disadvantage.
But not using AI at all is also idiotic right now. At the very least you should be using it for autocomplete; in the _vast_ majority of cases, any current leading LLM will give you _far more_ than going without (within the scope of autocomplete).
Coding agents still give you control (at least for now), but they're like having really good autocomplete. Instead of using Copilot to complete a line or two, with something like Cursor you can generate a whole function or class from your spec, then refine and tweak the more nuanced and important bits where necessary.
For example, I was doing some UI work the other day. In the past it would have taken a while just to get a basic page layout together when writing it yourself, but with a coding assistant I generated a basic page by asking it to use an image mock-up, a component library, and some other pages as references. Then I could get on with the fun bits: building the more novel parts of the UI.
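To give a rough idea of the kind of scaffold I mean, here is a made-up sketch in plain React/TypeScript (not the actual generated code; I've swapped the real component library for inline styles, and every name here is hypothetical):

    // BillingSettingsPage.tsx -- hypothetical example of the kind of layout
    // boilerplate an assistant might produce from a mock-up and reference pages.
    import React from "react";

    type Props = {
      title: string;
    };

    // A basic page shell: header bar, sidebar navigation, and a content area.
    export default function BillingSettingsPage({ title }: Props) {
      return (
        <div style={{ display: "flex", flexDirection: "column", minHeight: "100vh" }}>
          <header style={{ padding: "1rem", borderBottom: "1px solid #ddd" }}>
            <h1>{title}</h1>
          </header>
          <div style={{ display: "flex", flex: 1 }}>
            <nav style={{ width: 220, padding: "1rem", borderRight: "1px solid #ddd" }}>
              <ul>
                <li>Profile</li>
                <li>Billing</li>
                <li>Team</li>
              </ul>
            </nav>
            <main style={{ flex: 1, padding: "1rem" }}>
              {/* The generated boilerplate stops here; the more novel,
                  hand-written parts of the UI get slotted in below. */}
            </main>
          </div>
        </div>
      );
    }

Once a shell like that exists, the time goes into the parts that actually need thought rather than the layout plumbing.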
I mean if it's code you're working on for fun then work however you like, but I don't know why someone would employ a dev working in such an inefficient way in 2025.