The problem: You can't win anymore.
The old way: You'd think about the problem. Draw some diagrams. Understand what you're actually trying to do. Then write the code. Understanding was mandatory. You solved it.
The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing. You're supposed to describe a problem and get a solution without understanding the details. That's the labor-saving promise.
So I feel pressure to always, always, start by info-dumping the problem description to AI and gambling on a one-shot. Voice transcription for 10 minutes, hit send, hope I get something first try; if not, hope I can iterate until something works. And even when something does work, there's zero satisfaction, because I don't have the same depth of understanding of the solution. It's no longer my code, my idea. It's just some code I found online. `import solution from chatgpt`
If I think about the problem, I feel inefficient. "Why did you waste 2 hours on that? AI would've done it in 10 minutes."
If I use AI to help, the work doesn't feel like mine. When I show it to anyone, the implicit response is: "Yeah, I could've prompted for that too."
The steering and judgment I apply to AI outputs is invisible. Nobody sees which suggestions I rejected, how I refined the prompts, or what decisions I made. So all credit flows to the AI by default.
The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.
I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle. It bothers me that my reaction to these blog posts has changed so much. 3 years ago I would be bookmarking a blog post to try it out for myself that weekend. Now those 200 lines of simple code feel only a one-sentence prompt away, and thus a waste of time.
Am I alone in this?
Does anyone else feel this pressure to skip understanding? Where thinking feels like you're not using the tool correctly? In the old days, I understood every problem I worked on. Now I feel pressure to skip understanding and just ship. I hate it.
It's a different, less enjoyable type of work, in my opinion.
Ok, you don't like a particular way of working or a particular tool. In any other era, we would just stop using that tool or method. Who is saying you cannot? Is it a real constraint or a perceived one?
Regardless, I understand the need to understand what you built. So you have a few options. You can study it (with the agent's help?), you can write your own tests / extensions for it to make sure you really get it, or you can write it yourself. I honestly think that most of those take about as long. It's only shorter when you don't want to understand it, so then we're back to the main question: Why not?
I'd disagree. For me, I direct the AI to implement my plan - it handles the trivia of syntax and boilerplate etc.
I now work kinda at the "unit level" rather than the "syntax level" of old. AI never designs the code for me, more fills in the gaps.
I find this quite satisfying still - I get stuff done but in half the time because it handles all the boring crap - the typing - while I still call the shots.
That’s the promise, but not the reality :) Try this: pick a random startup idea from the internet, something that would normally take 3–6 months to build without AI. Now go all in with AI. Don’t worry about enjoyment; just try to get it done.
You’ll notice pretty quickly that it doesn’t get you very far. Some things go faster, until you hit a wall (and you will hit it). Then you either have to redo parts or step back and actually understand what the AI built so far, so you can move forward where it can’t.
>I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle.
It was "stupid" then - better alternatives already existed, but you do it to learn.
> Am I alone in this?
Absolutely not. But understand that it is just a tool, not a replacement. Use it and you will soon find the joy again; it is there.
Where are the labor-saving _measurements_? You said it yourself:
> You'd think about the problem. Draw some diagrams. Understand what you're actually trying to do.
So why are we relying on "promises?"
> If I use AI to help, the work doesn't feel like mine.
And when you're experiencing an emergency and need to fix or patch it this comes back to haunt you.
> So all credit flows to the AI by default.
That's the point. Search for some of the code it "generates." You will almost certainly find large parts of it, verbatim, inside a GitHub repository or on an author's webpage. AI takes the credit so you don't get blamed for copyright theft.
> Am I alone in this?
I find the thing to be an overhyped scam at this point. So, no, not at all.
I used to work in land surveying, entering that field around the turn of the millennium just as digitalisation was hitting the industry in a big way. A common feeling among existing journeymen was one of confusion. Fear and dislike of these threatening changes, which seemed to neutralise all the hard-won professional skills. Expertise with the old equipment. Understanding of how to do things closer to first-principles. Ability to draw plans by hand. To assemble the datasets in the complex and particular old ways. And of course, to mentor juniors in the same.
Suddenly, some juniors coming in were young computer whizzes. Speeding past their seniors in these new ways. But still only juniors, for all that - still green, no matter what the tech. With years and decades yet to earn their stripes, their professionalism in all its myriad aspects. And for the seniors, their human aptitudes (which got them there in the first place) didn't vanish. They absorbed the changes, stuck with their smart peers, and evolved to match the environment. Would they have rathered that everything in the world had stayed the same as before? Of course. But is that a valid choice, professionally speaking? Or in life itself? Not really.
It's just going to take time for "best practice" to come around with this. It's like outsourcing: for a while it seems like a good idea, and it might be for very fixed tasks that you don't really care about, but nobody does it now for important work because of the lack of control and understanding, which is exactly where AI will end up. I think for coding tasks you can almost interchangeably use AI and outsourcing and preserve the meaning.
You have to understand your problem and solution inside and out. This means thinking deeply about your solution, along with drawing the boxes and lines. And only then do you go to the LLM and have it implement your solution!
I heavily use LLMs daily, but if you don't truly understand the problem and solution, you're going to have a bad time.
Aside from regular arguments and slinging insults at ChatGPT, I've been enjoying being able to be way more productive on my personal projects.
I've been using agentic AI to explore the ESP32 in the Arduino IDE. I'm learning a ton, I'm confident I could write some simpler firmware at this point, and I regularly make modifications to the code myself.
But damn if it isn't amazing to have zero clue how to rewrite low level libraries for a little known sensor and within an hour have a working rewrite of the library that works perfectly with the sensor!
I'll say though, this is all hobby stuff. If my day job was professional ChatGPT wrangler I think I'd be pretty over it pretty quickly. Though I'm burnt out to hell. So maybe it's best.
You're having an imposter-syndrome-type response to AI's ability to outcode a human.
We don't look at compilers and beat our fists because we can't write in assembly... so why expect your human brain to code as easily or quickly as AI?
The problem you are solving now becomes the higher-level problem. You should absolutely be driving the projects and outcomes, but using AI along the way for programming is part of the satisfaction of being able to do so much more as one person.
Note: I don't vibe-code, or use agents. Just standard JetBrains IDEs, and a GPT-5-thinking window open for C+P.
When I need something to work that hasn't been done before, I absolutely have to craft most of the solution myself, with some minor prompts for more boilerplate things.
I see it as a tool similar to a library. It solves things that are already well known, so I can focus on the interesting new bits.
I wrote about it recently: https://punkx.org/jackdoe/misery.html
Now at night I just play my walkman (FiiO CP13) and work on my OS. I managed to record some good cassettes with non-AI-generated free music from YouTube :) and it's pretty chill.
PS: use before:2022 to search
I’ve never met so many people that hate programming so much.
You get the same thing with artists. Some product manager executive thinks their ideas are what people value. Automating away the frustration of having to manage skilled workers is costly and annoying. Nobody cares how it was made. They only care about the end result. You’re not an artist if all you had to do was write a prompt.
Every AI-bro rant is about how capital-inefficient humans are. About how fallible we are. About how replaceable we are.
The whole aesthetic has a "good art vs. bad art" parallel to it, where people who think for themselves and write code in service of their work and curiosity are portrayed as inferior and unfit, while anyone using AI workflows is proper and good. If you are not developing software using this methodology, then you are a relic: unfit, unstable, and undesirable.
Of course it's all predicated on being dependent on a big tech firm, paying subscription fees and tokens to take a gamble at the AI slot machine in hopes that you'll get a program that works the way you want it to.
Just don’t play the game. Keep writing useless programs by hand. Implement a hash table in C or assembly if you want. Write a parser for a data format you use. Make a Doom clone. Keep learning and having fun. Satisfaction comes from mastery and understanding.
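To make the hash-table suggestion concrete, here's a minimal sketch of the kind of thing meant: a fixed-size, open-addressing table in C with linear probing and an FNV-1a hash. It's deliberately toy-sized (no resizing, no deletion), and every name in it is invented for illustration, not taken from any particular tutorial.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CAP 64  /* power of two, so we can mask instead of using modulo */

struct entry { const char *key; int val; };

/* FNV-1a: a classic, dead-simple string hash. */
static uint64_t fnv1a(const char *s) {
    uint64_t h = 14695981039346656037ULL;
    for (; *s; s++) {
        h ^= (unsigned char)*s;
        h *= 1099511628211ULL;
    }
    return h;
}

/* Insert or update a key. Assumes the table never fills up
   (fine for a toy; a real table would resize). */
static void put(struct entry *t, const char *key, int val) {
    uint64_t i = fnv1a(key) & (CAP - 1);
    while (t[i].key && strcmp(t[i].key, key) != 0)
        i = (i + 1) & (CAP - 1);   /* linear probing */
    t[i].key = key;
    t[i].val = val;
}

/* Returns 1 and fills *out if found, 0 otherwise. */
static int get(const struct entry *t, const char *key, int *out) {
    uint64_t i = fnv1a(key) & (CAP - 1);
    while (t[i].key) {
        if (strcmp(t[i].key, key) == 0) { *out = t[i].val; return 1; }
        i = (i + 1) & (CAP - 1);
    }
    return 0;
}

int main(void) {
    struct entry table[CAP] = {0};
    int v;
    put(table, "mastery", 42);
    put(table, "understanding", 11);
    if (get(table, "mastery", &v))
        printf("mastery = %d\n", v);   /* prints: mastery = 42 */
    return 0;
}
```

Breaking it is half the fun: what happens when it fills up? How would deletion interact with the probe chains? That's exactly the mastery the exercise is for.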
Understanding fundamental algorithms, data structures, and program composition never gets old. We still use algebra today. That stuff is hundreds of years old.
What I like is problem solving.
Coding is 90% syntax, 10% thinking.
AI is taking away the 90% garbage, so we can channel that 90% into problem solving.
So I'm more productive, but at what cost...
For side projects, no, but I use it at the level that feels like it enhances my workflow and manually write the other bits, since I don't have productivity software tracking whether I'm adopting AI hard enough.
How can I choose my political views and preferences if I need to consult an LLM about them?
LLM code is extremely "best practices" or even worse because of what it's trained on. If you're doing anything uncommon, you're going to get bad code.
AI coding fixed that. Pre-AI, I loved using all of the features of an IDE with the intention of speeding up my coding. Now, with AI, it's just that much faster.
>The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.
I've had so much satisfaction since AI coding. Greater satisfaction, even.
The only exception here is learning (solving a solved problem so you can internalize it).
There are tons of problems that LLMs can't tackle. I chose two of those: polyglot programs (which I already worked on before AI) and bootstrapping from source (AI can't even understand what the problem is). The progress I can make in those areas is not improved by using LLMs, and it feels good. I am sure there are many more such problems out there.
Before this "AI", I had to do the mundane boilerplate tasks. Now I don't. That's a win for me. The grand thinking and the whole picture of the projects are still mine, and I keep trying to give them to "AI" from time to time, except each time it spits BS. Also, it helps that as a freelancer my stuff gets used by my client directly in production (no manager above, who has a group leader, who has a CEO, who has the client's IT department, which finally has the client as the final user). That's another good feeling. Corporations with layers upon layers suck the joy out of programming. Freelancing allowed me to avoid that.
If this seems interesting to me, and I have time, I will do it.
If it is uninteresting to me, or turns out to be uninteresting, or the schedule does not fit with mine, someone else can do it.
Exactly the same deal with how I use AI in general, not just in coding.
This isn't accurate.
> So I feel pressure to always, always, start by info-dumping the problem description to AI and gambling on a one-shot. Voice transcription for 10 minutes, hit send, hope I get something first try; if not, hope I can iterate until something works.
These things have planning modes - you can iterate on a plan all you want, make changes when ready, make changes one at a time etc. I don't know if the "pressure" is your own psychological block or you just haven't considered that you can use these tools differently.
Whether it feels satisfying or not - that's a personal thing, some people will like it, some won't. But what you're describing is just not using your tools correctly.
It's a huge help for diving into new frameworks, troubleshooting esoteric issues (even if it can't solve them, it's a great rubber duck and usually highlights potential areas of concern for me to study), and just generally helping me get in the groove of actually DOING something instead of just thinking about it. And, once I do know what I'm doing and can guide it method by method and validate/correct what it outputs, it's pretty good at typing faster than I can.
Whereas the vibe in the lecture theatre 4 years ago was far more nerdy and enthusiastic. It makes me feel very sorry for this new generation that they will never get to enjoy the same feeling of satisfaction from solving a hard problem with code you thought and wrote from scratch.
Ironically, I've had to incorporate some AI stuff in my course as a result of needing to remain "current", which almost feels like it validates that cynical sentiment that this soulless way is the way to be doing things now.
Most commenters comment because it makes them feel good inside. If a comment helps you... well, that's a rare side effect.
To truly broaden your perspective - instead of just feeling good inside - you must do more than Ask HN.
I've coded in Win32, X Windows, GTK, UIKit, Logo, Smalltalk, Qt, and others since '95. I had various (and sometimes serious) issues with any of these as I worked in them. No other mechanism of helping humans interact with computation has been more frustrating and disappointing than the web. Pointing out how silly it all is (really, I have to use 3 separate languages with different computation models, plus countless frameworks, and that's just on the client side???) never makes me popular with people who have invested huge amounts of time and energy into mastering ethereal library idioms or modern "best practices" which will be different next month. And the documentation? Find someone who did a quick blog on it, trying to get their name out there. Good luck.
The fact that an AI is an efficient but lossy compression of the big pile, helping me churn through it faster, is actually kind of refreshing for me. Any confidence that I was doing the Right Thing in this domain always made me wonder how "imagined" it was. The fact that I have a stochastic parrot with sycophantic confidence to help me hallucinate through it all? That just takes it to 11.
I thought when James Mickens wrote "To Wash It All Away" (https://scholar.harvard.edu/files/mickens/files/towashitalla...), maybe someday things would get better. 10 years later, the furniture has moved and changed color some, but it's still the same shitty experience.
The job now feels quite different from the one I signed up for a decade+ ago. The only options I see are to accept that with a sigh, or to reject automation of the fun part and either lose employability (worst case) or be nagged by the anxiety that eventually that'll happen.
Here's where I'm at:
- Your subjective taste will become more important than ever, be it graphic design, code architecture, visual art, music, and so on for each domain that AI becomes good at. People with better taste will produce better results. If you have bad taste, you can't steer _any_ tool (AI or otherwise) into producing good outputs. So refining your taste and expanding it will become more important. re: "Yeah, I could've prompted for that too.", I see a parallel to Stable Diffusion visual art. Sure, anyone _can_ make _anything_, but getting certain types of artistic outputs is still an exercise in skill and knowledge. Without the right skill expression, they won't have the same outputs.
- Delegating the things where "I don't have time to think about that right now" feels really good. As an analog, e.g., importing lodash and using one of their functions instead of writing your own. With AI, it's like getting magical bespoke algorithms tailored exactly to your needs (but unlike lodash, I actually see the underlying src!). Treat it like a black box until it stops working for you. I think "use AI vs not" is similar to "use a library or not": you kinda still have to understand what you need to do before picking up the tool. You don't have to understand any tool perfectly to make effective use out of it.
- AI is a tremendous help at getting you over blockers. Previous procrastination is eliminated when you can tell AI to just start building and making forward progress, or when you ask it for a high-level overview of how something works to demystify something you previously perceived as insurmountable or tough.
> Nothing feels satisfying anymore
You still have to realize that were it not for you guiding the process, the thing in question would not exist. e.g., if you vibecode a videogame, you start to realize that there's no way (today) that a model is 1-shotting that. At least, it isn't 1-shotting it exactly to your vision. You and AI compile an artifact together that's greater than the sum of both of you. I find that satisfying and exciting. Eventually you will have to fix it (and so come to understand parts you neglected to earlier).
It's incredibly satisfying when AI writes the tedious test cases for things I write personally (including all edge cases) and I just review and verify they are correct.
I still find I regret in the long term cases where I vibe-accept the code it produces without much critical thought, because when I need to finesse those, I can see how it sometimes produces a fractal of bad designs/implementations.
In a real production app with stakes and consequences you still need to be reading and understanding everything it produces imo. If you don't, it's at your own peril.
I do worry about my longterm memory though. I don't think that purely reading and thinking is enough to drill something into your brain in a way that allows you to accurately produce it again later. Probably would screw me over in a job interview without AI access.
Use it in the precise, augmenting, accelerating way.
Do your own design and architecture (it sucks at that anyway) and use AI to tab complete the work you already thought through and planned.
This can preserve your ability to reason about the project and troubleshoot, improve your productivity while not turning your brain off.
I do not want to be a programmer anymore
https://news.ycombinator.com/item?id=45481490
I Don't Want to Code with LLM's
That's the "promise", but in practice it's exactly what you don't want to do.
Models can't think. Logic, accuracy, truth, etc. are not things models understand; they do not understand anything. It's just a happy accident that sometimes their output makes sense to humans, based on the statistical correlations derived during training.
> The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.
Am I the only one who is not totally impressed by the quality of code LLMs generate? I've used Claude, Copilot, Codex and local options, all with latest models, and I have not been impressed on the greenfield projects I work on.
Yes, they're good for rote work, especially writing tests, but if you're doing something novel or off the beaten path, then just lol.
> I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle. It bothers me that my reaction to these blog posts has changed so much. 3 years ago I would be bookmarking a blog post to try it out for myself that weekend. Now those 200 lines of simple code feel only a one-sentence prompt away, and thus a waste of time.
If you don't understand these things yourself, how do you know the LLM is "correct" in what it outputs?
I'd venture to say the feeling that models can do it better than you comes from exactly that problem: you don't know enough to have educated opinions and insights into the problem you're addressing with LLMs, and thus can't accurately judge the quality of their solutions. Not that there's anything wrong with not knowing something, and this is not meant to be a swipe at you, your skills or knowledge, nor is it my intention to make assumptions about you. It's just that when I use LLMs for non-trivial tasks that I'm intimately familiar with, I am not impressed. The more that I know about a domain, the more nits I can pick with whatever LLMs spew out, but when I don't know the domain, it seems like "magic", until I do some further research and find problems.
To address the bad feelings: I work with several AI companies, and the ones that actually care about quality were very, very adamant about avoiding AI for development outside of augmented search. They actively filtered out candidates that used AI for resumes and had AI-slop code contributions, and do the same with their code base and development process. And it's not about worrying about their IP being siphoned off to LLM providers, but about the code quality in itself, and the fact that there is deep value in the human beings working at a company understanding not only the code they write, but how the system works at the micro and macro levels. They're acutely aware of models' limitations, and they don't want them touching their code capital.
--
I think these tools have value, I use them and reluctantly pay for them, but the idea that they're going to replace development with prompt writing is a pipe dream. You can only get so far with next-token generators.
Do all the stuff you mention the old way. If I have a specific, crappy API that I have to deal with, I'll ask AI to generate the specific functionality I want with it (no more than a method or two). When it comes to testing, I'll write a few tests (some simple, some complicated) and then ask AI to generate a set of tests based on those examples. I then run and audit the tests to make sure they are sensible. I always end my prompts with "use the simplest, minimal code possible"
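For illustration, here's a minimal sketch of what those seed tests might look like, in C with plain assert. The slugify() helper and its expected behavior are entirely hypothetical, invented for this example; the point is the shape of the tests (one simple, one fiddly) that the AI is then asked to extend.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical function under test: lowercase ASCII letters and digits
   pass through; everything else collapses to a single '-' separator. */
static void slugify(const char *in, char *out) {
    char *start = out;
    int pending_sep = 0;
    for (; *in; in++) {
        char c = *in;
        if (c >= 'A' && c <= 'Z') c += 'a' - 'A';
        if ((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')) {
            if (pending_sep && out != start) *out++ = '-';
            *out++ = c;
            pending_sep = 0;
        } else {
            pending_sep = 1;   /* collapse junk runs, drop at the edges */
        }
    }
    *out = '\0';
}

int main(void) {
    char buf[128];

    /* Simple seed test: the obvious happy path. */
    slugify("Hello World", buf);
    assert(strcmp(buf, "hello-world") == 0);

    /* Complicated seed test: punctuation, separator runs, edge junk. */
    slugify("  C++ is -- fine?! ", buf);
    assert(strcmp(buf, "c-is-fine") == 0);

    /* The AI gets these two as examples and is asked for more edge
       cases (empty string, all punctuation, digits-only, etc.),
       which I then run and audit by hand. */
    printf("seed tests pass\n");
    return 0;
}
```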
I am mostly keeping the joy of programming while still being more productive in areas I'm not great at (exhaustive testing, having patience with crappy APIs).
Not world changing, but it has increased my productivity I think.
But the whole blog is a consequence of exactly what you are describing.
Why didn't the fact that Redis already existed make the whole thing feel pointless before? You could just go to GitHub and copy the thing. I don't get why AI is any different in this regard.
AI just like IDEs before it makes it easier for me to complete my labor and have money appear in my account.
There are literally at least a dozen things I would rather do after getting off of work than spending more time at a computer.
My experience with vibe coding has been more frustrating than fruitful. So far, I've been fully successful only with small code snippets and scripts, but I can see where it's heading.
It works for me.
The "you think about the problem and draw diagrams" part of you describe probably makes up less than 5% of a typical engineering workflow, depending on what you work on. I work in a scientific field where it's probably more than for someone working in web dev, but even here it's very little, and usually only at the beginning of a project. Afterwards it's all about iteration. And using AI doesn't change that part at all, you still need to design the high level solution for an LLM to produce anything remotely useful.
I never encountered the problem of not understanding details of the AI's implementation that people here seem to describe. I still review all the code and need to ask the LLM to make small adjustments if I'm not happy with it, especially around not-so-elegant abstractions.
Tasks that I actively avoided previously because they seemed like a hassle, like large refactorings, I no longer avoid, because I can ask an AI to do most of the work. I feel so much more productive, and work is more satisfying because I get to knock out all these chores that I had resistance to before.
Brainstorming with an AI about potential solutions to a hard problem is also more fun for me, and more productive, than doing research the old ways. So instead of drawing diagrams I now just have conversations.
I can't say for certain whether using LLMs has made me much more productive (overall it likely has but for certain tasks it hasn't), but it definitely has made work more fun for me.
Another side effect has been that I'm learning new things more frequently when using AI. When I brainstorm solutions with an AI or ask for an implementation, it sometimes uses libraries and abstractions I have not seen before, especially around very low level code that I'm not super familiar with. Previously I was much more likely to use or do things the one way I know.
I want to say up front that as a CTO, I think "the pressure to skip understanding and just ship" is a cultural failure somewhere that should be addressed by you if not by the people you work for and with. As others have pointed out here, that sort of approach to software engineering is guaranteed to create technical debt. This idea is corroborated by several articles I've read recently about the "slop" that is flowing downstream to QA teams and customers. I think as software engineers and professional people, we owe it to our colleagues and our customers to not replace working understandable software with broken black boxes.
The problem with the agile manifesto was never its mission and values. The problem with agile is the glut of terrible practices that do not scale. The problem with AI-assisted coding isn't that it automates some large and tedious amount of syntax creation—it does that well. The problem with AI-assisted coding is that we're trying to use it to do things that it shouldn't be doing. Almost none of the "good" work product I have seen come out of AI-assisted engineering has been a "one-shot" solution: planning is still a huge part of my process, and with AI assistance, I can do more planning!
The current phase of my personal development is to move from using AI assistance in one codebase on one task at a time, to using it across multiple tasks in multiple codebases. As I am writing this response to you, I have Claude working on two different problems: a complex redesign of our asset processing pipeline with backward compatibility for the thousands of assets that have already been added to our system, and debugging a stubborn defect in authentication in our Unity codebase. My approach to this is to treat these tasks like two collaborations with two other developers—my role is to guide and review their implementation, not do their work for them.
On that note, I would love to create a cultural shift here soon and start using more test-driven development in our projects. I have always loved this approach to software engineering, but I have seldom had the opportunity to put it into practice. TDD is time consuming in a way that I have found difficult to justify at the beginning of projects. But the longer a team waits to start implementing good test code, the harder the task becomes. I want to stress that it should be professional malpractice to automate TDD script design without systematic code review.
I just got back from a well-earned vacation, and the jet lag is really painful. I am looking forward to feeling better. Then, the next phase of doing more through collaboration is to leverage git worktrees. I started doing this just before my break, and I have a couple of different trees ready to start building some much-needed features. The worktrees and some excellent features in Laravel make it fast and simple to have completely separate local dev environments, with no virtual containers needed (fast and scalable!). I am pleased with how quickly the workspaces can be created—I hope to be as happy with the work that can get done inside them.
Absolutely all of this effort is aimed squarely at creating more value for my customers and our company's founders with less time and cost. But once I have the process up and running, I intend to reap some personal gains from all of this newfound productivity. I want to create games and experiences that leverage generative AI to generate novelty and story. With the support of AI coding assistants, I'll be able to start exploring those personal goals soon. In this way, I can unlock new avenues for personal growth in entrepreneurship and product design without taking on an outsized risk or dramatically changing the course of my career.
It has taken months of experimentation and painful personal growth to get to this mental space. I hope sharing these experiences is useful to you and to others. If you ever want to talk more about this challenge, I'd be open to meeting remotely: https://calendar.aaroncollegeman.com
Cheers.
But honestly, I do not miss it at all. The further AI coding advances, the easier it becomes to build and iterate on small products (even if they just start out as MVPs), and the more I actually feel in my element. I understand why people dislike it, but it feels as if these tools were specifically made for me, and I am getting more and more excited as they keep getting better.
In a perfect world, I'd see no code at all: I'd just tell the AI what I want, a black-box implementation of my product would appear, and I would sculpt it down to something I can work with and serve to users. That would be my ultimate satisfaction.