The AI is blisteringly capable & observant & cross-functional & smart. But it constantly makes incredibly bad judgment calls. All the time. Every decision looks brilliant and great, yet has three mediocre or poor little decisions packed into it.
It's a miracle that AI can bring us so "close to the machine" (Ullman). But the remaining agency & deliberateness: you can't get that taste & refinement without being deeply, deeply technical.
Code you didn't write is code you can't maintain.
Worse, you can't tell that unmaintainable code is being written, so you hit the wall faster.
Once you hit the wall the AI won't be able to fix it for you.
Use the LLM to help you learn.
Get it to explain what it's made and provide references.
When you don't understand its answer, probe further - you'll often find it's doing the wrong thing.
In doing this, you'll learn and end up with better systems.