All the hype about AI is very familiar to us all. Let's leave it at that for this thread.
But having taken on a few projects myself, trying to get customer-ready results from homegrown AI/RAG/NLP/ML etc., I have to admit it is by far the most challenging aspect of my programming career.
And I have not come up with results that would even remotely satisfy a paying customer.
How about you?
I find most people have trouble because they assume it's like programming: they talk to it like a robot and assume it uses robot speak. But it's trained on human language and works better when you talk to it like a person. It's more like talking to a gifted child. Some, like Claude, have been trained to use tags and such. There's also a lot of core stuff people don't understand: when to fine-tune, when to RAG, when to prompt-engineer and use large context windows, the differences between models and how they're trained. It's useful to read the documentation; GPT and Claude look similar at first glance, but the documentation tells you the differences.
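To make the RAG point concrete, here's a toy sketch of the idea: retrieve something relevant, then put it in the prompt instead of fine-tuning. The documents and the keyword-overlap retrieval are stand-ins I made up; a real system would use embeddings and an actual model call:

    # Toy sketch of RAG: retrieve relevant context, prepend it to the
    # prompt. Keyword overlap stands in for a real embedding search.
    docs = [
        "Refunds are processed within 14 days of the return request.",
        "Orders ship from our warehouse Monday through Friday.",
        "Premium members get free expedited shipping.",
    ]

    def retrieve(question, corpus, k=1):
        # Rank documents by naive word overlap with the question.
        q_words = set(question.lower().split())
        scored = sorted(corpus,
                        key=lambda d: len(q_words & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

    question = "How long do refunds take?"
    context = "\n".join(retrieve(question, docs))

    # This is what the model actually sees: context first, then the question.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    print(prompt)

Nothing here touches the model's weights. If the answer lives in your own data, retrieval plus a good prompt is usually the cheaper first move; fine-tuning is for changing behavior, not injecting facts.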
Many mistakes are solved the same way you would with a child. If they're getting the answer wrong, there might not be enough context or enough hints. People say that LLMs only get sarcasm when they know the source of the material; well, that's exactly how humans understand sarcasm too. Instead of asking, "Solve this math question," a prompt like, "What's the best way to solve this math question?" might lead to better results. Tools like Cursor work so much better than Copilot despite using the same models, because they're set up to reason about the solution first.
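Here's a rough sketch of that reframing in code; the ask() helper is hypothetical, standing in for whatever chat client you use:

    # Same question, two framings. The second nudges the model to pick
    # an approach and reason before answering, which often helps.
    question = "If 3 pencils cost $1.20, how much do 7 pencils cost?"

    direct = f"Solve this math question: {question}"
    reasoned = (
        f"What's the best way to solve this math question? {question}\n"
        "Walk through your approach step by step, then give the final answer."
    )

    def ask(prompt):
        # Hypothetical helper: send the prompt to your model of choice
        # and return its reply.
        raise NotImplementedError("wire up your model client here")

    # ask(direct) tends to jump straight to a number;
    # ask(reasoned) shows its work first, so errors are easier to spot.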
I would recommend doing hackathons to learn. Set a small goal, one that can be solved in a day or two. Solve it using AI. If you can't, you'll at least learn why not.