Recently we've been hiring some new junior software developers (we're only a small development team of ~8 devs), and watching how they work, I'm seeing a real difference between them and the staff we hired before 2022 or so.
The earlier hires - basically the older crowd - learned how to do research: use a search engine, W3Schools, Stack Overflow, MDN, MSDN (now Microsoft Learn), or even Reddit to resolve a problem or look up documentation.
Then in 2022/2023 AI arrived in the form of ChatGPT and GitHub Copilot Chat.
What we see with the newer people is that they embed these AI tools deeply in their workflow - and rely little, if at all, on the traditional ways of problem solving and research. AI just hands them the fish they're after verbatim, though without the opinions or additional insights you'd otherwise get from a platform like Stack Overflow. Good prompt engineering also matters - otherwise they get the wrong answer.
I wonder how other senior/lead developers handle this? Should one forbid or discourage Copilots? Should we encourage traditional ways of problem solving and research (old man yells at cloud)? Is there a good balance yet to be found?
Thank you all for your insights.
I have long been an advocate of not just looking things up in the manual but becoming highly familiar with the manual. This works well for languages that have a good manual (Java, Python) and not so well for some others.
Lately I’ve been using Copilot a lot. I think it is better than SO. Whenever I see anything surprising in an answer, I ask follow-up questions about how the code works, and the answers are usually good.
I wish it did a better job of citing sources; if it could point me to the answer in the manual, I’d love that. The links it provides to arbitrary websites are wrong maybe half the time. I’ve been pining for a search engine that ingests your pom.xml (or whatever your language uses) and searches only the docs for the JDK version and dependencies you’re actually using.
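The first step such a tool would need is just reading the dependency coordinates out of the POM. A minimal sketch in Python (the sample POM, its dependency, and the version number are invented for illustration; a real tool would also have to resolve properties, parent POMs, and BOM imports, which this does not):

```python
# Sketch: extract (groupId, artifactId, version) triples from a pom.xml,
# so a doc search engine could scope itself to the dependencies in use.
import xml.etree.ElementTree as ET

POM_NS = "{http://maven.apache.org/POM/4.0.0}"  # default Maven POM namespace


def list_dependencies(pom_xml: str) -> list[tuple[str, str, str]]:
    """Return (groupId, artifactId, version) for each declared dependency."""
    root = ET.fromstring(pom_xml)
    deps = []
    for dep in root.findall(f"{POM_NS}dependencies/{POM_NS}dependency"):
        gid = dep.findtext(f"{POM_NS}groupId", default="")
        aid = dep.findtext(f"{POM_NS}artifactId", default="")
        ver = dep.findtext(f"{POM_NS}version", default="")  # may be inherited in real POMs
        deps.append((gid, aid, ver))
    return deps


# Invented sample POM for illustration only.
sample_pom = """<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.17.1</version>
    </dependency>
  </dependencies>
</project>"""

print(list_dependencies(sample_pom))
# → [('com.fasterxml.jackson.core', 'jackson-databind', '2.17.1')]
```

From there the search side would filter its index by those coordinates plus the JDK version, so every hit is guaranteed to be for an API you actually have on the classpath.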