For example, a team that couldn't write an encrypted messaging app without AI gets an AI to write one for them. How do they check that the code is actually secure? Encryption code is notoriously hard to get right, most humans can't do it, and if you don't understand the intricacies of cryptography you'll never catch the mistakes the AI makes.
I did want it to improve our e2e testing, but it didn't make things as easy as I expected.
Turn on "Preserve log" in the dev tools Network tab and go through as much of the flow as possible. Filter out domains that aren't your site (Google, Facebook, external API calls) and download the .har for whatever is left. Convert the HAR into a k6 script using a library, or dump it into an LLM to convert it to their newer browser scripts and point the LLM at their docs. Then edit it to be dynamic so you can test paths from different product pages/types, scrape and enter proper GUIDs, etc.
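To make that last step concrete, here's a minimal sketch of what the "edited to be dynamic" end result can look like as a k6 script (the conversion itself can be done with a HAR-to-k6 converter). The example.com URLs, the products.json file, and the /api/products/:guid endpoint are placeholders for illustration, not anything from a real setup:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';
import { SharedArray } from 'k6/data';

// Load a list of products once and share it across VUs, instead of
// replaying the single hardcoded product the HAR happened to capture.
const products = new SharedArray('products', function () {
  // e.g. [{ "slug": "red-widget", "guid": "..." }, ...] -- placeholder file
  return JSON.parse(open('./products.json'));
});

export const options = {
  vus: 20,
  duration: '2m',
};

export default function () {
  // Pick a different product per iteration so more paths get exercised.
  const product = products[Math.floor(Math.random() * products.length)];

  // Product page (a hardcoded URL in the recorded HAR, now parameterised).
  const page = http.get(`https://www.example.com/products/${product.slug}`);
  check(page, { 'product page 200': (r) => r.status === 200 });

  // The API call behind that page, with the proper GUID substituted in.
  const api = http.get(`https://www.example.com/api/products/${product.guid}`);
  check(api, { 'api 200': (r) => r.status === 200 });

  sleep(1);
}
```

Run it with `k6 run script.js`; the per-request checks are what surface which calls actually cause load.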
On the flip side, we're seeing a lot more bot traffic, likely from bad actors doing the same thing, but by writing these tests you start to see which calls cause load and can be proactive about them.
https://helpfuldjinn.com/ https://github.com/DjinnRutger/HelpDesk-Public