HACKER Q&A
📣 sriramgonella

How are teams validating AI-generated tests today?


With the rise of AI-assisted development, many tools generate tests automatically.

But validating whether those tests actually cover meaningful edge cases seems harder.

Curious how teams here handle this in real workflows.


  👤 david_iqlabs Accepted Answer ✓
One thing I've noticed with AI-generated tests is that they can look very convincing even when they're wrong. The output reads confidently, but there's not always anything grounding it in real signals.

I've found it works better when the AI is just explaining results that come from deterministic metrics rather than inventing the analysis itself.
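One deterministic signal that fits this approach is a toy mutation check: inject a small bug into the code under test, re-run the tests, and record whether any test fails. That pass/fail result is a hard metric the AI can explain rather than invent. A minimal sketch (the `clamp` function and its tests are illustrative, not from any real project):

```python
import ast

# Code under test: a simple clamp. A happy-path-only suite would miss
# its boundary behavior, which is exactly what a mutant can expose.
SRC = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""

class FlipFirstLt(ast.NodeTransformer):
    """Mutate the first `<` into `>` -- a classic operator-swap mutant."""
    def __init__(self):
        self.done = False
    def visit_Compare(self, node):
        if not self.done and isinstance(node.ops[0], ast.Lt):
            node.ops[0] = ast.Gt()
            self.done = True
        return node

def load(src):
    """Exec a source string and return its `clamp` function."""
    ns = {}
    exec(compile(src, "<mutant>", "exec"), ns)
    return ns["clamp"]

def run_tests(clamp):
    """Hypothetical test suite; returns True if every check passes."""
    checks = [
        (clamp(5, 0, 10), 5),    # in range
        (clamp(-1, 0, 10), 0),   # below range
        (clamp(0, 0, 10), 0),    # boundary: x == lo
    ]
    return all(got == want for got, want in checks)

mutant_src = ast.unparse(FlipFirstLt().visit(ast.parse(SRC)))

original_ok = run_tests(load(SRC))           # suite passes on real code
mutant_killed = not run_tests(load(mutant_src))  # suite catches the mutant
print(original_ok, mutant_killed)
```

If `mutant_killed` comes back `False`, the suite never exercised the behavior the mutation broke, and that's a concrete, reproducible fact to hand the AI, instead of asking it to judge test quality from the test text alone.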

Curious how other teams are dealing with that.


👤 itigges22
For security vulnerability testing on websites I've been making for clients, I almost always hire a senior developer to look over the work and/or the tests that were created. AI can pass a test, and it can make something that passes a test, but there almost ALWAYS are problems the senior dev finds with the tests, or with the code that was being tested. Sometimes AI will rewrite the code entirely to pass the test, or adjust the test to pass failing code.

Another counter-measure I have is to simply lock the code before testing: look over the test files and make sure they're not just following the happy path.
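The "lock code before testing" idea above could be mechanized by hashing the source files before the AI touches the tests, then verifying nothing changed afterward. A minimal sketch (the file contents and names are made up for the demo; a real setup would hash the actual files under test):

```python
import hashlib
import pathlib
import tempfile

def snapshot(paths):
    """Map each file path to the SHA-256 of its current contents."""
    return {str(p): hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
            for p in paths}

def unchanged(paths, before):
    """True only if every locked file still matches its recorded hash."""
    return snapshot(paths) == before

# Demo with a throwaway file standing in for real source code.
with tempfile.TemporaryDirectory() as d:
    src = pathlib.Path(d) / "app.py"
    src.write_text("def add(a, b):\n    return a + b\n")

    locked = snapshot([src])
    ok_before = unchanged([src], locked)   # nothing touched yet

    # Simulate the failure mode: the code is quietly "fixed" to pass a bad test.
    src.write_text("def add(a, b):\n    return 0\n")
    ok_after = unchanged([src], locked)    # tampering is now detectable

print(ok_before, ok_after)
```

Running this before and after a test-generation session turns "the AI adjusted the code to pass the test" from something a reviewer has to spot into a check that fails loudly.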