For example: I've got some 'multiplying' code that should return 4 when passed 2 and 2, so I'm advised to write an "assert" test checking that it returns 4 when I send it 2 and 2. But what happens if I send it just 2? Or 2 and 2 and 2? Or 2 and 3? Or 2 and W? None of the examples of why it's so vital to write tests ever seems to cover this kind of thing; they just show a test that returns the correct answer when given the correct data. But aren't many of the problems with software caused by a piece of code receiving the wrong data in the first place?
Genuine question. I've read so many times about why I should practice TDD that I believe it in theory, but I've never seen a beginner's example that actually seemed like it would provide the claimed benefits. Why is writing a test that takes one specified input and checks for one specified output "better" than just writing my code and then trying to break it, (metaphorically) hitting the "Do Not Press!" button by throwing as much incorrect, malformed, or otherwise "wrong" data at it as possible?
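To make the question concrete, here's a minimal Python sketch of what I mean. The multiply function, its name, and its policy of raising TypeError on bad input are all my own illustration, not from any tutorial; the point is that the tutorials only ever show me the first assert, while the other cases seem just as important:

```python
def multiply(a, b):
    """Multiply two numbers, rejecting anything that isn't a number."""
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("multiply() only accepts numbers")
    return a * b

# The "happy path" test every tutorial shows:
assert multiply(2, 2) == 4

# ...and the "wrong data" cases I'm asking about, which never seem
# to appear in beginner examples but are easy to express as tests:
assert multiply(2, 3) == 6  # a different valid input

try:
    multiply(2, "W")  # wrong type: should be rejected
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for multiply(2, 'W')")

try:
    multiply(2)  # wrong arity: Python itself raises TypeError here
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for multiply(2)")
```

So the tests *can* cover bad input too; my question is why the introductory material always stops at the first line.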
A book that explains TDD in the context of a much larger example application is Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce (often referred to as "the GOOS book").
One of the difficult things about TDD is that it takes a long time to learn to do it effectively. I personally didn't really "get" TDD until I paired with a few much more experienced developers, and this forced me to rethink the way that I approached writing software. Ultimately, it caused me to significantly level up my coding skills and I wouldn't go back to the way that I used to write code before.