The three rules were never meant as a guideline for how to safely build artificial intelligences; they were deliberately fallible, and most of the stories centered on how they weren't enough to protect either humans or robots, or on the unintended consequences of the rules themselves.
I remember reading somewhere that Asimov made the rules faulty on purpose so he could write interesting stories, but I don't know whether that's apocryphal.