And let's say that you worry about containing it so that it doesn't take over the world. (Perhaps reasonably.)
But let's say you're using it the way we use AI right now, to help people code. It generates code for people, which they then run, not in the contained environment. You don't have a contained AI any more, do you? Especially if its training data included things like, say, APT techniques and examples, and code obfuscation techniques.
I mean, we worry about it hacking a human - persuading a human to relax the firewall rules, or whatever. But letting it generate code that we will run is far more of a security hole, isn't it?
Am I all wet here?