That being said, if it all works as promised and the model becomes widely available, it's quite amazing and has the potential to change a few things. The obvious one is that it becomes much easier to automatically generate plausible-looking content such as news stories, comments, etc. This will produce far more bots and spam than we are used to seeing. The other obvious one is acting as a natural-language search engine or database, where you can ask questions and get facts as answers. This would be restricted to non-subjective things that are in the training data, of course.
On a longer time scale, it could drive the adoption of technologies around fact and identity verification. As it becomes so much easier to automatically generate content, we need better ways to establish trust. Safety and bias are yet another concern, since anything generated will obviously be biased toward whatever is most common in the training data.