Why is big O often used in CS when omega or theta should be used?
Do most people not know about omega and theta, or do they intentionally misuse asymptotic notation?
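For reference, the standard definitions I have in mind (as they are usually stated in textbooks; this is just the common formulation, not a quote from any particular source):

$$f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \le c\,g(n) \text{ for all } n \ge n_0$$
$$f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\,g(n) \text{ for all } n \ge n_0$$
$$f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \text{ and } f(n) \in \Omega(g(n))$$

So O gives only an upper bound, omega only a lower bound, and theta a tight bound, which is why saying "quicksort is O(n^2)" is true but says nothing about how slow it actually is.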
I've found some people on a well-known Q&A site to be so strict about big-O notation that I no longer use it; instead I refer only to constant, linear, quadratic time and so on, even when O(1), O(n), O(n^2) would actually be easier to write, despite being technically wrong.
Finding lower bounds is generally more difficult.
Those aren't in the ASCII table.