In computer science it's used to express asymptotic behaviour. This suppresses lower-order terms and constant factors, so O(2n + log(n)) = O(n). The most common variants are upper bounds O(•), tight (asymptotically equal) bounds Θ(•), and lower bounds Ω(•).
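To see why the lower-order term and the constant factor get absorbed, you can just watch the ratio f(n)/n settle down as n grows. A quick sketch (my own illustration, with a made-up cost function f):

```python
import math

def f(n):
    # hypothetical cost function: 2n + log(n)
    return 2 * n + math.log(n)

# f(n)/n = 2 + log(n)/n approaches the constant 2 as n grows,
# so both the log term and the factor 2 vanish into O(n).
for n in (10, 1000, 100000, 10**7):
    print(n, f(n) / n)
```

The printed ratios converge toward 2, which is exactly the sense in which O(2n + log n) = O(n): the two functions differ only by a bounded factor in the limit.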
I'm guessing shmageggy thought the notation was Big-O notation. It's a bit of CS notation that puts an upper bound on an algorithm's running time (number of operations) as a function of input size.
I don't know enough about particle physics to know whether the two of you are even remotely close to saying the same thing.
It's math notation invented for analytic number theory that got popularized in CS by Donald Knuth.
By definition, all constant (nonzero) arguments denote the same class, so using constants other than 1 isn't really useful. That said, the O stands for 'order' or 'order of', and it sometimes gets (ab)used for orders of magnitude (e.g. powers of 10) of finite values rather than for the limiting behaviour of functions.
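To make the constant-argument equivalence concrete (my own illustration, not from the thread):

```latex
O(7) = O(1), \qquad O(c \cdot f(n)) = O(f(n)) \text{ for any constant } c > 0
```

Any nonzero constant names the same class as O(1), which is why writing O(2) or O(100) conveys nothing that O(1) doesn't.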
If you're a physicist, you've probably seen it as the final term of a Taylor expansion. That's the proper usage; the usage above is an abuse of notation insofar as all (nonzero) constant arguments are equivalent.
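A standard example of that Taylor-expansion usage (my own, not from this thread) is the exponential expanded around 0:

```latex
e^x = 1 + x + \frac{x^2}{2} + O(x^3) \quad \text{as } x \to 0
```

Here O(x³) stands for the entire tail of degree-three-and-higher terms, and the statement is about the limit x → 0 rather than x → ∞, which is the other common convention in CS.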