I disagree: IEEE 754 is quite elegant. The fact that floats are ordered the same way as their bit patterns (read as sign-magnitude integers) is one of the many nice things about it.
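For what it's worth, here's a quick sketch in Python (the helper name is mine) showing that for non-negative doubles, sorting by the raw bit pattern gives exactly the same order as sorting by value:

```python
import struct

def double_bits(x: float) -> int:
    # Reinterpret the 8 bytes of an IEEE 754 double as an unsigned integer.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

values = [0.0, 5e-324, 1e-310, 1.0, 1.5, 2.0, 1e300, float("inf")]

# For non-negative doubles, value order and bit-pattern order coincide.
assert sorted(values) == sorted(values, key=double_bits)

for v in values:
    print(f"{v!r}  ->  {double_bits(v):016x}")
```

That ordering property is also what lets you step to the adjacent float just by incrementing the bit pattern.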
The lack of precision leads to error that accumulates over time in many contexts (off the top of my head, multiplication and division). I still think that if an algorithm can be rewritten to work with ints instead of floats, it will be better served the vast majority of the time.
Multiplication and division of floats create a rounding error of less than 1 ulp each time (at most half an ulp under the default round-to-nearest mode). In most contexts you never need to worry about them.
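To put a number on that, here's a rough sketch in Python (3.9+ for math.ulp; the variable names are mine) measuring the rounding error of a single multiplication against an exact rational reference:

```python
import math
from fractions import Fraction

a, b = 0.1, 0.3
product = a * b

# Exact product of the two double values, computed in rational arithmetic.
exact = Fraction(a) * Fraction(b)

# Error of the one rounded multiplication, measured in ulps of the result.
error_ulps = abs(Fraction(product) - exact) / Fraction(math.ulp(product))
print(float(error_ulps))  # well under 1 ulp (at most 0.5 under round-to-nearest)
```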
The operations you need to watch out for are addition and subtraction in cases where the result has a much smaller magnitude than the inputs, causing loss of significance (catastrophic cancellation). Sometimes great care must be taken in implementing numerical algorithms to avoid this. But this is an inherent problem in numerical computing, not the fault of the floating-point format per se.
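A tiny illustration in Python (the example values are mine): two operations that are each correctly rounded can still wipe out almost all of the result's relative accuracy when the result is far smaller than the inputs:

```python
x = 1e-15

# Mathematically, (1 + x) - 1 is exactly x, but the intermediate sum 1 + x
# can only keep about 16 significant digits, so most of x's digits are lost.
naive = (1.0 + x) - 1.0
print(naive)                # 1.1102230246251565e-15
print(abs(naive - x) / x)   # relative error around 10%, even though each
                            # individual operation rounded by less than 1 ulp
```

The standard fixes (rearranging the formula, Kahan summation, and so on) are about avoiding that subtraction, not about abandoning floats.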
Doing integer/rational arithmetic gives you a choice: either never do any rounding and require exponentially growing precision that makes even the simplest algorithms impractically expensive (not to mention giving up entirely on the many common operations, like square roots, logarithms, and trig, whose results can't be represented exactly in a rational arithmetic system at all), or allow rounding/approximation of some kind and end up with roughly the same problems floats have.
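You can see the exact-rational blow-up with the standard fractions module (the logistic-map iteration here is just my own toy example): each step roughly doubles the number of digits needed.

```python
from fractions import Fraction

# Iterating a simple quadratic map exactly with rationals: the size of the
# denominator roughly doubles every step, i.e. exponential growth in precision.
x = Fraction(1, 3)
r = Fraction(7, 2)
for step in range(1, 11):
    x = r * x * (1 - x)
    print(step, x.denominator.bit_length(), "bits in the denominator")
```

Ten steps already needs over a thousand bits; a hundred steps is far beyond any feasible memory, while a double handles every step in constant time with bounded relative error.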
I just use bigints wherever it's possible to transform the algorithm to work with bigints, and I "render to decimal" in views.
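For the record, a minimal sketch of that approach (Python, helper names mine) with fixed-point money: all the arithmetic stays in ints, and decimal only appears at the rendering boundary.

```python
# Store money as an integer count of cents, do all arithmetic on ints,
# and only format as a decimal string at the display boundary.

SCALE = 100  # 2 decimal places

def to_cents(dollars: int, cents: int) -> int:
    return dollars * SCALE + cents

def render(amount_cents: int) -> str:
    sign = "-" if amount_cents < 0 else ""
    dollars, cents = divmod(abs(amount_cents), SCALE)
    return f"{sign}{dollars}.{cents:02d}"

subtotal = to_cents(19, 99) * 3     # exact: no float ever touches the amount
tax = subtotal * 825 // 10_000      # 8.25% tax, rounded down in integer arithmetic
print(render(subtotal + tax))       # "64.91"
```

The same pattern works with any fixed scale (micros, nanos, ...) as long as every intermediate value stays an integer.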