Numerical precision is an ongoing concern of mine, especially in big, long-running simulations and solvers.
I came across an article by Rob Farber on the scientificcomputing.com site this morning that asks the question "How much is Enough?". Although no definitive answers are presented, the author summarizes the current and future concerns over accuracy.
Personally, I don't believe floating point is the way forward. Floating point is fast to calculate in hardware, but it is not always an ideal way of representing numbers. Although the various branches of mathematics are largely base independent, humans are most comfortable with base 10, while computers are of course most comfortable with base 2. This leads to situations where a calculation in base 10 with only a few decimal digits of precision gives an exact result, whereas the same calculation in base 2 cannot give an exact result no matter how many bits of precision are used, because some simple decimal fractions (0.1, for example) have infinitely repeating binary expansions. The base-2 result is usually acceptable after enough bits, but it is never exact.
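To make this concrete, here is a minimal Python sketch (my own example, not something from Farber's article) contrasting binary floating point with base-10 arithmetic using the standard decimal module:

```python
from decimal import Decimal

# 0.1 has no finite binary expansion (it repeats: 0.000110011...),
# so a binary float can only store an approximation of it.
binary_sum = 0.1 + 0.2
print(binary_sum)          # 0.30000000000000004
print(binary_sum == 0.3)   # False

# Base-10 arithmetic represents 0.1 exactly, so the same sum is exact.
decimal_sum = Decimal("0.1") + Decimal("0.2")
print(decimal_sum)                    # 0.3
print(decimal_sum == Decimal("0.3"))  # True
```

The decimal version is exact here precisely because the operands are short base-10 fractions; it is of course slower than hardware floating point, which is the trade-off at the heart of the question.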
I'm not presenting any solution to the precision problem, but merely pointing out that sometimes the issue is caused by using base 2 for calculations, by the floating-point representation itself, or by both.