How can I get consistent program behavior when using floats?
I am writing a simulation program that proceeds in discrete steps. The simulation consists of many nodes, each of which has a floating-point value associated with it that is re-calculated on every step. The result can be positive, negative or zero.

In the case where the result is zero or less, something happens. So far this seems straightforward - I can just do this for each node:
if (value <= 0.0f) something_happens();
A problem has arisen, however, after some recent changes I made to the program, in which I re-arranged the order in which certain calculations are done. In a perfect world the values would still come out the same after this re-arrangement, but because of the imprecision of floating point representation they come out slightly different. Since the calculations for each step depend on the results of the previous step, these slight variations in the results can accumulate into larger variations as the simulation proceeds.

Here's a simple example program that demonstrates the phenomenon I'm describing:
#include <cstdio>

int main() {
    float f1 = 0.000001f, f2 = 0.000002f;
    f1 += 0.000004f;          // This part happens first here
    f1 += (f2 * 0.000003f);
    printf("%.16f\n", f1);

    f1 = 0.000001f, f2 = 0.000002f;
    f1 += (f2 * 0.000003f);
    f1 += 0.000004f;          // This time this part happens second
    printf("%.16f\n", f1);
    return 0;
}
The output of this program is:

0.0000050000057854
0.0000050000062402
Even though addition is commutative, so both results should be the same. Note: I understand why this is happening - that's not the issue. The problem is that these variations can mean that a value which used to come out negative on step N, triggering something_happens(), may now come out negative a step or two earlier or later, which can lead to very different overall simulation results because something_happens() has a large effect.

What I want to know is whether there is a way to decide when something_happens() should be triggered that is not going to be affected by the tiny variations in calculation results that result from re-ordering operations, so that the behavior of newer versions of my program will be consistent with the older ones.
The only solution I've so far been able to think of is to use some epsilon value like this:
if (value < epsilon) something_happens();
But because the tiny variations in the results accumulate over time, I need to make epsilon quite large (relatively speaking) to ensure that the variations don't result in something_happens() being triggered on a different step. Is there a better way?
I've read this excellent article on floating point comparison, but I don't see how any of the comparison methods described could help me in my situation.

Note: Using integer values instead is not an option.

Edit: The possibility of using doubles instead of floats has been raised. This wouldn't solve my problem, since the variations would still be there - they'd just be of a smaller magnitude.
I'd recommend single-stepping - preferably in assembly mode - through the calculations while doing the same arithmetic on a calculator. You should then be able to determine which calculation orderings yield results of lesser quality than you expect and which ones work. You will learn from this and will probably write better-ordered calculations in the future.

In the end - given the examples of the numbers you use - you will probably need to accept the fact that you won't be able to do equality comparisons.
As to the epsilon approach, you usually need one epsilon for every possible exponent. For the single-precision floating point format you would need 256 single-precision floating point values, as the exponent is 8 bits wide. Some of those exponents will be the result of exceptions, but for simplicity it is better to have a 256-member vector than to do a lot of testing as well.

One way to do this is to determine a base epsilon for the case where the exponent is 0, i.e. the value to be compared against is in the range 1.0 <= x < 2.0. Preferably the epsilon should be chosen to be base-2 adapted, i.e. a value that can be exactly represented in the single-precision floating point format - that way you know exactly what you are testing against and won't have to think about rounding problems in the epsilon as well. For an exponent of -1 you would use your base epsilon divided by two, for -2 divided by 4, and so on. As you approach the lowest and highest parts of the exponent range you will gradually run out of precision - bit by bit - so you need to be aware that extreme values can cause the epsilon method to fail.