I have a program developed in C++ in Visual Studio that processes a huge amount of data. The program can work with either float or double data, and the choice is made as follows:
typedef float/double real;
that is, in that statement I use either double or float. However, I have run into a problem for which I can find no justification. With my project's floating-point model set to the default, Precise (/fp:precise), the program takes twice as long to process the data with floats as with doubles. With the Fast (/fp:fast) floating-point model, floats and doubles take roughly the same time, which does not surprise me. I just don't understand why, in precise mode, using floats takes so much longer than using doubles (twice as long).
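
For concreteness, here is a minimal sketch of the kind of setup I mean (the loop body is just a stand-in for the real workload, not my actual code); build it once with each typedef, and with /fp:precise versus /fp:fast, to compare timings:

    #include <chrono>
    #include <cstdio>

    typedef float real;   // switch to "typedef double real;" to compare

    int main() {
        const int N = 100000000;
        real sum = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i)
            sum += static_cast<real>(i) * static_cast<real>(0.5f);
        auto t1 = std::chrono::steady_clock::now();
        std::chrono::duration<double> seconds = t1 - t0;
        // Print the sum so the compiler cannot optimize the loop away.
        std::printf("sum = %g, time = %g s\n",
                    static_cast<double>(sum), seconds.count());
    }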
Thank you!
Using double, what is the "floating point model" used? In "precise" mode, the compiler needs to insert code around every floating-point operation to make it conform to the IEEE floating-point standard. The only plausible explanation I see is that, with double, the code is generated in "fast" mode. It may be that the compiler only honors this flag for the float type, but this is speculation. – lvella