Difference between float and double

Asked 2 years ago, Updated 2 years ago, 101 views

I read about the difference between float and double. Most of the time, they seem to produce the same results.

But when I was solving a floating-point calculation problem, I tested 10 cases on my own computer, and the results came out the same whether I wrote double or float.

But when I submitted it to the server that grades the problems, the float version was judged correct on only 1 of the 10 cases, while the double version was judged correct on all 10. Why is this happening?

c c++ floating-point double-precision

2022-09-21 14:46

1 Answer

Because the environment of your computer and the environment of the server are different. The C++ standard only specifies the minimum size of each data type, so the size of float on your personal computer may differ from the size of float on the grading server, and this can cause overflow.
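If you want to check, you can print the properties of the floating-point types on both machines and compare them. This is only a minimal sketch; the exact values depend on the compiler and platform.

#include <cstdio>
#include <limits>

int main() {
    // Print this machine's floating-point properties; run the same program
    // on your PC and on the grading server and compare the output.
    std::printf("sizeof(float)  = %zu, digits10 = %d\n",
                sizeof(float), std::numeric_limits<float>::digits10);
    std::printf("sizeof(double) = %zu, digits10 = %d\n",
                sizeof(double), std::numeric_limits<double>::digits10);
}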

And double and float are very different. In general, double is about twice as precise as float: double represents about 15-16 significant decimal digits, while float represents only about 7. For example:

float a = 1.f / 81;
float b = 0;
for (int i = 0; i < 729; ++i)
        b += a;
printf("%.7g\n", b);   // prints 9.000023

while with double:

double a = 1.0 / 81;
double b = 0;
for (int i = 0; i < 729; ++i)
        b += a;
printf("%.15g\n", b);   // prints 8.99999999999996

The accumulated rounding error is clearly visible in the float result, but negligible in the double result.
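This difference is exactly what a grading server can pick up. As a rough sketch, suppose the grader accepts an answer within 1e-6 of the expected value (an assumed tolerance, just for illustration): the float sum is already outside it, while the double sum is well inside.

#include <cstdio>
#include <cmath>

int main() {
    float  fsum = 0.f;
    double dsum = 0.0;
    for (int i = 0; i < 729; ++i) {
        fsum += 1.f / 81;   // single precision: rounding error accumulates
        dsum += 1.0 / 81;   // double precision: error stays far smaller
    }
    double ferr = std::fabs(fsum - 9.0);
    double derr = std::fabs(dsum - 9.0);
    // hypothetical grading check with an assumed tolerance of 1e-6
    std::printf("float  error = %g -> %s\n", ferr, ferr < 1e-6 ? "accepted" : "rejected");
    std::printf("double error = %g -> %s\n", derr, derr < 1e-6 ? "accepted" : "rejected");
}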

Also, the maximum value of float is about 3e38, while the maximum value of double is about 1.7e308. There are also types that can store larger floating-point values, such as long double. All of these types are subject to round-off error.
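These limits are easy to inspect directly. Here is a small sketch using the constants from <cfloat>; the long double values in particular vary between platforms.

#include <cstdio>
#include <cfloat>

int main() {
    // Largest finite value and machine epsilon (rounding granularity at 1.0)
    std::printf("float:       max = %g, epsilon = %g\n", (double)FLT_MAX, (double)FLT_EPSILON);
    std::printf("double:      max = %g, epsilon = %g\n", DBL_MAX, DBL_EPSILON);
    std::printf("long double: max = %Lg, epsilon = %Lg\n", LDBL_MAX, LDBL_EPSILON);
}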

Therefore, if exact results matter, use integer arithmetic or write a fraction (rational number) class, as sketched below.
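A minimal fraction class might look like this. It is only a sketch: the Fraction name and interface are made up here, and a real implementation would also need to guard against overflow of the numerator and denominator.

#include <cstdio>
#include <numeric>   // std::gcd (C++17)

// Sketch of an exact rational number: numerator/denominator kept as integers.
struct Fraction {
    long long num, den;
    Fraction(long long n, long long d) : num(n), den(d) { reduce(); }
    void reduce() {
        long long g = std::gcd(num, den);   // gcd of the absolute values
        if (g != 0) { num /= g; den /= g; }
    }
    Fraction operator+(const Fraction& o) const {
        return Fraction(num * o.den + o.num * den, den * o.den);
    }
};

int main() {
    Fraction sum(0, 1);
    for (int i = 0; i < 729; ++i)
        sum = sum + Fraction(1, 81);    // exact at every step, no rounding
    std::printf("%lld/%lld\n", sum.num, sum.den);   // prints 9/1
}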


2022-09-21 14:46


