I am a beginner in C. As shown below, I would like to know why 0.000000 is printed when I declare the variable as int, read a value with the double conversion specifier, and print it with the double conversion specifier.
#include <stdio.h>

int main(void)
{
    int data;
    scanf("%lf", &data);   // real-number input
    printf("%f\n", data);
    return 0;
}
I understand that the above is wrong code (data should not be declared as int), but I would like to understand the behavior. Since data is stored as an int, I imagine that entering a double value overflows into the memory behind it, but I cannot explain why printf shows 0.000000.
The compiler is gcc (MinGW.org GCC Build-2) 9.2.0.
Thank you for your cooperation.
Your compiler is gcc (MinGW.org GCC Build-2) 9.2.0, so I assume this is 32-bit Windows (MinGW.org builds 32-bit executables even on 64-bit Windows). The C language does not specify the sizes of its data types, so they depend on the environment. Therefore, when a data type is deliberately mismatched as in your question, the behavior follows the sizes used in each environment.
On 32-bit Windows, it is exactly what you imagine: you can get a list of meaningless numbers.
When printf() is called, data is pushed onto the stack. data is a 32-bit int, so only 32 bits are written to the stack. However, the called printf(), directed by %f, interprets and reads the argument as a 64-bit double-precision floating-point number. As a result, it also reads the 32 bits that happen to sit next to data on the stack.
Since this was a short test program, the stack happened not to be dirty and was filled with 0. The result is simply a bit pattern that reads as 0.000000 when interpreted as a double-precision floating-point number.
By the way, if you intentionally write another value next to data, you can confirm that the display changes. The extra argument 1 << i is deliberate: it is pushed onto the stack right after data.
for (int i = 0; i < 32; i++)
    printf("i=%d, %f\n", i, data, 1 << i);
When I ran this with Visual C++ at hand, I got the following output:
i=0, 0.000000
i=1, 0.000000
i=2, 0.000000
i=3, 0.000000
i=4, 0.000000
i=5, 0.000000
i=6, 0.000000
i=7, 0.000000
i=8, 0.000000
i=9, 0.000000
i=10, 0.000000
i=11, 0.000000
i=12, 0.000000
i=13, 0.000000
i=14, 0.000000
i=15, 0.000000
i=16, 0.000000
i=17, 0.000000
i=18, 0.000000
i=19, 0.000000
i=20, 0.000000
i=21, 0.000000
i=22, 0.000000
i=23, 0.000000
i=24, 0.000000
i=25, 0.000000
i=26, 0.000000
i=27, 0.000000
i=28, 0.000000
i=29, 0.000000
i=30, 2.000000
i=31, -0.000000
The behavior is different on 64-bit Windows such as MinGW-w64, and on 64-bit Linux.
64-bit Windows and 64-bit Linux use a 64-bit slot to pass a 32-bit int; the unused upper 32 bits of the slot are cleared to zero.
printf() tries to read this 64-bit area as a double-precision floating-point number, but because the upper 32 bits are zero-cleared, the sign and exponent bits are all 0: the value is at most a tiny subnormal, so 0.000000 is always displayed.
With the earlier loop, a 64-bit build also writes 1 << i next to data, but as described it is never referenced, so the output is:
i=0, 0.000000
i=1, 0.000000
i=2, 0.000000
i=3, 0.000000
i=4, 0.000000
i=5, 0.000000
i=6, 0.000000
i=7, 0.000000
i=8, 0.000000
i=9, 0.000000
i=10, 0.000000
i=11, 0.000000
i=12, 0.000000
i=13, 0.000000
i=14, 0.000000
i=15, 0.000000
i=16, 0.000000
i=17, 0.000000
i=18, 0.000000
i=19, 0.000000
i=20, 0.000000
i=21, 0.000000
i=22, 0.000000
i=23, 0.000000
i=24, 0.000000
i=25, 0.000000
i=26, 0.000000
i=27, 0.000000
i=28, 0.000000
i=29, 0.000000
i=30, 0.000000
i=31, 0.000000
"Data is stored as an int, so I imagine that entering a double value overflows into the memory behind it."
That is right: the glibc scanf(3) implementation is as follows (it casts the destination argument to long double *), so a larger value really is written through the pointer.
glibc/stdio-common/vfscanf-internal.c
if (__glibc_unlikely ((flags & LONGDBL) != 0)
    && __glibc_likely ((mode_flags & SCANF_LDBL_IS_DBL) == 0))
  {
    long double d = __strtold_internal
      (char_buffer_start (&charbuf), &tw, flags & GROUP);
    if (!(flags & SUPPRESS) && tw != char_buffer_start (&charbuf))
      *ARG (long double *) = d;
  }
"But I'm worried that I can't explain why it's 0.000000 when I printf."
If you cast the pointer to data (of type int *) to long double * and dereference it, the value you entered will be displayed as a floating-point number (although doing so reads past data on the stack).
As for the printf: regardless of what you typed, the int value 0 is passed where %f expects a double, so 0.000000 is printed. It is the same as writing
printf("%f\n", 0);
© 2024 OneMinuteCode. All rights reserved.