To be brief, the 1999 ISO C standard (C99) defines size_t, the type used to represent the size of an object, as an unsigned integer type of at least 16 bits. (I have attached the original text below.)

If the index i of a for loop is always positive, either int or size_t may be used. However, int may overflow, so if the value of i can become large, you should use size_t. Conversely, size_t is always non-negative, so if i can become negative, int should of course be used.
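As a minimal sketch of that advice (the array name and length here are just examples), a forward loop indexes naturally with size_t, while a countdown loop needs care because an unsigned index can never go below zero:

    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        double data[8] = {0};
        size_t n = sizeof data / sizeof data[0];

        /* Forward iteration: the index is always non-negative, so size_t fits. */
        for (size_t i = 0; i < n; i++)
            data[i] = (double)i;

        /* Reverse iteration with size_t: "i >= 0" would always be true because an
           unsigned value never goes negative, so loop while i > 0 and use i - 1. */
        for (size_t i = n; i > 0; i--)
            printf("%zu: %f\n", i - 1, data[i - 1]);

        return 0;
    }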
size_t is an unsigned data type defined by several C/C++ standards, e.g. the C99 ISO/IEC 9899 standard, in stddef.h. It is also available through stdlib.h, since that header internally includes stddef.h.

This type is used to represent the size of an object. Library functions that take or return sizes expect the parameter or return value to be of type size_t. Furthermore, the very frequently used sizeof operator evaluates to a constant that is compatible with size_t. As a consequence, size_t is a type guaranteed to be able to hold any array index.
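As a small illustration of that quoted text (the variable names are just examples), standard library functions such as strlen() return size_t, and sizeof evaluates to a size_t-compatible constant:

    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        const char *msg = "hello";

        /* strlen() returns size_t; sizeof yields a size_t-compatible constant. */
        size_t len = strlen(msg);
        size_t bytes = sizeof(int) * 10;

        /* %zu is the printf conversion specifier for size_t (C99). */
        printf("len = %zu, bytes = %zu\n", len, bytes);
        return 0;
    }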
A lot of people get size_t wrong. There is no "unsigned int" anywhere in the text above. size_t is defined as, in principle, the largest unsigned data type: on 32-bit machines it is a 32-bit unsigned integer type (an unsigned integer, not necessarily unsigned int), and on 64-bit machines it is a 64-bit unsigned integer type (typically unsigned long). If 128-bit or larger machines appear in the future, it will be larger still.

You need to be aware of this when processing files larger than four gigabytes, such as video files, or other large amounts of data. If you assume size_t is unsigned int and convert it to int or unsigned int, the value can fall outside the range of those types and cause a bug.
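Here is a minimal sketch of that pitfall, assuming a platform where size_t is 64 bits wide (e.g. a typical 64-bit build); the file size is a made-up value:

    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        /* A 5 GiB file size: fits in a 64-bit size_t but not in a 32-bit unsigned int. */
        size_t file_size = (size_t)5 * 1024 * 1024 * 1024;

        unsigned int truncated = (unsigned int)file_size;  /* value wraps around */

        printf("size_t      : %zu bytes\n", file_size);
        printf("unsigned int: %u bytes\n", truncated);     /* prints 1 GiB, not 5 GiB */
        return 0;
    }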