C has a family of integer types, i.e. {short, int, long, long long}. Any new programmer is likely to use int to represent an integer variable in an application, and since the int type is typically 32 bits wide, with a range of -2,147,483,648 to 2,147,483,647, there will be a bug as soon as the value of the variable goes out of that range. As you can see, the maximum value is 2,147,483,647, which IMHO is very small (it cannot even count Earth’s population).
So my question is: how does a newbie avoid such bugs? And how wide is the int type on a 64-bit OS?
Note: ‘int’ is only guaranteed to be at least 16 bits. It’s even smaller than you thought! If you want a guarantee of at least 32 bits, use ‘long’. For even larger values, look at types like ‘int64_t’ or ‘long long’.
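For a concrete illustration of the original concern, here is a minimal sketch (the ~8 billion population figure is an assumed round number, used only for illustration) showing that such a count does not fit in a 32-bit int but fits easily in a long long:
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* Roughly 8 billion people -- an assumed round figure, for illustration only. */
    long long population = 8000000000LL;

    printf("INT_MAX      = %d\n", INT_MAX);
    printf("population   = %lld\n", population);
    printf("fits in int? = %s\n", population <= INT_MAX ? "yes" : "no");
    return 0;
}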
How does a newbie avoid problems like this? I’m afraid it’s the same as for many other programming problems. “think carefully and take care”.
Running a test at program startup is a good idea, as is having a good set of unit tests. Take extra care when moving to a new platform.
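As a sketch of such a startup test (it simply assumes the rest of the program relies on int being at least 32 bits; adjust the bound to whatever your code actually needs):
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Runtime startup check; with C11 you could instead use
       _Static_assert to catch this at compile time. */
    if (sizeof(int) * CHAR_BIT < 32) {
        fprintf(stderr, "int is narrower than 32 bits on this platform\n");
        return EXIT_FAILURE;
    }

    /* ... rest of the program ... */
    return 0;
}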
limits.h defines the minimum and maximum values for the integer types in C.
N.B. C++ has its own version: <limits>
If you’re really interested in the number of bits a type uses on your platform, you can do something like this (from here):
#include <limits.h>
#include <stdio.h>

int main(void) {
    printf("short is %zu bits\n", CHAR_BIT * sizeof(short));
    printf("int is %zu bits\n", CHAR_BIT * sizeof(int));
    printf("long is %zu bits\n", CHAR_BIT * sizeof(long));
    printf("long long is %zu bits\n", CHAR_BIT * sizeof(long long));
    return 0;
}
As mentioned elsewhere, limits.h will specify the ranges allowed for each type – INT_MIN, INT_MAX, UINT_MAX, etc.
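As a quick sketch of putting those constants to use, here is an overflow guard built from INT_MAX before an addition (the variable names and values are made up for illustration):
#include <limits.h>
#include <stdio.h>

int main(void) {
    int counter = INT_MAX - 5;   /* pretend this came from earlier computation */
    int step = 10;

    /* Check against INT_MAX *before* adding, since signed overflow
       is undefined behaviour in C. */
    if (counter > INT_MAX - step) {
        printf("would overflow: refusing to add\n");
    } else {
        counter += step;
        printf("counter = %d\n", counter);
    }
    return 0;
}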
If you need an integer type of a specific width, the stdint.h header provides type definitions like int8_t, int16_t, etc.
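For example, a short sketch using the fixed-width types, together with the matching printf macros from inttypes.h:
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int32_t small = INT32_MAX;   /* exactly 32 bits wherever this type is provided */
    int64_t large = INT64_MAX;   /* exactly 64 bits wherever this type is provided */

    printf("int32_t max = %" PRId32 "\n", small);
    printf("int64_t max = %" PRId64 "\n", large);
    return 0;
}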
We can classify computation into several categories.
For one: a significant context for computing is some form of indexing. We can index into an array, for example. Another form is counting objects that we create in memory (e.g. giving each of them an int id).
For these, our programming languages are usually implemented so that they run out of memory before the simple integer type overflows. For example, arrays may be limited to 32k in 16-bit worlds (and you could only have one of those), and to 2^31 elements in 32-bit worlds. Likewise, you cannot create more than 32k objects in a 16-bit world or 2^31 objects in a 32-bit world (because objects almost universally use more than one byte each). In some sense, larger and larger int values imply more memory usage, and memory is exhausted just before simple ints overflow.
For other kinds of computation, however, memory use does not grow in proportion to the numeric values involved, such as summation, and there we need to be very careful about the selection of data type. Another example is rows in a database, where the row number (row id) can exceed what fits in memory, because much of the data lives on disk rather than in memory. In these cases we have to look to other limits to choose the proper size. For example, if the database allows up to 2^63 rows (a big database!), then we should be fine using a 64-bit int regardless of whether we are on a 16-bit machine or something larger.
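As an illustration of the summation case, here is a sketch (the array contents are made-up values) where each element fits in a 32-bit int but the running total does not, so the accumulator is widened to 64 bits:
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* Each element fits comfortably in a 32-bit int... */
    int32_t readings[4] = { 2000000000, 2000000000, 2000000000, 2000000000 };

    /* ...but their sum (8 billion) does not, so accumulate in 64 bits. */
    int64_t total = 0;
    for (int i = 0; i < 4; i++) {
        total += readings[i];
    }

    printf("total = %" PRId64 "\n", total);   /* prints 8000000000 */
    return 0;
}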
This is the best explanation I found on page 3; any newbie can understand it quickly and avoid these bugs.