Integers are divided into two types:
Integer, occupying 2 bytes, ranging from -32,768 to 32,767.
Long, occupying 4 bytes, ranging from -2,147,483,648 to 2,147,483,647.
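These ranges follow directly from two's-complement representation: an n-byte signed integer spans -2^(8n-1) to 2^(8n-1)-1. A minimal Python sketch (Python is used here only for illustration; the helper `int_range` is not from the original text):

```python
# Range of an n-byte two's-complement signed integer
def int_range(nbytes):
    bits = 8 * nbytes
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

print(int_range(2))  # (-32768, 32767) -- the 2-byte Integer
print(int_range(4))  # (-2147483648, 2147483647) -- the 4-byte Long
```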
There are two kinds of floating-point numbers:
Single-precision floating-point number (Single), occupying 4 bytes with about 7 significant digits; negative values range from -3.402823E38 to -1.401298E-45, and positive values from 1.401298E-45 to 3.402823E38.
Double-precision floating-point number (Double), occupying 8 bytes with 15 significant digits; negative values range from -1.79769313486231E308 to -4.94065645841247E-324, and positive values from 4.94065645841247E-324 to 1.79769313486232E308.
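The difference in significant digits is easy to observe by forcing a value through 4-byte single-precision storage. A small Python sketch (illustrative only; `to_single` is a hypothetical helper that round-trips a value through IEEE 754 binary32):

```python
import struct
import sys

def to_single(x):
    # Round-trip through a 4-byte IEEE 754 single to expose the precision loss
    return struct.unpack("f", struct.pack("f", x))[0]

print(to_single(0.123456789))  # ~0.12345679: digits past the 7th are lost
# An 8-byte double carries 15 significant decimal digits on this platform
print(sys.float_info.dig)      # 15
print(sys.float_info.max)      # 1.7976931348623157e+308
```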
When programming, choose the data type according to the actual situation. As mentioned above, integers are preferred, but if the data must contain decimals, choose a floating-point type. For example, if 0.12345 is assigned to an integer variable, its value automatically becomes 0, while a single-precision floating-point variable keeps it unchanged. This is why a program that misbehaves with an integer variable can sometimes run normally once that variable is changed to single precision.
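The assignment example above can be sketched in Python (used for illustration only; note that Python's `int()` truncates toward zero, whereas Basic-family assignment rounds, but for 0.12345 both produce 0):

```python
# Storing 0.12345 in an integer variable loses the fraction;
# a floating-point variable keeps the decimals.
i = int(0.12345)  # fractional part is dropped: i becomes 0
f = 0.12345       # stored (to double precision) as given
print(i)  # 0
print(f)  # 0.12345
```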