Why is data in VB sometimes defined as Integer and sometimes as Single precision? What is the difference between the two, and when should each be used?
Computers handle two kinds of numeric values: integers and floating-point numbers. An integer has no fractional part; a floating-point number is a real number with a fractional part (since a computer cannot store infinitely many decimal digits, real numbers can only be represented approximately in floating point). The two are stored quite differently in memory (floating-point storage resembles scientific notation in mathematics), and integer arithmetic is considerably more efficient than floating-point arithmetic, so integer types should be preferred, with floating-point types used only when the data requires them.
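As a minimal sketch of the distinction in VB (VB6/VBA syntax assumed), the integer-division operator \ works on whole numbers, while / produces a floating-point result:

    Dim i As Integer, s As Single
    i = 7 \ 2        ' integer division: i = 3
    s = 7 / 2        ' floating-point division: s = 3.5
    Debug.Print i, s ' prints 3 and 3.5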

Integers are divided into two types:

Integer, occupying 2 bytes, ranging from -32,768 to 32,767.

Long, occupying 4 bytes, ranging from -2,147,483,648 to 2,147,483,647.
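A short sketch of why these ranges matter (VB6/VBA assumed; the comment describes the standard run-time overflow behavior):

    Dim n As Integer
    n = 32767            ' the largest value an Integer can hold
    ' n = n + 1          ' would raise run-time error 6 ("Overflow")

    Dim big As Long
    big = 100000         ' too large for Integer, but fits easily in a Long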

There are two kinds of floating-point numbers:

Single-precision floating-point number (Single), occupying 4 bytes with about 7 significant digits; negative values range from -3.402823E38 to -1.401298E-45, and positive values from 1.401298E-45 to 3.402823E38.

Double-precision floating-point number (Double), occupying 8 bytes with about 15 significant digits; negative values range from -1.79769313486232E308 to -4.94065645841247E-324, and positive values from 4.94065645841247E-324 to 1.79769313486232E308.
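A small sketch of the precision difference (VB6/VBA assumed; the printed values are approximate and may differ slightly in the last digit):

    Dim s As Single, d As Double
    s = 1.23456789       ' Single keeps only about 7 significant digits
    d = 1.23456789       ' Double keeps the full value
    Debug.Print s        ' prints roughly 1.234568
    Debug.Print d        ' prints 1.23456789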

When programming, choose the data type according to the actual situation. As noted above, integer types are preferred, but if the data must contain a fractional part, use a floating-point type. For example, if 0.12345 is assigned to an Integer variable, the value automatically becomes 0, whereas a Single variable keeps 0.12345 unchanged. This is why a program that misbehaves with an Integer variable can sometimes run correctly once that variable is changed to single precision.
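A minimal sketch of that effect (VB6/VBA assumed; note that VB rounds rather than truncates when converting to an integer type):

    Dim i As Integer
    Dim s As Single
    i = 0.12345          ' rounded to the nearest whole number: i = 0
    s = 0.12345          ' fractional value preserved: s = 0.12345
    Debug.Print i, s     ' prints 0 and 0.12345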