Let's talk about the units of memory.
The common units are as follows:
1 T
1 G (at present, the most common memory module is a DDR2-generation stick with a capacity of 1 GB)
1 M
1 K
1 byte
1 bit
Conversion:
1 T = 1024 G, 1 G = 1024 M, 1 M = 1024 K, 1 K = 1024 bytes
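As a quick sanity check, the conversions above can be computed directly. This is a minimal Python sketch; the constant names are my own, not standard ones.

```python
# Binary memory-unit conversions: each step up multiplies by 1024.
K = 1024        # 1 K = 1024 bytes
M = 1024 * K    # 1 M = 1024 K
G = 1024 * M    # 1 G = 1024 M
T = 1024 * G    # 1 T = 1024 G

print(f"1 G = {G} bytes")
print(f"1 T = {T // G} G")
```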
Just remember and understand these two concepts: BYTE and BIT.
1 byte consists of 8 binary bits.
1 bit is the most basic storage unit in memory: a device that can hold one of two states, 1 and 0.
Do you see? One bit can represent two states,
decimal 0 and 1, which written in binary are:
0
1
And so on: two bits can represent four states, decimal 0 to 3, corresponding to these four binary patterns:
00
01
10
11
By analogy, three bits represent eight states, decimal 0 to 7, binary 000 to 111; I won't list them one by one.
And so on up to a byte: 8 binary bits represent 256 states, decimal 0 to 255, binary 00000000 to 11111111.
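The pattern above (n bits give 2 to the n states) can be checked in a few lines of Python; a sketch, nothing here beyond the standard library:

```python
# n bits can distinguish 2**n states, numbered 0 .. 2**n - 1.
for n in (1, 2, 3, 8):
    states = 2 ** n
    print(f"{n} bit(s): {states} states, decimal 0..{states - 1}")

# One byte spans binary 00000000 .. 11111111:
print(format(0, "08b"), "to", format(255, "08b"))
```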
Now down to business.
If this byte stores 256-state numeric data with no sign bit (an unsigned TINYINT is such a type), it can hold the decimal numbers 0 to 255. If there is a sign bit, those same 256 states (including 0) can only cover negative 128 to positive 127.
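The two ranges can be demonstrated by reinterpreting every possible byte as unsigned and then as signed; this sketch uses Python's standard struct module rather than any database type:

```python
import struct

# Every 8-bit pattern, read as an unsigned byte ("B") and as a signed byte ("b").
unsigned = [struct.unpack("B", bytes([v]))[0] for v in range(256)]
signed = [struct.unpack("b", bytes([v]))[0] for v in range(256)]

print(min(unsigned), max(unsigned))  # 0 255
print(min(signed), max(signed))      # -128 127
```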
One byte representing numeric data:
Do you see? The largest value an unsigned byte can hold is 255. So the four bytes (32 bits) mentioned above give you roughly 10 decimal digits, while a single byte gives only about 3 decimal digits. And the moment an unsigned one-byte integer (TINYINT) is pushed past 255, to 256, it overflows.
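What "overflow" looks like depends on the system: raw 8-bit hardware wraps around modulo 256, while a database such as MySQL may clamp the value or raise an error depending on its SQL mode. The wraparound behaviour can be sketched like this (the helper name add_u8 is my own):

```python
def add_u8(a, b):
    """Add two values the way an unsigned 8-bit integer would: wrap at 256."""
    return (a + b) % 256

print(add_u8(255, 1))   # 0 -- 256 wraps back around to 0
print(add_u8(250, 10))  # 4
```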
One byte representing character data:
OK, one byte as an unsigned integer tops out at 255, so what does one byte (8 bits) mean as a character? The answer is that it can represent exactly one ASCII character. The ASCII characters are the letters A-Z and a-z, the digits 0-9, punctuation such as ! @ # $ %, and the control symbols, 128 in all (7 bits of binary); add a parity bit, and the 8 bits of our 1 byte exactly cover these characters. This is what we usually call ASCII encoding. I don't remember the binary, but converted to decimal I roughly remember: the character 'A' is ASCII code 65, 'B' is 66 ..., the character '1' is ASCII code 49, '2' is 50 ..., and the character '~' is 126.
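The codes quoted from memory above are easy to verify with Python's built-in ord() and chr():

```python
# ASCII codes of the characters mentioned in the text.
for ch in "AB12~":
    print(repr(ch), "=", ord(ch))

# And back from code to character:
print(chr(65), chr(49))  # A 1
```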
Do you see it? When representing characters, this byte, with its eight binary bits and 256 variations, is used only to stand for certain letters, digits, punctuation marks and control symbols: it tells us whether the byte is 'A' or some other ASCII symbol.
But when representing numerical data, it can really represent an unsigned integer with a maximum of 255.
To sum up: this byte has 8 bits and can take on 256 different values.
Take the bit pattern 01111110, whose value as a decimal number is 126:
If you define this memory as a CHAR, the symbol '~' will be displayed on the screen.
If you define this same memory (8 bits) as an unsigned integer (TINYINT), the value 126 will be displayed on the screen.
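The same-bits-two-meanings point can be shown directly: take the byte whose character reading is '~', and print it once as a number and once as a character (plain Python, no database types involved):

```python
pattern = 0b01111110            # one 8-bit pattern

print(pattern)                  # 126 -- the "TINYINT" reading
print(chr(pattern))             # ~   -- the "CHAR" reading
print(bytes([pattern]).decode("ascii"))  # ~ -- same thing via a raw byte
```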
Do you understand? Maybe you already do. Just kidding... haha.
=========================
A single character, CHAR(1), needs 8 bits, that is, 1 byte; and 1 byte has 256 variations, that is, it can stand for any one of 256 characters.
But be careful with the units when a byte stores data:
CHAR(5) means five characters, and each character may be single-byte or double-byte depending on the character set.
Note that INT is 4 bytes, and the value it stores is a number represented by those 4 bytes.
Each byte is an 8-bit binary number that can represent decimal 0-255; four bytes are 4 * 8 = 32 binary bits, which can represent decimal -2,147,483,648 to +2,147,483,647.
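That 32-bit range follows from the two's-complement formula: n signed bits cover -2^(n-1) to 2^(n-1) - 1. A quick check:

```python
# Signed range of an n-bit two's-complement integer.
bits = 32
lo = -(2 ** (bits - 1))
hi = 2 ** (bits - 1) - 1
print(lo, hi)  # -2147483648 2147483647
```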
CHAR(5) means a length of 5 bytes (in a single-byte character set),
each byte can stand for one of 256 characters,
and a double byte can distinguish 2 to the 16th power = 65,536 characters.
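Single-byte versus double-byte characters can be seen by encoding sample text. This sketch uses ASCII and GBK; GBK is my choice of an example double-byte charset, since the text names none:

```python
ascii_char = "A"   # an ASCII letter: single-byte
cjk_char = "中"    # a CJK character: double-byte in GBK

print(len(ascii_char.encode("ascii")))  # 1 byte
print(len(cjk_char.encode("gbk")))      # 2 bytes
print(2 ** 16)                          # 65536 characters a double byte can distinguish
```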
Is that clear?
So now you know that CHAR(4) and INT occupy the same amount of space, because both are four bytes.
Also, don't confuse numeric values with numeric characters.
A small decimal value can be represented by half a byte: for example, the four-bit binary number 1010 represents the decimal number 10.
The character string "10", on the other hand, is two symbols, '1' and '0'; stored as ASCII codes they occupy two bytes (16 bits), whose decimal values are 49 and 48.
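The value-versus-characters distinction in concrete terms: the number 10 next to the string "10" (plain Python):

```python
value = 0b1010               # the number ten: four bits are enough
text = "10"                  # two characters: '1' and '0'

print(value)                           # 10
print([ord(ch) for ch in text])        # [49, 48]
print(len(text.encode("ascii")) * 8)   # 16 bits of storage as ASCII
```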