For example, take an 8-bit binary number such as 01100100 (decimal 100). If the variable is not declared unsigned, it defaults to a signed number: the first (most significant) bit is the sign bit, where 0 marks a positive number and 1 a negative number. If it is declared unsigned, the first bit is not a sign bit but part of the value itself.
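A minimal C sketch of this idea (the bit pattern 0x96 and the variable names are just illustrative) reads the same byte both ways. Note that real machines use two's complement for signed numbers, so a leading 1 does mean negative, but the magnitude is not simply the remaining seven bits:

```c
#include <stdio.h>

int main(void) {
    /* One byte with bit pattern 10010110 (0x96). */
    unsigned char u = 0x96;              /* all 8 bits are value bits   */
    signed char   s = (signed char)0x96; /* first bit acts as the sign  */

    printf("as unsigned: %d\n", u);  /* prints 150                      */
    printf("as signed:   %d\n", s);  /* prints -106 (two's complement)  */
    return 0;
}
```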
The difference between signed and unsigned.
A plain int is signed by default; only a type declared with the unsigned keyword is unsigned.
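In C this looks like the following (a small sketch; the variable names are made up):

```c
#include <stdio.h>

int main(void) {
    int a = -100;          /* plain int is signed by default     */
    signed int b = -100;   /* "signed int" is the same type      */
    unsigned int c = 100;  /* unsigned: every bit is a value bit */

    printf("%d %d %u\n", a, b, c);  /* -100 -100 100 */
    return 0;
}
```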
They occupy the same number of bytes, but the signed type has to set aside one bit to represent the sign of the value, so the largest absolute value it can hold is roughly half that of the unsigned type. For example, take a 1-byte integer: the unsigned version covers 00000000 ~ 11111111, that is, 0 ~ 255.
A signed 1-byte number is still 8 bits, but because the first bit is used for the sign, only 7 bits are left to represent the value: 0000000 ~ 1111111. The range it can represent is therefore -128 ~ 127 (two's complement yields one extra negative value beyond the 127 that 7 magnitude bits alone would give).
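The standard limits.h header confirms these ranges; here is a short sketch printing the limits of the one-byte types, assuming the usual 8-bit char:

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Unsigned byte: 00000000 ~ 11111111. */
    printf("unsigned char: 0 ~ %d\n", UCHAR_MAX);            /* 0 ~ 255    */

    /* Signed byte: sign bit + 7 value bits. */
    printf("signed char: %d ~ %d\n", SCHAR_MIN, SCHAR_MAX);  /* -128 ~ 127 */
    return 0;
}
```

SCHAR_MIN comes out as -128 rather than -127 because two's complement reuses the pattern 10000000 for one extra negative value instead of wasting it on a "negative zero".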