1. How do you compare the sizes of binary integers made of 0s and 1s?
The binary digit 1 is obviously greater than the binary digit 0.
Likewise, the ten-bit unsigned binary integer 1111111111 is obviously larger than any ten-bit unsigned binary integer that contains a 0.
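As a quick check, here is a small Python sketch (the specific ten-bit values are illustrative, not from the original):

```python
# Ten-bit unsigned binary: all ones vs. the same width with one 0 bit.
all_ones = int("1111111111", 2)   # 1023, the largest ten-bit value
with_zero = int("1111111011", 2)  # one bit cleared, so strictly smaller

print(all_ones)              # 1023
print(with_zero)             # 1019
print(all_ones > with_zero)  # True
```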
2. Represent the decimal number -44 in 8-bit two's complement.
When I convert -44 to binary I get only seven bits. How do I express it with eight?
Computer word sizes come in powers of two: 4, 8, 16, 32 bits, and so on.
The seven-bit value must be sign-extended to eight bits, giving 11010100.
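The conversion can be checked in Python in two equivalent ways (a sketch, assuming 8-bit width):

```python
# Two's complement of -44 in 8 bits.
# Method 1: invert the bits of 44 and add 1, keeping only 8 bits.
twos = ((~44) + 1) & 0xFF
# Method 2: let Python's arbitrary-precision integers do the masking.
masked = -44 & 0xFF

print(format(twos, "08b"))   # 11010100
print(twos == masked)        # True
```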
3. The sign-magnitude (original code) of +0 is 00000000, its ones' complement is 00000000, and its two's complement is 00000000.
The sign-magnitude of -0 is 10000000, its ones' complement is 11111111, and its two's complement is 00000000 (the carry out of the high bit is discarded).
Is this right, or are they all zeros?
Some forms coincide and some differ: in two's complement, +0 and -0 share the single pattern 00000000, while their sign-magnitude and ones' complement forms differ. Just apply the conversion rules once to check.
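The rules above can be applied mechanically; here is a hedged Python sketch (the helper `representations` is my own name, not from the original):

```python
def representations(sign, magnitude, bits=8):
    """Return (sign-magnitude, ones' complement, two's complement) bit strings."""
    mask = (1 << bits) - 1
    sm = (sign << (bits - 1)) | magnitude   # sign-magnitude: sign bit + magnitude
    if sign == 0:
        oc = tc = sm                        # positive: all three forms agree
    else:
        oc = sm ^ (mask >> 1)               # flip the magnitude bits only
        tc = (oc + 1) & mask                # add 1 and drop the carry
    return tuple(format(x, f"0{bits}b") for x in (sm, oc, tc))

print(representations(0, 0))  # +0: ('00000000', '00000000', '00000000')
print(representations(1, 0))  # -0: ('10000000', '11111111', '00000000')
```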
4. The decimal number -128 is represented in 8-bit two's complement as 10000000.
5. The decimal number -57 in 8-bit two's complement is 11000111, so its hexadecimal representation is C7H.
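Both results (items 4 and 5) can be verified with the same masking trick (a Python sketch, assuming 8-bit width):

```python
# 8-bit two's complement of -128 and -57, shown in binary and hexadecimal.
for n in (-128, -57):
    v = n & 0xFF  # mask to 8 bits
    print(n, format(v, "08b"), format(v, "02X") + "H")
# -128 -> 10000000, 80H
# -57  -> 11000111, C7H
```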
6. A typical example of binary addition and subtraction (arithmetic operations)

Addition:
    1 1
  + 0 1
  -----
  1 0 0

Subtraction:
    1 1
  - 0 1
  -----
    1 0
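The two worked examples above can be checked with Python's binary literals:

```python
# 11 + 01 = 100, and 11 - 01 = 10, in binary.
print(format(0b11 + 0b01, "b"))  # 100
print(format(0b11 - 0b01, "b"))  # 10
```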