A char, however, comes in signed and unsigned forms; in other words, it may or may not be able to hold negative values.
In the signed form, one of the 8 bits is used as the sign bit, with 1 and 0 distinguishing negative from non-negative values. The remaining 7 bits can represent the numbers 0 to 127. Since 2 to the 8th power gives 256 possible values, the other 128 values fall on the negative side, so a signed char covers -128 to 127.
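As a concrete illustration, the sketch below (assuming the near-universal two's-complement representation) prints the 8 bits of a signed char holding -1, so you can see that the top bit, the sign bit, is 1:

#include <stdio.h>

int main(void) {
    signed char c = -1;                        /* all 8 bits are 1 in two's complement */
    /* print the bits from the sign bit (bit 7) down to bit 0 */
    for (int i = 7; i >= 0; i--)
        putchar(((unsigned char)c >> i & 1) ? '1' : '0');
    putchar('\n');                             /* prints 11111111 */
    return 0;
}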
In contrast, an unsigned char represents values from 0 to 255. Since ASCII has no negative codes, standard ASCII only uses 0 to 127, and the remaining 128 values make up extended ASCII. These are usually not needed, and the characters in that range tend to look odd, but you can still assign values in it.
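To see the two readings of the same byte side by side, here is a small sketch. The byte value 0xE9 and its reading as 'é' in Latin-1 extended ASCII are just illustrative assumptions, and converting it to signed char is implementation-defined; on typical two's-complement machines it comes out as -23:

#include <stdio.h>

int main(void) {
    unsigned char u = 0xE9;               /* 233: an extended-ASCII code point ('é' in Latin-1) */
    signed char   s = (signed char)0xE9;  /* same bit pattern; implementation-defined, typically -23 */

    printf("as unsigned char: %d\n", u);  /* 233 */
    printf("as signed char:   %d\n", s);  /* usually -23 */
    return 0;
}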
Whether a plain char behaves as a signed char or an unsigned char depends on the specific environment; a typical Intel platform makes it signed. If you are not sure, you can check with the constants provided in limits.h. The same approach works for int, long, and so on; for floating-point types, look in float.h (a sketch covering those follows after the char example below). For details, consult the documentation of these headers. The program is as follows; run it yourself.
#include <stdio.h>
#include <limits.h>   /* CHAR_MIN, CHAR_MAX */

int main(void) {
    int a = CHAR_MIN;
    int b = CHAR_MAX;
    printf("%d %d\n", a, b);   /* print the minimum and maximum values of the char type */
    return 0;
}
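Along the same lines, here is a sketch that prints the ranges of int and long from limits.h and of float and double from float.h; the exact values printed depend on your platform:

#include <stdio.h>
#include <limits.h>   /* INT_MIN, INT_MAX, LONG_MIN, LONG_MAX */
#include <float.h>    /* FLT_MIN, FLT_MAX, DBL_MIN, DBL_MAX */

int main(void) {
    printf("int:    %d .. %d\n", INT_MIN, INT_MAX);
    printf("long:   %ld .. %ld\n", LONG_MIN, LONG_MAX);
    printf("float:  %e .. %e\n", FLT_MIN, FLT_MAX);   /* smallest normalized positive .. largest */
    printf("double: %e .. %e\n", DBL_MIN, DBL_MAX);
    return 0;
}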
Now that we understand that an 8-bit signed char cannot represent numbers above 127, what happens when you assign it a value above 127? The value wraps around to the most negative value and counts upward from there; you can picture the range as a circle. The underlying reason is that the CPU only has an adder, so subtraction must be done by adding a wrapped (two's-complement) value. For the details, pick up any textbook on microcomputer principles, and spend more time on Linux.
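A minimal sketch of the wrap-around, assuming an 8-bit two's-complement char; strictly speaking, converting an out-of-range value to a signed char is implementation-defined in C, but typical machines behave as described:

#include <stdio.h>

int main(void) {
    signed char c = 127;   /* CHAR_MAX for an 8-bit signed char */
    c = c + 1;             /* 128 does not fit; on typical machines it wraps to -128 */
    printf("%d\n", c);     /* usually prints -128 */
    return 0;
}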