-2^(n-1) ~ +2^(n-1) - 1
where n is the number of binary digits. The integer you mentioned (short int in C/C++) is stored in 2 bytes, that is, 16 binary bits, so its range is
-2^(16-1) ~ +2^(16-1) - 1
that is
-2^15 ~ +2^15 - 1
So it is -32768 ~+32767.
Similarly, a long integer (long int in C/C++) is typically represented by 4 bytes (note that on some 64-bit platforms long int is 8 bytes), that is, 32 binary bits, so the range of its decimal value is
-2^31 ~ +2^31 - 1
that is, -2147483648 ~ +2147483647.
Two's complement is the representation that computers generally use for signed numbers today. Other representations include ones' complement, excess (offset binary) notation, and sign-magnitude. Different representations yield different ranges of representable decimal values; for example, ones' complement and sign-magnitude both have two encodings of zero, so an n-bit value covers only -(2^(n-1) - 1) ~ +2^(n-1) - 1.