For example, on an 8086 CPU, int is 16 bits.
On ARM, a 32-bit CPU, int and long are both 4 bytes by default, so under the default compilation options int behaves the same as long int, and short (that is, short int) occupies 2 bytes. ARM also supports the Thumb instruction set, whose compact 16-bit instruction encodings can be selected with a compilation option; note, however, that Thumb shrinks the size of the instructions, not the size of int, which is fixed by the compiler's ABI rather than by the instruction set.
For example, on an 8-bit microcontroller, int is typically only 16 bits, the minimum the C standard allows, while char is a single 8-bit byte.
Therefore, whether it is short, long, float, double, char, or some struct, a type is essentially a statement about the length of a region of memory. If you look at it this way, you will have a deeper understanding of data types.
For example, if you assign a short to a long, the value is widened to fill the larger run of consecutive bytes; every short value fits in a long, so the conversion is safe. But force the conversion the other way, turning a long into a short, and the extra bytes are simply discarded. If any of the discarded bits are nonzero, you get an incorrect value, which is very dangerous.