In single-chip microcomputer (MCU) programs, why not just write the data type directly instead of defining it with a macro such as #define uint unsigned int?
On these MCUs, unsigned int is an unsigned 16-bit integer with range 0 to 65535, while int is a signed 16-bit integer with range -32768 to 32767. #define uint unsigned int is a macro definition: it tells the preprocessor that uint stands for unsigned int, so later code can simply write uint wherever unsigned int is needed, which saves a lot of typing.
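A minimal sketch of how this convention usually looks in 8051-style C code (the macro names and the delay function are just illustrative, not from any particular project):

/* Common shorthand macros seen at the top of 8051 C source files */
#define uint  unsigned int    /* uint now stands for unsigned int  */
#define uchar unsigned char   /* uchar stands for unsigned char    */

/* Example use: shorter to type than "unsigned int ms" everywhere */
void delay(uint ms)
{
    uint  i;
    uchar j;
    for (i = 0; i < ms; i++)
        for (j = 0; j < 120; j++)
            ;                 /* crude busy-wait; timing is illustrative */
}

Because the substitution is done by the preprocessor before compilation, uint and unsigned int are completely interchangeable in the rest of the program.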