How do you usually handle floating-point input on a single-chip microcomputer (MCU)?
Single-chip microcomputers are inefficient at processing floating-point numbers. A common technique is to multiply the floating-point value by a power of 10 (10^n) so it becomes an integer, do all calculation and other processing on integers, and re-insert the decimal point only at output time. For example, when entering 1234.567, the integer part 1234 is saved in two unsigned char variables and the fractional part 567 in another two unsigned char variables. If the value is signed, the sign can be stored separately in its own unsigned char. During calculation, every number is scaled by the same power of 10 (here 1000), and at output the integer part and fractional part are printed separately.