float defines a single-precision floating-point number, that is, a value with a decimal part. Modern computers are not very sensitive to the storage size of floating-point values, so it is often suggested to use double instead of float, since double is much more precise.
int is an integer type and can only hold whole numbers.
Take for example:

float f;
int i;
f = 9 / 5.0;
i = 9 / 5;

The result is f = 1.8 and i = 1, because 9 / 5.0 is a floating-point division while 9 / 5 is an integer division that discards the fractional part.
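As a minimal sketch, assuming a standard C compiler, the following complete program shows the same division results and also the precision difference between float and double (the variable names are illustrative):

#include <stdio.h>

int main(void)
{
    float f = 9 / 5.0;      /* floating-point division: 1.8 */
    int i = 9 / 5;          /* integer division: fractional part discarded, result 1 */
    float  f3 = 1.0f / 3.0f;  /* single precision, about 7 significant digits */
    double d3 = 1.0 / 3.0;    /* double precision, about 15-16 significant digits */

    printf("f = %f\n", f);                 /* prints 1.800000 */
    printf("i = %d\n", i);                 /* prints 1 */
    printf("float  1/3 = %.10f\n", f3);    /* roughly 0.3333333433 */
    printf("double 1/3 = %.10f\n", d3);    /* roughly 0.3333333333 */
    return 0;
}

The last two lines make the suggestion above concrete: the float result drifts from 1/3 after about seven digits, while the double result stays accurate much further.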