The key problem is that we don't know how many times the value wrapped around. Let the original value be l and the overflowed value stored in the long be ud. If the overflow came from an addition, the arithmetic wraps modulo 2^64, so ud = l - k * 2^64 for some unknown number of wraps k (a single wrap past Long.MAX_VALUE gives ud = l - 2^64). If the overflow came from a multiplication, it is even harder to say what happened.
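A minimal Java sketch of the ambiguity (assuming 64-bit two's-complement long semantics): two different true values that differ by a multiple of 2^64 reduce to the same stored long, so l cannot be recovered from ud alone.

```java
import java.math.BigInteger;

public class OverflowDemo {
    // 2^64, the modulus of 64-bit two's-complement arithmetic
    static final BigInteger MOD = BigInteger.ONE.shiftLeft(64);

    public static void main(String[] args) {
        // Two different true values: one wrap vs. two wraps past Long.MAX_VALUE
        BigInteger a = BigInteger.valueOf(Long.MAX_VALUE).add(BigInteger.ONE);
        BigInteger b = a.add(MOD);

        // Both collapse to the same stored long value
        System.out.println(a.longValue()); // -9223372036854775808
        System.out.println(b.longValue()); // -9223372036854775808
    }
}
```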
The above is just my personal opinion. That said, why not use a decimal type from the beginning?
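For comparison, a quick sketch of that suggestion using Java's BigDecimal (the original "Decimal type" may refer to another language's equivalent): the arbitrary-precision value stays exact instead of wrapping.

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        long l = Long.MAX_VALUE;
        System.out.println(l + 1); // wraps to -9223372036854775808

        BigDecimal d = BigDecimal.valueOf(Long.MAX_VALUE);
        System.out.println(d.add(BigDecimal.ONE)); // 9223372036854775808, exact
    }
}
```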