Yesterday I found an old email in my mailbox that I thought might be of general interest.
I had asked the technical lead on the C# compiler for an algorithm/shortcut people could use to choose among the many numeric types available in the language - something that works the majority of the time, even if not always. I'm sure there are other scenarios we haven't considered. Anyhow, here is his algorithm.
If you need fractions:
- Use decimal when intermediate results need to be rounded to fixed precision - this is almost always limited to calculations involving money.
- Otherwise use double - you will get the rounding of your calculations wrong, but the extra precision of double will ensure that your results will be good enough (see the first sketch after this list).
- Only use float if you know you have a space issue, and you know the precision implications. If you don’t have a PhD in numeric computation you don’t qualify.
Otherwise:
- Use int whenever your values can fit in an int, even for values which can never be negative. This is so that subtraction operations don’t get you confused (see the second sketch after this list).
- Use long when your values can’t fit in an int.
Byte, sbyte, short, ushort, uint, and ulong should only ever be used for interop with C code. Otherwise they’re not worth the hassle.
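
To make the decimal-versus-double point concrete, here's a small sketch of my own (not part of the original email). The price and tax rate are made-up numbers; the point is that double is binary floating point and can't represent 0.1 exactly, while decimal works in base 10 and supports the fixed-precision rounding that money calculations need.

```csharp
using System;

class FractionDemo
{
    static void Main()
    {
        // double is binary floating point: 0.1 and 0.2 have no exact
        // representation, so the sum drifts away from 0.3.
        double dSum = 0.1 + 0.2;
        Console.WriteLine(dSum == 0.3);                    // False

        // decimal is base-10 floating point: typical money amounts are exact.
        decimal mSum = 0.1m + 0.2m;
        Console.WriteLine(mSum == 0.3m);                   // True

        // Rounding an intermediate result to fixed precision, money-style.
        decimal price = 19.99m;
        decimal withTax = Math.Round(price * 1.0825m, 2);  // round to cents
        Console.WriteLine(withTax);                        // 21.64
    }
}
```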
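And a second sketch (again mine, not his) showing why plain int is less confusing than the unsigned types when a subtraction goes "negative", plus where long takes over:

```csharp
using System;

class IntegerDemo
{
    static void Main()
    {
        // With int, a difference that happens to be negative is just a negative number.
        int apples = 3, oranges = 5;
        Console.WriteLine(apples - oranges);     // -2

        // With uint, the same subtraction silently wraps around.
        uint uApples = 3, uOranges = 5;
        Console.WriteLine(uApples - uOranges);   // 4294967294

        // When values can exceed int.MaxValue (2,147,483,647), move up to long.
        long fileSizeInBytes = 3_000_000_000L;   // too big for int, fine for long
        Console.WriteLine(fileSizeInBytes);
    }
}
```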