It is common to use idioms such as:
x/60.0
to force a floating-point division when x
is an integer in languages which do not have distinct operators for integer and decimal division.
Is this an accepted idiom, or is it preferable to cast x
to a floating-point type?
I believe it is a clear idiom: it may show intent less clearly than an explicit cast, but it is more concise and easier to read. I think it is less hackish than x + "" to convert to a string or x + 0.0 to convert to floating point (as there is no unnecessary no-op).
Thoughts?
Álex
The result will be language-specific, depending on how the language handles implicit conversions.
That having been said, if your language of choice will return a float from that implicit conversion, then yes, your example is a pretty common way of triggering floating-point arithmetic. It's quick, clean, and clear.
Some languages will require the explicit cast, so the norm in those cases would obviously be the explicit cast.
Update:
Alex asked:
would you prefer x/60.0 or float(x)/60? Why?
And my preference is for the first, as this example will explain. Please note that my comments are still targeted to languages that allow the implicit conversion. Languages that require an explicit conversion are out of scope for the answer.
int x = 65;
float y = 0.0;
y = x / 10.0; // will yield 6.5
vs.
int x = 65;
float y = 0.0;
y = (float) x / 10.0; // will still yield 6.5
Why?
- The implicit cast is what I would “expect” the code to be doing in the first place. When I’m mentally performing the calculations prior to typing them out, my mind switches between Integers and Reals without worrying about it. In this case, I expect the compiler to be able to do the same.
- The explicit cast muddies up the code. Reading it out loud, I get “y equals float x divided by 10” instead of just “y equals x divided by 10.” Inserting the explicit cast interrupts my pattern of thought while reading the code.
- And while this may just be my guilty conscience, I always feel like I’m doing something wrong when I require an explicit cast. I feel like I’m telling the compiler “You wouldn’t normally do this because it’s wrong, but I swear to you it’s okay in this case. Really, it is, just believe me, ok?” (I agree, I could use some therapy on the issue). The compiler should be smart enough to allow the conversion or throw an error when there is data loss.
- Finally, adding an unnecessary explicit cast creates a chance of introducing a bug later in the code's life. If one of the variable types changes, the explicit cast may allow something to compile that really should not have. Granted, this is pretty rare, but it can be very difficult to hunt down.