I understand what increment and decrement operators are (`++` and `--`) and the difference between post and pre (`i++` vs `++i`), but should they be avoided, since they increase the difficulty of reading the code? After reading the answers to the question *Are assignments in the condition part of conditionals a bad practice?* it seems that code readability is of the utmost importance. I had a prof who told us not to use increment or decrement operators unless you know exactly what you're doing (but this prof usually gave poor advice).
In my opinion, `i++` is easier to read than `i = i + 1`; however, I wouldn't use it inside other statements, such as `arr[i++]` or `arr[++i]`. I would break these over two lines, as in the sketch below.
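A minimal C sketch of the two-line rewrite I mean (array and variable names are just for illustration):

```c
#include <stdio.h>

int main(void) {
    int arr[] = {10, 20, 30};
    int i = 0;

    int a = arr[i++];  /* combined form: reads arr[0], then increments i */

    int b = arr[i];    /* the split version I would write instead... */
    i++;               /* ...with the increment on its own line */

    printf("a=%d b=%d i=%d\n", a, b, i);  /* prints: a=10 b=20 i=2 */
    return 0;
}
```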
In summary:

- Is it OK to use `++` and `--` when they are the only operations on the line?
- Is it OK to use `++` and `--` when they are combined with other operations, so that operator precedence has to be considered?
Your question
Yes, it's completely OK to use `i++` and `--j` alone in a statement. Most mainstream languages have adopted them.
Yes, it's OK to use `i++` and `--j` in more complex expressions, when the precedence is self-explanatory:

- either because the expression is isolated enough, as in `a[i++]` or in `--j > 0`,
- or because it's used according to a common language idiom like `*p++` (experienced C/C++ programmers know that the increment applies to the pointer/iterator and not to the dereferenced value); see the sketch below.
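As an illustration of that idiom, here is the classic C string copy built around `*dst++ = *src++` (a sketch, not production code; the function name is mine):

```c
#include <stdio.h>

/* The postfix ++ binds to the pointer, not to the dereferenced char:
   each pass copies one character, then advances both pointers. */
void copy_string(char *dst, const char *src) {
    while ((*dst++ = *src++) != '\0')
        ;  /* empty body: all the work happens in the condition */
}

int main(void) {
    char buf[16];
    copy_string(buf, "idiom");
    printf("%s\n", buf);  /* prints: idiom */
    return 0;
}
```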
Additional advice
Avoid them in case of doubt! Prefer readability and simpler steps over expressions that need parentheses, as for `**p[i]++`. Abuse like `--(*p++)` or `1---j` should really be forbidden, not because of syntactic ambiguity, but because of the huge effort it requires to understand (see the sketch below).
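C's tokenizer is greedy ("maximal munch"), so such runs of operators parse deterministically, yet hardly anyone reads them correctly at a glance. In C, `1---j` itself would not even compile (`1--` needs an lvalue), so this sketch uses a variable instead:

```c
#include <stdio.h>

int main(void) {
    int x = 5, y = 2;

    /* Maximal munch: "x---y" tokenizes as "x -- - y",
       i.e. (x--) - y. Unambiguous to the compiler,
       hostile to the reader. */
    int r = x---y;

    printf("r=%d x=%d\n", r, x);  /* prints: r=3 x=4 */
    return 0;
}
```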
Absolute prohibition: never use more than one such side effect on the same variable in the same expression, as in `*p++ == *--p` or `funct(i++, i--)`. This might lead at worst to undefined behavior, at best to misunderstandings, since there are subtle differences across languages (see the sketch below).
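To make the danger concrete: in C, the commented-out call below is undefined behavior, because `i` is modified twice without an intervening sequence point. The rewrite pins down one explicit left-to-right ordering (function and variable names are mine):

```c
#include <stdio.h>

void funct(int a, int b) { printf("%d %d\n", a, b); }

int main(void) {
    int i = 0;

    /* funct(i++, i--);   undefined behavior: i is modified twice,
       and argument evaluation order is unspecified anyway */

    /* Explicit rewrite with a left-to-right ordering: */
    int a = i;
    i++;
    int b = i;
    i--;
    funct(a, b);  /* prints: 0 1 */
    return 0;
}
```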
Conclusion and quote of the day:

> "Debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" – B. W. Kernighan
For question 1) – yes, it is perfectly readable. Every programmer on the planet understands the line `i++`.
However, for consistency's sake, one may consider using `i += 1` instead. It has the same form for any constant, e.g. `i += 4`, whereas `i++` is a special bit of syntax that increments by exactly one.
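A trivial C sketch of that consistency argument:

```c
int main(void) {
    int i = 0;
    i += 1;  /* same shape for any step size... */
    i += 4;  /* ...only the constant changes */
    i++;     /* special-case syntax: the step is always exactly one */
    return 0;
}
```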
For question 2) – absolutely not! A programmer should never have to memorize operator precedence to understand a line of code.
Let's walk through some logic:

1. Some junior developers don't fully understand these statements.
2. So we avoid using them and limit those developers' exposure to such statements.
3. Eventually every developer will encounter such a statement.
4. They misunderstand it because of #2.

In a nutshell, when statements are avoided because they are considered esoteric, they become more esoteric. The problem is that if these things are part of the language, they will be found in code somewhere. You might be able to prevent their use on a team, but in a way you are creating ignorance by doing so.
This problem is why I am not a big fan of "everything plus the kitchen sink" languages. The more syntax there is, the more there is to understand, and if some of it is cruddy or awkward, it becomes esoteric and eventually causes a bug somewhere when it is misunderstood. Adopting such a language and then declaring that people shouldn't use the syntax it offers isn't really optimal, in my opinion. Python doesn't allow `++i` or `i++`, and if you hate these statements, that's an advantage of that language.
In languages that allow these things, my answer is "know your tools". These statements are well defined, quite logical, and useful in common situations. If you are programming in a language that allows them and don't know what they mean, you need to skill up; it's not the responsibility of other developers to coddle you.
Take a look at two small code snippets (C# – I do not know whether C++ and Java behave the same way C# does):
```csharp
int i = 0;
int j = i++;
```
and
```csharp
int i = 0;
int j = ++i;
```
Do you understand the difference between those versions? What are the values of `i` and `j` in each snippet?
If you – and also your colleagues – can tell that without hesitation, then such use is OK within your team. I prefer to discourage this type of use.
It depends on the language. In C there are some well-known idioms that depend on using increments or decrements as expressions, so you would expect a C developer reading your code to be familiar with them. The reader might be mystified if you choose a less idiomatic solution.
Java and C# have a different culture and different idioms. It is not common to use increments in expressions; doing so might be considered "clever" or a micro-optimization. Increments as stand-alone statements are not frowned upon, though. (Although iterators and similar constructs are preferred to loops with counters.)
Some languages, like Python, do not have increment/decrement operators at all, in which case you don't have much of a choice.
Bottom line: Readability is always relative to a certain audience.