When reviewing code, I sometimes see blocks like this (I happen to be using JavaScript since it is widely understood):

if (!myVar || myVar === null || myVar === undefined) {

Those who know JavaScript decently well will immediately recognize that myVar === null and myVar === undefined can never affect the outcome: if myVar is either null or undefined, !myVar is already true, || short-circuits, and execution proceeds immediately to the first line inside the if block.
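To make the redundancy concrete, here is a small sketch (the wrapper function check is mine, not from the original code) showing that the latter two clauses never change the result:

```javascript
// The two === checks are dead weight: if myVar is null or undefined,
// !myVar is already true and || short-circuits past them; if myVar is
// truthy, all three clauses are false anyway.
function check(myVar) {
  if (!myVar || myVar === null || myVar === undefined) {
    return "falsy";
  }
  return "truthy";
}

console.log(check(null));      // "falsy" -- via !myVar alone
console.log(check(undefined)); // "falsy" -- via !myVar alone
console.log(check(0));         // "falsy" -- !myVar also catches 0
console.log(check("hello"));   // "truthy"
```

Deleting the two === clauses leaves the function's behavior identical for every possible input.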

Some might argue that the code above provides greater clarity than simply using if (!myVar) ...,
and junior developers may spend less time interpreting it. And clarity in code is obviously paramount. But I find this type of code distasteful, since in my view it promotes a low-confidence approach to software development. And I hardly think I would be alone in that.

Conversely, I can think of times when I have passed an options object to an obscure function with what I know are the function’s default values. This is another example of unnecessary verbosity for the sake of clarity, and it does potentially discourage developers (including me) from being confident and clear-eyed about how the function works. But I subjectively find this more acceptable, since the functionality I’m leveraging is more obscure.

All aspects of code exist somewhere on the continuum of common-to-obscure. So the right approach to what I’m calling “clarity vs. confidence” is probably somewhere on a continuum as well. But developers who write and review code in the real world need to know where to place themselves and their teams on that continuum.

My Google search terms didn’t turn up much on this topic. Have published experts in coding best practice explored when to write “unnecessary” code vs. not?

There can be value in crowd-sourcing opinions, but StackExchange discourages questions that elicit opinion-based answers. Only an answer that includes citations will be selected as the “best answer”.



If you can’t trust me to figure out what !myVar does then either teach me

if (!myVar) { // if null or undefined

Or don’t

if (myVar === null || myVar === undefined) {

Trying to do both at the same time will just leave me more confused.

If I don’t know what !myVar does, seeing the rest of your code doesn’t tell me what it does. It just tells me there’s more stuff going on to worry about. That just adds to the intimidation.

To prove that point I offer the comments of Doc Brown & Jon Wolski that point out the equivalent code is actually:

if ( myVar === null 
  || myVar === undefined 
  || myVar === false 
  || Number.isNaN(myVar)
  || myVar === 0 
  || myVar === ""
) {

Which I didn’t even spot. So don’t underestimate how confusing this code is.

That means the comment I added is not equivalent, even if that was your intent. I could try to fix that, but generally I find comments that tell me what code does distasteful. Mostly because code will get refactored, the comment won’t, and now I really hate you.

Better comments tell me why it’s doing what it’s doing. Show me your intent. Help me understand that and you’ll free up the brain cells I need to do a web search and learn this standard stuff for myself.


//Set default if needed

Tell me that and I know what you’re trying to do even if your code is wrong.
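As a sketch of that comment style (the greet function and its "guest" default are illustrative, not taken from the question), an intent comment plus the nullish-coalescing operator states the why and lets the code carry the what:

```javascript
function greet(name) {
  // Set default if needed -- only null/undefined count as "missing" here.
  const effectiveName = name ?? "guest";
  return `Hello, ${effectiveName}!`;
}

console.log(greet());      // "Hello, guest!"
console.log(greet("Ada")); // "Hello, Ada!"
```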



I think candied_orange’s answer explains why there’s a lot of question-begging in your concept of clarity with respect to the particular problem you’ve chosen. However, I think that answer took the “X ⟶ Y” problem in the wrong direction in some ways (too honed in on a Y behind the very specific example you gave).

If I understand correctly, to your core issue, away from the particular piece of code you’ve provided above:

All aspects of code exist somewhere on the continuum of common-to-obscure. So the right approach to what I’m calling “clarity vs. confidence” is probably somewhere on a continuum as well. But developers who write and review code in the real world need to know where to place themselves and their teams on that continuum.

You’re talking about using (and not necessarily abusing) language features with respect to their clarity: when you should feel the need to redundantly comment, or write code that’s unnecessary in a given context, i.e. overexplain what you do for fear that the reader won’t understand what is going on.

This is complicated, but I’ve crafted roughly three criteria (none of them objective) which can aid in understanding when to put in-code helpers around certain usage of features.

  • Discoverability – how easy it is for readers to figure out the language feature you’re using.
  • Complexity – how complicated what you’re doing is.
  • Confusion – how easily it can be misinterpreted.


I use C++ a lot, and C++ has a lot of features that even people who use it regularly are simply not familiar with. For example, C++ has lambdas/anonymous functions, with the syntax

[captures, by reference or value...](parameters...) { /* body */ }

I’ve had co-workers who simply did not know about the existence of this feature. However, simply asking me “what is this thing” will result in me saying “That’s a C++ lambda”. I may or may not need to go into more specifics on what that is, but they now have enough information, by the mere fact of me giving a name to this concept, to figure the rest out on their own. This knowledge is also self-proliferating: just because I wrote code that used this feature doesn’t mean I’m the sole proprietor of information on how to use it. If, let’s say, co-worker Alice, who just learned this feature, is asked by Bob what it is, Alice can confidently explain it to him. I wasn’t needed.

This applies to a lot of things in JavaScript, such as Promises, the DOM, even closures. If it doesn’t require you to do anything other than point them to the feature, and you aren’t using it in non-obvious ways, then it is at least discoverable.


Complex problems sometimes require complex solutions. Even if those solutions use fairly simple constructs built into a language, their usage, even in seemingly obvious in their application, may still need to be explained.

C++ has the concept of const: given a simple variable declared const Type foo, the type is not allowed to have non-const operations applied to it. Simplified: you cannot mutate a simple const variable.

Except there are several escape hatches to this rule. One escape hatch I’ve actually used is mutable. What mutable effectively says is “this member variable, in this object? Yeah, you can just use that in a const context”. Now, why would I do this? Sometimes objects have the semantics of things that are not modified, but the underlying implementation actually does modify the object. The biggest reason I use this is for network socket API objects, such as ZMQ. Even sending data technically modifies the socket, yet semantically I’m not really modifying anything the user cares about; without mutable, const-correctness would prevent users from doing things they ought to be able to do. Similarly, multithreaded primitives such as mutexes, and atomic variables used for signalling, often carry the same flavor of “yeah, technically this is mutating the object, but what I’m doing is not semantically a mutation the user cares about”.

But I can’t just slap a few mutables on some network sockets and std::mutex and call it a day. Technically they could just look it up. But no one would understand why I’m doing this. Alice, in the above example, wouldn’t be able to explain to Bob what purpose mutable is serving just by me telling her what it is. I’d have to come in and explain it. This would confuse even experienced programmers.

So instead of letting that scenario play out, I comment mutable variables with a justification for why they are mutable (and often with what mutable does). What I’m doing is complicated; it’s not immediately obvious what is happening just by virtue of using this feature and understanding it.


A feature may exist in the language, may be discoverable, and may be easy to use. But that feature (or the allowance of something being written) may still lead to confusion.

By confusion, I mean a lot of things, but particularly I mean when you write one thing, and even given simplicity, and discoverability, it still could be interpreted to mean something else. This is particularly pertinent when you are relying on the language to do something implicitly for you with out clearly expressing intent.

Here’s an example of what I’m talking about.

I’ve already introduced you to the idea of const objects. In C++ particular, we like to make objects const whenever we can. We can also make references to objects const (aka semantic immutability), so that when I pass a non-const object to a function, I don’t have to copy its data, and I still can enforce the invariants of whether or not that object is changed.

But even without the escape hatches, you can still accidentally have mutability exposed in const objects. If you store a pointer inside a C++ object, say:

struct Foo {
    int* my_int_ptr;
    // despite mutating the pointed-to int, I can still mark this function const.
    void baz() const {
        // dereferencing the pointer and adding to it.
        (*my_int_ptr) += 1; // mutates the pointed-to object.
    }

    void mut_qux();         // won't work when Foo is const.
    void const_qux() const; // will work when Foo is const.
};

you can actually have const member functions/methods that mutate the underlying object behind the pointer. The reason C++ allows this is that the object only stores the address in my_int_ptr, not the pointed-to object, so technically you are not editing the top-level object at all, only the value at that address. But most people, including myself, expect the semantics of const to cover this use case and prevent such types of mutability.

Even though this method technically can be const in C++, I don’t mark it const. I could, it would be right, and it wouldn’t even be difficult to understand why I could do that. Everyone should understand that const doesn’t apply to modifying values behind a pointer, right? But it’s confusing. When we say const, we would like there to be no interior mutability either.

Confusion Part 2: Sometimes the Language is Wrong.

One concept I want to introduce here is that languages can also be wrong, and that can lead to confusion. What I mean is there can be features that were ill thought out and shouldn’t be there in the first place, not just inapplicable like the above. As a JavaScript programmer, you may be more intimately familiar with these kinds of failures than I am, but here are some examples of failures in C++.

In C++, we can write brace-less versions of many types of statements, like if, for, etc… The immediately following statement becomes the block. However this is… confusing at best. Many code bases explicitly outlaw this kind of thing. It’s not always obvious what a statement even is, and it’s easy to accidentally edit it such that it is no longer a single statement.

In C++, we also have “simple” automatic integer/float promotion rules. However, they are implicit, and they interfere with the type system in some very dangerous ways. Relying on this system is also confusing: even if you understand that most things smaller than int promote to int, and that floating-point literals are doubles by default, it’s almost never clear what type an expression will end up as, even when you use the same types, let alone different ones. These rules are discoverable and often simple enough, yet lead to a lot of confusion.

Your example.

If we go ahead and come back around to your actual example, and apply these principles, there’s a problem with every point I bring up, especially on confusion and complexity.

First, the rules surrounding ! (negation) on variables in JavaScript are not easy to find, and ironically are harder to find than !!, because it is easy for Google to discern “What does !! mean in JavaScript”, whereas the former query did not return anything useful on the first page until I changed it to include “bang operator” instead of ! (though that could just be my personalized results). But this isn’t nearly as big a deal as the next issue.

Second, it’s really confusing. Most people, even in JavaScript, expect ! to correspond to boolean logic (even MDN describes it as “logical NOT” and shows a boolean example immediately). This isn’t about knowledge, it’s about communication. So when you’re using it for multiple things, even if you can, people don’t understand what’s going on. If you were to explain to Alice what ! means in JavaScript, she still would not be able to explain to Bob what that line of code’s intent was. This is really similar to my integer-promotion example. Like !, integer promotion rules are ubiquitous and not new, but they still lead to all sorts of confusion and problems for people down the line when you rely on the feature instead of explicitly casting types.

The linked answer goes over what you can do: comment, etc. Additionally, I would argue there are some other things at play here. First, JavaScript using ! the way it does is not necessarily the best language decision, and I’m not sure we should be relying on JavaScript’s mistakes in the first place. JavaScript has an ecosystem of very basic libraries that exist for the sole reason of correcting for these mistakes. Regardless, say you’re using this to figure out whether a parameter is really missing or not; that is still functionality often taken from a library.

Instead of using this behavior directly, I would either use a library, or push this behind another function (such as parameterOptional(your_var, false), parameterExists(your_var), or parameterMissing(your_var)) and then comment the code appropriately in that function so that no one really has to deal with it.
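A minimal sketch of that last option, using the parameterMissing name suggested above (the helper is hypothetical, not from any library):

```javascript
// Name the check after the intent, so callers never reason about
// JavaScript's falsiness rules directly.
function parameterMissing(value) {
  // Only null and undefined count as "missing".
  return value === null || value === undefined;
}

console.log(parameterMissing(undefined)); // true
console.log(parameterMissing(null));      // true
console.log(parameterMissing(0));         // false -- 0 is a real value
console.log(parameterMissing(""));        // false -- "" is a real value
```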

Third, what you are doing may be complex. If you’re testing for the existence of a value, you may not mean to test !x, which matches everything except truthy x. If you’re relying on the fact that !x evaluates booleans as well as multiple kinds of nullability, then that should be explained. There are two complex semantic ideas at play, despite the code being incredibly tiny. If you tried to explain this in a comment, what would it look like? If it turned out really big, that might give you a clue about the issues with this piece of code.

Sometimes code can be truly minuscule, but have a lot of implications and rely on a lot of unsaid behavior.



Use tests to enshrine confidence and intent

If code pushed into production contains redundant statements, it’s an indicator of deficiencies in the automated tests which should be taking care of this instead. Furthermore, a good static code analysis tool should also detect this and warn to remove redundant and unreachable code.

Anything a programmer may be able to infer by reading code is less conclusive than something a test can prove by running it, since humans can misread code whereas a test-runner cannot.

Lastly, there is no need to pit clarity and confidence against each other, since tests can provide both.

Based on the JavaScript example:

function foo(bar) {
    if (!bar) {
        return "woof";
    } else {
        return "meow";
    }
}
Example using Jest:

describe('Foo', () => {
    it.each([0, false, null, undefined])(
        'should return woof',
        value => expect(foo(value)).toEqual("woof")
    );
});


There are two parts to this question.

First, the decision to use !myVar versus testing for null or undefined is an issue of teaching the idioms of a language to new developers. Your first code example is an easy answer for JavaScript: use !myVar unless you need to treat null and undefined values differently. If you don’t, new developers need to learn this JavaScript idiom, and use !myVar. Developers will find this in so many libraries and other snippets of JavaScript code that they will have a difficult time in general without knowing this.

The second answer involves this:

Conversely, I can think of times when I have passed an options object to an obscure function with what I know are the function’s default values. This is another example of unnecessary verbosity for the sake of clarity, and it does potentially discourage developers (including me) from being confident and clear-eyed about how the function works. But I subjectively find this more acceptable, since the functionality I’m leveraging is more obscure.

Passing arguments that are the default values to a function isn’t bad, nor is it inherently good. Specifically for options objects, passing an empty/default object might be done because some higher level function or object already defines an options object that consumers of your code can customize. For instance, say your object has this.options which just happens to default to an empty object { }, which also happens to be the default “options” parameter in the function. Consumers of your code might pass in their own custom this.options, which means blindly calling someFunction(this.options) is clearer than:

if (this.options) {
    someFunction(this.options);
} else {
    someFunction({ });
}

On the other hand, calling someFunction({ }) just because the options argument is the default is just cluttering up the code — potentially making the code less clear. Other developers need to learn the API. This is not your responsibility. Let’s not forget that checking arguments.length is a means of supporting function overloads in JavaScript, so be aware that someFunction({ }) and someFunction() might execute different behavior. Again, this is about learning the API, which is something each developer is responsible for.
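To illustrate the arguments.length point, here is a hypothetical someFunction whose behavior genuinely differs between someFunction() and someFunction({ }):

```javascript
// A function declaration (not an arrow function) so that `arguments`
// is available for the overload-style check.
function someFunction(options) {
  if (arguments.length === 0) {
    return "called with no arguments";
  }
  return "called with an options object";
}

console.log(someFunction());    // "called with no arguments"
console.log(someFunction({ })); // "called with an options object"
```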

You are correct that this exists on a continuum. There is no single best way to do this. Developers should learn the well-known idioms of the languages they work in, and they need to learn the APIs of the library code used in the application. Of course, an idiom being “well-known” is also subjective. An idiom for one generation of developers might not be an idiom for another. These things tend to change over time, however I doubt your example of checking !myVar versus comparing to null or undefined is an idiom that will change in JavaScript unless the language itself changes.



You’re overthinking this.

Those who know JavaScript decently well

That will be anyone competent who’s worked with it regularly for 6-12 months and then they qualify for basically the rest of their career. Why are you trying to optimize for the transient instead of the steady state?

If they’re not competent they aren’t going to know it decently well, even if they’ve been working with it for the better part of a decade, but you aren’t going to fix that by being extra explicit.

And once they do know JS decently well, as you yourself point out, your code example violates the principle of least surprise: it’s going to look odd at best to an experienced JS developer.


A reader will assume code is written a certain way for a reason. If the code contains things which do not make logical sense (like a clause which is never evaluated), a reader will assume either that they don’t understand the code or that there is a bug. In either case the reader will waste time investigating further.

If you want to be explicit you should also be consistent. E.g. if the purpose of the first clause is to check for empty string, you should write it outright:

if (myVar === "" || myVar === null || myVar === undefined) {

This code is completely unambiguous and the intention is clear.



Use optional static typing

Since you used JavaScript as an example: one easy way to increase both clarity and confidence in code would be to use TypeScript instead.

You don’t even need to convert anything:

TypeScript syntax is a superset of ECMAScript 2015 (ES2015) syntax.
Every JavaScript program is also a TypeScript program.

You could add valuable information inside the code, e.g. defining what myVar should be, and if it’s nullable or not:

let myVar: number;
myVar = 1;

if (!myVar || myVar == null || myVar == undefined) {

Type hints are available for other dynamically-typed languages, e.g. for Python since v3.5. And if you’re using a statically-typed language (e.g. Java, C#, C++, …), you basically have to specify the type of myVar.

Fail fast

Thanks to the first line of code, it is now clear what myVar is allowed to be, and more importantly, what it cannot be. Those lines would all fail during transpiling:

myVar = null; // Type 'null' is not assignable to type 'number'.(2322)
myVar = "test"; // Type 'string' is not assignable to type 'number'.(2322)
myVar = [1,2,3]; // Type 'number[]' is not assignable to type 'number'.(2322)

Also, if you forget to assign a value to myVar, the if would fail with Variable 'myVar' is used before being assigned.(2454).

This means that the myVar == null || myVar == undefined check is now provably useless.

Comments can lie

The advantage of let myVar: number; is that it’s not a comment. Comments can become outdated:

var myNumber = "some_string"; // myNumber is a number


Since Typescript checks your code and types during transpiling, you can now feel more confident, and write clearer, more concise code (try it online):

let myNumber: number;
myNumber = 1;

if (myNumber == 0){
} else {
   console.log("Not zero!");
}
or, if you’re working with strings:

let myString: string;
myString = "test";

if (myString == ""){
} else {
   console.log("Not empty!");
}
myString and myNumber are simply not allowed to be null or undefined.



if (!myVar || myVar === null || myVar === undefined) {

You cannot possibly get code coverage in a unit test of myVar === null || myVar === undefined because it is impossible to write a test that reaches that code.

As a conclusion, that code is superfluous and has to go: any added benefit the redundancy may have for an exceptionally undereducated reader is dwarfed by the confusion of the whole team not getting 100% code coverage of even the simplest of functions.



Oh well. This is JavaScript.

So first we need to assume that people know what myVar === null and myVar === undefined mean, whether these cases are possible in some situation, and how they should be handled.

Next we look at the ! operator in JavaScript. It covers a huge number of possible cases: not only null and undefined, but also a boolean false, an integer or floating-point 0, an empty string, a floating-point NaN, and whatever I forgot.
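For reference, the complete set of falsy values can be verified directly (this sketch adds BigInt 0n and -0, which round out the list):

```javascript
// Every value here is "falsy": applying ! to any of them yields true.
const falsyValues = [false, 0, -0, 0n, NaN, "", null, undefined];

console.log(falsyValues.every(value => !value)); // true
console.log(!"non-empty"); // false -- any other string is truthy
```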

But in reality you would, for example, assume that myVar is a boolean: you want to handle true and false correctly; null and undefined maybe shouldn’t happen, but you think treating them as false is not unreasonable. myVar shouldn’t be a string, empty or non-empty; either situation would be wrong.

You just need to be aware that if (!myVar) … does the following in this situation: it handles boolean true and false correctly. It handles null and undefined like false; I hope you know that is correct. It handles strings or integers by going through either the “if” or the “else” branch, neither of which makes sense. So your code just ignores these cases.

How to express your intent is difficult. You could write

if (! myVar) // Assume bool, null or undefined

or you could write code that says explicitly which cases you are handling. This is a pain in JavaScript. You either assume the type, or probably have some smaller helper functions for each thing you might want to check.
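A sketch of what such helper functions might look like (the names are illustrative); each one states exactly which case it handles, instead of bundling them all behind a single !:

```javascript
const isMissing = (v) => v === null || v === undefined;
const isEmptyString = (v) => v === "";
const isZero = (v) => v === 0;

console.log(isMissing(undefined)); // true
console.log(isMissing(0));         // false -- 0 is present, just falsy
console.log(isEmptyString(""));    // true
console.log(isZero(0));            // true
```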


I write only the code that is necessary for the intended logic to be implemented. Clarity comes on its own by avoiding unnecessary stuff.

Code is clear to understand by default as long as it does exactly what it’s supposed to do to implement the algorithm. Any extra code makes it harder to understand.

In the if-condition presented, the question one should ask is whether a particular check is necessary. What will happen if, in the absence of the check, the line below it is executed? Would any harm occur?

Checking whether a variable is null and then also checking whether it’s a number is absurd. There is no scenario where this would be needed. You check a variable for being null if you want to dereference it (you expect it to be an object). If you don’t want to dereference the variable, but instead want to perform some numeric operation on it, then you only check whether it’s a number. There is no scenario in which you do both, so you never need both checks.

It may happen that, due to some quirky nature of your language, a simple check for null is not enough to guarantee error/exception-free execution of the dereferencing action you want to perform after the check. Then you put in the two or three checks that are needed to ensure that the variable in fact has an object in it.
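JavaScript itself has such a quirk: typeof null === "object", so a guard before dereferencing needs more than one check (the helper name here is illustrative):

```javascript
// typeof alone is not enough: typeof null is "object" too.
function isUsableObject(value) {
  return value !== null && typeof value === "object";
}

console.log(isUsableObject(null)); // false -- despite typeof null === "object"
console.log(isUsableObject({}));   // true
console.log(isUsableObject(42));   // false
```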

The point is, your checks must be there only to ensure successful execution of the action you want to perform, nothing more. This itself brings clarity. Putting in extra checks when they are not needed for execution of the algorithm will confuse other readers (programmers) about your intention.


Your code smells 😉

  1. There is a clear concept, DRY:

“Don’t repeat yourself” (DRY) is a principle of software development aimed at reducing repetition of software patterns, replacing it with abstractions or using data normalization to avoid redundancy.

  2. You provide a bad programming approach to junior developers. They’ll use that approach everywhere.
