Avoiding throw because we are not sure the exceptions will always be caught

  softwareengineering

I’m a junior in my company, and one of the coding rules they have is:
“an object constructor must never fail” (i.e., never throw). So what if I pass it an invalid parameter? Then the object should be constructed anyway, and there should be another way to check whether the object is valid, like an isValid() method.

For context, we are developing desktop software (not web) in C++.
I work mainly on the core of the app, which can be seen as a DLL used by the rest of the software.

I am surprised by this, and after a bit of questioning, I understand the following reasoning:

  1. When something throws, at some point in the call stack it should be handled (using try/catch).

  2. If a throw is not caught, stack unwinding will go all the way to the top, at which point it will encounter a global catch block that will do something like this: display the message “Something went horribly wrong”, try to generate a crash report, and shut down the app.

  3. We cannot be sure that future programmers will use your object correctly and handle all edge cases. If someone uses my object, forgets to add a catch, and it passes through testing, we are at risk of a “Something went horribly wrong” message in production, and this should be avoided at all costs.

  4. So, an object constructor can never throw. In fact, no function should throw (outside of a very small part of the code that has a specific catch clause that doesn’t crash the app).

While I understand the reasoning, I find the consequences to be unsettling: I may not use exceptions.

And it’s a shame, because they are useful. Instead, we have a lot of if (myobj.isValid()) everywhere, and empty constructors alongside init(....) methods, which I have a feeling is a bad pattern.

But we can’t find a flaw in this reasoning, especially point 3.
What would be a better approach, that allows programmers to throw when needed, and ensure that we never trigger the ultimate “Something went wrong” fail-safe?


Basic example: my object needs to store a size as a double, and the size must always be positive.
But C++ doesn’t have an unsigned double, so we have:

class MyObj {
public:
   MyObj(double size) { m_size = size; }
   double GetSize() { return m_size; }
private:
   double m_size;
};

What if someone calls MyObj(-3)? I can’t throw in the constructor, so the object must be created. Then I have to add a method bool isValid() { return m_size > 0; }.

And all callers have to check validity; it’s tedious. What I want is to have my object and always be sure that the size is positive when I use it.

24

We cannot be sure that future programmers will use your object correctly and handle all edge cases. If someone uses my object, forgets to add a catch, and it passes through testing, we are at risk of a “Something went horribly wrong” message in production, and this should be avoided at all costs.

This seems like a very dangerous mentality, as it could easily result in errors being hidden. Some developers might reason that crashing is horrible, so if there is a failure we can just ignore it, retry endlessly, use some default value, or apply some other non-standard and poorly documented error handling.

The failure is then still not caught by QA, so the failure handling is never tested, and there are no “something went horribly wrong” messages in production. Except the system might not actually do what it is supposed to; it may give faulty data. And if you are unlucky, it may not be immediately obvious that the data is faulty. By the time the failure is actually discovered, it may have caused an untold amount of harm.

Because of this it is usually a good idea to “fail fast” to find errors more quickly, and to limit the harm errors can cause. Reliability and failure handling is an entire engineering discipline in itself. But if you are concerned about software issues, a good place to start is to ensure the following exist:

  • Specifications/documentation about the intended behavior
  • Code analysis tools (“treat warnings as errors”, static analyzers etc)
  • Code reviews
  • Automated tests
  • Manual Testing

Each can be done at a level suitable for the reliability you require. For high reliability applications the QA work may greatly exceed the actual development work.

A similar argument applies against having a separate IsValid() method, since that could just as easily be missed, resulting in hidden errors. A factory + Result/Option object, as suggested by other answers, is a better solution, since the compiler will help ensure correct usage.

In any case, the reliability of your application should not rely on a single person “not forgetting something”; several things should need to go wrong before a problem affects actual users.

2

These kinds of complaints usually come up in companies/teams that are themselves new to enterprise-level software engineering. If developers forget to add a catch and the code goes all the way to production, then it simply means the company has a deficiency in QA.

But there are ways to solve the problem, up to some point.

In particular, the factory pattern can be used to avoid throwing an exception. If an object cannot be created from the passed parameters, either return null or, to be a little more advanced, return a Result class which contains either the successfully created object or a collection of error codes describing what went wrong during object creation.

Of course, if the developers forget to check the outcome of the factory methods to see if everything went fine, then you will have issues. But a clear return value will at least raise some questions for its users.

Academically speaking, constructors are supposed to ensure objects created are valid and ready to be used. That means throwing exceptions. In the real world, things are a little bit more complicated. That’s why creational design patterns exist.

The no-throw policy can also be addressed with the operation result design pattern. It might have some merit based on performance, especially if a lot of exceptions would otherwise be thrown in quick succession (e.g., in a loop), but this depends on the specifics of the project. Note that both the result pattern and the throwing approach assume that errors will be handled and will not go unnoticed.

There are libraries that help implement the result pattern, for example this one for C++: https://github.com/bitwizeshift/result
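To make the shape of that concrete, here is a deliberately minimal, hand-rolled sketch (the Result, MyObj and MakeMyObj names are invented for illustration; the library linked above provides a far richer interface):

#include <string>
#include <variant>

// Minimal "operation result": holds either a value or an error message.
template <typename T>
struct Result {
    std::variant<T, std::string> data;

    bool ok() const                  { return std::holds_alternative<T>(data); }
    const T& value() const           { return std::get<T>(data); }
    const std::string& error() const { return std::get<std::string>(data); }
};

struct MyObj { double size; };

// Hypothetical factory: never throws, reports failure through the return value.
Result<MyObj> MakeMyObj(double size) {
    if (size <= 0) return { std::string("size must be positive") };
    return { MyObj{ size } };
}

Whether the error side is a string or a collection of error codes is a project decision; the key point is that success and failure travel in the same return value, so the caller has to look at it before they can get the object out.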

6

Rather than have a throwing constructor, you can have a function that returns either an object, or nothing. Callers will have to handle the possibility of there being no object.

This isn’t quite “Have my object”, but it does satisfy “always be sure that size is positive when I use it”.

From your example:

#include <optional>

class MyObj {
private:
   MyObj(double size) { m_size = size; }
   double m_size;
public:
   double GetSize() { return m_size; }
   static std::optional<MyObj> Create(double size) {
      if (size < 0) return std::nullopt;
      return MyObj(size);
   }
};
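For completeness, a caller of that factory might look roughly like this (a sketch; what you do in the failure branch is entirely up to the call site):

#include <iostream>

void UseIt(double requestedSize) {
    std::optional<MyObj> obj = MyObj::Create(requestedSize);
    if (!obj) {
        // No object exists at all; handle the failure here or report upwards.
        std::cerr << "invalid size: " << requestedSize << "\n";
        return;
    }
    // From here on, *obj is guaranteed to hold a value that passed validation.
    std::cout << "size is " << obj->GetSize() << "\n";
}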

The reasoning you were supplied with is somewhat faulty. The try ... catch ... does not have to be in the immediate caller of a function; it can be some distance up the call stack, where an appropriate action can be taken. The try ... catch ... you describe in point 2 is such a place, and is sometimes called the “last-chance exception handler”.

It is also somewhat worrying that part of the last-chance exception handler tries to save state, because by definition, you don’t know what’s gone wrong when you get there, so you might be saving an inconsistent state.

Point 3 is also faulty reasoning. You’ve swapped the possibility of having a “Something went horribly wrong” message in production, for the possibility of something going horribly wrong silently, and corrupting all your data. It doesn’t actually address “We cannot be sure that future programmers will correctly use your object and handle correctly all edge cases”, it just makes errors quieter.

8

For the problem you’re trying to solve (programmers forgetting to deal with failure states when creating an object), there isn’t really a difference in foolproofness between throwing an exception and expecting a caller to check whether the object is valid after making it (or checking for null, as suggested in other answers), other than perhaps IDE support for null-check reminders. However, the consequences of failing to check are quite different. If you throw an exception in the constructor, you get an exception with specific details of the failure, at the point of failure. If you return null, you get a null reference exception or a segfault the first time you try to access the object, hopefully close to where you tried to construct it; since this option generally has better IDE support, it might be preferable to exceptions for you. The worst of the options is what you’re doing now, because you might end up with an object that doesn’t fail until some other part of the code, in some unexpected way, because that part of the code expects the object it receives to be in a valid state.

The way to safeguard against exceptions crashing the whole program is to put global try-catch blocks in places where the application can recover from an unknown error. Depending on your IDE there might also be a way to have it warn when a function or constructor can throw an exception that isn’t handled.

4

The real danger with exceptions in constructors is that the destructor will not be called. This is true no matter how nicely you handle the error in the surrounding code. That means if your class does anything that requires teardown and then encounters an exception later in the constructor, your destructor will never run. (The destructors of fully constructed member objects are still run, though. So if you use a raw new you have problems; if you use std::vector you’re fine.)

#include <iostream>
#include <stdexcept>

class MyObj {
    char *myArray;
public:
    MyObj(int size);
    ~MyObj();
};

MyObj::MyObj(int size) {
    std::cout << "Constructor called!" << std::endl;
    myArray = new char[size];
    throw std::runtime_error("Oh no!");
}

MyObj::~MyObj() {
    if (myArray) {
        std::cout << "Destructor called!" << std::endl;
        delete[] myArray;
        myArray = nullptr;
    }
}

int main()
{
    try {
        MyObj example(10);
    } catch (...) {
        std::cout << "Handled error nicely!" << std::endl;
    }
}

Output:

Constructor called!
Handled error nicely!

We didn’t free myArray! Not good! So you shouldn’t throw an error once you’ve already done something that requires your dtor to be called.
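A minimal sketch of the RAII alternative mentioned above: if a std::vector owns the memory, the same constructor can throw without leaking, because the destructors of fully constructed members still run during unwinding. (The class is renamed here only to avoid clashing with the example above.)

#include <stdexcept>
#include <vector>

class MyObjRaii {
    std::vector<char> myArray;   // owns its memory; freed automatically
public:
    explicit MyObjRaii(int size)
        : myArray(size)          // member is fully constructed first
    {
        throw std::runtime_error("Oh no!");
        // ~MyObjRaii() is still never called, but myArray's own destructor
        // runs during stack unwinding, so nothing is leaked.
    }
};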

Another issue is that constructors are often called in initializer lists. Suppose you have a constructor that might error:

class SubObj {
public:
    SubObj(int size);
};

SubObj::SubObj(int size) {
    if (size < 0) {
        throw std::runtime_error("Oh no!");
    }
}

class MyObj {
    SubObj mySubObj;
public:
    MyObj(int size);
};

MyObj::MyObj(int size) : mySubObj(size) {}

How do you handle this? It turns out there is a way, but it’s likely unfamiliar syntax to most developers, and there’s no way to try/catch just a single initializer in the initializer list.

MyObj::MyObj(int size)
try : mySubObj(size) {
    // The entire constructor body is enclosed in this block.
}
catch (...) {
    // Note: when control reaches the end of this handler, the exception is
    // automatically rethrown; a constructor's function-try-block cannot
    // swallow the error, it can only translate or log it.
}

5

…one of the coding rules they have is: “an object constructor must never fail” (i.e., never throw)

I think the best way to understand these rules is to separate “an object constructor must never fail” from your interpretation, “i.e., never throw.” Let’s start with “a constructor must never fail” and we’ll tie it to “never throw” later.

First off, permitting failure is mandatory in programming. There’s a theorem, Rice’s Theorem, which says that a semantic property (such as “failing”) cannot be decided for all programs (unless it’s a trivial property, which we won’t worry about here). So if proving that your program doesn’t fail is important (and in enterprise settings, it absolutely is), it is essential that the company rephrase “failure” in a syntactic way. We cannot prove directly that a program cannot “fail”, but we can prove that a particular boolean will never be false if we’re really careful with our rigor.

At the top levels, this sort of mathematical proof is essential. I must prove that my server will not crash, or I must prove that my server will not leak sensitive customer information. Of course, we also have users inputting incorrect things (and sometimes adversaries intentionally constructing malformed requests). So what do we do? We end up needing to prove that the failing behavior is handled. Instead of “open” being defined as a function which opens a file, we re-define it to be a function which opens a file if it exists, or provides a way to notify the caller that the file was not opened. As part of the verification of that caller, they’ll look at your “open” API, and make sure the caller handled both the “open a file” and “don’t open a file” paths.

It’s not so much that things cannot fail, which would be an affront to Rice’s theorem. Instead, it is that a thing must handle any failures in a well-defined way, and then that is thought of as “handling an issue” rather than “failing.”

So how do we do this? Your company is using a common approach: returning booleans. This is, arguably, the obvious solution. Every function returns a success/fail indicator, and the caller checks it.

You suggest another obvious alternative, exceptions. These are a big deal in many languages. With exceptions, functions are not required to return a success/fail indicator, and the caller is no longer required to check it. Instead, all of these checks can be “lumped together,” into catch blocks.

Your argument appears to be that these catch blocks are superior. And as someone who uses both approaches, I can see why. It’s nice to be able to write a bunch of code that doesn’t worry about failures, and “wrap” it in a single try/catch block which handles all of my errors. It’s easier to see that that catch block is in place.

However, the devil is in the details. What does it mean to find yourself in a catch block? It means you have executed an “exceptional control flow” that jumped you to your catch block, possibly from inside a sub-sub-sub function. What do you know at that time? You know the contents of the exception. Nothing else.

So now you are in an environment where you must tell a caller enough information to let them “not fail,” by handling the errors. What can you tell them? Can you tell them enough? Let’s give an example. You have a function that opens a file, reads in some numbers, and then does some arithmetic on them. This arithmetic may divide by zero, and you choose to report this via an exception. The caller gets a DivideByZero error.

Is the file still open? Could I possibly run out of file handles because your function kept opening files and not closing them?

This is a surmountable problem. You can write exception handling systems which provably handle all of these details correctly. But it turns out to be hard. Much harder than you might think, near the start of your career. It’s not bad for a 1000 line program, but as you get to 100,000 or a million lines of code, it gets downright difficult. So in high-reliability situations, it is not uncommon to fall back on the uglier and more verbose approach of if-statements. That approach typically permits easier proofs of correctness.

So now we can get back to the idea of constructors not throwing exceptions. We see that this isn’t actually an attribute of constructors at all. It’s an attribute of the entire approach to providing guarantees about the behavior of functions. So how do we get to their statement, “Constructors must never fail”? Well, how do you report the failure? A constructor cannot return a boolean to indicate failure. And we just discussed that it can’t throw an exception. What is it to do?

Well, the only remaining answer is to always put the class into a meaningful state. I’ve seen code with plenty of mIsValid Booleans sprinkled through it, which are set to false if construction “failed.” If your company defines “fail” such that setting an “invalid” flag isn’t “failure,” because you handled it in a defined, documented way, then their position is consistent. Or one can go down the path it sounds like they are taking, where the constructors trivially cannot fail and there is a separate init function which can fail (and can return a boolean).

And this is why we get to the situation you describe. Good job for getting to the end of this post. I wanted to be long-winded to describe the full thought process. I think the issues regarding exceptions are obscured by lots of people who know the rules for handling failures, but not the full rationale behind them. This can lead to lots of false starts where an individual will tell you “we don’t do X because of Y,” but Y is a total red herring. They know they need junior developers to follow the rules, but they may not really understand why. To understand why, one has to go all the way up to the senior staff who are held responsible for the success of a company’s products, and ask what their needs are. Then one has to go all the way down to the junior developer fresh out of school, and ask how we ensure their product meets the senior staff’s needs. It’s a big round trip!

3

A related topic is 2-phase construction, e.g.
https://stackoverflow.com/questions/3021572/two-phase-construction-in-c

There are sometimes reasons not to rely on exception handling, and there is some language support in C++ to ensure that the actual return value of an init (or factory) method isn’t ignored:

https://stackoverflow.com/questions/76489630/explanation-of-nodiscard-in-c17
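As a small illustration of [[nodiscard]] (the Connection/Init names here are made up; attach the attribute to whatever your init or factory functions are):

#include <string>

class Connection {
public:
    // [[nodiscard]] makes the compiler warn when a caller ignores the result.
    [[nodiscard]] bool Init(const std::string& address) {
        m_address = address;
        return !address.empty();
    }

private:
    std::string m_address;
};

int main() {
    Connection c;
    c.Init("db.local");  // warning: ignoring return value of 'nodiscard' function
}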

However, as the other answers have also indicated: if there is an error you need to deal with it at some point, and moving where that point is doesn’t necessarily reduce the complexity.

0

Throwing an exception from inside a constructor is just an absolute pain. Say you have a local variable holding an instance whose constructor threw. What’s the state of that instance? Or an instance in dynamic memory: if the constructor throws, what happens to the memory? You really need to read the C++ standard very, very, very carefully.

And that’s easily avoided by having a factory creating instances. Or a class method if you prefer that (which then is just a simple factory). That code allocates an instance with a simple constructor that doesn’t throw, and then calls instance methods to set up the object the way you like it. So you avoid having to know the intricacies of the language and having to deal with them correctly.

At that point, if there are problems, you can return an instance where invalid() returns true, or deallocate the instance and return null, or throw an exception. If you think the problem is due to a programming error, assert.

If your colleagues don’t like exceptions but you use them, they will ask you to remove them. That’s not a hill you want to die on. Personally I’d prefer returning a null pointer because the caller cannot ignore that, but that’s just me. And whatever you do, make sure an instance is always in a state where the destructor can be called safely.
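That null-returning flavour could look roughly like this (a sketch only; whether you hand back a raw pointer, a std::unique_ptr as here, or something else depends on the codebase’s ownership conventions):

#include <memory>

class MyObj {
    double m_size;
    explicit MyObj(double size) : m_size(size) {}   // trivial, never throws
public:
    double GetSize() const { return m_size; }

    // Factory: returns an owning pointer, or nullptr if the input is invalid.
    static std::unique_ptr<MyObj> Create(double size) {
        if (size <= 0) return nullptr;
        return std::unique_ptr<MyObj>(new MyObj(size));
    }
};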

Anyway, I don’t mind how many exceptions you throw – just don’t throw them from constructors if you value your sanity! Calling a function that either returns a constructed instance, returns NULL, or throws an exception is much healthier, just because of the C++ language.

Initially I wanted to comment, but I guess I’ll post an answer after all, albeit one that tackles the problem from a completely different angle.

There seems to me to be a very fundamental difference between the mindsets of SW engineers and businesspeople.

Many SW engineers demand, first and foremost, that the application’s functionality is well defined. Therefore, if the app behaves in a way that violates the developers’ assumptions (e.g., a ‘boneheaded’ exception is thrown, in Eric Lippert’s nomenclature), then the app MUST crash. (The problem should be logged and fixed ASAP; the app may be immediately relaunched; QA efforts and good programming patterns should ensure such issues are rare; nonetheless, bugs are never completely avoidable, and if one is detected at runtime, the app MUST crash.)

There are many good reasons for these demands, many were posted on this site many times already, I’m not going to reiterate them here.

Rather, I want to present the opposing mindset and show that it also has some merits.

From my experience, businesspeople often demand the direct opposite: that the app must NEVER crash. This doesn’t include only managers of the software house, who could be accused of trying to sweep problems with their app under the carpet. Crucially, it also includes clients.

The reasoning is that malfunctioning is partial functioning, which is still functioning. Crashing, on the contrary, is no functioning at all. If the client depends on the app working – if the client’s operations cannot continue without the app working – then the app not working is about the worst thing that can happen to the client. In such a case the client cannot continue their operations until a fix arrives. This is most unacceptable.

If, however, the app ‘just’ malfunctions, then the client is inconvenienced, but they can continue operations. Bugs can often be worked around. Data corruption can be fixed manually, especially since data corruption due to bugs tends to be somewhat localized.

Yes – someone can say that in case of a malfunction the app must crash but be immediately relaunched, therefore providing the best of both worlds. However, bugs often occur deterministically at well-defined places. For example, until a fix arrives, the app always crashes when the client attempts to process a given order. Then the client cannot process the order at all. If, however, the app allowed the client to process the order but corrupted its data, then the client could ‘just’ manually fix the corrupted data and continue operating.

Real-life example. We were making an app that, in the background, was supposed to transfer orders from an e-commerce platform such as WooCommerce to an ERP system. The intended end users were small stores without dedicated IT specialists. They were supposed to be able to look only at their ERP system for new orders, rather than at the ERP system as well as manually checking the e-commerce platform.

Assume that, due to a bug in our application, a single problematic order deterministically crashes our service. The problem is logged, the application is restarted automatically, but then it crashes again because of this buggy order. Best-case scenario, this order is never transferred. Worst-case scenario, no other order is transferred either. This is the worst thing that can happen: the very purpose of the app is to transfer orders. Yes, the crashes are logged, but no one looks at logs (remember, the small store does not have a dedicated IT team). If the client’s staff never see the order in their ERP system, they never act on it. The consequences for a store that ignores a (paid) order may be dire. The consequences for a store that ignores all orders may be even more dire. The vendor will probably need a few days, if not weeks, to fix the bug; this means that the store is effectively closed, without notice to buyers, for weeks. This is a tragedy that must be avoided at all costs.

Now assume that, on the other hand, the order is transferred, but in a buggy way. It does appear in the ERP system, but its data is corrupted and is visibly garbage. Then the store employees are at least alerted that an order arrived, and they can manually look at their e-commerce system to get the non-corrupted data. Alternatively, the data is subtly corrupted, for example the price is wrong. Then the worst-case scenario is that a corrective invoice will have to be issued with an apology to the buyer. This is strictly preferable to an order not appearing at all.

I could not pierce through this mentality. There were non-negotiable demands that all orders must always be transferred, no matter what, and that I must avoid crashes at all costs; no reason was good enough to crash the app. Again: yes, I should strive to avoid bugs. However, trying my best to avoid bugs was not considered a good enough reason to crash the app if a bug happened nevertheless, since bugs can never be completely avoided. Everyone knew that, so everyone was aware that it is not reasonable to demand that programmers write code in such a way that no bugs can ever happen.

From the comment under the OP:

@sayanel, the counter-argument for point 3 is that future programmers might forget to call isValid on an object that isn’t valid and as a result cause a malfunction of the program. That malfunction could be less obvious but more devastating than an “oops” message to the user and it includes corrupting the user’s critical data. – Bart van Ingen Schenau 23 hours ago

Well, no. Many times businesspeople explicitly prefer malfunction to a crash, as I explained above.

I don’t necessarily support the above mindset. Nonetheless, in my experience, it is quite common and I believe it does have its merits that tend to be overlooked by developers insisting that apps should always crash to prevent possible malfunctions.

9

I’m personally on “Team no-Throw”, so I would not rely on exception handling in this situation.

I would do one of two things:

  1. Do as the team suggests and create the object in an error state that can be checked.

  2. Alternatively, use a factory-style function to create the object, and return null if it fails (the function validates input). You can check for null on the return to know if it’s in an error state and handle things from there.

Sometimes, when we try to improve an approach to work well for even more use cases, we lose track of the first goal and end up becoming the very thing we swore to destroy. It seems like this has happened in your company.

This requires some autopsy, so I’ll address points from your question in order:

Then, the object should be constructed anyway and there should be another way to check if the object is valid, like an isValid() method.

It’s subjective whether that is the right approach or not. It is not necessarily a problem, but I do want to point out that a significant number of people disagree with this approach, so it’s not as clear-cut as your company is making it out to be.

This is one part business context, one part subjective coding style.

  1. When something throws, at some point in the call stack it should be handled (using try/catch).

Ideally, yes, exceptions should be caught. The application blowing up due to a foreseeable mistake is not good quality.

Foreshadowing: notice the stressed part in the above.

  2. If a throw is not caught, stack unwinding will go all the way to the top, at which point it will encounter a global catch block that will do something like this: display the message “Something went horribly wrong”, try to generate a crash report, and shut down the app.

I would put an asterisk on having a global catch-all that catches everything without even considering if it can do anything with it. It’s contextual.

If there is a security concern, such as not leaking implementation information to an untrusted consumer (which is often the case for web-based APIs), yeah a last line of defense that catches any exceptions that bubble to the top, logs them, and returns an uninformative 500 Something went wrong to the requester is perfectly okay.

However, this does not seem to be your scenario, as you’re (a) dealing with a desktop app and more importantly (b) are intending to still crash your app and are just logging details before you do so.

Be very careful about this catch-all making your developers lazy, not bothering to see if specific exceptions can be handled more gracefully at another level. The existence of a last line of defense should not lead you to ignore all other defenses.

I suspect that your company may have run afoul of the above advice, and their “no exceptions” rule is in response to developers never really considering if there’s an appropriate catch for thrown exceptions. It just sounds like it.

  3. We cannot be sure that future programmers will use your object correctly and handle all edge cases. If someone uses my object, forgets to add a catch, and it passes through testing, we are at risk of a “Something went horribly wrong” message in production, and this should be avoided at all costs.

This is where it goes severely off the rails. At its very core, the issue here is the expectation that an implementation is aware of and responsible for any and all of its consumers.

The designer of the throw should not be aware of who is using their design, nor whether and how they choose to handle a raised exception. That’s the entire point of an exception: the consumer gets to opt in to handling specific exceptions, or otherwise let them bubble up further (in cases where they can’t meaningfully handle them anyway).

Dictating that the design of the class should be different because you can’t universally prevent the consumer from doing something wrong with it is asinine. This is arguing that we shouldn’t have forks because someone might stab themselves with one. Not only is that helicopter management, it also misses the point that stabbing things is the explicit purpose of a fork.

Stepping out of the analogy, giving your consumer the ability to choose to handle the exception or not is inherently part of the purpose of an exception in the first place. Anyone who suggests that potentially not catching an exception is the reason to disallow throwing them does not understand exceptions.

Instead, we have a lot of if (myobj.isValid()) everywhere, and empty constructors alongside init(....) methods, which I have a feeling is a bad pattern.

As unforgiving as I was in the previous paragraph, I’m going to call for nuance here. While throwing exceptions can be a very useful practice, it should not be done for expected outcomes. Exceptions are, by their very definition, expressing an exceptional scenario.

With regards to object construction, the question is who is constructing the object. For example, if you are deserializing JSON from an external source, then receiving bad data (i.e. structurally sound but not passing an arbitrary validation rule) is well within reasonable expectations and therefore throwing an exception would not be a great way to handle this.

On the other hand, if this class is only being instantiated by your own private domain logic, then any bad values being passed into it are a sign of a significant flaw in your domain logic, which is something you want to have fail loudly, because if your domain logic is not doing what you think it does, that can have significant impact on your application’s behavior.
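A sketch of that distinction (ParsePrice and Price are invented names; the boundary function reports bad external input as “no value”, while the domain type treats a bad argument as a bug):

#include <cassert>
#include <optional>
#include <string>

// Internal domain type: callers are our own code, so a bad value is a bug.
class Price {
public:
    explicit Price(double amount) : m_amount(amount) {
        assert(amount >= 0 && "domain logic passed a negative price");
    }
    double Amount() const { return m_amount; }
private:
    double m_amount;
};

// Boundary with the outside world: bad input is an expected outcome,
// so it is reported as an empty optional rather than as an exception.
std::optional<Price> ParsePrice(const std::string& text) {
    try {
        double value = std::stod(text);           // throws on non-numeric input
        if (value < 0) return std::nullopt;
        return Price(value);
    } catch (const std::exception&) {
        return std::nullopt;
    }
}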

Anyone who tries to tell you that the approach should be a universal one without even investigating the context, is not a good source of information and advice.

What would be a better approach, that allows programmers to throw when needed, and ensure that we never trigger the ultimate “Something went wrong” fail-safe?

As established above, I disagree with the premise that we should always have perfect coverage in conscious catching of specific exceptions; by the very nature of what an exception is.
As established slightly further above, it can make sense to at the same time have a catch-all last line of defense in cases where you’d otherwise be leaking sensitive information to people who should not be privy to exception details.

A more healthy flow would be something along the lines of:

  • Design phase – do we throw an exception?
    • Is the bad path a “normal” outcome or is it describing something unreasonable?
    • Can we continue with the current situation or has it become impossible to deliver what the consumer expects?
  • Consumer phase – do we handle any exceptions?
    • Which exceptions are being thrown?
    • Can we meaningfully respond to (some of) these scenarios and handle them better than letting them bubble up?
  • Security phase – do we need to stop the bubbling up?
    • Is the eventual recipient of the exception information a trusted actor, or should we block this information from reaching them?

After all this has been considered, if there’s an exception that bubbles up and crashes the app, there are only three possible remaining explanations.

  • Maybe this is a new exception that didn’t exist before. The solution is to evaluate it using the above guideline.
  • Maybe we made a mistake in how we handle this scenario. The solution is to re-evaluate it.
  • Maybe our evaluation is correct, and no one along the way has any way of handling this elegantly. The solution here is that there is no better solution available than what you already have.

Your company’s policy is making simplifying assumptions that lead to wrong decisions.

That the user sees a “Something went horribly wrong” message is irrelevant. What’s relevant is whether you can reasonably prevent something from actually going horribly wrong. Preventing a message from popping up doesn’t help with that in any way.

It’s actually the opposite: in any big-enough project, allowing objects that are not actually valid (as seems to be your company’s “solution”) is virtually guaranteed to cause things to go horribly wrong later on, because now no consumer of an object can assume that it’s valid. Every single function that the object is passed to needs to check its validity and have a work-around for the !is_valid() case, and it’s virtually guaranteed that some callers will forget that and break.

A better solution is to make the constructor of your class Foo private (so that it can’t be called from outside the class) and delegate construction to a static factory method that returns a std::optional<Foo>. Any code that constructs an instance of Foo would:

  • call that factory method
  • check the return value for std::nullopt, and handle that case
  • extract the Foo from the std::optional<Foo> and proceed with only using the Foo object, which is guaranteed to be valid (because there’s no way to construct an invalid Foo)

This way, all code that just uses a Foo can rely on the fact that all Foos are valid. Only code that wants to construct a Foo has to verify that the Foo was indeed constructed correctly. You’ll need code reviews and tests to ensure that that’s indeed the case, but this is still much safer than also having to check all users of Foo.
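Put together, a sketch of that shape might be (the validity rule on Foo is invented purely for illustration):

#include <optional>
#include <string>

class Foo {
public:
    // The only way to obtain a Foo, so an invalid Foo can never exist.
    static std::optional<Foo> Create(std::string name) {
        if (name.empty()) return std::nullopt;   // invented validity rule
        return Foo(std::move(name));
    }
    const std::string& Name() const { return m_name; }

private:
    explicit Foo(std::string name) : m_name(std::move(name)) {}
    std::string m_name;
};

void Use(const Foo& foo) { /* downstream code may assume foo is valid */ }

// Only the construction site deals with the "might not exist" case:
bool Register(const std::string& input) {
    std::optional<Foo> foo = Foo::Create(input);
    if (!foo) return false;   // handle the failure right here
    Use(*foo);                // everything past this point gets a valid Foo
    return true;
}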

If your constructor can never throw, then presumably it is assumed that the arguments passed to it will never be invalid.

So assert that to be the case. Then if your assumption is wrong, the program will terminate immediately instead of entering an undefined state. At that point, you can fix the bug and move on.
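For the size example from the question, that could be as simple as this (a sketch; note that assert typically compiles away in release builds, so if you need the check in production, use whatever assertion or contract mechanism your project relies on):

#include <cassert>

class MyObj {
public:
    explicit MyObj(double size) : m_size(size) {
        // Precondition: callers guarantee a positive size.
        assert(size > 0 && "MyObj requires a positive size");
    }
    double GetSize() const { return m_size; }
private:
    double m_size;
};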

8

The need to throw exceptions in a constructor is often the result of a poorly designed class, meaning that your class does not accurately represent the kind of objects that you’re intending to represent. Ideally, you should write your class in such a way that all possible combinations of the values taken by the class fields represent a valid instance of your class.

For instance, one way to represent objects that have a start date and an end date as part of their defining characteristics looks something like this:

public class Foo {
   private Date startDate;

   private Date endDate;

   public Foo(Date startDate, Date endDate) {
      if (startDate.after(endDate)) {
         throw new StartDateBiggerThanEndDateException();
      }

      this.startDate = startDate;
      this.endDate = endDate;
   }

   public Long getDuration() {
      // Duration in milliseconds.
      return endDate.getTime() - startDate.getTime();
   }

}

In this example, you throw an exception whenever someone tries to create an instance with a start date that is later than the end date.

You can rewrite this class to prevent having invalid state:

public class Foo {

  private Date date1;
   
  private Date date2;

  public Foo(Date date1, Date date2) {
     this.date1 = date1;
     this.date2 = date2;
  }

  // DateUtils stands in for a small helper providing min/max/diff on Dates.
  public Date getStartDate() {
     return DateUtils.min(date1, date2);
  }

  public Date getEndDate() {
     return DateUtils.max(date1, date2);
  }

  public Long getDuration() {
     return DateUtils.diff(getEndDate(), getStartDate());
  }

  @Override
  public boolean equals(Object other) {
    // null and type checks omitted for brevity
    Foo otherFoo = (Foo) other;
    return this.getStartDate().equals(otherFoo.getStartDate())
        && this.getEndDate().equals(otherFoo.getEndDate());
  }

}

Of course, depending on the specific use case, such rewrites are not always as straightforward, but they can significantly cut down on the number of exceptions that you’ll otherwise have to handle.

16
