I am working on the frontend of a product, and it is now broken due to bad data from the backend.
This kind of problem can be caught internally before it gets anywhere near production, but in the meantime we end up twiddling our thumbs while the backend is being fixed. (Hence why I’m here…)
I have a hackish fix that does a sanity check, in this particular instance, on the data being received; does this imply I should add checks for all data?
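To illustrate, here is a minimal sketch of the kind of sanity check I mean. The `Product` shape and its field names are made up for the example:

```typescript
// Hypothetical payload shape; the real one is more complex.
interface Product {
  id: number;
  name: string;
  price: number;
}

// Returns the payload if it passes a basic sanity check, otherwise null,
// so the UI can fall back to an error state instead of crashing.
function sanityCheckProduct(data: unknown): Product | null {
  if (typeof data !== "object" || data === null) return null;
  const d = data as Record<string, unknown>;
  if (typeof d.id !== "number") return null;
  if (typeof d.name !== "string" || d.name.length === 0) return null;
  if (typeof d.price !== "number" || d.price < 0) return null;
  return { id: d.id, name: d.name, price: d.price };
}
```

Multiplying this across every endpoint is the work I’m trying to decide whether to take on.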
Honestly, how much do you trust the data from the backend after this experience? I guess not 100%, so adding some additional checks might be worth it to catch potential bugs.
However, we do not know your product, we do not know what happens when your frontend breaks, and we do not know what financial risk is at stake when your users see the kind of error message your product shows now with “bad data”. We also do not know whether your product becomes totally unusable, or whether only a minor feature stops working. And we do not know whether sanitizing “all data” means a few hours of additional work or a three-month delay in delivery. But these are the factors you have to weigh when asking yourself “shall I add only a few checks?” versus “shall I sanitize all input data like hell?”.
This is only about functionality, not security:
Only trust data from the backend when it’s the same system/solution. What that means:
If you have an application in something like Meteor / Node.js with Angular, or a Microsoft solution with both client and server in one application, you can trust it.
That is because it’s one solution, deployed and tested as a whole, using the same delivery pipeline. In that case, if something goes wrong, your integration / end-to-end tests are wrong. You fix them and you are stable again.
When the backend is a different application (say, a separate PHP app), possibly from a different supplier or team, treat it as an external source and validate as much of the data as possible. Why? Because that backend can change, and they might not notify you, so your software may do the wrong thing based on their errors and changes.
Data should be validated at the source, and it would be nice if a routine only ever received valid data. In the real world, though, that is seldom the case. Even from reliable sources, data is seldom 100% as pure as falling snow. Doc hits some key points; I will just add a question to reinforce them: how bad is it if the data is not clean? What are the consequences? The worse the results of processing invalid data can be, the more imperative it is that data be sanitized before processing, no matter how much you trust the source. As the consequences of processing bad data go down, the more you can accept data without validation from sources you judge reliable.
I generally trust data from the back-end and distrust data from the front-end. If the back-end is returning bad data, it likely got there from a broken front-end. If you can’t trust your back-ends, you are unlikely to be able to provide reliable results.
Data coming from systems should in general be validated. How detailed this validation is depends on the type of data. Some of the validations I use include:
- type: This may be sufficient for many values.
- range: Often applies to numeric values.
- length: Often applies to strings.
- valid values: Less common.
- cross-field checks: e.g., country = US with state = Yukon is invalid.
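The checks above can be sketched as one validation function. This is an illustrative assumption, not code from any real system; the `OrderForm` shape, field names, and the abbreviated state/province sets are all made up:

```typescript
interface OrderForm {
  country: string; // e.g. "US" or "CA"
  state: string;
  zip: string;
  quantity: number;
}

// Abbreviated sets for the sketch; a real app would list them all.
const US_STATES = new Set(["AK", "AL", "CA", "NY", "TX"]);
const CA_PROVINCES = new Set(["BC", "ON", "QC", "Yukon"]);

function validateOrder(o: OrderForm): string[] {
  const errors: string[] = [];
  // type: the compiler checks static types, but runtime payloads may still lie
  if (!Number.isInteger(o.quantity)) errors.push("quantity must be an integer");
  // range: numeric value must be within bounds
  if (o.quantity < 1 || o.quantity > 999) errors.push("quantity out of range");
  // length: string constrained by length
  if (o.country === "US" && o.zip.length !== 5) errors.push("US zip must be 5 digits");
  // valid values + cross-field: the state must belong to the given country
  if (o.country === "US" && !US_STATES.has(o.state))
    errors.push(`"${o.state}" is not a US state`);
  if (o.country === "CA" && !CA_PROVINCES.has(o.state))
    errors.push(`"${o.state}" is not a Canadian province/territory`);
  return errors;
}
```

Returning a list of errors rather than failing on the first one makes it easier to report everything that is wrong with a payload at once.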
Assuming you are referring to a typical client-server application: it’s always a good idea to implement sanity checks and input validation on BOTH sides (client and server). This gives you an additional layer of checks at each side of the application.
Now, in the case of your example: if you believe the backend data is somehow wrong and you always have to wait for a fix to return the front-end to a good working state, then it would be better to put some of the development effort into producing helpful error states within the application, instead of it “just being broken” or relying on hacky fixes. A helpful error state also lets you perform further root cause analysis.
What I mean by helpful error states is a model that captures the root cause of the issue while displaying a relevant contextual message to the user on the front-end. For example, when it’s broken and a customer is looking at the screen, say “Temporarily unavailable, please try again in X hours…” or something similar; but when a developer looks at it, make sure it carries enough error detail that they can investigate the issue or follow up with the right people.
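A rough sketch of that two-audience error model, with names and messages invented for the example (not a real API):

```typescript
// One captured root cause, two renderings: friendly for customers,
// detailed for developers and error trackers.
interface ErrorState {
  code: string;        // stable identifier for follow-up
  detail: string;      // technical root cause, for developers/logs
  userMessage: string; // contextual message shown to the customer
}

function badDataError(field: string, received: unknown): ErrorState {
  return {
    code: "BACKEND_BAD_DATA",
    detail: `Field "${field}" failed validation; received: ${JSON.stringify(received)}`,
    userMessage: "This feature is temporarily unavailable. Please try again later.",
  };
}

// The UI shows userMessage; detail goes to the console or error tracker.
function report(e: ErrorState, isDeveloper: boolean): string {
  return isDeveloper ? `[${e.code}] ${e.detail}` : e.userMessage;
}
```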
Well, checking data received in the front end can be a good thing, to give the user quick feedback when there’s an error.
Still, this is more often than not overkill, and it can be dangerous: if the specs change and the validation changes with them, you might end up checking data one way in the front end and another way in the back end, which creates inconsistencies.
I don’t agree with
the backend team broke the frontend because of some bad data being returned to us
though; they broke the backend there. The front end shouldn’t be impacted at all even if the data is bad, since it’s only a presentation layer for the data.
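In practical terms, a presentation layer can degrade gracefully instead of breaking when it receives bad data. A minimal sketch, with a made-up `renderPrice` helper:

```typescript
// Render a placeholder instead of letting bad backend data crash the view.
function renderPrice(price: unknown): string {
  return typeof price === "number" && Number.isFinite(price)
    ? `$${price.toFixed(2)}`
    : "N/A"; // degrade gracefully on missing or malformed values
}
```

With this approach, one bad field dims one widget rather than taking down the whole page.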