We have System A (an application and a database) that was built for a specific business department and therefore has a business-aligned data model and table structure.
System A is a mission critical application.
Downstream systems, as part of their business processing flows, retrieve data from System A via stored procedures.
We have developed System B, which has a more generic data model and table structure, to replace System A. System B aims to service other business departments as well. Once System B goes live, System A's downstream systems will re-point their database connections to System B.
The stored procedures used by System A's downstream systems were also rewritten in System B. Their signatures (input parameters and returned result sets) were retained in order not to impact the downstream systems retrieving data from System A.
What would be the testing required for System B?
Here are my thoughts:

1. To completely guarantee that all features and business logic in System A have been implemented, and implemented correctly, in System B, all test cases for System A have to be executed against System B.
2. Downstream systems should test all business processing flows that are impacted by the stored procedure rewrites.
An architect is pushing back, saying there is no need for downstream systems to execute #2 and that only a comprehensive black-box/white-box/unit test of each stored procedure needs to be done by the developer who rewrote it (since the stored procedure signatures shouldn't have changed). Is this a logical approach to testing the stored procedures, or is this testing method flawed?
Any thoughts on the testing approach above especially #2?
I am assuming that the test suite for System A has good test coverage.
That may be a good assumption. But you know what they say about assumptions…
A more fundamental question to ask is: what are the implications of failures or bugs? The severity of bugs (in terms of business impact), the likelihood of them being noticed at all (an unnoticed $1 rounding error on a transaction that happens 100,000 times a day is costing you $100,000 a day), and your ability to resolve them should largely dictate your testing approach.
For example, if all your failure cases will be easily noticed, easily fixed, and have minimal impact on your business, a comprehensive integration test matters far less than it does in the opposite situation.
This is not to say you should simply not care about bugs. But if your A --> B switch will affect 50,000 employees and could potentially shut them down for several days, it is far more important to test comprehensively than it is for a system that will affect 5 people for a few minutes at a time.
All this is to say: your actual business impact should guide your decision here, so determine that first.
My perspective would be that if you have any nontrivial business impact, you should run your downstream system(s)' full set of integration tests against System B, assuming you have a good way to swap database dependencies (this is pretty much exactly the type of situation that makes that capability desirable). This is probably the best-case scenario: running all of those integration tests against your new system.
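As a rough sketch of what "a good way to swap dependencies" might look like, assuming the downstream systems read their connection settings from configuration (the `TARGET_SYSTEM` variable and the connection strings below are hypothetical, not from your setup):

```python
import os

# Hypothetical connection strings; in practice these would live in config
# files or a secrets store, not in code.
CONNECTION_STRINGS = {
    "system_a": "Server=sysA;Database=DeptDB;",
    "system_b": "Server=sysB;Database=GenericDB;",
}

def get_connection_string() -> str:
    """Pick the backing system from the environment so the same
    integration-test suite can be pointed at System A or System B
    without any code changes."""
    target = os.environ.get("TARGET_SYSTEM", "system_a")
    return CONNECTION_STRINGS[target]
```

With something like this in place, a downstream team can run its existing integration suite once with `TARGET_SYSTEM=system_a` and once with `TARGET_SYSTEM=system_b` and compare the outcomes.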
Assuming non-trivial business impact, you will probably have a defined "go live" date. Because of this, I would definitely suggest running at least some integration tests against a sandbox instance of System B before that date. Again, depending on your business impact, I would try to identify the key transactions with significant business impact and at the very least spot-check them.
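One cheap spot check, given that the procedure signatures are supposed to be retained, is a contract test on the result-set shape. This is only a sketch: the procedure name and column list are made-up examples, and in a real test the actual columns would come from your database driver's cursor metadata after calling the procedure against System B.

```python
EXPECTED_CONTRACTS = {
    # procedure name -> ordered (column_name, type_name) pairs, captured
    # from System A. "usp_GetOpenOrders" is a hypothetical example.
    "usp_GetOpenOrders": [("OrderId", "int"), ("CustomerId", "int"),
                          ("Amount", "decimal"), ("OpenedOn", "datetime")],
}

def check_contract(proc_name, actual_columns):
    """Return human-readable mismatches between the result-set shape
    System B produced and the shape System A's consumers expect."""
    expected = EXPECTED_CONTRACTS[proc_name]
    problems = []
    if len(actual_columns) != len(expected):
        problems.append(f"{proc_name}: expected {len(expected)} columns, "
                        f"got {len(actual_columns)}")
    for (exp_name, exp_type), (act_name, act_type) in zip(expected, actual_columns):
        if exp_name != act_name:
            problems.append(f"{proc_name}: column {exp_name!r} renamed to {act_name!r}")
        elif exp_type != act_type:
            problems.append(f"{proc_name}: column {exp_name!r} changed type "
                            f"{exp_type} -> {act_type}")
    return problems
```

Note that this only verifies the shape of the contract, not the data, which is why it complements rather than replaces the downstream integration tests.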
Something I do not see mentioned in your test planning is verifying that your data migration process worked as intended, so I would make sure to do some sort of spot checking on this. You may be able to combine the functionality testing and the data-migration testing, perhaps even just running the same queries against A and B and verifying they return matching results.