There are various metrics like “test case effectiveness”, calculated as (total number of bugs found / total number of test cases executed).
While this produces some numbers early in a product’s life, test runs on a mature product usually find no bugs. Then I get 0/100 = 0%, and reporting 0% test case effectiveness makes no sense.
How can I work with these metrics to get meaningful data?
Turn it on its head. Change the formula to:
100% − (bugs found / tests run). Call it build stability or something like that.
Then you’ll be able to tell the relevant stakeholders that this build was 100% stable with regard to the things you know you need to test. However, this breaks down when people find bugs no one has ever seen, or bugs you don’t have test cases for.
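As a minimal sketch of the inverted formula (function and parameter names are mine, not from the answer), the metric could be computed like this:

```python
def build_stability(bugs_found: int, tests_run: int) -> float:
    """Build stability as a percentage: 100% minus the fraction
    of executed tests that uncovered a bug."""
    if tests_run == 0:
        raise ValueError("no tests were run")
    return 100.0 * (1 - bugs_found / tests_run)

# A mature build where no bugs are found now reads as a positive result
# instead of a meaningless 0% effectiveness:
print(build_stability(0, 100))  # 100.0
print(build_stability(5, 100))  # 95.0
```

The point of the inversion is purely presentational: the same raw numbers now trend toward 100% as quality improves, which is easier to report to stakeholders.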
First off, congratulations on working hard enough on quality that this is a problem.
The most effective way of dealing with this problem, if you’re lucky enough to have it, is to seed the code under test with known bugs (a practice sometimes called defect seeding or bebugging).
The testers and anyone coaching them shouldn’t know this is happening or how many bugs have been introduced. Typically the number is randomized in some way. This lets you test your testers: you get an objective X-out-of-Y score.
Not something to do every time, but it keeps the testers alert and lets them show off their skills even while the programmers are doing their darnedest to put them out of a job.
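A sketch of how the seeded-bug scoring could work, under my own assumptions (all names are hypothetical; in practice the “bugs” would be real code changes, not strings): the number of injected bugs is randomized, and the score is simply how many of the seeded bugs the testers report.

```python
import random

def seed_bugs(known_bugs: list[str], rng: random.Random) -> list[str]:
    """Pick a randomized subset of candidate bugs to inject, so neither
    the testers nor their coaches know how many are in play."""
    count = rng.randint(1, len(known_bugs))
    return rng.sample(known_bugs, count)

def score(seeded: list[str], reported: list[str]) -> tuple[int, int]:
    """Objective 'X out of Y' score: seeded bugs found vs. bugs seeded."""
    found = len(set(seeded) & set(reported))
    return found, len(seeded)

rng = random.Random()
seeded = seed_bugs(["off-by-one", "null-deref", "race", "leak"], rng)
found, total = score(seeded, ["race", "off-by-one", "typo"])
print(f"testers caught {found} of {total} seeded bugs")
```

One design note: reports of bugs that were *not* seeded (like "typo" above) are deliberately excluded from the score, since those are genuine finds rather than part of the exercise.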