I am a
C# developer, and my team and I want to start writing
automated functional tests. (Note that we don't want unit tests;
we intend each test to exercise one function point.)
Our software has several modules, but they all share a common shape: one input (a file, a string, or a database row) and one or more outputs (a file, a database row, or both in some cases).
One problem is that the configuration is stored in the database, and there is a lot of it; the software needs all of it to run correctly, and someone could change a setting and break the tests.
In my opinion, we have an excellent Test team, and I'm looking for an approach where the developers write the tests but the Test team can also help develop them.
- The Test team develops the automated tests based on their test cases – is that a good approach?
- I know that if creating the test scenarios is too hard, the Test team won't see any reward. So what's a good approach (in
C#) to developing automated tests that both teams (development and test) can help improve?
As a developer, I agree that developers can certainly help testers automate their tests.
I believe, however, it is best to let the testers scope, plan, and design their test environment and test cases by and for themselves, free of developer bias.
Separation of concerns. Segregation of responsibility.
I would recommend looking at Cucumber with Gherkin. It is based on the idea that testers or business experts write semi-formatted, human-readable tests, and developers then create bindings between those tests and the application. In the ideal situation, the bindings will be generic enough that test cases can be written or changed without developer involvement.
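To make that concrete, here is a sketch of what a Gherkin scenario might look like for one of the file-in / database-out modules the question describes. The module, file, and table names are placeholders, not anything from the real system:

```gherkin
# Hypothetical feature file; testers write these, developers bind the steps.
Feature: Invoice import
  Scenario: A valid invoice file creates one database row
    Given the configuration profile "baseline" is loaded
    And an input file "invoice-001.txt" containing one valid invoice
    When the import module processes the file
    Then the Invoices table contains one row for invoice "001"
    And an acknowledgement file is written to the output folder
```

Each `Given`/`When`/`Then` line maps to a step binding that a developer implements once; after that, testers can recombine the steps into new scenarios on their own.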
There's a lot of configuration – well, you need to simplify that to get reproducible results. That means setting up pre-configured packs (or profiles) of configuration settings that can be re-applied at any time.
Then you can give your test team a consistent set of configuration and allow them to reset everything to a known baseline.
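A minimal sketch of the profile idea, assuming nothing about your schema: the setting names are hypothetical, and an in-memory dictionary stands in for the real configuration tables in the database.

```csharp
using System;
using System.Collections.Generic;

// Sketch of configuration "profiles": named packs of settings that can be
// re-applied at any time to reset the system to a known baseline.
static class ConfigProfiles
{
    // Stand-in for the database-backed configuration store.
    static readonly Dictionary<string, string> LiveConfig =
        new Dictionary<string, string>();

    // Named profiles; "baseline" is what every test run starts from.
    static readonly Dictionary<string, Dictionary<string, string>> Profiles =
        new Dictionary<string, Dictionary<string, string>>
        {
            ["baseline"] = new Dictionary<string, string>
            {
                ["ImportFolder"] = @"C:\tests\input",  // hypothetical setting
                ["MaxRetries"]   = "3",                // hypothetical setting
            },
        };

    public static void Apply(string profileName)
    {
        LiveConfig.Clear(); // drop whatever a previous test (or a human) changed
        foreach (var kv in Profiles[profileName])
            LiveConfig[kv.Key] = kv.Value;
    }

    public static string Get(string key) => LiveConfig[key];

    static void Main()
    {
        LiveConfig["MaxRetries"] = "99"; // someone tampered with the config...
        Apply("baseline");               // ...so reset to the known profile
        Console.WriteLine(Get("MaxRetries"));
    }
}
```

In the real system `Apply` would issue UPDATEs against the configuration tables (or restore a saved snapshot), but the shape is the same: every test starts by applying a named profile, so a stray manual change can never leak into the results.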
As for your question – isn't that what the test team does anyway? Have you asked them to collaborate with you and help you become better testers?
You could adopt a library that breaks testing into two parts: the high level test logic and the low level implementation.
For example, Robot Framework lets you specify test cases using high-level keywords. I've successfully worked on a team where developers implemented the keywords (in Python, though Robot supports .NET, Java, and other languages), and the testers used those keywords to create their tests.
Not only is this a good way to leverage the skills of both testers and developers, it also gives testers an opportunity for career growth, because they can learn a programming language and write keywords of their own.
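For a feel of the split, here is a hypothetical Robot Framework test case; every keyword below is an invented example that a developer would implement once, in Python or .NET:

```robotframework
*** Test Cases ***
Valid Invoice File Creates One Database Row
    [Documentation]    Keyword names are placeholders, not a real library.
    Load Configuration Profile         baseline
    Place Input File                   invoice-001.txt
    Run Import Module
    Database Should Contain Invoice    001
```

Testers compose cases like this from the keyword catalogue without touching the bindings, which is exactly the division of labour described above.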
There are other frameworks that work in a similar fashion. Three that readily come to mind are Cucumber, SpecFlow (essentially, Cucumber for .NET), and FitNesse.