A typical waterfall project has well-defined phases that run from the initial idea all the way to putting the solution in production. The most typical of them are:
- Requirements gathering
- Design of the solution / Architecture
- Implementation / Development
- Testing phase (including all kinds of tests)
- User Acceptance Test
- Go-Live (deployment to production)
- Post Go-Live support
In big companies it's very common to have a functional team responsible for each phase. Thus, we get:
- Business Analysts to gather requirements
- Architects, Systems Engineers or Software Analysts to design the solutions
- Programmers or coders to implement them
- Quality Testers or Quality Assurance engineers to check its quality
- The customer to test the delivered solution
However, one of the big problems with this approach is that these teams usually work in silos. The right hand doesn't know what the left hand is doing, and this causes inefficiencies. To make it worse, sometimes the attempt to reduce silos comes through an immense increase in bureaucracy, forcing teams to communicate through documentation.
Requirements are gathered by the Business Analyst team and then handed over to the Development team and the QA team. The development team produces code to meet the requirements, and the QA team produces a Test Plan, that is, a set of test cases that try to verify that these requirements have been implemented successfully.
However, there's not necessarily a real connection between the produced code and the test cases. There's no easy way to determine how much of the code has really been tested, whether we've gone through all the branches and checked all the conditions. And this is where Functional Test Coverage comes into play.
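To make the branch question concrete, here is a minimal, self-contained sketch of how line coverage can be observed. The `shipping_fee` function and its single "functional test" are invented for illustration; a real project would use a dedicated coverage tool, but the underlying idea (record which lines actually ran, compare against all executable lines) is the same. This sketch uses Python's `sys.settrace` hook:

```python
import sys
import inspect

def shipping_fee(weight_kg):
    # Hypothetical function under test with two branches.
    if weight_kg <= 1.0:
        return 2.50                               # light-parcel branch
    return 2.50 + (weight_kg - 1.0) * 1.10        # heavy-parcel branch

executed = set()

def tracer(frame, event, arg):
    # Record every line executed inside shipping_fee.
    if event == "line" and frame.f_code is shipping_fee.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
shipping_fee(0.5)      # the only "functional test" we run: a light parcel
sys.settrace(None)

# Compare the lines that ran against all executable lines of the function.
source_lines, first_line = inspect.getsourcelines(shipping_fee)
executable = [
    first_line + i
    for i, line in enumerate(source_lines)
    if line.strip() and not line.strip().startswith(("def", "#"))
]
covered = len(executed & set(executable))
total = len(executable)
print(f"{covered}/{total} executable lines hit")
```

With only the light-parcel test, the heavy-parcel branch never runs, so the report shows a coverage gap. That gap is exactly what functional test coverage is meant to expose: a requirement may look "tested" on paper while whole branches of the code behind it were never executed.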
Typically, when we talk about code coverage we mainly refer to unit test code coverage, that is, the result of running code coverage tools against unit tests. However, in siloed organizations developers have no incentive to unit test their own code, because they wrongly assume that the QA department will do that for them. Hence the unit test coverage may be very low or non-existent.
At the same time, unit test coverage is of little help to QA engineers working in a silo. QAs don't care much about how much code has been unit tested, because they don't know which functionalities those tests cover, so they will test everything on their own, probably using a black-box testing approach.
Functional test coverage
Basically, what is needed is to visually show the QA team how much code has been executed during the functional tests, and to foster communication between the development and testing teams. These teams need to get together, look at the graphs, and determine which areas of the system have been sufficiently tested and which require new tests to cover them.
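As a sketch of what such a joint review might look at, the snippet below merges hypothetical per-module line coverage collected from the developers' unit tests and from QA's functional tests, and reports the lines that no suite has touched. The module names and line sets are made up for illustration:

```python
# Hypothetical coverage data: module name -> set of executed line numbers.
unit_cov = {"billing": {10, 11, 12}, "auth": set()}        # developers' unit tests
functional_cov = {"billing": {10, 12}, "auth": {40, 41}}   # QA functional tests
all_lines = {"billing": {10, 11, 12, 13}, "auth": {40, 41, 42, 43}}  # executable lines

report = {}
for module, lines in all_lines.items():
    # Union of everything either suite executed, then the remainder.
    combined = unit_cov.get(module, set()) | functional_cov.get(module, set())
    missed = sorted(lines - combined)
    report[module] = (100 * len(combined) // len(lines), missed)
    print(f"{module}: {report[module][0]}% covered, never executed: {missed}")
```

A report like this gives developers and QAs a shared artifact to discuss: lines that neither suite ever executed are exactly the areas that need a decision, new tests, removal, or deeper investigation.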
Moving forward, QA test cases should not be based only on the requirements. They should also be mapped against the code that presumably covers them... and if there are gaps, Developers and QAs should get together to identify them and decide whether:
- It's dead, unused code that should be removed or refactored away.
- It's some functionality that QAs haven't managed to test yet, and new test cases should be added.
- It's hard-to-test functionality (exceptions, complicated sad paths) that requires deeper investigation.