Responsible: MSS

Estimated time for POC: 8 MD

The content on this page is based on the [[http://sbforge.statsbiblioteket.dk/display/NAS/Release+tests+screaming+for+automation|Release tests screaming for automation]] posting.

== Background ==

The current manual release test is much too heavy for a project the size of NAS. We need to automate some of the tasks used to QA the system on release, to avoid having the release test halt progress on the project for long periods of time.

A detailed breakdown of the POC can be found here: [[http://sbforge.statsbiblioteket.dk/jira/browse/NARC-23|POC automate system test]]

== Problems with the Current Setup ==

The current !NetarchiveSuite release test is quite large and manually run, which creates a number of problems:

* The tests are very time consuming. Without anything concrete to base this on, I would guess NAS uses around a man-month on each release test. This takes a heavy chunk out of the time of the roughly 3-4 allocated developer resources.
* The precision of the manual tests is low: the manual process of reading through many pages of 'do this and verify that' is error prone and very difficult to reproduce in a consistent manner.
* The test specifications are not very precise. Like nearly all documentation written to be read by another human being, the test specifications are open to interpretation. This again means that the tests will be run in slightly different ways each time they are run.
* Lots of redundancy/inconsistency: a lot of the same test functionality can be found in different tests. Small variations have been introduced across these similar bits, probably because of historical copy/pastes. This phenomenon often arises from attempts to 'code' extensive functionality through documentation.
* Tests are rarely run: because of the time-consuming nature of the current acceptance/release tests, they are only run at the end of iterations, and often only a subset of the tests is run. This means it can be difficult to pinpoint what caused a problem, because the code may have changed significantly since the functionality was last verified.
* Drains the team's motivation: running extensive manual tests can be very boring and drain the motivation of the development team.

== Advantages of a Shift to Automatic Release Tests ==

* The automated parts of the release test would provide the test status free of cost at the end of an iteration. E.g. in I43 I spent the better part of 3 days trying to get a picture of the state of the unit tests (TEST10), whereas in I44 the reference test result could be read directly from the Hudson continuous integration server.
* Test specifications written directly as code are very precise, and tests can therefore be run in a very consistent manner (see the sketch after this list).
* It is much easier to reuse code, and thereby avoid redundancy and inconsistency, than with manual test specifications.
* Tests can be run on a continuous integration server, providing fast feedback on changes which cause the acceptance tests to break. The current unit test suite is a good example of the value of such quick feedback: after the unit tests were added to the continuous integration server, they could be used to get a real-time, reference (unit-tested) status of the !NetarchiveSuite code. This in turn leads to much quicker detection and fixing of broken commits, which again enables faster code changes and more aggressive design maintenance (refactorings).
* The resources of the development team can be switched from laboring through exhausting manual tests to improving the quality of the tests. Stress testing, performance testing, regression testing, multiple-platform testing, etc. become feasible because of the cheapness of running tests.
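To make the 'tests written as code' idea concrete, here is a minimal sketch of what a single automated check could look like: a plain JUnit test that verifies a deployed web GUI answers HTTP requests, suitable for unattended runs on a continuous integration server such as Hudson. The class name, URL and port below are made up for illustration; they do not refer to an existing NAS test or deployment.

{{{#!java
import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

/**
 * Minimal smoke test checking that a deployed test instance's web GUI
 * answers HTTP requests. Meant to run unattended on a continuous
 * integration server after each deployment.
 */
public class GuiSmokeTest {
    /** Hypothetical address of the test deployment's web GUI. */
    private static final String GUI_URL = "http://testhost:8074/HarvestDefinition/";

    @Test
    public void guiFrontPageResponds() throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(GUI_URL).openConnection();
        connection.setConnectTimeout(10000); // fail fast if the GUI is down
        connection.setRequestMethod("GET");
        try {
            assertEquals("The GUI front page should answer with HTTP 200",
                    HttpURLConnection.HTTP_OK, connection.getResponseCode());
        } finally {
            connection.disconnect();
        }
    }
}
}}}

A real release test would of course exercise actual harvesting and archiving workflows, but even a trivial check like this, run after every deployment, gives the fast, consistent feedback described above.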
== Problems Associated with a Shift to Automatic Release Tests ==

* The implementation and maintenance of automatic system tests is very time consuming, and this will (initially) have an impact on the progress of new features.
* Automatic tests may miss some of the bugs which would have been detected by a human manually validating the functionality (on the other hand, the automatic tests will probably find problems more consistently).

== Conclusions ==

The current release test setup consumes more and more of the project's development resources, so it is difficult to see how we can avoid focusing on automating the release testing process if we wish to keep growing NAS with new functionality. We should be able to implement the initial POC automatic system test in a week's time, which would give a good indication of the costs and benefits of automatic system testing. Further automation of the release tests will be a time-consuming process, but it can be implemented on a cost/benefit basis as the need arises.

All in all, the automation of the release test can be regarded as a long-term investment in the efficiency of the project, which will hopefully, in the near future, significantly increase the development team's ability to focus on feature development instead of manual tests, administration, and uncovering deeply hidden bugs.