Usually the projects we work with have a continuous integration process that includes the following steps: compile, build, code check (this step is usually referred to as "test", but I don't like this name), and sometimes package. So they are doing continuous integration and continuously checking the code, period. The problem is that they want to do continuous testing, I mean not only basic unit tests: they want to continuously perform functional non-regression tests on every build. And this is another story, because it usually requires a fully operational testing environment to exercise the application at runtime (which is what "real" functional tests need). Between the time a build is ready for testing and the time it can actually be functionally tested, lots of operations are conducted manually: deployment tasks, database and platform setup, production data copies, startup of shared services, etc. Some managers don't realize that they must fully automate the deployment to the testing environment if they want to integrate functional regression tests into the continuous integration process. So the first point was to stress fully automating the deployment process.
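To make this concrete, here is a minimal sketch of what "fully automated deployment" could look like as a script callable from a CI job. The hostname `test-server`, the paths, and the helper script `reset_test_db.sh` are hypothetical placeholders, not taken from any real project; the point is only that every manual step becomes one scripted, fail-fast command.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of an automated deployment to a testing environment."""
import subprocess
import sys

# Ordered deployment steps; each one was previously done by hand.
# Hosts, paths, and scripts below are illustrative assumptions.
STEPS = [
    # Copy the freshly built package to the test server.
    ["scp", "build/app.war", "test-server:/opt/tomcat/webapps/"],
    # Reset the test database from a reference dump.
    ["ssh", "test-server", "/opt/scripts/reset_test_db.sh"],
    # Restart the shared services the application depends on.
    ["ssh", "test-server", "systemctl restart tomcat"],
]

def deploy() -> None:
    for cmd in STEPS:
        print("running:", " ".join(cmd))
        # Fail fast: a broken deployment must stop the workflow immediately,
        # otherwise the regression tests would run against a stale build.
        subprocess.run(cmd, check=True)

if __name__ == "__main__":
    try:
        deploy()
    except subprocess.CalledProcessError as exc:
        sys.exit(f"deployment failed: {exc}")
```

Once every step is a command like these, the regression workflow can call the script unattended, which is the precondition for scheduling it at all.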
But this is not the major issue to be addressed. Another point that I stressed was about not breaking the benefits of continuous integration. Continuous integration is based on quick feedback to the developers: the quicker the feedback, the better the fix. Anything that increases the delay between the time a developer checks his code into the source repository and the time he gets notified of a failed build jeopardizes the success of continuous integration. The thing is that functional regression testing is time consuming, and comes after deployment (as noted above), which can be even more time consuming. So it does not look like a good option to automatically perform functional regression tests in the same workflow as continuous integration. Besides, it does not make sense to functionally regression test every build.
The solution that I advised is to create workflows that trigger one another, providing increasing levels of verification without breaking continuous integration. The target organization splits the job into at least two workflows, but there can be more (a sketch of the chain follows the list below):
- the classical continuous integration workflow (compile, build), triggered as soon as a new piece of code is introduced into the repository. It is usually extended with code checks (unit tests, code rules, ...), which I advise as long as the whole workflow does not exceed 10-20 minutes.
- an "intermediate" workflow between continuous integration and the regression workflow, so that tasks like packaging, documentation generation, installation, or even some code verification can be removed from the continuous integration workflow and included here. It should be triggered as soon as a continuous integration workflow has completed successfully (successful build).
- a regression workflow that deploys to the test platform and runs the functional regression tests there, triggered at most once a day (say at night), and more probably once a week depending on the development cycles and team size.
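The sketch below illustrates the chaining, under stated assumptions: the step functions are stubs standing in for real CI jobs, and in practice the trigger logic would live in the CI server (Jenkins, GitLab CI, ...) or a scheduler rather than in one script.

```python
"""Illustrative sketch of the three chained workflows described above."""
import time

def step(name: str) -> bool:
    """Stub for a real CI job; always succeeds in this sketch."""
    print(f"[{time.strftime('%H:%M:%S')}] {name}")
    return True

def continuous_integration() -> bool:
    # Fast feedback loop: keep total run time under 10-20 minutes.
    return step("compile") and step("build") and step("unit tests / code rules")

def intermediate_workflow() -> bool:
    # Triggered only by a successful CI run; holds the slower tasks.
    return step("package") and step("generate docs") and step("deeper code checks")

def regression_workflow() -> bool:
    # Triggered at most once a day (e.g. nightly), never on every commit.
    return step("deploy to test platform") and step("functional regression tests")

if __name__ == "__main__":
    # On every commit: CI first, then the intermediate workflow if CI passed.
    if continuous_integration():
        intermediate_workflow()
    # Scheduled separately (nightly cron or CI scheduler, simulated here):
    regression_workflow()
```

The key design choice is that each workflow only consumes the successful output of the previous one, so a slow stage can never delay the developer-facing feedback of the first stage.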
To sum up, the two key recommendations were:
- fully automate the deployment process
- separate functional regression tests into a workflow other than the classical continuous integration workflow, and trigger it at most once a day
After delivering this piece to my client, I started thinking that the name "continuous integration process", as used in the software programming field today, is really a bad name. But that is another story.