Software Development and Continuous Integration in the Cloud

James Bromberger

 

The Status-quo

In the IT industry, the software development cycle has long been plagued by quality issues. Sometimes this is due to a lack of complete and clear requirements; sometimes it is developer capability, the number of developers, or testing capacity. The list of excuses for why deadlines go whistling past is a long one.

But in today’s world, there are numerous techniques that remove as many of those excuses as possible.


 

The Solution

The Modis team on this project identified early the need to manage code and automate various elements of its lifecycle. A number of tools were used which really should be part of any developer’s toolkit. Many of them are Open Source, and thus free to run.

The Modis team comprised multiple groups of experts: a set of Architects, a pair of core Development teams, a Software Test team, and a DevOps team. The teams themselves were located on the customer’s site (with direct access to the customer’s Subject Matter Experts, Legal team, and other staff), across Australia, and overseas.

Revision Control Management

A key underpinning of any software project is capable Revision Control that permits teams to collaborate effectively. Historically these have been centralised systems such as CVS or Subversion. This changed in April 2005, when two separate Open Source “distributed” revision control solutions were released: Git and Mercurial. Being distributed gave more reliability than a single central point of control, plus the flexibility for people to work offline from their teams and return with code when ready. In this case, the team settled on Mercurial, with the undertaking that all code, images, configuration and templates would be committed to the revision control repository.

As with any large project, the work is broken down into distinct milestones that need to be met, ordered according to many contributing priorities. When the first code is committed, it lands on a Milestone Branch in the Revision Control system, and there may be several Milestones being worked on simultaneously.

Over time, teams working on large components or complex changes may wish to work in isolation from other contributions, forming their own Feature Branch off the milestone branch. This protects the project by not introducing any updates (commits) that may contain partial changes until the break-away team is ready for their work to be integrated back.
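
As an illustration only, that branch-and-merge workflow maps onto a handful of Mercurial commands. The branch names below are hypothetical, and the small Python wrapper simply shells out to the hg client on a machine that already has a working copy.

    # Illustrative sketch only: hypothetical branch names, assumes the "hg"
    # client is installed and we are inside a Mercurial working copy.
    import subprocess

    def hg(*args):
        """Run a Mercurial command and stop if it fails."""
        subprocess.run(["hg", *args], check=True)

    # Start a Feature Branch off the current Milestone Branch.
    hg("update", "milestone-2")              # hypothetical milestone branch
    hg("branch", "feature-bulk-import")      # the next commit opens the feature branch
    # ... the break-away team edits files and commits on the feature branch ...
    hg("commit", "-m", "First commit on the bulk import feature branch")

    # When the feature is ready, integrate it back into the milestone branch.
    hg("update", "milestone-2")
    hg("merge", "feature-bulk-import")
    hg("commit", "-m", "Merge bulk import feature into milestone-2")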

Mastering this revision control is a key part of managing a project’s assets, and teams need to coordinate who is working on which branch, and when they plan to merge back.

Continuous Integration

The most effective way to reduce software bugs is to test code while the developer is still writing it, giving them feedback while the problem they are trying to solve is still in front of them.

Most developers commit their software to the revision control repository once it should at least be syntactically valid for the language it is written in (even though its behaviour may not yet be correct). Thus, the first test that a Continuous Integration (CI) pipeline should perform is exactly that: does it compile? This is a pretty low bar to start with, but it is a solid base from which to start the road to automated testing.

However, it’s up to you what tests you run in your CI pipeline. Some tests will be unit tests within the code base (if I call a function “add_one” with an argument of “5”, does it return “6”?), while others require additional services to be running for the test to succeed (if I make a network request to resource X, do I get back the desired data Y, within Z milliseconds?).
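
To make the first kind concrete, here is a minimal unit test sketch in Python, using the “add_one” example above; the function itself and its module layout are illustrative rather than taken from the project.

    # Minimal unit test sketch for the "add_one" example above.
    import unittest

    def add_one(value):
        """The function under test: returns its argument plus one."""
        return value + 1

    class AddOneTest(unittest.TestCase):
        def test_add_one_returns_six_for_five(self):
            # If I call add_one with an argument of 5, does it return 6?
            self.assertEqual(add_one(5), 6)

    if __name__ == "__main__":
        unittest.main()   # a CI pipeline would run this on every commit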

A CI server is a process that masterminds all of this. There are a number of them available, some Open Source, some commercial. They provide a way to create pipelines of work and to run them on an automatic schedule, by polling for events that should start a pipeline, or on a manual trigger. Each run is logged, and notifications can be sent to the teams to tell them what happened.

The team used CI/CD servers to perform a number of tasks, under a number of triggers (a simplified sketch follows the list):

  • Automatic code compilation and unit testing, upon each commit
  • Automatic code review, through static analysis, on a daily basis
  • Automatic integration testing, on a daily basis
  • Preparation and distribution of a new release (if it passes the above automatic tests)
  • Deployment of a release to an environment
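
The exact configuration depends on the CI product chosen; purely as a sketch, the list above boils down to an ordered set of stages that the server runs and reports on. The Maven commands and the deployment script below are placeholders for whatever the project actually uses, and real CI servers add scheduling, commit triggers, history and notifications on top.

    # Sketch of what a CI server automates: run each stage in order,
    # stop on the first failure, and report the outcome.
    import subprocess, sys

    STAGES = [
        ("compile and unit test", ["mvn", "-B", "test"]),         # placeholder build command
        ("static analysis",       ["mvn", "-B", "sonar:sonar"]),  # placeholder analysis command
        ("package release",       ["mvn", "-B", "package"]),
        ("deploy to environment", ["./deploy.sh", "dev"]),        # hypothetical deployment script
    ]

    for name, command in STAGES:
        print(f"=== {name} ===")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Stage '{name}' failed; notify the team and stop the pipeline.")
            sys.exit(result.returncode)

    print("All stages passed; release candidate is ready.")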

Static Analysis

Examining the code without actually running it can also be useful.

This examination can sometimes be just about the style of the code, without caring about the statements and expressions used: is it formatted in a consistent style, with single spaces around operators, and so on? While this may seem trivial, having a consistent style across team members helps them all read each other’s code.

Recommended coding style may also change over time, with new releases of programming languages and changes in industry practice. To pick up these improvements, the team also maintains and updates the analysis tools themselves.

“Coverage” is another piece of analysis performed. Sometimes this is Documentation Coverage, Comment Coverage, or Unit Test Coverage. If there are 10 functions, but only 9 of them have unit tests, then our unit test coverage is 90%.

Static analysis can also help you avoid security issues. Knowing the language, one can determine what the variable names are. If a variable’s name is akin to “password”, is it assigned a static string within the code, or is it read from configuration or the environment? Detecting embedded credentials is critical: no software should ship with passwords. Likewise, tools can examine recognised statements and ensure that we’re using them in the preferred way. For example, “if not $foo is Null” and “if $foo is not Null” may work the same, but the second is the more desirable way of expressing it.
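
As a toy illustration of that credential check (not one of the project’s actual tools), Python’s own ast module is enough to flag a string literal being assigned to a variable whose name looks like a password, without ever running the code:

    # Toy static-analysis check: flag string literals assigned to
    # password-like variable names, without executing the code.
    import ast

    SUSPICIOUS = ("password", "passwd", "secret", "api_key")

    def find_embedded_credentials(source, filename="<string>"):
        findings = []
        for node in ast.walk(ast.parse(source, filename)):
            if (isinstance(node, ast.Assign)
                    and isinstance(node.value, ast.Constant)
                    and isinstance(node.value.value, str)):
                for target in node.targets:
                    name = getattr(target, "id", "")
                    if any(word in name.lower() for word in SUSPICIOUS):
                        findings.append(f"{filename}:{node.lineno}: '{name}' assigned a literal string")
        return findings

    sample = 'db_password = "hunter2"\nhost = read_config("host")\n'
    print("\n".join(find_embedded_credentials(sample, "settings.py")))
    # -> settings.py:1: 'db_password' assigned a literal string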

With some programming languages, one can take hints from the language itself as to the health of the code. In Java, some methods (functions) are marked as deprecated, indicating to developers that the language will remove that method at some time. Clearly these deprecated methods are something to avoid, and static analysis can report on this.

Humans are typically lazy, and the best developers are the ones who try to be as lazy as possible by using solid libraries of code that already exist rather than re-inventing the wheel! However, bugs are found in many of these standard libraries, and over time they get fixed. Static analysis can also look at the versions of the libraries currently used in the code base, warn about known bugs in those versions, and tell developers which libraries need attention.

With the advent of containerisation, it is also fairly easy to stand up even complex environments for functional integration testing. This is not performance testing (that comes later), but simply checking that the components of a solution work well together. Again, a number of tools exist for this, and triggering it from the CI/CD service as required means the results feed the same notifications.
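
Purely as a sketch, and assuming Docker Compose with a compose file describing the dependent services (the Maven profile name is also a placeholder), the daily integration job might do little more than the following:

    # Sketch of a daily integration-test job: stand up dependent services in
    # containers, run the integration suite against them, then tear down.
    import subprocess

    try:
        subprocess.run(["docker", "compose", "up", "-d"], check=True)
        status = subprocess.run(["mvn", "-B", "verify", "-Pintegration-tests"]).returncode
    finally:
        subprocess.run(["docker", "compose", "down", "-v"])

    raise SystemExit(status)   # non-zero exit fails the CI job and triggers notifications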

Release Preparation

We run segregated environments to ensure the independence and security of our workloads. Development resources definitely cannot access higher environments, and each higher environment can only access published release artefacts from the environment immediately below it.

When testing has completed and a release has been deployed, it is made available for higher environments to adopt should they wish. The development environment cannot push that artefact to be deployed outside of its own environment.

Deployment Testing

Standing up a complex environment can be time consuming. The approach has been to dynamically generate templates of the desired deployment components, and have those templates in turn generate the physical resources. In this case, the tools generated AWS CloudFormation templates to do the following (sketched in miniature after the list):

  • launch EC2 Auto Scaling groups
  • create Security Groups with tight referential security between components
  • create RDS database instances with custom Parameter Groups, replication, and read replicas
  • create S3 buckets with lifecycle policies, versioning, and VPC-restricted access
  • automatically create large numbers of alarms on CloudWatch metrics to notify the DevOps team of operational issues
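
As a much-reduced sketch of that template generation (the real templates are generated from the project’s own models and cover far more resources; the bucket, dimensions, topic ARN and thresholds below are placeholders), a small script can emit CloudFormation JSON for, say, a versioned S3 bucket and a CloudWatch alarm:

    # Much-reduced sketch of dynamic template generation: build a CloudFormation
    # template as plain data structures and emit it as JSON.
    import json

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "ReleaseArtefactBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "VersioningConfiguration": {"Status": "Enabled"},
                    "LifecycleConfiguration": {
                        "Rules": [{"Id": "ExpireOldArtefacts",
                                   "Status": "Enabled",
                                   "ExpirationInDays": 90}],
                    },
                },
            },
            "HighCpuAlarm": {
                "Type": "AWS::CloudWatch::Alarm",
                "Properties": {
                    "Namespace": "AWS/EC2",
                    "MetricName": "CPUUtilization",
                    "Dimensions": [{"Name": "AutoScalingGroupName", "Value": "app-asg"}],
                    "Statistic": "Average",
                    "Period": 300,
                    "EvaluationPeriods": 3,
                    "Threshold": 80,
                    "ComparisonOperator": "GreaterThanThreshold",
                    "AlarmActions": ["arn:aws:sns:ap-southeast-2:111122223333:devops-alerts"],
                },
            },
        },
    }

    print(json.dumps(template, indent=2))   # feed the output to CloudFormation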

Automated Functional Testing

With additional tools the team created end-to-end regression tests to validate all core functionality of the deployed platform. This simulated a set of simultaneous real-world users all performing complex operations, to ensure things like resource contention did not adversely affect individual client performance.

Active Security Testing

Some testing can only be performed once the application is fully deployed. After a deployment test, third-party security testing suites can run their test cases and inspect the responses, comparing them against tell-tale signs of misconfiguration or indicators of vulnerabilities. A key undertaking here is that the tools themselves need to be kept constantly up to date with what vulnerabilities look like and what application behaviour is recommended, as this changes over time.

As an example, in web application development there were recommendations to restrict page framing by way of the relevant HTTP response headers. These modifications were integrated into the source code, and the warning went away. Key to this is a set of security test cases that are constantly updated with new warnings for the latest known vulnerabilities in the code or in the underlying libraries and components.
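
A trivial, stand-alone illustration of that kind of response inspection (real third-party suites go much deeper, and the URL here is a placeholder) is to fetch a page from the deployed environment and report on missing protective headers:

    # Toy response-inspection check: fetch a page and report any missing
    # protective headers. Real security suites run far broader test cases.
    import urllib.request

    EXPECTED_HEADERS = [
        "X-Frame-Options",            # restricts page framing
        "Strict-Transport-Security",  # enforces HTTPS
        "Content-Security-Policy",    # restricts where content may be loaded from
    ]

    def check_headers(url):
        with urllib.request.urlopen(url) as response:
            for header in EXPECTED_HEADERS:
                value = response.headers.get(header)
                if value is None:
                    print(f"WARNING: {url} response is missing {header}")
                else:
                    print(f"OK: {header} = {value}")

    check_headers("https://uat.example.com/")   # hypothetical UAT endpoint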

Load Testing

We’ve all heard the excuse that “the User Acceptance Test environment is not the size of the Production environment”, so performance is always going to differ. On this project, the UAT environment IS the same size as production, but its hours of operation are not!

Most production environments work 24x7, while the UAT team, the testers and the developers often only work across a normal day. The team took advantage of the Public Cloud deployment model, using Amazon EC2 with Auto Scaling, to dynamically size non-production environments down to zero application servers outside of the desired hours. The impact was immediate: the cost savings on even a small micro-services architecture were impressive. With reduced cost came the ability to size the UAT environment just like production, but pay much less for it due to its reduced duty-cycle.
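
One way to achieve this (a sketch using boto3; the group name, fleet sizes and cron expressions are placeholders) is a pair of scheduled scaling actions on the Auto Scaling group:

    # Sketch: scale a non-production Auto Scaling group to zero outside
    # working hours and back up each morning. All values are placeholders;
    # scheduled-action recurrence is evaluated in UTC by default.
    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale to zero application servers at the end of each working day.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="uat-app-servers",
        ScheduledActionName="uat-scale-down-overnight",
        Recurrence="0 10 * * 1-5",     # placeholder: local end-of-day, expressed in UTC
        MinSize=0, MaxSize=0, DesiredCapacity=0,
    )

    # Scale back up to the production-sized fleet each working morning.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="uat-app-servers",
        ScheduledActionName="uat-scale-up-morning",
        Recurrence="0 22 * * 0-4",     # placeholder: local start-of-day, expressed in UTC
        MinSize=4, MaxSize=4, DesiredCapacity=4,
    )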

With a production-sized environment, tests can then bombard it with many simulated client requests and measure the responses under load. Furthermore, production-like fail-over events can be tested, measuring time-to-(automated-)recovery and any impact of such a disaster.
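
As a very small example of the idea (the real load tests used dedicated tooling and far larger volumes; the endpoint and request counts here are placeholders), many concurrent clients can be simulated with a thread pool while response times are recorded:

    # Minimal load-test sketch: fire concurrent requests at the deployed
    # environment and summarise response times.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://uat.example.com/api/health"   # hypothetical endpoint
    CLIENTS, REQUESTS_PER_CLIENT = 20, 50

    def one_client(_):
        timings = []
        for _ in range(REQUESTS_PER_CLIENT):
            started = time.monotonic()
            with urllib.request.urlopen(URL, timeout=10) as response:
                response.read()
            timings.append(time.monotonic() - started)
        return timings

    with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        all_timings = sorted(t for client in pool.map(one_client, range(CLIENTS)) for t in client)

    median = all_timings[len(all_timings) // 2]
    p95 = all_timings[int(len(all_timings) * 0.95)]
    print(f"{len(all_timings)} requests, median {median * 1000:.0f} ms, "
          f"95th percentile {p95 * 1000:.0f} ms")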

Leveraging the Cloud

In some scenarios, a CI server may launch additional server instances (VMs) to task them with specific workloads, and then terminate them; in this case, the CI server is orchestrating the Cloud (within limits). This lets the CI pipeline scale to even higher throughput of builds per hour.
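
A sketch of that orchestration with boto3 follows; the AMI, instance type and tag values are placeholders, and in practice the CI product’s own cloud integration usually manages the agent lifecycle.

    # Sketch: a CI job launches a temporary build agent, uses it, then
    # terminates it so no agents are left running.
    import boto3

    ec2 = boto3.client("ec2")

    launched = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder build-agent image
        InstanceType="c5.large",
        MinCount=1, MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Purpose", "Value": "ci-build-agent"}],
        }],
    )
    instance_id = launched["Instances"][0]["InstanceId"]

    try:
        # ... hand the instance to the CI server to run its assigned build ...
        pass
    finally:
        ec2.terminate_instances(InstanceIds=[instance_id])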

At the end of the automated CI pipeline there should be feedback loops to developers, testers, and release managers, all of whom can correct their implementations while any changes are fresh in their minds.

Continuous Delivery

At the end of our pipeline of CI testing, the output is a reasonably consistent set of release candidate builds. These can then be published for higher environments to retrieve and deploy. Keeping the release pipeline in a good state helps ensure that a critical bug needing not just an immediate code fix, but immediate deployment to production, can always land in a code branch that is in good shape.
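
Purely as a sketch of that publication step (the bucket, key and artefact path are placeholders), the development environment’s pipeline pushes a release candidate to an artefact store that higher environments are permitted to read; in line with the segregation described earlier, it publishes artefacts but cannot deploy beyond its own environment.

    # Sketch: publish a release-candidate artefact for higher environments
    # to retrieve. Bucket, key and local path are placeholders.
    import boto3

    s3 = boto3.client("s3")

    version = "1.4.0-rc2"                                        # hypothetical release candidate
    s3.upload_file(
        Filename=f"target/application-{version}.jar",            # placeholder build output
        Bucket="example-release-artefacts",                       # placeholder artefact bucket
        Key=f"release-candidates/{version}/application.jar",
    )
    print(f"Published release candidate {version}")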


 

The Outcomes and Benefits

By using these tools and techniques, the Modis team managed to retain SQALE quality ratings of “A” for the code base, and to keep vulnerabilities and critical and severe issues to a minimum. The benefit has been more secure code, quicker delivery, and fewer defects, leading to a reduced time to benefit for the customer.

All of this has been made possible by close attention to technical detail, broad knowledge of tooling, process automation, and a deep knowledge of the customer’s business, allowing the team to repeatedly deliver quality software.