How to Test Microservices

An explanation of microservices software architecture, and tips on how to test microservices without traditional testing methodologies.

July 23, 2018
Tamas Cser

Few of you can have failed to hear about microservices recently. Although not new (Google and Amazon have been using them for over a decade), microservices have become the architecture of choice for an increasing number of applications. For big corporations like Google, they make sense because they simplify the process of spreading work across multiple teams. For startups, they can also make sense, since many Platform-as-a-Service (PaaS) providers offer ready-built microservices that you can simply plug together and go.

In this two-part blog, we’ll look at microservices architectures, show why traditional testing methodologies probably won’t suffice, introduce some simple tricks to help you plan your testing strategy and then look at how automation can help.

Microservice architectures

Traditionally, computer applications were built on a monolithic architecture. That is, the program existed as a single large executable. Think of Microsoft Office and the like. The growth of web and mobile applications started to erode that model. Take a mobile application as an example. While the front-end is still a monolithic code bundle, most apps rely on a number of backend services to work. In the early days, these services were often developed as a monolithic backend, integrating databases, application logic, and an API into a single block of code to run in the cloud. This model was often based on the classic LAMP software stack.

The microservices architecture takes a different approach. As the name suggests, the idea is to combine a number of small services into a single whole. Effectively, each service can be treated as a black box – what matters is not how it works but that it provides the functionality you need. In a microservices world, the backend of a website might consist of several different databases, a login service, a load-balancing service, and custom services to handle the required logic. Docker has seized on this approach and, to many people, containerization has become synonymous with microservices.

The benefits are clear. Firstly, much as developers once used libraries to achieve certain tasks, nowadays you can find standard Docker containers that provide the service you want. Secondly, because the services are generally lighter weight (in terms of required processing power, etc.), they are more efficient to run. Thirdly, it is much easier to provide elastic scalability with this model. All these are reasons why Amazon and Google were early users of this approach for their own services (even if they have been slower to jump on the bandwagon of selling microservices to the general public!).

Why traditional testing won’t work

The traditional approach to testing goes something like this. Start writing your monolithic application. Every bit of code should be tested individually using unit tests. As parts are joined together, they should be tested with integration testing. Once all these tests pass, you create a release candidate. This is subjected to system testing, regression testing, and user acceptance testing. Assuming all is well, QA will sign off and the release will go out.
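
For context, a unit test in this traditional flow is just a small, isolated check on a single piece of code. The sketch below uses Python and pytest; the function under test is invented purely for illustration.

```python
# test_pricing.py -- a minimal, traditional-style unit test (pytest).
# The function under test, apply_discount, is hypothetical.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_basic():
    assert apply_discount(100.0, 25) == 75.0


def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```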

There are a number of reasons why this approach doesn't work for systems built on microservices. The first of these is simply scale – many apps built on microservices will use dozens of services. These services may not all be available on staging in the same form they take in production (managers are often, quite reasonably, reluctant to spend as much on the staging environment as on the production environment). Secondly, one of the key aspects of microservices is that they scale dynamically to share the demand, and testing this with old-fashioned approaches is hard. Finally, in microservices architectures, the way that services are assembled (the so-called orchestration) may also vary dynamically in response to load.

An alternative vision for testing

A month ago, one of our team members was lucky enough to attend an AWS summit in Berlin. There he listened to an interesting presentation discussing how Amazon develops and deploys new services to AWS. The picture below is taken from the presentation. It shows the conceptual flow for developing, testing, and releasing a new AWS service. What's most interesting to note is how the process of testing has evolved.

[Image: Conceptual flow for developing, testing, and releasing a new AWS service]

In this model, the new service first goes through the usual unit tests. Then it's deployed to the first staging environment (beta), where functional testing happens. Then it's deployed to a different staging environment (gamma), where integration and load testing can happen. Finally, it is deployed in a very careful manner: first to a single machine, then to an availability zone, then to several availability zones. This process is repeated across the whole of AWS, with close monitoring of error rates, load, etc. Any anomaly that is spotted will cause the service to be automatically rolled back.

The first thing to note here is that the service is tested in multiple staging environments, which helps ensure that it's robust against unexpected differences in configuration. The different tests are still essentially the sort you might traditionally conduct (functional, integration, and performance testing), but they are divided across different setups. The second (important) thing to note is that they are using what is often called canary testing. This involves rolling out a change to a small number of users to start with, then using the extensive instrumentation and monitoring capabilities of most microservice environments to ensure the code is behaving as expected: error rates aren't spiking, performance isn't dropping, and the load on the system isn't skyrocketing.
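
The core of such a canary check is simple to sketch. The Python below is illustrative only: the metric-fetching and rollback functions are placeholders for whatever your monitoring and deployment tooling actually provides, and the thresholds are invented.

```python
import time

# Placeholder hooks -- in reality these would query your monitoring system
# (CloudWatch, Prometheus, etc.) and drive your deployment tooling.
def fetch_error_rate(deployment: str) -> float:
    return 0.0  # stub


def fetch_p99_latency_ms(deployment: str) -> float:
    return 100.0  # stub


def roll_back(deployment: str) -> None:
    print(f"rolling back {deployment}")  # stub


def promote(deployment: str) -> None:
    print(f"promoting {deployment} to a wider rollout")  # stub


# Illustrative thresholds; real values come from your baseline metrics.
MAX_ERROR_RATE = 0.01        # no more than 1% of requests may fail
MAX_P99_LATENCY_MS = 250
OBSERVATION_WINDOW_S = 600   # watch the canary for ten minutes


def canary_check(deployment: str) -> None:
    """Watch a canary deployment and roll it back if its metrics degrade."""
    deadline = time.time() + OBSERVATION_WINDOW_S
    while time.time() < deadline:
        if (fetch_error_rate(deployment) > MAX_ERROR_RATE
                or fetch_p99_latency_ms(deployment) > MAX_P99_LATENCY_MS):
            roll_back(deployment)
            return
        time.sleep(30)
    promote(deployment)  # healthy for the whole window, so widen the rollout
```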

5 tips for testing microservices

The following summarizes my key advice for successfully testing microservices. However, remember that this is simply advice – these are not commandments set in stone. As with all testing, your test plans should take account of the specifics of your setup.

  1. Treat each service as a software module. Conduct unit tests on the service as you would for any new piece of code. In a microservices architecture, each service is treated as a black box, so you should test it in a similar fashion (a minimal sketch of such a black-box test follows this list).
  2. Work out the essential links in your architecture and seek to test those. For instance, there’s a strong link between the user login service, the frontend that will display user details and the database storing those details.
  3. Don't try to assemble the entire microservice environment in a small test setup – you are only asking for weeks of pain. I have had the unpleasant experience of attempting to set up Docker with Kubernetes on a MacBook in an attempt to replicate a relatively simple staging environment. It was not my most enjoyable experience ever!
  4. Try to test across different setups. Experience suggests that the more diverse the setups your code is run on, the greater the proportion of bugs that will manifest themselves. This is particularly true for complex virtual environments where you may have minute differences between different libraries and where the underlying hardware architecture may, despite the virtualization layer, still have unexpected side effects.
  5. Make good use of canary testing for new code and test with real-life users. Ensure that all your code is well instrumented and take advantage of all the monitoring offered by your platform provider. This is in direct conflict with the test-driven development methodology, since in this approach you sacrifice some up-front test coverage in favor of testing in the wild.
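
As promised in tip 1, here is a minimal sketch of a black-box test for a single service, exercised purely through its HTTP API with Python, pytest, and the requests library. The staging URL, endpoint, credentials, and response fields are all hypothetical; substitute your own service's documented contract.

```python
import os

import requests

# Hypothetical staging endpoint for a login service; adjust to your setup.
LOGIN_URL = os.environ.get(
    "LOGIN_SERVICE_URL", "https://staging.example.com/api/login"
)


def test_login_returns_token_for_valid_user():
    """Treat the service as a black box: only the API contract matters."""
    resp = requests.post(
        LOGIN_URL,
        json={"username": "test-user", "password": "test-password"},
        timeout=5,
    )
    assert resp.status_code == 200
    body = resp.json()
    # Assert on the documented response shape, not on internal behaviour.
    assert "token" in body
    assert body["user"]["username"] == "test-user"


def test_login_rejects_bad_credentials():
    resp = requests.post(
        LOGIN_URL,
        json={"username": "test-user", "password": "wrong-password"},
        timeout=5,
    )
    assert resp.status_code in (401, 403)
```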

Leveraging mock API endpoints for testing

One of the most powerful automated testing approaches is to use software that can directly test your API by simulating the actions of a real user. If this is combined with a “staging” version of the real user databases, it provides a powerful tool for testing microservices. The reason this is so useful is that it ensures the entire stack is exercised in a realistic manner. Often, these mock API endpoints are built on top of OpenAPI or API Blueprint, two standards for defining and documenting APIs. As long as your API has been correctly documented according to one of these standards, you can use one of the mock API tools to test it.
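
To make this concrete, here is a hand-rolled sketch of what such a mock endpoint might look like, written in Python with Flask. In practice, a tool that reads your OpenAPI or API Blueprint document can generate the mock for you; the route, fields, and port below are invented for illustration.

```python
# mock_payments.py -- a hypothetical mock of a payments API endpoint.
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/v1/charges", methods=["POST"])
def create_charge():
    payload = request.get_json(force=True)
    # Return the shape the real payments service is documented to return,
    # without touching any real payment infrastructure.
    return jsonify({
        "id": "ch_mock_001",
        "amount": payload.get("amount", 0),
        "currency": payload.get("currency", "usd"),
        "status": "succeeded",
    }), 201


if __name__ == "__main__":
    # The code under test is pointed at http://localhost:8081 instead of
    # the real payments service.
    app.run(port=8081)
```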

Furthermore, you can use this approach in a hybrid fashion during development. Because you know what the API is expected to return, you can replace missing pieces of code with a mocked-up version. This is particularly powerful when testing microservices: you can use your real payments system but rely on the mock API to generate realistic calls and responses from the front-end.
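
Continuing the hypothetical payments example, a test can drive the mocked endpoint exactly as the front-end would drive the real one. Because the assertions mirror the documented contract, the same test can later run unchanged against the real service. The environment variable and URL are, again, illustrative assumptions.

```python
import os

import requests

# During development this points at the mock server sketched above;
# in staging or production it points at the real payments service.
PAYMENTS_API_URL = os.environ.get("PAYMENTS_API_URL", "http://localhost:8081")


def test_charge_flow_against_mock_or_real_service():
    resp = requests.post(
        f"{PAYMENTS_API_URL}/v1/charges",
        json={"amount": 1999, "currency": "usd"},
        timeout=5,
    )
    assert resp.status_code == 201
    body = resp.json()
    # Assertions follow the documented contract, not the mock's internals.
    assert body["status"] == "succeeded"
    assert body["amount"] == 1999
```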

What next?

The microservices software architecture is not new, and it isn't going away any time soon. However, as we have seen, microservices pose some problems for legacy approaches to testing. Consequently, knowing how to test an application built on this model is essential. Functionize is designed to handle modern architectures. There's no perfect way to test microservices, but follow my five tips above and hopefully you'll be on your way. In part 2 of this blog, we will explore how automation and artificial intelligence can help you with testing microservices.

If you have other QA-related questions beyond microservices, please check this article: "Selective QA Interview Questions for Managers to Ask".