Serverless computing is growing increasingly popular. But how do you make sure your serverless applications are reliable? How do you actually test them? Read on to find out more.
Serverless computing has been around for a long time. Indeed, it can trace its roots back to the early days of cloud computing. However, it has really taken off recently. Nowadays, you just can’t get away from it. Every developer conference I’ve been to in the last two years has featured it prominently. However, before you rush to jump on the bandwagon, you need to answer a key question: how will you know your serverless application is reliable? In other words, how will you test it properly?
What is serverless computing?
Serverless computing is a form of cloud computing that sits between PaaS and SaaS. Proponents of the approach claim it hits a sweet spot and enables true commodity computing. In serverless computing, you deploy your function(s) directly to the cloud, and the cloud provider manages all the physical resources, scaling up and down dynamically as needed. As a result, your function isn’t tied to any virtual or real resources. This differs from the more usual approach of deploying containers or virtual servers, and it means you only pay for what you actually use.
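To make that concrete, here is a minimal sketch of what a deployed function might look like, following the AWS Lambda Python handler convention (the event fields are invented for illustration):

```python
import json

def handler(event, context):
    # The cloud provider invokes this function directly. There is no
    # server, container, or scaling logic anywhere in your code.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The provider decides where and when this runs, and you are billed only for the time it actually executes.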
A different way to do things
Serverless apps are rather different from classic applications. They firmly adopt the API-first design philosophy: your application exists as a set of discrete functions connected to one or more frontends via APIs. Often, they use functions provided by cloud providers, such as Google’s Firebase. Sometimes, functions are nested. For instance, you might create a function that calls Firebase internally to retrieve data. The nice thing about serverless applications is that you no longer need to care about the hardware (virtual or real), and you don’t have to plan ahead for how to scale.
So, serverless computing means you don’t need to predict your resource demands in advance or wrestle with complex control panels to enable resource scaling. That should mean you can save time and money on DevOps. However, there are some issues when it comes to testability. For one, you have no control over where your functions run, so performance is unpredictable and varies from run to run. For another, you can’t really measure code coverage when you have no control over large parts of the code. As a result, you need to take a different approach to your testing.
The traditional approach to testing
Testing is the preserve of the conservative engineer. Seasoned test engineers talk about two key metrics: test pass rate (for both the happy and unhappy paths) and code coverage (how much of the codebase your tests touch). They aim to maximize both of these, and that means starting from unit tests and building upwards. These engineers have three golden rules for testing, designed to ensure it is as effective as possible.
Good code coverage
You should aim to directly test at least 90% of your codebase, preferably more. That means each and every function needs one or more unit tests. And these tests need to cover both the happy and unhappy paths (in other words, check how the code behaves when things go wrong). Obviously, a key requirement for measuring code coverage is being able to see the code you are testing.
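As a sketch of what that looks like in practice, here is a hypothetical helper with one happy-path and one unhappy-path unit test, using Python’s standard unittest module:

```python
import unittest

def parse_price(text):
    # Hypothetical helper: convert a price string like "$4.99" to cents.
    if not text.startswith("$"):
        raise ValueError(f"not a price: {text!r}")
    return round(float(text[1:]) * 100)

class TestParsePrice(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(parse_price("$4.99"), 499)

    def test_unhappy_path(self):
        # Unhappy path: malformed input must fail loudly, not silently.
        with self.assertRaises(ValueError):
            parse_price("4.99")
```

A coverage tool can then report how much of `parse_price` these tests actually execute.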
Repeatable testing
Tests must be repeatable between rounds of testing. This means using the same hardware and backend environment each time. You need to make sure you start from a known state, which typically means initializing your test environment. You should always use the same test data unless you are specifically testing how the system copes with unexpected inputs.
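In a classic suite, that known state is enforced by a setup step that runs before every single test. A minimal sketch, with an in-memory dict standing in for a real test database:

```python
import unittest

class TestOrders(unittest.TestCase):
    def setUp(self):
        # Runs before every test: reset to the same known state so that
        # test ordering and previous runs can never influence a result.
        self.store = {"orders": [], "next_id": 1}

    def test_add_order(self):
        self.store["orders"].append({"id": self.store["next_id"]})
        self.assertEqual(len(self.store["orders"]), 1)
```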
Bottom-up testing
You should test the whole system from the bottom up, and you should complete each stage of testing before moving to the next. Traditionally, the stages are Unit Testing, Integration Testing, Systems Testing, and Acceptance Testing. Performance (load and stress) testing sits alongside Acceptance Testing.
Why Serverless makes it harder
The very things that make serverless computing so attractive also make it hard to test. Let’s look at how it impacts the golden rules.
Measuring code coverage
While you can still verify your test coverage for the code you produce, you can’t verify the AWS Lambda or Google Firebase functions you are calling. Equally, you can’t really write unit tests for those functions in the classic sense.
No defined environment
By definition, with serverless computing, you have no server. You don’t even have a defined set of resources; you just know that resources become available as you need them. At best, your cloud provider tells you all the environments are identically configured, but you can’t verify even that.
Going straight to system testing
With serverless computing, you effectively can’t do traditional unit and integration testing; you have to go straight to system testing. Performance testing is also fundamentally changed. You are no longer testing whether your environment can cope with the load. Instead, you become interested in the execution time (and thus the expense) of your code. When AWS Lambda bills in 100ms increments, shaving just 1 millisecond can halve your costs if it drops your execution time from 101ms to 100ms.
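The arithmetic behind that claim can be sketched as follows, assuming a provider that rounds execution time up to a 100ms billing increment (AWS Lambda has since moved to 1ms granularity):

```python
import math

def billed_ms(duration_ms, increment_ms=100):
    # Providers round execution time up to the billing increment.
    return math.ceil(duration_ms / increment_ms) * increment_ms

print(billed_ms(101))  # 200: you pay for two full increments
print(billed_ms(100))  # 100: one millisecond faster, half the cost
```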
How to test serverless applications
At this point, you might be wondering how you can actually test serverless applications, and what your test results really mean. Well, for a start, you need to adopt a different test philosophy. You are going to be far more reliant on system testing than before. You also need to instrument your code so you can follow what is going on if there are any problems. Serverless computing is all about delivering services to your frontend application, which is a good clue that UI testing is going to be key. With that, here are the three new rules for testing serverless applications.
Instrument your code
Logs are going to be your friend if your serverless application has a problem. But you no longer have access to system logs in the traditional way. As a result, you need to actively add instrumentation to your code so it records events for later analysis.
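One common pattern is to emit structured, machine-parseable events from inside the function, so that whatever log service your provider offers (CloudWatch for Lambda, for instance) captures them for later analysis. The event names below are invented for illustration:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")

def log_event(name, **fields):
    # One structured log line per event: easy to filter and aggregate.
    log.info(json.dumps({"event": name, "ts": time.time(), **fields}))

def handler(event, context):
    log_event("invocation_start", order_id=event.get("order_id"))
    try:
        result = {"status": "ok"}  # real business logic would go here
        log_event("invocation_ok", order_id=event.get("order_id"))
        return result
    except Exception as exc:
        log_event("invocation_error", error=str(exc))
        raise
```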
Put the needs of the user first
Behavior-driven development places the emphasis on meeting the business needs of the software. In other words, if the software delivers what the customer wants, that is all that matters. This pragmatic approach is ideal for serverless applications where you can’t always test each function in isolation. However, you can’t just adopt BDD wholesale as you will hit the same issues as traditional testing.
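In practice, this means structuring tests around business outcomes rather than individual implementation details. A minimal given/when/then sketch in plain Python (the `checkout` function here is hypothetical):

```python
def checkout(cart, balance):
    # Hypothetical business function under test.
    total = sum(cart.values())
    if total > balance:
        return {"ok": False, "reason": "insufficient funds"}
    return {"ok": True, "charged": total}

def test_customer_cannot_overspend():
    # Given a customer with a $10 balance and a $15 item in their cart
    cart, balance = {"book": 15}, 10
    # When they check out
    result = checkout(cart, balance)
    # Then the order is refused with a clear reason
    assert result == {"ok": False, "reason": "insufficient funds"}
```

The test names the business rule it protects; how `checkout` is implemented, or which cloud functions it calls, is irrelevant to the assertion.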
UI testing is king
Often, UI testing is viewed as a “softer” form of testing than traditional testing. However, for serverless applications, it becomes your main form of testing. This means that you have to plan your UI testing to ensure that you exercise all your functions. In turn, this means that you will need to invest heavily in test automation.
How Functionize helps
Functionize has revolutionized UI test automation, applying AI intelligently to maximize test coverage and minimize test maintenance. In traditional test automation, test scripts must be carefully written and adapted for each browser/device. Then, every time you update your application, you end up having to change most of your scripts. Clearly, this is absurd. It leads to test engineers spending more time on test maintenance than test creation or analyzing test failures. Our solution is Adaptive Language Processing (ALP™) and Adaptive Event Analysis (AEA™).
ALP™ makes creating tests as simple as writing a set of test plans in plain English. These test plans are then modeled by our intelligent test agent. This makes it extraordinarily easy to ensure you test all user journeys through your app. In our experience, this reduces the time needed to create tests by more than an order of magnitude.
AEA™ learns how your UI should work and uses this to assess the results of all tests. This also makes Functionize tests almost maintenance-free: our tests are able to self-heal when you change or restyle your UI. For subtler failures, we provide a root-cause analysis (RCA) engine. RCA can analyze test failures that only reveal themselves many steps after the trigger event. It can even check solutions and recommend the most likely fix to make your tests work again.
Finally, our tests are instrumented by default. We record all sorts of statistics including how long individual elements take to load. We also have our own bespoke performance metric, Page Completion Time. This reflects the time taken for the UI page to become available for the user to interact with it. When you run a test, you are able to compare these metrics with all previous test runs, allowing you to identify potential problems that could cost you.
Serverless computing seems to be here to stay. But to make your serverless application reliable you have to adopt a different approach to testing. Suddenly, UI testing becomes the most important part of systems testing. Here at Functionize, we have made it easy to maximize your UI test coverage. This makes our platform ideal for testing serverless applications.