Serverless computing is growing increasingly popular. It allows you to create applications that don’t have conventional backend servers. The problem is, that means your DevOps team can’t use their usual tools to monitor them. So, how do you make sure your serverless applications are reliable? How do you actually test them?
Serverless computing has been around for a long time. Indeed, it can trace its roots back to the early days of cloud computing. However, it has really taken off recently. It’s mentioned at almost every software development conference, and CTOs view it as the next step beyond containerization. But before you jump on the bandwagon, consider what it will mean for your DevOps team. How will they test your application and spot any issues? How can they ensure it remains reliable?
Serverless computing means creating applications without any backend server to manage. Instead, you compose applications from a number of services. Technically, it lies between classic Software as a Service (SaaS) applications and Platform as a Service (PaaS) containers. Proponents of the approach claim it hits a sweet spot and enables true commodity computing.
In serverless computing, you deploy your function(s) directly into the cloud rather than in a container or server. You let the cloud provider manage all the physical resources, scaling up and down dynamically as needed. As a result, your function isn’t tied to any virtual or real resources. You don’t pay for containers or servers, you only pay for the resources you consume.
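To make this concrete, here is a minimal sketch of a serverless function in the style of an AWS Lambda Python handler. The greeting logic is purely illustrative; the point is that this is the entire deployable unit, with no server or container definition around it.

```python
import json

# A minimal AWS Lambda-style handler. The cloud provider invokes this
# function on demand and manages all the underlying compute resources;
# you are billed only for the time the function actually runs.
def lambda_handler(event, context):
    # 'event' carries the request payload (e.g. from an API Gateway call).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deploying this function is a matter of uploading the code and wiring up a trigger; scaling, patching, and capacity planning all fall to the provider.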
Serverless apps are rather different from classic applications. They firmly adopt the API-first design philosophy. Your application is simply a set of discrete functions connected to one or more frontends using APIs. Often, they rely on functions provided by cloud providers, such as Google’s Firebase. Sometimes, functions can be nested. For instance, you might create a function that then calls Firebase internally to retrieve data. The main thing is you no longer have to care about hardware in any way. This is the key difference from containers or virtual servers.
In the classic model, DevOps teams are responsible for ensuring your servers keep running and that your application works as intended. But in the serverless world there are no servers. So, the DevOps role has to change. Moreover, they can no longer rely on their traditional tools for monitoring the backend. They also need to focus on new performance metrics, such as overall costs. Ultimately, they will need a new approach to testing both before release and in production.
Seasoned DevOps engineers will want to understand two key metrics: test pass rate (for both happy and unhappy paths) and code coverage (do your tests exercise the whole codebase?). They aim to maximize both, since that delivers the most reliable applications. And reliability is king in the world of DevOps. These engineers have three golden rules for testing before release.
You should aim to directly test the overwhelming majority of your codebase. That means each and every function needs one or more unit tests (code coverage). But also, you need to test every part of the completed application. All these tests need to check both happy and unhappy paths. That means seeing how the code behaves when things go wrong as well as right.
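A minimal sketch of what this looks like in practice, using a hypothetical discount function: the unit test covers both the happy path (valid input gives the right answer) and the unhappy path (bad input fails loudly rather than silently).

```python
# Hypothetical business function under test.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Happy path: valid inputs produce the expected result.
assert apply_discount(100.0, 25) == 75.0

# Unhappy path: invalid inputs must raise, not return garbage.
try:
    apply_discount(100.0, 150)
    assert False, "expected ValueError"
except ValueError:
    pass
```

Writing both kinds of test for every function is what turns raw code coverage into meaningful coverage.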
Tests must be repeatable between rounds of testing. This means using the same environment each time. You need to make sure you start from a known state. Typically, this means initializing your test environment. You should always use the same test data unless you are testing to see how the system copes with unexpected inputs.
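The pattern above can be sketched as follows. The in-memory `store` and its seed data are hypothetical stand-ins for a real test database; the point is that every test round begins by resetting to the same known state.

```python
# Hypothetical in-memory datastore standing in for a test database.
store = {}

def reset_environment():
    # Wipe any state left over from earlier rounds, then load the same
    # seed data every time, so every run starts from a known state.
    store.clear()
    store.update({"user-1": {"name": "Alice", "balance": 100}})

def test_withdrawal():
    reset_environment()  # known starting state, every time
    store["user-1"]["balance"] -= 30
    assert store["user-1"]["balance"] == 70

test_withdrawal()
test_withdrawal()  # still passes: the reset makes the test repeatable
```

Without the reset, the second run would see the balance already reduced and fail, which is exactly the kind of flakiness repeatable setup is meant to prevent.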
You need to test the whole system end-to-end. That means testing at the system level as well as unit and integration tests. Chances are, your application also depends on external services. So, your testing needs to check how these integrations work too.
The things that make serverless computing so attractive also make it hard to test. Let’s look at how it impacts the golden rules.
Serverless computing relies largely on third-party functions. These are black boxes, and you have to take it on faith that they are correctly implemented. The only way you can test them is by calling them and checking the resulting outputs. This makes it hard to measure test coverage in the classic sense.
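A sketch of what black-box testing looks like here. `fetch_exchange_rate` is a hypothetical stand-in for a third-party service whose internals you cannot inspect; all the test can do is call it and assert on observable properties of the output.

```python
# Hypothetical third-party function: imagine this wraps a remote API
# call whose implementation is completely opaque to you.
def fetch_exchange_rate(base, quote):
    rates = {("USD", "EUR"): 0.92}
    return rates.get((base, quote))

def check_black_box():
    rate = fetch_exchange_rate("USD", "EUR")
    # We can't measure coverage inside the box, so we validate the
    # shape and plausibility of the result instead.
    assert rate is not None
    assert 0 < rate < 10  # sanity bound on a plausible FX rate
    return rate

check_black_box()
```

Notice that these assertions say nothing about branch or line coverage inside the service, which is why classic coverage metrics stop being meaningful at this boundary.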
By definition, with serverless computing, you have no server. You don’t even have a defined set of resources. You just know that as you need them, resources become available. At best, your cloud provider may tell you all the environments are the same in terms of setup. But you can’t even verify this.
Serverless computing often requires you to go straight to system testing. And by definition, almost your entire application is now composed of external services. That means that e2e testing is more critical than ever. You also need to start looking at code efficiency in a different way. Lazily calling functions may be more efficient traditionally, but it could have a big impact on costs. On AWS Lambda, which historically billed in 100 ms increments, saving just one millisecond could halve your costs if it dropped your execution time from 101 ms to 100 ms.
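The arithmetic behind that claim can be sketched as follows. This assumes 100 ms billing granularity (as AWS Lambda historically used; it now bills per 1 ms), and the price figure is illustrative, not a real tariff.

```python
import math

# Hypothetical price per 100 ms billing unit (illustrative only).
PRICE_PER_UNIT = 0.0000002083

def billed_cost(duration_ms, invocations):
    # Duration is rounded UP to the next 100 ms billing unit,
    # so 101 ms bills as two units while 100 ms bills as one.
    units = math.ceil(duration_ms / 100)
    return units * PRICE_PER_UNIT * invocations

# Shaving one millisecond off a 101 ms function halves the bill:
assert billed_cost(101, 1_000_000) == 2 * billed_cost(100, 1_000_000)
```

The same saving has no effect at all if it moves you from, say, 150 ms to 149 ms, which is why cost-aware optimization targets the billing boundaries rather than raw speed.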
The classic approach to in-production testing is to rely on backend instrumentation. DevOps teams monitor things like response times for DB queries, overall compute load, and how reliable the backend code is. But serverless makes many of these metrics meaningless. Put simply, cloud providers are so good at their jobs that your application will just scale to cope with any problems. However, that will come at a potentially eye-watering cost in your next bill! Instead, DevOps now needs to focus on two key things: how users see your service, and the overall resources being used.
Serverless computing clearly causes some real issues for DevOps teams. You are going to have to adopt a different test philosophy. For starters, you will need to flip the testing pyramid. You also need to add more instrumentation within the code itself. Then you need to shift your testing all the way to the right. So, here are the new rules of testing for serverless computing:
Logs are going to be your friend if your serverless application has a problem. But you no longer have access to system logs in the traditional way. As a result, you need to actively add instrumentation to your code so it records events for later analysis.
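A minimal sketch of this kind of instrumentation, using Python's standard `logging` module. The order-handling function and its event names are hypothetical; the idea is that the function emits structured events itself, since those events are your only window into production behavior.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orders")  # hypothetical function name

def handle_order(event):
    # Record every significant step as a structured event that a log
    # aggregator can collect and search later.
    logger.info(json.dumps({"event": "order_received", "id": event["id"]}))
    try:
        total = sum(item["price"] for item in event["items"])
        logger.info(json.dumps(
            {"event": "order_priced", "id": event["id"], "total": total}))
        return {"status": "ok", "total": total}
    except KeyError as err:
        # Log failures explicitly: with no server to inspect, these
        # records are how you reconstruct what went wrong.
        logger.error(json.dumps({"event": "order_failed", "error": str(err)}))
        return {"status": "error"}
```

Emitting events as JSON rather than free text makes them easy to filter and aggregate across thousands of short-lived function invocations.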
Functional testing ensures that your application behaves as expected from the eyes of the user. This is the ultimate proof that your system is working, unlike unit tests that only check that the code is working. The challenge with serverless applications is that functional tests must validate flows that span multiple applications. Luckily, modern testing platforms are able to handle this type of end-to-end testing.
Often, UI testing is viewed as a “softer” form of testing than traditional testing. However, for serverless applications, it becomes your main form of testing. This means that you have to plan your UI testing to ensure that you exercise all your functions. In turn, this means that you will need to invest heavily in test automation. Moreover, you need to run these tests against your production system as well.
Functionize has revolutionized UI testing, delivering the first true AI-powered test automation platform. Our aim has been to tackle the three pain points of traditional test automation: slow test creation, limited test analysis, and excessive test debt.
Architect allows you to create smart automated tests with minimal effort. Traditional test automation requires engineers to painstakingly craft test scripts. That takes days, and the resulting scripts are often a compromise between coverage and performance. Architect can create the same tests in minutes and offers advanced features, such as test data creation, 2FA testing, and visual testing. You can also create more resilient tests for serverless applications using features like Smart Waits. These intelligent waits pause the test until the page is properly loaded. In short, Architect lets you create better tests in a fraction of the time.
Traditional test scripts can only test some of your UI. That’s because they only test the things they are explicitly programmed to test. To increase that coverage you have to increase the complexity of the test script. We take a different approach, relying on computer vision and machine learning. These allow you to visually verify your entire UI. Additionally, you can view the developer console for each test run, so you can dig into details like cookies and network calls. This is especially helpful when testing serverless applications to understand where things went wrong.
Test engineers spend unbelievable amounts of their time on so-called test maintenance. This happens because most test scripts break every time the UI changes or the app is updated. They then have to debug all these scripts and rerun the tests. Over time, this maintenance consumes more and more of your test resources. Ultimately, your team can no longer keep up and you start to accumulate test debt. However, Functionize’s tests self-heal when your UI changes, thanks to our unique Smart Element Recognition. This eliminates test debt.
All Functionize tests run in the Test Cloud. This allows you to create a test once, then run it against any browser or device from anywhere in the world. This makes it trivial to test against your production system. Moreover, Functionize tests are truly end-to-end, since the platform can test across multiple applications. As a result, you can use Functionize tests for true in-production testing. This makes them a valuable tool for your DevOps team.
Serverless computing seems to be here to stay. As a result, you will need to update your approach to DevOps and application quality. This requires a shift in how you think about testing as a whole, placing UI testing at the forefront of your strategy. Book a demo to see how Functionize can help you deliver more reliable serverless applications.