For too long, testing has been done in-house on old servers, borrowed resources, and outdated infrastructure. We believe testing should be run in the cloud, allowing you to leverage an almost infinite resource pool. Testing done like this will be faster, more efficient, and cheaper.
Testing should not be the poor relation of the software development process. Companies that engage with their testers produce the best and most reliable software. Those that treat testing as an afterthought risk bugs, production delays, and, ultimately, their reputations.
The arguments for testing in the cloud are the same as those for moving your production servers to the cloud.
It’s cheaper. You no longer need to supply and maintain your own hardware. You also only have to pay for the resources you are actually using.
It’s scalable. Cloud providers can offer you clusters of tens of thousands of virtual machines. Few, if any, companies can afford in-house infrastructure at this sort of scale. If you are load testing your backend, achieving true scale like this is hugely valuable.
It’s reliable. Virtual servers always offer some form of migration or recovery. This might be cold migration (where you power off, start a new server, and recover using the backup). Or it could be hot migration of the running servers. Many providers also offer Disaster Recovery. This is in marked contrast to what happens if your own server fails.
It enables AI acceleration. Artificial intelligence requires significant computing power, and the cloud allows you to harness this. But even more significantly, without the data collection and processing power of the cloud, modern AI simply can’t work at scale. The more data your system generates and collects, the better it will perform.
There’s one additional benefit that you just can’t get with your own infrastructure. Cloud testing allows you to run test machines almost anywhere in the world. In turn, that means that your testing is far more realistic, especially in terms of how it impacts the backend. It can also ensure that your app truly works and doesn’t, embarrassingly, fail as soon as it is accessed from an external IP!
Test automation revolutionized the software world. Without automated UI testing, app development would have been slower, because manual testing just takes longer. Bugs would have gone unnoticed because manual testing is often less effective. And, above all, we would never have coped with the explosion in smartphones, tablets, and browsers over the last decade.
But classic test automation is dumb. And, like all dumb systems, that limits its growth and utility. In this era of artificial intelligence, test automation should be intelligent too. There are few areas of software development where it is easier to apply artificial intelligence. And even fewer where the rewards are so obvious and immediate.
Put simply, well-designed intelligent automation just works. Intelligent automation means tests become less brittle, so no more time wasted on unnecessary maintenance. Productivity increases because test failures are automatically diagnosed and classified. New tests become easier to create, and test flows are easy to reuse.
There are three key elements to a well-designed intelligent test automation suite. Firstly, it must apply artificial intelligence intelligently. All too often, people think AI is just about machine learning. But every problem needs a different approach, and often it is a combination of techniques that gives the best outcomes. Secondly, it should keep humans in the loop where there is the potential for unclear outcomes. For instance, sometimes a test failure may have two equally likely causes; a human will need to decide which is the actual cause. Thirdly, the system should be capable of evolving and learning as your system develops. Without this, all you have is a system that slightly speeds up some of the more boring parts of testing. With it, you have a system that can adapt and grow with your application.
DevOps has become critical to the success of many companies. Without DevOps, many of the services we end-users take for granted would simply fall over. Social media sites and apps like Facebook and LinkedIn would quickly lose users if they constantly went offline. If content delivery services like SoundCloud, Netflix, and Hulu kept dying mid-track, you’d soon cancel your subscription. And people expect messaging services to be 100% reliable.
At its heart, DevOps is part of the Agile world. Len Bass, Ingo Weber, and Liming Zhu have defined it as: a set of practices intended to reduce the time between committing a change to a system and the change being placed into normal production, while ensuring high quality.
In other words, DevOps straddles the border between developing software and running it in production. It is what you get when you apply the Agile approach to software development to deploying services in production.
A key part of this definition of DevOps is “ensuring high quality”. As we all know, that is (or is meant to be) the definition of what a test engineer does. So, it’s clear that there is a significant overlap between testing and DevOps. The second key part is the aim to “reduce the time” for a change to reach production. This is where autonomous testing becomes key.
Autonomous testing is the outcome of intelligent test automation. With autonomous testing, you now have an intelligent test agent (ITA) augmenting the work of your test and DevOps teams. This ITA is like the perfect regression tester – focused, tireless, and driven, but still intelligent. Using this ITA means changes can be tested and deployed completely automatically. Without this, the job of an agile DevOps Engineer would be infinitely harder.
Test automation’s reliance on scripting expertise has made it the domain of the developer in test, a role requiring a unique combination of programming skills and testing expertise. As a result, such people can earn well over $100k a year. This also limits the ability of small companies to fully leverage test automation. Often what happens is that testers turn to the development teams for help, which in turn impacts overall productivity.
Programming languages have always been a way of allowing humans to “speak” to computers. Assembly code is pretty raw, C added some level of abstraction and human-readability, and C++ and Java take this to a higher level still. But there is another way to do things. Natural language processing (NLP) is a specialized field of artificial intelligence that aims to teach a computer to understand human language. So, you don’t need to learn to speak to the computer; the computer can learn to understand you!
Nowadays NLP has reached the stage where you can just use plain English to write your tests. Suddenly, anyone can write a test as easily as they could describe the test to another human. Want a test that logs you into the system and then checks your profile? Well, you can just write a test script with those steps. The skill of testing isn’t about writing code, it’s about knowing what to test in order to verify the system works. Being able to do this in plain English greatly simplifies things and boosts productivity.
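To make this concrete, here is a minimal sketch of how a plain-English test step might be mapped to an automation action. The step phrases, action names, and keyword-matching approach are all illustrative assumptions, not any specific vendor’s syntax; real NLP engines are far more sophisticated than simple phrase lookup.

```python
# Hypothetical mapping from plain-English phrases to automation actions.
ACTIONS = {
    "log in": "perform_login",
    "open profile": "navigate_to_profile",
    "check profile": "verify_profile_details",
}

def interpret(step: str) -> str:
    """Map a plain-English step to an action by keyword matching."""
    step = step.lower()
    for phrase, action in ACTIONS.items():
        if phrase in step:
            return action
    raise ValueError(f"No action found for step: {step!r}")

# A test written the way you'd describe it to another human:
script = [
    "Log in as a standard user",
    "Open profile page",
    "Check profile shows the correct email",
]
plan = [interpret(s) for s in script]
```

The point is the interface, not the implementation: the tester writes the sentences, and the tooling works out which actions to run.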
The advent of test automation brought with it a new problem: test maintenance. Automated tests rely on being able to identify elements within the UI in order to interact with them. For instance, this might be selecting a specific button on the screen to press, or finding a particular text field in a form and entering a specified string. The problem is, as products develop, the UI changes too. Sometimes these changes may be small, such as restyling buttons with a wider border. Other times they may be more significant, such as redesigning the whole UI.
The problem is, every time you make a change, even a tiny one, most of your test scripts are going to break. This will show up as test failures when you next run the test suite. To make things harder, the failure may not even be triggered immediately. If the test system struggles to find the right element, it may instead select something different. In particular, this is true when things are reordered on the page. So, the test may proceed perfectly happily until it reaches a verification step where things now fail. The upshot is test churn, where tests constantly have to be updated and rewritten.
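The brittleness is easy to demonstrate. In this sketch, pages and elements are modeled as plain dictionaries (an illustrative simplification, not a real UI framework’s API): a locator pinned to a single attribute breaks when that attribute changes, while a fallback strategy that also considers the visible label survives the redesign, much as a human would.

```python
# Two versions of the same page: a redesign renamed the button's id
# but kept its visible label. All structures here are illustrative.
page_v1 = [{"id": "btn-submit", "text": "Submit"}]
page_v2 = [{"id": "btn-send", "text": "Submit"}]  # id renamed in redesign

def find_by_id(page, element_id):
    """Brittle locator: depends on a single attribute."""
    return next((e for e in page if e["id"] == element_id), None)

def find_resilient(page, element_id, text):
    """Try the id first; fall back to the visible label, as a human would."""
    return find_by_id(page, element_id) or \
        next((e for e in page if e["text"] == text), None)

assert find_by_id(page_v2, "btn-submit") is None        # old locator breaks
assert find_resilient(page_v2, "btn-submit", "Submit")  # still found
```

Intelligent automation generalizes this idea: instead of one hard-coded fallback, it learns many signals about each element and re-identifies it after a change.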
This time spent on test maintenance is time lost. Tracing and resolving these issues is a massive time sink for your skilled test automation engineers. As a result, it becomes a waste of a valuable resource. We believe things can and should be different. Tests shouldn’t be so brittle that they break from release to release. Instead, the system should apply intelligence to learn what the test is actually doing. In other words, it should behave more like a human would. When your favorite app gets a redesign, you just look at the new UI and find the button you wanted to tap. The same thing should be true for test automation. Now your engineers can concentrate on doing their job, rather than just wasting time fixing things that shouldn’t break.
Originally, software testing happened just before release. This seemed logical and reflected how software development sought to copy other engineering projects. Then people began to realize the benefits of shifting left. By shifting testing earlier in the development process, you find more of the bugs, earlier, and can fix them more effectively. In its ultimate form, this leads to methodologies like Test-Driven Development.
But we argue shift left isn’t enough. Testing needs to be shifted right as well. What do we mean by that? Well, testing shouldn’t stop the moment your code ships to production. Often, you will only really understand how your code performs once it is being used for real.
Of course, companies have always responded to bug reports from users. But this sort of unstructured production testing is not efficient. Moreover, it looks bad when your customers find the bugs, not you! Instead, you should be looking to leverage a few key things.
Probably the easiest thing everyone can do is to adopt a policy of dark launching of new features. Every feature in your app should be capable of being enabled or disabled using a simple flag. When you have a new feature, release it, but with the flag disabled. This will test for unexpected impacts on stability. You can then progressively enable it for more customers and see how the code behaves. If things don’t work out, or if you start to see negative feedback, just turn the flag off again. Many breakages will only ever be caught in production when the system is running at scale and becomes less responsive. Often these issues won’t show in error logs or third-party monitoring tools, yet they can have a devastating impact on user experience.
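A common way to implement progressive rollout is stable percentage bucketing. The sketch below is a minimal illustration, assuming a hypothetical flag table and hash-based bucketing; real feature-flag services add targeting rules, audit trails, and dashboards on top of the same idea.

```python
import hashlib

# Hypothetical rollout table: feature name -> % of users enabled.
ROLLOUT = {"new_checkout": 10}

def is_enabled(feature: str, user_id: str) -> bool:
    """Hash the user into a stable 0-99 bucket; enable if under threshold.

    Hashing (rather than random choice) means the same user always
    gets the same answer, so their experience doesn't flicker between
    the old and new feature across sessions.
    """
    pct = ROLLOUT.get(feature, 0)  # unknown features default to off
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < pct
```

Turning the feature off for everyone is just setting the percentage to 0; no redeploy needed if the table lives in config.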
Next, instrument everything, and use this to explore how users interact with the system. In turn, this allows you to do proper A/B testing, where you compare new features to choose which is best. It also enables Canary Testing, where you check new backend code for stability. You should also look to do beta testing for any major new features, both to get feedback and to test functionality.
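The canary decision itself can be very simple once you instrument both paths. This sketch, with illustrative numbers and a made-up tolerance threshold, compares the error rate of a small canary slice against the current release before promoting the new backend.

```python
def error_rate(errors: int, requests: int) -> float:
    """Fraction of requests that failed; 0.0 if there was no traffic."""
    return errors / requests if requests else 0.0

def canary_healthy(baseline, canary, tolerance=0.01):
    """Promote only if the canary's error rate is within tolerance
    of the baseline's. Each argument is an (errors, requests) pair."""
    return error_rate(*canary) <= error_rate(*baseline) + tolerance

baseline = (50, 10_000)  # errors, requests on the current release
canary = (7, 1_000)      # same metrics on the canary slice
# 0.007 vs 0.005 baseline: within the 1% tolerance, safe to promote.
```

The same instrumented counters feed A/B comparisons: swap error rate for a conversion or engagement metric and the comparison logic is identical.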
Shift right and your users will become your most powerful test tool.
For years, testing has been the poor relative in the development process. Companies know they need to test, but management often resents testing as it looks like poor ROI. One of the problems is that it’s extremely hard to measure the success of testing. If your tests find lots of bugs it doesn’t look like a success for the testing, it looks like a failure for the developers. If you don’t find any bugs it can look like it was an unnecessary expenditure of effort.
We would argue that testing needs to be thought about in a completely different way. No successful company thinks of customer support as a waste of resources. In the same way, testing should be seen as key to driving revenue. Stable, well-tested, bug-free software will generate more revenue than buggy, unstable, poorly-tested software. For every great company keeping your customer base is critical, especially when you are growing at scale. The best way to do this is to release the highest quality software you can while keeping pace with the demands of your customers for new and better features.
When you widen the focus of testing to cover the whole product life cycle, it changes your perspective on how to measure ROI. Suddenly, testing becomes a key driver for revenue. It becomes the oil in the gears of the development engine, speeding up delivery and reducing friction. It becomes the canary in the production coal mine, warning of problems before they hit your revenue. And it becomes a key driver for responding to customer demand, helping drive up user numbers and revenue. Above all, it becomes the measure of how well your software is performing and developing.
Our simple message to management is, stop thinking of testing as a money sink. Start to view it for what it is, the defender of your success and revenues.
There’s an old adage: “quality over quantity.” Good developers have always known this to be true. There comes a time in any software development cycle when the developers start to take pride in how much code they are deleting. At this stage, they are aiming to create the highest quality and most efficient code possible. Good product managers have always known this too. Feature bloat, where new features are added just because they can be, is a sign of a poorly-designed app. A good quality app will do a few things, but do them extremely well.
This suggests that quality should be an integral part of any successful company’s DNA. For software, quality is achieved by testing your products again and again: ensuring no regressions slip through, testing every new feature, and continuing to test right through to production.
People often try to ape Google, even when that is inappropriate due to the difference in scale. But when it comes to testing, Google definitely has the right ideas, as this blog post explains. Google was one of the early adopters of Selenium because it was serious about quality. It still is, and it has now moved well beyond Selenium. We believe every company should take a leaf out of Google’s book when it comes to testing.
Achieving high quality isn’t about employing more and more QA and test engineers. It’s about using your test engineers as your in-house test consultants. Their work should be supported by autonomous testing, which will act as an intelligent agent to improve test productivity, speed, and coverage. Embrace testing at every stage in the development process from planning, through continuous integration and into production. If you do this properly, quality will be baked into all your products.
Buggy software is a nightmare for your reputation. Even big companies with loyal fans can find their reputations suffering if they let a bad bug slip through. So, ask yourself: what is your brand and reputation worth to you?
The problem is, even a small application quickly becomes too complex to test everything manually. This complexity skyrockets when you add in things like responsive UIs, cross-browser testing, multiple OSes, and user customization. Traditionally, test coverage measured how much of your code is exercised by tests. But we think a more useful definition multiplies this by how many of the possible environment combinations listed above you actually test.
The only solution for achieving higher test coverage is test automation. But normal test automation suffers from many of the scaling issues of manual testing. Every browser and screen resolution combination may need the script to be tweaked. Many features may not be testable at all. And if your application allows the user to customize their home screen, etc. it could be impossible to test all the combinations.
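A quick back-of-the-envelope calculation shows why this matrix defeats both manual testing and per-environment scripts. The browser, OS, and resolution lists below are illustrative, but even this modest matrix yields 80 environments for every single test.

```python
from itertools import product

# Illustrative environment matrix; real matrices are usually larger.
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
oses = ["Windows", "macOS", "Linux", "Android", "iOS"]
resolutions = ["1080p", "1440p", "4K", "mobile"]

# Every test must pass in every combination of the above.
combinations = list(product(browsers, oses, resolutions))
# 4 browsers x 5 OSes x 4 resolutions = 80 environments per test
```

Add one more dimension, such as five app versions under support, and the count multiplies again. This is the scaling wall that writing and maintaining a script per environment runs into.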
So, we believe the real answer is autonomous testing. Autonomous testing can cope with all the issues mentioned above. Write the test once, and it will work across any combination of device, screen resolution, OS, etc. You can even use intelligent templates and advanced interaction tools to enable you to test customizable UI features. Adopt autonomous testing and you can be sure that your true test coverage will increase enormously.
In short, autonomous testing becomes the shield and defender of your brand and reputation. It is almost impossible to put a price on this.
It’s all too easy for management to overlook the importance of testing. That is, right up to the moment things go wrong. At which point the test team becomes the target of everyone’s ire. The problem with this view is it is rather short-sighted. If your test team failed to spot a bug it may not be a reflection on their performance. It is more likely to be a reflection on your lack of strategic focus on testing and quality management.
In our opinion, QA should be a C-suite responsibility. No one laughs at the idea of a Chief Compliance Officer, so maybe you should consider appointing a Chief Quality Officer? Successful companies are the ones that adopt the mantra of test early, test often, and test in depth. This means testing becomes a key strategic requirement in your organization. If testing needs more resources, make them available. If there is tension between your test and dev teams, try merging them. If you want to release new features frequently, look into continuous testing.
At the heart of this approach must come intelligent use of autonomous testing. We won’t pretend autonomous testing can solve everything – sometimes you need manual and exploratory testing, especially if you are suffering the curse of the unreproducible bug. But, used strategically, autonomous testing can save you a lot of heartache and pain. It shouldn’t be an afterthought; it should be one of the first things you think of when planning a new product.