Everyone’s worst nightmare is finding a major bug just before release. It ends up delaying the release by days or even weeks. The solution is to shift to agile model software testing. This is easy if you adopt autonomous testing.
You are days away from a major release and suddenly a couple of tests fail. Nightmare! Now the dev team has to diagnose the failures, fix them, and issue a new build. Chances are that means rerunning all the tests. Sound familiar? This sort of thing is all too common in software development. What is missing is agile model software testing.
The agile model of software development mandates frequent releases and frequent client feedback. This approach can be applied to software testing too. Agile model software testing involves shifting testing left so as to identify test failures as early as possible. The faster a bug is spotted, the easier it is to fix. The problem is, how can you shift testing left when releases happen as often as they do in agile? One way is to leverage autonomous testing.
The problem with testing late
In the standard software delivery lifecycle, testing happens once all development is done and the product is ready for release. Typically, unit testing and perhaps integration testing happen earlier, but system testing, UI testing, and acceptance testing are all left until later. The problem is that major bugs may only appear on the very eve of release. This is a real issue for your development team for three reasons.
- Diagnosing failures so late is much harder. When a bug is found right at the end, your team may have to search the entire codebase to trace its origin. This is made tougher because, by then, the individual developers no longer have the code “in memory”, so they need to re-familiarize themselves with it first.
- Any fix will have a knock-on impact. There is a major risk that a last-minute fix will itself introduce other bugs, which means the entire product has to be re-tested. In turn, this means…
- It will consume far more resources to fix. Not only do you need developer resources, but you will also have to restart the testing process, delay the release, and potentially deal with unhappy customers.
So, what is the solution? Simple really. You need to become much more flexible in your testing. You need to shift the testing left as much as possible. In brief, you need to move over to agile model software testing.
Agile model software testing
Agile software development requires you to be more flexible. Release quicker and get feedback sooner. Then modify the product to reflect the feedback. The exact same principles can be applied to testing. Test earlier and give the development team feedback sooner. This is one of the central motivations for the “shift left” movement. Doing this has numerous benefits.
Find bugs quicker and more easily
Testing code as soon as it is integrated makes a lot of sense. For a start, most bugs surface immediately, while the change that introduced them is still small. These bugs are then much easier to fix because the developers know exactly what changed most recently. Furthermore, it reduces the risk that the bug becomes “hard-baked” into some other part of the code.
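To make this concrete, here is a minimal sketch of what shift-left looks like in practice: a unit test committed in the same change as the feature it covers, so a regression fails the build immediately rather than on the eve of release. The function and test names are hypothetical, and plain Python asserts stand in for whatever test framework your team uses.

```python
# Feature code committed in this change: a hypothetical discount calculator.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Tests committed alongside it, covering the happy and sad paths together,
# so any later change that breaks this behaviour is caught immediately.
def test_apply_discount_happy_path():
    assert apply_discount(100.0, 20) == 80.0

def test_apply_discount_rejects_bad_input():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount_happy_path()
test_apply_discount_rejects_bad_input()
```

Because the tests land with the feature, the developer who just wrote `apply_discount` is the one who sees any failure, while the change is still fresh in their mind.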
Spread the test load more evenly
If all testing happens in the run-up to release, your test team has a very uneven workload. Testing as code is pushed helps to spread this workload out. This is clearly a good thing, and, for larger projects, it reduces the need to bring in external contractors to boost the test team at release. It also means your test team will feel more invested in the product.
Reduce the overall delivery timeline
Finding bugs sooner has a major impact on the rate at which software is delivered. You spend less time fixing bugs, your test team works more efficiently, and the whole process is streamlined.
The blockers to agile testing
Implementing agile model software testing isn’t completely straightforward. There are several major blockers that will make it harder. These mostly stem from the inadequacies of Selenium and related test scripting approaches.
Slow to write new tests
Each time you add a new feature to your code, you will need to create a new test. In fact, you will often need several tests to cover both the happy and sad paths properly. However, writing a test script takes a significant amount of time. Each script has to be developed iteratively by a skilled test engineer, who needs to test and debug it exactly like any other piece of code. Then, having completed the script, they need to modify it to work cross-browser and cross-platform. This can easily take as long again, since it may require a complete rewrite of parts of the script. And, of course, the more significant the new feature, the harder it is to write a good test.
Every change impacts existing tests
Test scripts locate UI elements using selectors tied to the current structure of the page. That means even small changes, such as restyling a page or renaming a button, can break existing tests that have nothing to do with the new feature. As a result, every release brings a wave of test maintenance, and the burden grows with the size of your test suite.
Tests are slow to complete
Selenium is not really optimized for the modern world of massively parallel execution in the cloud, so your test team is probably running it on a ropey set of in-house servers. This means tests run inefficiently. That is particularly true when you are testing cross-browser, which can easily require 20+ runs of each test. And test engineers aren’t sysadmins, so there’s every chance they are also stealing time from your sysadmin and DevOps teams to keep their infrastructure running.
Hard to integrate with CI/CD
In general, test automation works best when it is cleanly integrated with your CI/CD toolchain. Each time a new piece of code is pushed, it should trigger a smoke test. Over longer time frames, the entire suite of regression tests should be run. However, achieving this with Selenium often requires complex, home-rolled integration scripts. Which, surprise surprise, means taking more time from your DevOps team, who are the ones that understand that sort of thing.
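As an illustration of the pattern (the event names and echoed commands here are hypothetical, not any particular CI vendor's syntax), the logic of such an integration script boils down to: run a quick smoke suite on every push, and the full regression suite on a schedule.

```shell
#!/bin/sh
# Hypothetical CI test stage. A real script would invoke your actual
# test runner; echo statements stand in for those commands here.
run_tests_for_event() {
    event="$1"
    case "$event" in
      push)
        # fast feedback: only a quick smoke suite runs on every push
        echo "running smoke tests"
        ;;
      schedule)
        # the full regression suite runs on the nightly schedule
        echo "running full regression suite"
        ;;
      *)
        echo "unknown CI event: $event" >&2
        return 1
        ;;
    esac
}

# a push to the repository triggers the smoke tests
run_tests_for_event push
```

Most modern CI systems can express this split declaratively, but with Selenium the hard part is not the trigger, it is keeping the browser grid behind these commands alive and fast.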
The autonomous testing alternative
There is another way to do all this. Autonomous testing uses an intelligent test agent to speed up and simplify the process of test automation. This makes it ideal for implementing agile model software testing. Let’s look at Functionize as an example of how this works.
Functionize tests are created using the Adaptive Language Processing™ engine. ALP™ is our advanced natural language processing system that takes test plans written in plain English and generates fully functional tests. These tests work cross-browser with no additional work from you. The test plans can be structured, like those produced by a test management system, or unstructured, like user journeys from your product team. This means tests can be created as soon as a new feature is implemented – an essential requirement for agile model software testing.
Our intelligent test agent is powered by Adaptive Event Analysis™. AEA™ is designed to minimize the requirement for test maintenance. It achieves this using a combination of AI techniques. Effectively, it is constantly learning how your UI actually works, just like a skilled manual tester does. This means that things like style changes, elements moving, and even renaming buttons won’t faze it. All that happens is the test gets updated without you even noticing. This can slash test maintenance by 80%.
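AEA™ itself is proprietary, but the general idea behind self-healing element lookup can be sketched in a few lines. Everything below is a toy illustration of the concept only: the page is faked as a list of attribute dicts, and a simple attribute-matching fallback stands in for the far richer signals a real agent uses.

```python
# Toy sketch of "self-healing" element lookup, NOT Functionize's algorithm.
# A recorded "fingerprint" stores the element's id plus other attributes.

def find_element(page, fingerprint):
    """Find the element best matching a recorded fingerprint.

    Try the recorded id first; if the id has changed, fall back to
    scoring every element by how many other recorded attributes still
    match, then "heal" the fingerprint for future runs.
    """
    for el in page:
        if el.get("id") == fingerprint.get("id"):
            return el  # exact match: nothing changed

    def score(el):
        return sum(1 for k, v in fingerprint.items()
                   if k != "id" and el.get(k) == v)

    best = max(page, key=score)
    if score(best) > 0:
        fingerprint["id"] = best.get("id")  # heal the stored selector
        return best
    raise LookupError("no plausible match for element")

# The button was recorded as id="buy", but a release renamed it.
page = [
    {"id": "nav", "tag": "a", "text": "Home"},
    {"id": "purchase", "tag": "button", "text": "Buy now"},
]
fingerprint = {"id": "buy", "tag": "button", "text": "Buy now"}
button = find_element(page, fingerprint)
```

A script that matched only on the old id would fail here; the fallback recognizes the renamed button by its tag and text and quietly updates the test, which is the behaviour the paragraph above describes.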
All Functionize tests execute in our Test Cloud. This means they execute far more efficiently than the best Selenium Grid setup. Thousands of tests can run in parallel, allowing you to go through your complete regression tests in a fraction of the time.
Our system is designed to integrate with all standard CI/CD toolchains. Test orchestrations can be triggered as soon as you push new code. Effectively, your code is constantly tested throughout development. This is essential for agile model software testing.
Putting it all together
As you can see, Functionize’s intelligent test agent makes it really easy to shift all your testing left. Tests are created in minutes, test maintenance is almost eliminated, and you can complete your tests in a fraction of the time. As a result, it is the perfect way to move your team over to agile model software testing.