We’ve all been there. Your next big release is due at the end of the week and everyone is pushing you to give it the green light. Meanwhile, your test engineers are laboring under significant test debt.
That means they are desperately trying to complete the test maintenance triggered by the latest round of UI changes. At the same time, you are trying to get your manual testers up to speed so they can finish testing the new features. All this while you try to evaluate whether the release is actually good to go. Here, we explain how AI-powered test automation can transform your life as a QA manager and help every release go more smoothly.
Whenever we speak to QA managers, the story is always the same: maintaining and updating existing tests is a burden that hammers productivity. The common belief is that this is an inevitable side effect of test automation. In truth, it exists only because of weaknesses in how Selenium and other scripted testing tools work, and fixing those weaknesses addresses the test debt problem. Put simply, test scripts are written as if the UI were static. But modern UIs are dynamic and change constantly. The best test engineers can reduce the maintenance burden through careful scripting, but even then, many UI changes will trigger failures.
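To make the brittleness concrete, here is a toy sketch. The function name, the dictionary-as-DOM model, and the element ids are all invented for illustration; a real suite would use something like Selenium's `find_element` with a CSS or XPath selector, but the failure mode is the same: the test passes until a refactor renames the id it was hard-coded against.

```python
# Toy illustration of why scripted tests are brittle: the "selector"
# (an element id) is baked into the test, so a UI refactor breaks it.

def find_element(dom: dict, element_id: str) -> str:
    """Look up an element by its id, as a scripted test would."""
    if element_id not in dom:
        raise LookupError(f"no element with id {element_id!r}")
    return dom[element_id]

dom_v1 = {"login-btn": "Log in"}
assert find_element(dom_v1, "login-btn") == "Log in"  # passes today

dom_v2 = {"auth-submit": "Log in"}  # UI refactor renamed the id
try:
    find_element(dom_v2, "login-btn")
except LookupError:
    print("test broke: selector 'login-btn' no longer matches anything")
```

Nothing about the application's behavior changed between the two versions; only the markup did, yet the test fails all the same.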
Test maintenance may seem like an inevitable part of test automation, but have you considered its true impact? For many teams, test maintenance absorbs almost 60% of their time. Think about what that really means: in every 100 hours of a test engineer’s time, 60 hours are spent fixing and debugging tests that were working perfectly before. The graph below shows this even more clearly.
The upshot is that teams quickly start to suffer from test debt. This is analogous to the technical debt that can beset development teams, but it is even more damaging to your QA team.
Imagine you have a team of 10 test engineers. They need 20 hours to automate a new test from scratch and just 2 hours to debug a failed test. Each engineer works 40 hours per week, giving the team 400 hours of weekly capacity. With no maintenance to do, your team could add 20 new tests per week. However, the developers are leveraging modern tools and following the extreme programming paradigm, so a new build is released each week. Simply stated, there is a massive imbalance between the modern tools available to dev teams and the tools being used by QA teams: Selenium, the most prevalent of them, is nearly 20 years old. As devs add new features faster with every release, more existing tests break, so much of the test engineers’ effort is diverted to test maintenance. The graph below shows how this causes the number of new automated tests to rapidly tail off.
Clearly, by week 25, your team is only adding 2 new tests per week. In turn, that means your manual testers are having to test more. This looks bad enough. But this model completely ignores the time needed to analyze test results, so the reality may be even worse!
AI has been revolutionizing life in many ways. Every smartphone comes with an AI-powered virtual assistant. Tesla cars are already getting incredibly close to being able to drive themselves. And AI is increasingly able to assist physicians with making medical diagnoses. AI is also revolutionizing test automation in three key ways.
Smart recorders like Functionize Architect are designed to streamline test creation. Architect allows your team to create tests simply by stepping through the test case on screen. Tests aren’t just limited to checking the UI. They can also test visual elements, verify your API, validate file exports, and check if two-factor authentication is working. As the test is created, the underlying system is building a detailed model of your application. This requires it to store millions of data points, including detailed screenshots for every step in the test.
Most test scripts break because they rely on static selectors to identify elements in the UI. Our system uses Smart Element Selectors instead. Effectively, it creates a machine learning model for every element in the UI. If an element changes, it uses the same approach a human would use to find the most likely alternative. This is shown below:
The upshot is that more than 80% of test failures self-heal, and the fixes the system identifies are better than 99.9% accurate.
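To give a feel for the idea, here is a minimal sketch of attribute-similarity matching. This is emphatically not Functionize's actual algorithm (which builds ML models from millions of data points); the function names, the threshold, and the dictionary representation of elements are all invented for illustration.

```python
# Illustrative sketch of self-healing element lookup (NOT the real
# implementation): when a stored selector no longer matches, score every
# candidate element by how many recorded attributes it still shares,
# and pick the closest match, as a human tester would.

def similarity(recorded: dict, candidate: dict) -> float:
    """Fraction of recorded attributes the candidate still matches."""
    if not recorded:
        return 0.0
    hits = sum(1 for k, v in recorded.items() if candidate.get(k) == v)
    return hits / len(recorded)

def heal(recorded: dict, page_elements: list, threshold: float = 0.5):
    """Return the most likely replacement element, or None if no
    candidate is similar enough to trust."""
    best = max(page_elements, key=lambda el: similarity(recorded, el))
    return best if similarity(recorded, best) >= threshold else None

# The recorded "Log in" button; a redesign later renamed its id.
recorded = {"tag": "button", "id": "login-btn",
            "text": "Log in", "class": "primary"}
page = [
    {"tag": "a", "id": "signup", "text": "Sign up", "class": "secondary"},
    {"tag": "button", "id": "auth-submit", "text": "Log in", "class": "primary"},
]
print(heal(recorded, page))  # finds the renamed login button
```

Even though the id changed, the second element still matches three of the four recorded attributes, so it is chosen; the signup link matches none and is never considered a plausible replacement.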
Analyzing test failures is a key part of your test engineers’ job. First, they need to establish whether a failure was caused by a bug or by the test itself breaking. Second, if it was a bug, they need to pinpoint exactly what broke in order to help the developers fix it. Clearly, Functionize already helps with the first problem, because there are far fewer test failures. But our system also makes failures much easier to diagnose. The failed step is clearly identified in the test results, and clicking on the failure lets you compare the current screenshot with the one from the previous successful run. The system highlights the change on screen, or you can use the slider view to flip between the new and previous results. This makes it easy for your team to see exactly what failed.
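Conceptually, highlighting what changed between two screenshots boils down to finding where the captures differ. The sketch below is an illustration only, with screenshots modeled as small grids of pixel values; a real comparison engine works on actual images and is far more sophisticated about noise and layout shifts.

```python
# Toy illustration of screenshot comparison (not the real engine):
# model each screenshot as a 2-D grid of pixel values and report the
# coordinates where the new capture differs from the baseline.

def changed_regions(baseline, current):
    """Return (row, col) coordinates where the two grids differ."""
    return [
        (r, c)
        for r, row in enumerate(baseline)
        for c, px in enumerate(row)
        if current[r][c] != px
    ]

before = [[0, 0, 0],
          [0, 1, 0],
          [0, 0, 0]]
after_ = [[0, 0, 0],
          [0, 2, 0],
          [0, 0, 0]]
print(changed_regions(before, after_))  # [(1, 1)]
```

The returned coordinates are exactly the regions a diff view would highlight, which is why a side-by-side slider makes the failing change jump out immediately.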
The benefit of AI-powered QA is obvious for the company, but how will it transform life for you as a QA manager? Here are a few things to consider: far fewer broken tests for your engineers to triage, faster test creation for new features, and much quicker analysis when a test does fail.
Overall, you should find life getting much easier thanks to the power of AI! If you are intrigued by what you’ve read, we’d love to show you Functionize in action. We’re confident you will love what you see. We also have a great new video that explains exactly what Functionize is and why it goes so far beyond traditional test automation.