GTM – how an intelligent test agent reduces the risks

GTM is a milestone for every product and it marks a make-or-break moment. Using an intelligent test agent can reduce the risks.

March 18, 2019
Tamas Cser

The risks and rewards of getting it right

Go-to-market is a milestone for every product. For many companies, it marks a make-or-break moment. So, it pays to make sure you get things right. Using an intelligent test agent can reduce the risks.


Launching a new product (or relaunching an existing one) is always an exciting rollercoaster of emotions. Get it right and you could be looking at wild success. Get it wrong and your company could be heading for the scrap heap. Two things will guarantee failure: users plagued by bugs, or a backend that falls over under load. As a result, testing is key to your success. The problem is, ensuring suitably robust testing for a product under active development can be tough. Nowadays, no company wants to stop development for a week so that the test team can conduct an exhaustive set of tests on a release candidate. So, what's the solution? In this blog, we explain how using an autonomous intelligent test agent can lead to faster, easier, and lower-risk go-to-market.

Red letter days

The traditional approach to product launches

Once upon a time, launching a software product was pretty easy. You started by coming up with an idea and getting funding. You then began the development cycle, writing code and testing prototypes with your customer/investor/users. Once you were sure the idea was good, you’d fix your specification and set a release date for the software. Taking this date, you’d work back to decide when you needed to have a release candidate ready for testing and to determine your sprint schedule for development. The key thing was that all development should have been completed well before release to allow time for testing and bug fixing.

Agility is key

As software development became more agile, this way of doing things began to evolve. The aim is to shorten the development cycle, speed up testing, and run testing and development in parallel. Where there is still a hard deadline, you compromise, removing broken features if they can't be fixed in time. However, now more than ever, there's a need to do proper testing. So, there is a clear tension between the need to test more thoroughly and the requirement to speed up testing.

Test automation to the rescue

Test automation was the saving grace that allowed testing to survive in the agile world described above. With test automation, scarce testing resources could be stretched further. Complex UIs could be tested more efficiently. Testing could happen 24/7. Regression testing was sped up. However, this all came at a cost.

Test scripting

Suddenly, testers had to become pseudo-developers, able to write complex test scripts, often in custom languages such as Selenese. This made creating tests a much bigger task. Before, you simply had to specify all the steps in your test plan. Now, you also have to take that test plan and turn it into a script.

Test analysis

When you are doing manual testing, test failures are easy to spot. Humans are remarkably good at this sort of thing. But with automated testing, it isn’t so easy. You have to create suitably rich comparators to ensure you catch all possible failure cases. For instance, in a shopping site, you may need to check whether there is a valid price, in a valid currency, and with the correct amount of sales tax added.
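To make the shopping-site example concrete, here is a minimal sketch of a rich comparator. The names and thresholds are hypothetical, not a Functionize API; the point is that an automated check must validate currency, format, and tax arithmetic, not just that "some text" appeared:

```python
import re

# Illustrative comparator for an automated shopping-site check: validate that
# a displayed price string has a valid currency symbol, a well-formed amount,
# and the expected sales tax applied to a known base price.
PRICE_RE = re.compile(r"^(?P<symbol>[$€£])(?P<amount>\d+\.\d{2})$")

def check_displayed_price(displayed, base_price, tax_rate, symbol="$"):
    match = PRICE_RE.match(displayed)
    if not match or match.group("symbol") != symbol:
        return False  # missing/invalid currency symbol or malformed amount
    expected = round(base_price * (1 + tax_rate), 2)
    return float(match.group("amount")) == expected

# A $100.00 item with 8.25% sales tax should display as $108.25.
assert check_displayed_price("$108.25", 100.00, 0.0825)
assert not check_displayed_price("108.25", 100.00, 0.0825)   # no currency symbol
assert not check_displayed_price("$107.00", 100.00, 0.0825)  # wrong tax applied
```

A human spots a mangled price at a glance; an automated test only catches it if someone anticipated and encoded every failure mode like this.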

Test maintenance

Test maintenance is a huge issue for test automation. Because of the fragility of test scripts, even minor changes to a site can break all the existing scripts. Even worse, sometimes that failure only happens many steps later. So analyzing each failure takes significant time and effort. As a result, test engineers find themselves spending half their time just fixing old test scripts. That leaves precious little time for creating tests and analyzing the results.

How can autonomous testing help?

The solution is autonomous testing, where an intelligent test agent runs your test automation for you. Before we look at how this helps you with GTM, let’s see how it solves the problems above.

Write test cases in English

We use NLP to convert test plans into fully-functional test scripts. This means that you don’t need to be a scripting expert in order to write a test. All you need is the ability to write out the necessary steps in a clear fashion. Conveniently, this is just the sort of task that test management software can help you with.  
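To illustrate the idea (not the actual Functionize NLP engine, which is far richer), here is a toy sketch of turning plain-English test steps into structured script actions using simple pattern matching:

```python
import re

# Toy patterns mapping plain-English test steps to structured actions.
# A production NLP system handles far more variation; this is only a sketch.
STEP_PATTERNS = [
    (re.compile(r'^open (?P<url>https?://\S+)$', re.I), "navigate"),
    (re.compile(r'^click (?:on )?"(?P<target>[^"]+)"$', re.I), "click"),
    (re.compile(r'^type "(?P<text>[^"]+)" into "(?P<target>[^"]+)"$', re.I), "type"),
    (re.compile(r'^verify (?:that )?"(?P<text>[^"]+)" is visible$', re.I), "assert_visible"),
]

def parse_step(step):
    for pattern, action in STEP_PATTERNS:
        match = pattern.match(step.strip())
        if match:
            return {"action": action, **match.groupdict()}
    raise ValueError(f"Could not interpret step: {step!r}")

plan = [
    'Open https://example.com/login',
    'Type "alice" into "Username"',
    'Click "Log in"',
    'Verify that "Welcome" is visible',
]
script = [parse_step(step) for step in plan]
assert script[0] == {"action": "navigate", "url": "https://example.com/login"}
```

The tester writes the plan on the left; the structured actions on the right are what actually gets executed against the browser.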

Advanced analytics

Responsive design, deeply nested DOMs, and dynamic content are standard tools in the modern UI designer's arsenal; without them, most modern web apps would either not work or would look pretty boring. Yet most test automation systems struggle with exactly these things, making it harder to identify real test failures. Fortunately, Functionize's intelligent visual testing and element fingerprinting ensure that our system simply deals with them.

Maintenance-free testing

The Functionize Adaptive Event Analysis (AEA™) engine solves most of the problems with test maintenance. It has several elements, including Self-healing Tests, Root Cause Analysis, and One-Click Updates. We have written about these extensively in other blogs, so we will just give the headline figure here. AEA™ can reduce maintenance time by 90%.
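To convey the intuition behind self-healing tests (this is a hedged sketch of the general technique, not the actual AEA™ implementation), imagine recording several attributes per element and falling back to weaker ones when the primary selector no longer matches:

```python
# Sketch of self-healing element location: each element is recorded with a
# "fingerprint" of several attributes. When the strongest attribute no longer
# matches (e.g. an id changed in a new release), weaker attributes are tried
# so the test heals instead of failing.
def find_element(dom, fingerprint):
    for attr in ("id", "name", "text"):  # strongest attribute first
        wanted = fingerprint.get(attr)
        if wanted is None:
            continue
        for element in dom:
            if element.get(attr) == wanted:
                return element
    return None

# The button's id changed between releases, but its visible text survived,
# so the lookup succeeds via the fallback attribute.
fingerprint = {"id": "submit-v1", "text": "Place order"}
new_dom = [{"id": "submit-v2", "text": "Place order"}]
assert find_element(new_dom, fingerprint) == new_dom[0]
```

A brittle script keyed only to `id="submit-v1"` would have failed here and needed manual repair; the multi-attribute fingerprint is what removes that maintenance burden.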

What does cloud-first testing achieve?

One of the most powerful capabilities of the Functionize system is that it is cloud-first. This means that ALL testing infrastructure is cloud-based. In turn, this enables a couple of cool (and vital) things.

Real-life load testing

Load and failure testing are vital elements for any product launch plan. Load testing is about being confident that your backend system can handle the expected user load. Here what matters is consistent performance. Failure testing is about adding load until you trigger a failure. This is essential for proper disaster planning. It can also be good for testing any failover/disaster recovery plans.

Usually, this sort of testing is done in a pretty dumb fashion, with the load being generated by repeated scripted calls to the API. The problem is, this sort of testing is very unrealistic. Our system makes three key improvements. Firstly, because it is cloud-based, the load can be added from fake users based in multiple geographic locations. This is important since it means that the actual natural delays in the network are taken into account during the test. Secondly, the system is able to use your actual test scripts to generate the load. Thirdly, each test is run from its own virtual machine. This means that each test will have a different TCP/IP 5-tuple. In turn, this will mean that your load balancer correctly distributes the test load across your backend infrastructure.  
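The fan-out pattern can be sketched as follows: the same scripted user journey is replayed concurrently by many virtual users, each timed independently. Here `run_journey` is a stand-in for replaying a real test script; a cloud system would run each worker in its own VM in a different region:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_journey(user_id):
    """Stand-in for replaying one scripted user journey; returns its latency."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the real scripted HTTP calls
    return time.perf_counter() - start

def load_test(num_users):
    # Fan the same journey out across many concurrent virtual users.
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        return list(pool.map(run_journey, range(num_users)))

latencies = load_test(20)
assert len(latencies) == 20
assert all(latency > 0 for latency in latencies)
```

Because each real virtual user runs from its own VM with its own network path, the resulting traffic mix is far closer to production than a single machine hammering an API endpoint.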

An additional benefit is that our system can pinpoint exactly which pages are taking too long to load, or which pages have unpredictable load times. This means you can give detailed analytics to your DevOps team to enable them to solve any problems with overloaded or unresponsive database servers, etc. Overall, the system hits your backend in a far more realistic manner.
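The kind of analysis described above can be sketched simply: given per-page load-time samples, flag pages that are slow on average or that have unpredictable (high-variance) load times. The thresholds here are arbitrary illustrative values:

```python
from statistics import mean, stdev

def flag_pages(samples, slow_ms=2000, jitter_ms=500):
    """Flag pages whose mean load time or load-time spread exceeds a threshold."""
    flagged = {}
    for page, times in samples.items():
        avg, spread = mean(times), stdev(times)
        if avg > slow_ms:
            flagged[page] = "slow"
        elif spread > jitter_ms:
            flagged[page] = "unpredictable"
    return flagged

samples = {
    "/home":     [300, 320, 310, 305],
    "/checkout": [2500, 2600, 2400, 2550],  # consistently slow
    "/search":   [400, 1800, 350, 1600],    # wildly variable
}
report = flag_pages(samples)
assert report == {"/checkout": "slow", "/search": "unpredictable"}
```

A report like this tells DevOps not just "the site is slow" but exactly which pages to investigate and whether the problem is capacity (consistently slow) or contention (unpredictable).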

Cross-browser testing at scale

Cross-browser testing has always been challenging. Many companies exist that aim to help you with this, but generally, their tests are limited to ensuring your site doesn't crash or perform badly in a given browser environment. With Functionize you get far more. Every single one of your tests can be run against any browser at any screen size, without needing to modify the script. Furthermore, because it is happening in the cloud, each test runs on its own autonomous virtual machine. So all these tests run in parallel. Testing 1,000 combinations takes roughly as long as testing a single browser!
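The reason the matrix parallelizes so well is that every (test, browser, resolution) cell is an independent job. Here is a sketch of that idea, with `run_test` standing in for a real cloud VM session (the browser and resolution lists are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

BROWSERS = ["chrome", "firefox", "safari", "edge"]
RESOLUTIONS = ["1920x1080", "1366x768", "375x667"]
TESTS = ["login", "checkout"]

def run_test(job):
    """Stand-in for executing one test on one browser/resolution in its own VM."""
    test, browser, resolution = job
    return (test, browser, resolution, "pass")

# Every cell of the test x browser x resolution matrix is independent,
# so the whole matrix can execute concurrently.
matrix = list(product(TESTS, BROWSERS, RESOLUTIONS))
with ThreadPoolExecutor(max_workers=len(matrix)) as pool:
    results = list(pool.map(run_test, matrix))

assert len(results) == 2 * 4 * 3  # 24 independent, parallel runs
assert all(result[3] == "pass" for result in results)
```

Wall-clock time is bounded by the slowest single cell, not the size of the matrix, which is why adding browsers and resolutions is nearly free in a cloud-first setup.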

How does this help with GTM?

The upshot of this is that go-to-market becomes much less scary. Firstly, your team has much more time, because both test creation and test maintenance now take far less of it. This means that your test team can concentrate on ensuring the test plans are complete. They have time for proper manual exploratory testing, and to chase down those hard-to-recreate bugs that are the bane of every tester's life. They also have time for more detailed acceptance testing, which is vital if you are to meet user expectations. Secondly, you can be much more confident that your backend system will cope once it sees load arriving from real users. Thirdly, you can be quite sure that your system will work with any combination of browser and screen resolution.

Taken together, these factors significantly reduce the risks involved in going to market. You can go to market quicker because the testing process takes a fraction of the time. You can be more confident that any bugs have been found because your team was free to concentrate on testing, not test maintenance. And you can feel secure in the knowledge that you have done everything possible to ensure a hassle-free launch. Music to the ears of your DevOps team!

What’s the story after GTM?

Of course, launching a great product is just the first step towards success. What matters after that is continuously improving your product's capabilities, ensuring there are no regressions, and understanding how your users really interact with it. Needless to say, we have that covered too.

Canary testing

Canary testing is the process of getting some of your userbase to act as your regression testers. It involves switching a proportion of the users over to the new codebase and comparing how the new and old systems perform. Functionize’s canary testing approach is revolutionary for two reasons. Firstly, it uses clever data analytics approaches to predict user journeys with about 85% accuracy. Secondly, the advanced anomaly detection system can automatically spot issues with the new code and can trigger an automatic rollback.
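The rollback logic at the heart of a canary deployment can be sketched very simply (this is a generic illustration, not Functionize's actual anomaly detector): compare a health metric between the old and new code paths and roll back if the canary degrades beyond a tolerance:

```python
def should_rollback(baseline_error_rate, canary_error_rate, tolerance=0.02):
    """Trigger rollback if the canary's error rate exceeds baseline + tolerance."""
    return canary_error_rate > baseline_error_rate + tolerance

def route_users(user_ids, canary_fraction=0.1):
    """Deterministically send a fraction of users to the canary code path."""
    return {uid: ("canary" if hash(uid) % 100 < canary_fraction * 100 else "baseline")
            for uid in user_ids}

# Baseline serves 1% errors; a canary at 5% is clearly anomalous.
assert should_rollback(0.01, 0.05)
# A canary at 1.5% is within tolerance, so the rollout continues.
assert not should_rollback(0.01, 0.015)
```

A production system compares many metrics (latency, conversion, journey completion) rather than a single error rate, but the comparison-and-rollback loop is the same shape.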

User-defined test cases

One of our newest features is the ability to define test cases based on how your users interact with your site. Users have a habit of finding novel ways to navigate through any application. Often, this means they are taking a user journey that was never planned, let alone tested. In turn, this increases the risk that they will trigger a hitherto unfound bug. By tracking user journeys, you can automatically generate new test cases that cover these unusual interactions.
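The idea can be sketched as follows: count how often each observed navigation path occurs, then emit test cases for paths seen in production that the existing suite does not yet cover (the page names are hypothetical):

```python
from collections import Counter

def uncovered_journeys(observed, covered):
    """Return observed user journeys, most frequent first, that no test covers."""
    counts = Counter(tuple(journey) for journey in observed)
    covered_set = {tuple(journey) for journey in covered}
    return [list(path) for path, _ in counts.most_common()
            if path not in covered_set]

observed = [
    ["/home", "/search", "/product", "/cart"],
    ["/home", "/product", "/cart"],           # skips search entirely
    ["/home", "/search", "/product", "/cart"],
]
covered = [["/home", "/search", "/product", "/cart"]]

new_cases = uncovered_journeys(observed, covered)
assert new_cases == [["/home", "/product", "/cart"]]
```

Each uncovered path can then be fed back into the test suite as a new test case, closing the gap between the journeys you planned for and the journeys users actually take.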


Going to market is an exciting time for every new business. But as we have seen, using an intelligent test agent means you can make it a much less scary time. If you want to learn more about the technologies discussed above, please reach out to us and check out our demo.