What data does Functionize collect, and why?

Functionize is a world-leading AI-powered test automation platform. But this is only possible because we collect a vast volume of data. Read on to learn why.


November 3, 2021
Tamas Cser

Over the past 5+ years, Functionize has developed one of the world’s most advanced AI models for test automation. Our platform is able to slash test maintenance and eliminate test debt. But this was only possible thanks to the vast volume of data we have collected. Here, we look at what data we actually collect, and why it is needed.

Why is data so important?

The majority of AI relies on machine learning, a technique where a computer teaches itself to recognize patterns in data and make predictions based on those patterns. At the simplest level, this is computers teaching themselves to recognize a picture of a cat by analyzing millions of photos of cats. This works best when there is a large volume of data; indeed, the more data, the better the model will be. Google search got so good thanks to its huge database of past searches, which lets it predict what you are actually looking for even as you type the query. Amazon likewise uses data to predict your buying habits by understanding how millions of people shop. So, let's look at a typical test automation problem that is ideal for an AI solution.

Self healing

In many ways, self healing is the holy grail of test automation. Each time you update your site code or change the UI, most automated tests will fail. Imagine a simple change: the "login" button moves from the top left to the top right of the screen, is restyled, and renamed "sign in". Clearly, any human will instantly spot what has happened and still know which button to press. But there is a high probability that any automated test will be completely thrown by this change and break. This is an inherent issue with how test automation frameworks work. As a result, teams spend a huge proportion of their time just fixing broken tests. Solve this issue, and you instantly boost productivity.
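To make the problem concrete, here is a minimal sketch using Python and Selenium (the page URL and locator are hypothetical, and this is not Functionize's implementation) of how a hard-coded selector breaks under exactly this kind of change:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL

# The test was recorded when the button was labelled "Login".
# After the redesign renames it to "Sign in", this locator raises
# NoSuchElementException and the test fails, even though a human
# would still find the button instantly.
driver.find_element(By.XPATH, "//button[text()='Login']").click()
```

So, what approaches are available?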

     
1. Create an alternative selector
2. Choose from a list of fall-back selectors
3. Use machine learning to create a foolproof fingerprint of the element
Element fingerprinting

The first approach is relatively dumb. The system simply tries to match other simple attributes (ID, class, etc.) by following a simple set of rules and conditions. It assumes that if one of the alternatives matches, then it is the same object. This works for simple cases but is thrown by more complex changes. The second approach uses simple machine learning to create a list of alternative selectors for each element when the test is created, as sketched below. This approach often requires a human to "train" the model first. If the main selector can't be found, the system tries the list of alternatives until one matches. This works well, but over time the list of alternatives gets more and more outdated and stale. The final approach is much more robust. Here, the system creates a dynamic machine learning model of the whole page, recording detailed data points for every element so it can accurately identify the element even after a major change. But this needs data. A lot of data.
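As an illustration of the second approach, here is a hedged sketch of a fall-back selector list in Python with Selenium. The locators are invented for the example; a real implementation would generate and rank them with a trained model:

```python
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

# Alternative locators captured when the test was created. Over time,
# this static list goes stale as the application keeps changing.
FALLBACK_SELECTORS = [
    (By.ID, "login-btn"),
    (By.CSS_SELECTOR, "button.login"),
    (By.XPATH, "//button[contains(text(), 'Sign in')]"),
]

def find_with_fallbacks(driver, selectors=FALLBACK_SELECTORS):
    """Return the first element matched by any selector in the list."""
    for by, value in selectors:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # try the next recorded alternative
    raise NoSuchElementException("No recorded selector matched the element")
```

The weakness is visible in the code itself: the list is frozen at test-creation time, so any UI change that outruns it means another round of manual fixing.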

Questions to ask any vendor

If a vendor is trying to sell you an AI solution, it’s important to learn how to evaluate it. Here at Functionize, we want to be very transparent about what we do. So, here’s a Q&A on what data we collect and why.  

How is your model trained?

Machine learning underpins all AI-powered test automation solutions, but there are two camps. Some vendors will just train a generic model that works “well enough”. This is much cheaper and simpler for them, but it is not very accurate. By contrast, we start from a general model and then train it specifically on your actual application. When creating a test, we ask you to give the AI some pointers so it knows it is on the right track. We call these “verification steps”. They are like waymarkers that let it know a given set of steps succeeded. However, the first few times a test runs, the system still needs to learn the exact ins and outs of your UI. It may even fail and ask you to verify what is meant to happen. But after a couple more runs, the model has learned exactly how your site behaves. It knows how long to wait for elements to load, it understands what your tests are trying to achieve, and it will carry on getting more and more reliable.  

How much data do you collect?

As we said already, AI needs data. True AI-first vendors will tell you about all the data they collect, and we're no exception. The following table shows just some of the data points we collect for each and every test run on our platform:

Geometry
- Locations on the page: X and Y coordinates of the element on the screen
- Structural position: the hierarchy of elements (parent/child/cousin relationships)
- Context and relation to other elements: data describing how the element relates to the other elements around it

Timing and context
- Scrolling data: whether the page uses "lazy loading" (data dynamically loaded from the server as you scroll)
- Timing data: how long it normally takes for each element to appear in this step
- Network data: what network calls normally happen during or before this action, and whether we should wait longer

Visibility and state
- Visibility and focus state: if you can't see it, you probably shouldn't be interacting with it!
- Pre and post states: before interacting with an element, it may look different (e.g. placeholder text disappearing when you click into a form field)
- Path info for frames/windows: we determine this path as part of our ML models

Code and CSS
- Relationship to code and CSS properties: relevant CSS data, including calculated values

Visual elements
- Screenshots and visual elements: captured before, during, and after the step, and compared against the originally modeled "clean" screenshots

All this data is vital to allow us to deliver a robust and specialized AI model for each application being tested.
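To give a flavor of what such per-element data points look like, here is an illustrative sketch using Python and Selenium. It covers only a fraction of the categories above and is not Functionize's actual collection code:

```python
from selenium.webdriver.common.by import By

def collect_data_points(element):
    """Gather a few geometry, state, and CSS data points for one element."""
    return {
        # Geometry: where the element sits on the page
        "geometry": element.rect,  # x, y, width, height
        # Structural position: a taste of the parent/child hierarchy
        "tag": element.tag_name,
        "parent_tag": element.find_element(By.XPATH, "..").tag_name,
        # Visibility and state
        "visible": element.is_displayed(),
        "enabled": element.is_enabled(),
        # CSS: calculated values, not just what the stylesheet declares
        "css": {prop: element.value_of_css_property(prop)
                for prop in ("display", "position", "font-size", "color")},
    }
```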

Functionize data collection point types

How does your model stay up-to-date?

Every company is constantly updating their applications as their business grows and evolves. Each time this happens, it risks breaking all the tests. A simple AI approach with pre-made models may cope with some changes but over time it will drift. Eventually, you will see fewer and fewer tests self healing until you manually retrain the model. Functionize is different. Our system uses continuous learning. As your site evolves, our tests evolve alongside. Each test run brings more data points to allow it to know what to expect. Imagine a skilled manual tester—the first time they use your application they use their knowledge to guess what is happening. Over time, they build up a detailed mental image of exactly how your site works. When there is a change, they quickly update their mental model. If there’s a significant change, they may need to relearn or get some help. Our models behave exactly the same way. This is what allows us to offer proper self healing with 99.9% accuracy.
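As a rough illustration of continuous learning on just one data point, the sketch below keeps a running estimate of how long an element takes to appear and updates it after every run. The class and parameters are invented for the example, not Functionize's actual model:

```python
class ElementTimingModel:
    """Tracks the expected load time of one element across test runs."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha            # weight given to the newest observation
        self.expected_seconds = None  # learned estimate, None until first run

    def observe(self, load_seconds):
        """Fold the latest run's measurement into the running estimate."""
        if self.expected_seconds is None:
            self.expected_seconds = load_seconds
        else:
            # Exponential moving average: recent runs count more, so the
            # estimate tracks the application as it evolves.
            self.expected_seconds = (self.alpha * load_seconds
                                     + (1 - self.alpha) * self.expected_seconds)

    def timeout(self, safety_factor=3.0):
        """How long to wait before declaring the element missing."""
        return (self.expected_seconds or 10.0) * safety_factor
```

Each run tightens the estimate, just as the manual tester's mental model sharpens with experience.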

Functionize's data collection logic mimics human logic

Why does this matter?

Our approach is all but unique, and it makes our tests among the most robust out there. Importantly, we focus on eliminating test debt by drastically reducing test maintenance. Test debt hits you in all sorts of ways. Often, companies choose to tackle the debt relating to test creation, but we think that's the wrong priority. After all, test maintenance affects ALL your tests, whereas test creation only ever touches a small subset of them. Even halving the time needed for test maintenance gives you back a huge amount of time to spend on test creation. Not that we ignore test creation. Far from it: our Architect greatly simplifies test creation, opening up test automation to more teams. It also offers advanced features, including test data management, testing of multi-factor authentication, an API explorer, and the ability to program custom selectors and verifications. Sound interesting? Why not book a demo today.