Selenium revolutionized testing. But it also changed the dynamic between Quality Engineers and the product team. Effectively, Selenium tests are completely black-box: only the QE who wrote a test can tell exactly what it does. Here, we explain why this is a problem and contrast it with how Functionize tests are created.
Many of you will be familiar with black-box and white-box testing. In black-box testing, you don’t know how the system works internally; you are simply checking that it performs the required function. In white-box testing, you know exactly what is happening inside the system and monitor this. Traditionally, most testing apart from unit testing is done black-box. This is especially true for UI testing, where you are testing from the end-user perspective.
However, as with many technical terms, black-box and white-box have started to be used more broadly. For instance, they are now applied to cover many other aspects of the testing process. Recently, we have found ourselves in many conversations about whether Selenium or Functionize testing is more black-box. In this article, we hope to answer that once and for all.
Before we start, let’s look at how a test plan is created. Firstly, during the design process, the product team determines exactly how the product should function. Even in agile methodologies, artifacts like user stories specify the desired behavior. During the development process, Business Analysts work with Quality Engineers to develop test plans. The BA decides exactly what functionality needs to be tested, writing a test outline that specifies what counts as correct behavior (for instance, how the system should react after multiple failed logins). The QE then takes this test outline and converts it into a detailed test plan that can be executed by the QA team.
In the days before test automation, all this made perfect sense. Quality Engineers would create a test plan based on instructions from the Business Analysts. These plans were black-box inasmuch as the QE had no need to understand the underlying codebase. But everyone involved understood what each test was doing, because the test plans were written out and discussed. This was vital, as it allowed the product team to have confidence that the product was being tested properly.
All that changed with the growth of test automation. Suddenly, tests were no longer a list of detailed steps for a manual tester to perform; instead, they were typically long, complex Selenium scripts. Effectively, the tests became a true black box to everyone except the QE. As a result, BAs were no longer able to verify that tests were actually doing what was agreed. Given the differing skill levels among QEs, this is a real problem. In effect, Selenium has turned the whole testing process into one giant, opaque black box.
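To make this concrete, here is a hypothetical sketch, in plain Python with every locator, page and value invented for illustration, of the kind of recorded, Selenese-style script a BA would be confronted with. A QE can read the locators at a glance; to anyone else, the intent of the test is buried:

```python
# A hypothetical mock of a recorded Selenium-style script (stdlib only).
# Every locator, page and value below is invented for illustration.
# Each row mirrors the classic Selenese (command, target, value) shape.
RECORDED_STEPS = [
    ("open",         "/login",                            ""),
    ("type",         "id=uname",                          "alice"),
    ("type",         "css=#pw-field > input",             "not-the-password"),
    ("clickAndWait", "xpath=//button[@data-qa='submit']", ""),
    ("assertText",   "css=.flash-error",                  "Invalid credentials"),
]

def summarise(steps):
    """Reduce the script to its command names - roughly all a non-QE can see."""
    return [command for command, _target, _value in steps]
```

Nothing in those locator strings tells a BA whether the script checks the agreed lockout behavior or merely that some error message appears somewhere.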
But, you may ask, what’s the big deal? Why does it matter if only the QE understands the test? Surely, since their job is to create working tests, that’s fine? Well, the problem is that sometimes “working tests” becomes the main driver. Creating good Selenium tests is a skilled and time-consuming task, so it’s easy for QEs to concentrate on debugging a test until it passes. But that can come at the expense of making the test correct.
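As a sketch of that difference (using an invented toy login function rather than real Selenium), consider the failed-logins example from earlier. Both tests below “work”, in the sense that they pass, but only one verifies the behavior the BA actually asked for:

```python
# Toy stand-in for the system under test (entirely invented for illustration):
# a login that locks the account after three prior failed attempts.
def login(password, failed_attempts=0):
    """Return the outcome of a login attempt."""
    if failed_attempts >= 3:
        return "locked"
    return "ok" if password == "secret" else "denied"

def test_looks_finished():
    # A "working" test: it passes, so it is tempting to call the job done...
    assert login("wrong-password") == "denied"

def test_actually_correct():
    # ...but the agreed behavior was the lockout itself, which only this
    # test verifies.
    assert login("wrong-password", failed_attempts=3) == "locked"
```

A BA reviewing a written plan would spot the gap in the first test immediately; buried inside a Selenium script, it is visible only to the QE.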
Furthermore, QEs and BAs usually have very different world views. What may seem obvious to a BA may be entirely non-obvious to a QE. A BA may make assumptions about things like the test data being used by the QE; if those assumptions are wrong, it can lead to unexpected test results. Equally, the QE may “fill in the blanks” in a test outline by guessing what the BA really meant. With manual testing, this is manageable: the BA and QE can sit down together and run through the final test plan to check for misunderstandings. But that just isn’t possible with Selenium.
Some keen Business Analysts decide that they are going to learn to read and understand Selenium. But this isn’t really a solution. Any large product will have hundreds of Selenium scripts, often created by a large team of QEs or Developers in Test, so there will be differences in code style and level of commenting. If the BA had nothing else to do, they might conceivably manage to understand a reasonable proportion of these scripts. But this would be an extremely ineffective use of their time, leaving none for their core job. This is not unlike the problem QEs face when test maintenance starts to take more time than test creation. Above all, even if individual BAs can learn Selenium, test verification has to be accessible to people across the whole product team.
So, how about Functionize? How do you create a Functionize test? And how does this differ from Selenium?
The first step for creating a Functionize test is to write a detailed test plan. You write the test plan in plain English and you can include keywords and structure to make it easier to understand. You may use the output of a test management system to create the plan, or you might write it by hand. The important thing is that, because it is in plain English, anyone can understand it.
Next, the test plan is parsed by our NLP engine, which transforms it into a form that can be understood by our intelligent test agent. Typically, the next stage involves passing batches of these test plans to our ML engine, the system that forms the brains of our intelligent test agent. The ML engine takes the parsed plans and uses them to create a model of how your UI is supposed to work, combining machine learning with image recognition. At that point, you are ready to run the tests. This whole process takes just a few hours, even for batches of tens of test plans.
Finally, the tests can be run in parallel on the Functionize Test Cloud. After the tests have completed, we present you with a detailed analysis of how they performed, including screenshots before, during and after each test step executes.
It’s true to say that we hide the ML engine modeling process from you. That is because it creates a complex set of machine learning models, and such models are always hard for humans to comprehend: typically, they consist of little more than a massive matrix of weights and function descriptions.
However, the big difference between Functionize and Selenium is that this is the only opaque part of the process. The rest of our system is completely open: anybody and everybody can see what is happening. The initial test plans are in English, so the whole product team can check that they really cover everything that needs testing. You can then view the results on screen, with any unexpected outcomes highlighted, meaning you can instantly confirm whether the system did the correct thing.
The sole purpose of testing is to ensure that your software achieves its required aim without bugs. The only person who is really in a position to verify that a Selenium test is correct is the QE who wrote it. By contrast, everyone can understand a Functionize NLP test plan, and anyone can then check visually that the resulting test steps match what the plan intended. We like to think that we are democratizing the whole test process. Test automation shouldn’t be the preserve of a few specialized test automation engineers. Instead, everyone in the organization should be empowered to participate in it.
Selenium transformed software testing. But it also significantly changed the dynamic between the product and testing teams. Selenium tests are notoriously opaque to anyone who isn’t an expert; as a result, Selenium has turned testing into a total black box for everyone who isn’t a QE. By contrast, Functionize allows tests to be specified in plain English, meaning anyone can be involved. Understanding tests no longer requires the ability to read Selenese. And anyone can verify that a test performs correctly, because the results are displayed graphically, with screenshots for every stage. We believe we are empowering the whole team and democratizing test automation by making it less opaque.