10 Reasons Selenium Tests Fail

Let us show how Functionize offers a better solution that avoids common Selenium test failures.

September 10, 2018
Matt Young

For more than a decade, Selenium has revolutionized web application testing, bringing the ability to automate tests that previously could only be run manually. In theory, this has freed up QA Engineers to work on developing new tests and tracking down the actual cause of test failures. In practice, anyone who has used Selenium extensively will know that the majority of your time ends up being spent troubleshooting tests, maintaining them and dealing with flaky tests.

In this article, we will look at the top 10 reasons why Selenium tests fail. We will also show how Functionize offers a better solution that avoids these pain points.

1.  Selectors move on screen

Probably the single greatest cause of Selenium failures is when a selector moves or subtly changes on screen. This is a real bugbear, especially with the growth of responsive applications where the screen layout dynamically changes with the screen size. When you add in the effect of even minor CSS styling changes, you can end up with tests failing to select the correct element even when no other functionality has changed.
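
To make this concrete, here is a minimal sketch in Python (with a made-up URL and selector) of the kind of position-dependent locator that breaks the moment a responsive breakpoint or a CSS tweak rearranges the page:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/pricing")  # hypothetical page under test

# Brittle: this XPath encodes the element's exact position in the layout.
# If a responsive breakpoint wraps the cards into one column, or a designer
# adds a wrapper <div>, the path no longer matches anything.
buy_button = driver.find_element(
    By.XPATH, "/html/body/div[2]/div[1]/section/div[3]/button"
)
buy_button.click()
```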

Functionize avoids this problem altogether by using ML to perform intelligent selector identification. All the elements on the page are inspected and their attributes are fingerprinted. Importantly, the system understands the relationship between the different elements. This allows us to accurately identify elements even if they have moved on the screen or have been restyled.

2.  Selector functionality changes

A related issue to selectors moving on screen is when their functionality changes. For instance, adding a new menu item at the top of a menu will cause all the rest of the items to shift, which means that Selenium will now select the incorrect entry.
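
To illustrate, here is a short Python sketch (the menu structure and labels are hypothetical) showing how an index-based locator silently starts clicking the wrong entry once a new item is added:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical site

# Index-based: "the second menu item" was 'Products' when the test was written.
# After a new entry is added at the top of the menu, li[2] points at a
# different link and the test quietly navigates to the wrong page.
driver.find_element(By.XPATH, "//nav//ul/li[2]/a").click()

# A text-based locator is less sensitive to items shifting position,
# though it breaks instead if the label itself is reworded.
driver.find_element(By.LINK_TEXT, "Products").click()
```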

Functionize’s intelligent element fingerprinting covers every aspect of a system including all service calls, the visibility state of elements, the relationship between user actions and server calls and even the timing of page loads and actions. This leads to a collection of hundreds of attributes for each element which are ranked for accuracy and robustness. As a result, we are still able to choose the correct selector, even if the functionality has changed.

3.  Incorrect comparisons

Often, if selectors swap places on screen, perhaps because they are sorted in a different order, the wrong item may be selected by mistake. Later in the test, a comparison may be used to look for the original item. As an example, imagine testing a health insurance broker app. You may intend to select a policy from Kaiser, but because the providers are listed in a different order, you select Aetna instead. If a later stage of your test checks whether the supplier == Kaiser, it will fail. However, this may not be a real failure if the test is only interested in the fact that a supplier was successfully selected.
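
In Selenium terms, the failure looks something like the sketch below (the page structure and element IDs are invented for illustration):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/quotes")  # hypothetical broker app

# The author's intent was "pick the first provider in the list", which
# happened to be Kaiser when the test was written.
driver.find_element(By.CSS_SELECTOR, ".provider-list li:first-child").click()

selected = driver.find_element(By.ID, "selected-provider").text

# Brittle: fails as soon as the providers are sorted differently, even though
# selecting *a* provider is all this scenario really needs to prove.
assert selected == "Kaiser"

# More appropriate, if the intent is only "some provider was selected":
assert selected != ""
```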

Functionize’s ML engine helps find this sort of error using its Root-Cause Analysis capability. This uses machine learning to identify what was probably intended in the test and either update the selector or relax the comparison so the test can proceed.

4.  Over-precise comparisons

Comparisons are used to check whether an action has completed properly. For instance, you might check that the total value of a shopping cart is correct. If you make your comparison too precise, you run the risk that a minor change will make it fail. If you check for an exact sum in the shopping cart, for example, then any price change in your database will cause the comparison to fail.
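
Here is a small Python sketch of the shopping cart example (the element IDs and prices are made up) contrasting an over-precise assertion with looser alternatives:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/cart")  # hypothetical shop

total_text = driver.find_element(By.ID, "cart-total").text  # e.g. "$42.97"
total = float(total_text.replace("$", ""))

# Over-precise: any price change in the catalogue breaks the test,
# even though the checkout flow itself still works.
assert total == 42.97

# Looser alternatives, depending on what the test actually needs to prove:
assert total > 0                # the cart total was calculated at all
assert 30.00 <= total <= 60.00  # the total falls within a plausible range
```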

Functionize’s new ML engine can address this using its rules-based expert system. This will identify places where over-precise comparisons have been made and can suggest better alternatives. In the case above it will suggest that a better alternative would be to use a greater-than or range-based comparison.

5.  Underlying data changes

One of the issues with any Selenium test is working with a known dataset. Changes to the data can trigger all sorts of test issues, which can be hard to pin down. We have already seen two examples of this issue above. The problem is that you often want to test your system with real data, which by its nature may change dynamically.
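
A typical example of the trap, sketched in Python with a hypothetical page and post title, is an assertion that encodes a snapshot of live data:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/blog")  # hypothetical page fed by live data

# This assertion encodes yesterday's data: it passes only while that post
# happens to be the newest one. As soon as new content is published, the
# test fails even though the page is rendering perfectly well.
latest_title = driver.find_element(By.CSS_SELECTOR, ".post-list article h2").text
assert latest_title == "Our 2018 product roadmap"
```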

Functionize can solve this problem using our remodeling and self-heal functionality. If content or data has changed between a previous successful test and now, we can leverage our AI using a combination of element fingerprinting and the ML engine to understand what has changed, to predict what the test should be doing and to self-heal the test. This feature can reduce many man hours of debugging and test maintenance to a single click to update the tests.

6.  System in an unknown state

In any dynamic web application, existing system state can have a huge impact on testing. Here we are referring to things like the state of libraries that have been called, whether a library has been updated or replaced, or whether the system logs have become so bloated that they are slowing things down. Modern frameworks such as React or Angular.js exacerbate this because they rely on client-side data models and dynamic generation of elements in response to user actions.

The usual solution to this is to ensure that your tests always start from a known system state and are running on a defined software stack with known library versions, etc. Functionize makes this easy because our tests handle dynamic content and are completely agnostic to the underlying software, be it React, Angular.js or any other modern web stack.
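
If you are managing this by hand in Selenium instead, the usual pattern is to give every test a fresh browser session and a reset backend. A rough sketch using pytest (the URL and page title are hypothetical):

```python
import pytest
from selenium import webdriver


@pytest.fixture
def driver():
    # A fresh browser session per test gives a clean client-side state:
    # no cookies, localStorage or cached SPA data left over from earlier tests.
    drv = webdriver.Chrome()
    yield drv
    drv.quit()


def test_login_page(driver):
    # The backend is also reset to a known snapshot before the run
    # (for example by reloading a fixture database), so both client and
    # server start from a defined state.
    driver.get("https://example.com/login")  # hypothetical app under test
    assert "Login" in driver.title
```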

7.  Errors may only become apparent later in the test

Often with web testing, a test step can be incorrect, yet the action will succeed and the test will proceed without apparent issue. Then, later in the test, the problem manifests itself and triggers a failure. The trouble is that the root cause of this failure lies many steps earlier. Imagine the checkout flow for a web shop. After filling in their email address, a user can click to create an account and proceed to checkout. Now the page is redesigned so that the create account button moves to a different part of the page and is renamed “signup”. In its place, there is now a guest checkout button. The test will still happily proceed right through checkout. But if the final step is to check the order status on the user account page, the test will now fail.
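
The sketch below (a hypothetical checkout page, in Python) shows how the wrong click can succeed silently and only surface as a failure many steps later:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")  # hypothetical web shop

driver.find_element(By.ID, "email").send_keys("test@example.com")

# Positional locator: before the redesign this spot held "create account".
# It now holds the new guest checkout button, so the click still succeeds
# and the order goes through without an account being created.
driver.find_element(By.CSS_SELECTOR, ".checkout-actions button:first-child").click()

# ... many steps later ...
driver.get("https://example.com/account/orders")
# Only here does the test finally fail, far away from the real root cause.
assert "Order #" in driver.find_element(By.TAG_NAME, "body").text
```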

Functionize’s ML Engine includes our patented Root Cause Analysis technology. This combines knowledge of previous successful tests with detailed analysis of each test step. It is then able to assess the probability that a given step was the trigger for a later error. In this case, it will know that the button that was meant to be selected is actually the one that moved, not the new guest checkout button. It will identify that this is the cause of the later failure and let you update the test accordingly.

8. Timing errors

Most Selenium tests rely on performing a set of actions in a specific sequence. If one of these actions is delayed, or if this sequence changes, the test is almost bound to fail. Let’s look at a simple example of testing a login flow.

1) Click to open the login screen.
2) Enter your username.
3) Enter your password.
4) Click login.
5) Check that you are now able to access your profile from the homepage.

Here, steps 2 and 3 could potentially be swapped, though not if the password box only appears after the username has been entered. All the other actions have to happen in the correct order and can only happen once the previous action has completed successfully. If you recorded this test using Selenium IDE, you may end up with a test that makes assumptions about timings. If the system is under heavier load when you run the test, there may be a delay between clicking login and actually arriving at the homepage. In that case, your test will fail.

Typically, the solution in Selenium is to add delays to the test to allow for slow page loads. But because Functionize’s intelligent fingerprinting understands things like element visibility state, the link between user actions and server responses and even the timing of page loads, it won’t let the test proceed until the page has loaded properly. This ensures that you aren’t relying on inflexible, predetermined delays to cope with this issue. However, we still encourage you to add steps in your test that verify that events like page transitions have completed correctly.
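
If you are synchronizing in Selenium itself, its explicit waits (WebDriverWait with expected conditions) are a more robust alternative to fixed sleeps. Here is a minimal Python sketch of the login flow above, assuming hypothetical element IDs and link text:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical login page

wait = WebDriverWait(driver, 10)  # poll for up to 10 seconds

wait.until(EC.visibility_of_element_located((By.ID, "username"))).send_keys("alice")
wait.until(EC.visibility_of_element_located((By.ID, "password"))).send_keys("s3cret")
wait.until(EC.element_to_be_clickable((By.ID, "login"))).click()

# Wait for the post-login page instead of sleeping for a fixed interval:
# the test proceeds as soon as the profile link appears, and only fails
# if it never appears within the timeout.
wait.until(EC.visibility_of_element_located((By.LINK_TEXT, "My profile"))).click()
```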

9. Dynamic pages

We’ve already seen how responsive pages cause real problems for Selenium tests because page elements move dynamically. However, fully dynamic sites pose an almost insurmountable issue for Selenium and other test frameworks. In a dynamic site, the HTML is generated on demand by scripts in the background and is then displayed on screen. What content is displayed is determined by a number of factors and can change unpredictably. For instance, a page may display a random news headline or the latest blog entries. This sort of dynamism means that Selenium is often unable to find the correct selector at all, or, having found it, later comparisons may break.
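
When you do have to script against dynamic content in Selenium, about the best you can do is wait for the generated element to exist and keep your assertions structural rather than content-based. A short sketch, with an invented page and selector:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/news")  # hypothetical dynamic page

# The headline block is injected by a client-side script after load, so the
# test must wait for it to exist rather than assume it is in the initial HTML.
headline = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, ".headline"))
)

# The headline itself is random, so assert on structure rather than content.
assert headline.text.strip() != ""
```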

Functionize’s system uses AI to analyze and model the system at the granularity of the individual elements. Not only does it know what each element is, it is also able to understand what that element does and how that fits into the system as a whole. This means that even if elements change dynamically between tests, it is still able to identify the correct one and keep track of any changes.

10. iFrames

Often web applications use iFrames to embed content. For instance, they may embed a mailing list signup form. Sometimes these iFrames may even be nested (iFrames within iFrames). This causes no end of issues for Selenium and other test frameworks. At best, dealing with these iFrames requires significant coding effort, because the script has to keep track of which DOM it is actually in before selecting an element, and may need to switch repeatedly between frames during the test flow.
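
For reference, this is what the bookkeeping looks like in raw Selenium (Python, with invented frame and field IDs), switching into each frame before touching its elements and back out again afterwards:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical page with a signup iFrame

# Elements inside an iFrame are invisible to the driver until you switch
# into that frame's DOM explicitly.
outer = driver.find_element(By.CSS_SELECTOR, "iframe#signup")
driver.switch_to.frame(outer)

# For nested iFrames you have to switch again, one level at a time.
inner = driver.find_element(By.CSS_SELECTOR, "iframe#newsletter-form")
driver.switch_to.frame(inner)

driver.find_element(By.ID, "email").send_keys("test@example.com")
driver.find_element(By.ID, "subscribe").click()

# And remember to switch back before interacting with the main page again.
driver.switch_to.default_content()
```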

Functionize avoids this issue because we use smart DOM selection logic to make sure we are always selecting the correct element in the correct DOM. Coupled with intelligent element fingerprinting and our AI modeling of the system being tested, this ensures that even deeply nested iFrames can be handled seamlessly.