Robot Framework: A Closer Look at Keyword-driven Testing Approach (Wed, 21 Mar 2018)

A Brief History of Keyword-Driven Automation Testing

Keyword-driven automation testing, paradoxically, figured among the original solutions to the problems of scripted automation testing tools, and Robot Framework led the field of open source packages. The solution of the time was to reduce scripting, not to eliminate it altogether. Perhaps surprisingly, these scripted testing tools required automation engineers long before the onslaught of Agile and DevOps mandated engineers in QA. Robot Framework addressed the problem that early test scripts were either not reusable at all, or reusable only with significant complication, because each new test case needed hard coding.

Robot Framework implements the method of “action words”: a set of keywords intended to make the arguments of certain test functions easier to invoke and to reduce the amount of coding required for new test cases.1 The keywords are moved out to a data file which is meant to be readable and easily modified, and key functions are invoked in friendlier language than that of standard programming languages. A function that clicks the login button after the user inputs a username and password is entered by a tester simply as “Submit Credentials.” Here we see the predecessor of BDD scripting tools like Cucumber and its scripting language, Gherkin.
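The mechanism behind action words can be sketched in a few lines of Python. All names here are invented for illustration and are not Robot Framework internals: each keyword maps to a coded function, and a test case is just a list of keyword rows.

```python
# Toy keyword-driven runner; the keyword names mimic the login example,
# but the implementation is a stand-in, not Robot Framework code.

def input_username(state, name):
    state["username"] = name

def input_password(state, pw):
    state["password"] = pw

def submit_credentials(state):
    # Stand-in for clicking the login button in a real browser session.
    state["logged_in"] = (state.get("username") == "demo"
                          and state.get("password") == "secret")

KEYWORDS = {
    "Input Username": input_username,
    "Input Password": input_password,
    "Submit Credentials": submit_credentials,
}

def run_test(steps):
    """Execute a test case given as rows of (keyword, *arguments)."""
    state = {}
    for keyword, *args in steps:
        KEYWORDS[keyword](state, *args)
    return state

result = run_test([
    ("Input Username", "demo"),
    ("Input Password", "secret"),
    ("Submit Credentials",),
])
print(result["logged_in"])  # True
```

New test cases become new rows of data rather than new code, which is the whole appeal of the approach.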

A feature of heavily scripted testing tools is that user input to be tested in the SUT can be added to a list while the script used to run the test requires no change. This method of prescribing user input to modify the test scenario is also called data-driven testing, to distinguish it from frameworks which require more hard coding of actual scripts. Ideally, the approach is easier to maintain, requires less technical staff, and reduces the burden on engineering. But the problem today is that engineers are still needed in QA, and web apps are getting more complicated and more difficult for mortals to manage. If apps are getting more complicated, then it follows that the engineering skills required to test them will intensify.

There is one extraordinary conundrum hovering around the rise of the myriad scripting tools intended to automate testing: if you can code in Python, then you can write your test scripts in Python; you don’t need another layer of tools. If you can’t program in Python, then you probably can’t write Robot scripts either, because doing so requires Python, Robot’s Gherkin-like BDD syntax, and, implicitly, some knowledge of Selenese to follow what’s happening in the SeleniumLibrary (which must be co-installed with Robot). And Robot itself was written in Python! On the surface this looks like a spiral staircase of ever-increasing development work in QA. The conundrum is resolved, to a limited extent, by Robot Framework’s report generation: Robot creates reports and logs in HTML form which can be reviewed in the browser, which is handy for testers already working on web app testing.

Robot’s Nuts and Bolts

Robot Framework’s intended objective is to automate acceptance testing, also called ATDD, or acceptance test-driven development. Tabular data files contain the keywords and arguments for implementing test cases. The pivotal technical feature of Robot Framework is modularity: libraries from many standard languages can be included and extended in this open source framework. SeleniumLibrary must be installed alongside Robot Framework for web testing, and both require the Python interpreter.
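Assuming a working Python installation, the typical setup is a pair of pip commands (package names as published on PyPI):

```shell
# Install Robot Framework plus the Selenium keyword library
pip install robotframework robotframework-seleniumlibrary

# Verify the installation, then run a suite of .robot files
robot --version
robot login_tests/
```

The `login_tests/` directory name is a placeholder for wherever your .robot suite files live.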

Robot Framework, Python, and SeleniumLibrary plus its various dependencies can be installed with the pip package manager. Once installed, Robot Framework tests are data-driven and use keywords which are specified in Test Templates and invoked with their arguments in the test data. A theme inherent in Robot scripting, which can be done in its own IDE called RIDE, is a workflow consistent with generic acceptance testing models. That is, Robot enables a repeatable pattern of testing which can be modified with new code changes. In practice, keyword testing divides the testing process into two phases: design and development, and test execution. Let’s look at a design and development example script provided by the architects.

*** Test Cases ***               USER NAME        PASSWORD
Invalid Username                 invalid          ${VALID PASSWORD}
Invalid Password                 ${VALID USER}    invalid
Invalid Username And Password    invalid          whatever
Empty Username                   ${EMPTY}         ${VALID PASSWORD}
Empty Password                   ${VALID USER}    ${EMPTY}
Empty Username And Password      ${EMPTY}         ${EMPTY}

*** Keywords ***
Login With Invalid Credentials Should Fail
   [Arguments]     ${username}    ${password}
   Input Username     ${username}
   Input Password     ${password}
   Submit Credentials
   Login Should Have Failed

Login Should Have Failed
   Location Should Be     ${ERROR URL}
   Title Should Be     Error Page

Here we can see how keywords are used to invoke logic from coded functions, to do things like actually clicking the submit button to sign in when testing a web app. There is a substantial community of users, and many code samples and debugging tips are provided on repositories like Bitbucket and GitHub. A typical contributor article provides development tips on structuring tests, naming variables, and passing arguments, such as this:

*** Test Cases ***
Withdraw From Account
   Withdraw From Account    $50
   Withdraw Should Have Succeeded

*** Keywords ***
Withdraw From Account
   [Arguments]    ${amount}
   ${STATUS} =    Withdraw From User Account    ${USER}    ${amount}
   Set Test Variable    ${STATUS}

Withdraw Should Have Succeeded
   Should Be Equal    ${STATUS}   SUCCESS

Here we see an example script in which the dollar amount is passed as an argument to test a transaction. Some of the coding is circumvented and replaced with more natural language components. But we also readily see how even this simplified Robot Framework scripting language quickly mounts up to a programming and development task, although this is precisely the task it was ostensibly designed to reduce. And this brings us to the opposite vista, a less fortunate view: engineering tests whose development rivals that of the original software under test! Here is a simpler case which reveals an important limitation of Robot Framework: the browsers to be tested must each be specified, as in this code:

robot --variable BROWSER:Chrome login_tests
robot --variable BROWSER:IE login_tests

This looks like an old-fashioned workaround, the kind of hedging that must be done with old-style test scripting frameworks. Although it may not yet be commonly recognized, this type of automation looks like manual drudgery in the light of true automation testing, a drudgery that is no longer necessary.

Robot Versus True Intelligence

At the time Robot Framework’s inventor conceived the system as a Master’s thesis project in the 1990s, truly automated testing powered by machine learning and artificial intelligence was a distant fantasy, not even on the horizon. The modularity of Robot Framework’s design enabled it to transcend platforms and become cross-technology compatible, and that was indeed innovation back then. That was cutting edge. But there is now a new caveat in the automation testing contract which is nothing less than a renaissance in the field: scripted automation testing is no longer viable. Scripted testing is deprecated, and machine intelligence based testing is now poised to replace all antecedents. Functionize leads this field with innovative new patented technologies like Adaptive Event Analysis (AEA), which uses machine learning methodologies developed and extended uniquely by Functionize. One of the results is self-healing test cases.

Functionize’s testing platform not only removes scripting altogether and makes it possible to author tests very rapidly, it also contextualizes each step of test creation so that tests are no longer brittle. There is no need to write lines of code, as above, specifying the various browsers: Functionize performs load testing simultaneously on hundreds of virtual machines and on all browsers. In Robot Framework, the browser used in a test case is specified by the ${BROWSER} variable, which must be defined in advance in the resource.robot file; Firefox is the default, and testing an app on other browsers must be coded. Functionize needs no such coding.

Another irony in the lore of keyword-driven testing is the claim that test assertions can be abstracted in a way that makes them reusable. In reality, they are only reusable when rescripted. Demos always begin with a login test and credentials, perhaps because this is a natural starting point, but it is likely the only assertion which needs little or no rescripting, and this is a suspicion lurking in the minds of managers looking for an escape from the burden of hiring engineers to write test scripts. Promotional language claims that keyword-coded assertions are readable; whether anyone but developers can read them remains contentious.

Functionize delivers the desired escape from an engineer-dominated QA with a truly intelligent testing platform. Functionize is more like your testing assistant: imagine a tireless and obedient assistant with a perfect memory for details, one that knows how to deliver software free of exceptions, an app in full compliance, an immaculate customer experience.



A Brief Overview of Performance Testing Tools (Sun, 11 Mar 2018)

The field of performance testing is undergoing a major transformation. Highly technical tools like JMeter have helped companies make better business decisions by illuminating a website’s performance across static and dynamic resources. Since the online customer experience is so critical today, companies cannot afford to be unaware of scenarios and conditions that could bring their site’s performance to a halt. However, there is room for growth, and Functionize is proud to introduce the next generation of performance testing and analytics, which requires no scripting.

The Metrics That Matter Most when Performance Testing

What are the most important metrics in performance testing? Four equal and interdependent factors of performance command our immediate attention when testing a web app: confidence, speed, stability, and scalability. Confidence arises from our assurance of an excellent user experience; high marks in the remaining three tend to satisfy the first.

Customers expect satisfying responsiveness and accuracy. To this end, performance testing of apps under the stress of high traffic and on various platforms should guarantee both speed and stability. Will the app and its database maintain speed and stability and behave as expected under the strain of high-volume requests? Furthermore, is the app scalable when traffic far exceeds anticipated levels? These are the benchmarks of performance testing today.1 How are these crucial factors actually measured? And what is the future of performance testing?

Too Many Performance Tools are Required

To measure the performance of a web app, user traffic must be modeled and measured, and commonly the test plan is scripted by QA engineers. If the ideal maximum capacity of a shopping cart app is 100 simultaneous users, then the required number of virtual machines is scripted to simulate 100 users plunking away at your user interface. Once rendered, and after all assertions are executed and timed on the specified browsers, performance reports are generated. To accomplish this, enterprises now juggle a bevy of developer-level testing tools, all of which realistically require an enormous amount of scripting by QA engineers. Although 100,000 simultaneous users may sound extreme, it is the specified product capability of a tool called LoadStorm.2 In practical terms, if you want to launch an ecommerce web app, these may be the typical conditions you need to simulate in testing. Scalability tests then increase the user count, say to 200, and report back on how your app works under stress. Performance testing today requires a bewildering array of tools, and engineers to deploy them.
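Stripped of the tooling, the core of such a load test is just N concurrent virtual users timed against the system. Here is a toy sketch in Python’s standard library, where `fake_checkout` is a stub standing in for a real HTTP request:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def fake_checkout(user_id):
    """Stub for one simulated user's request; replace with a real HTTP call."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend the server took about 10 ms to respond
    return time.perf_counter() - start

def run_load_test(n_users):
    # One worker per simulated user, all issued concurrently.
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        latencies = list(pool.map(fake_checkout, range(n_users)))
    return {
        "users": n_users,
        "median_s": statistics.median(latencies),
        "worst_s": max(latencies),
    }

report = run_load_test(100)   # baseline capacity
stress = run_load_test(200)   # scalability check at double the load
print(report["users"], stress["users"])
```

Doubling `n_users` and re-running is the essence of the scalability test described above.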

To dramatize the point, here is a list of viable competitors in the field today: Apache JMeter, Appvance, CloudTest, Httperf, LoadComplete, LoadImpact, LoadRunner, Loadster, LoadStorm, LoadUI NG, NeoLoad, OpenSTA, QEngine, Rational Performance Tester (IBM), Testing Anywhere, WAPT, WebLOAD. The list is alphabetized to avoid the spectre of favoritism. Let’s take the first one on the list and vivisect it to see what modern QA engineers are doing at work nowadays.

How to Use Popular Performance Tools like Apache JMeter

JMeter is an Apache open source tool, probably the most popular in its class, for several reasons. Originally designed as a performance testing tool, JMeter now supports load testing as well as functional testing. Increasingly popular SPAs, along with static and dynamic pages, are all within the scope of JMeter’s web application testing capability. Here are some of JMeter’s intended test targets:

  • Web: HTTP and HTTPS
  • Web Services: XML, SOAP, REST
  • FTP
  • Database via JDBC
  • LDAP
  • Email
  • MongoDB
  • Java Objects

A wide range of traffic conditions can be simulated: heavy load requests on multiple servers, thousands of drone users amounting to hundreds of thousands of HTTP requests, all at once or at intervals of your design, not to mention database resilience in various network conditions. How do QA engineers actually wield all this power?

The first step with JMeter is to create a test plan, as shown below in Diagram 1. This step really points out the fundamental action potential in JMeter, which is to create an army of simulated users (called a Thread Group in JMeter jargon) which all visit your web app UI and execute assertions via HTTP Samplers, recording what works and what fails.3 During these assertions JMeter creates reports and logs which you receive as HTML files after the test is complete.

Diagram 1. JMeter Basic Setup

With a Thread Group of users configured, the next step is to define the controllers that execute HTTP requests, amounting to the actions or assertions to be executed in your app. If your web app requires authentication, you need to tell JMeter the names of all the fields in the login form, either by inspecting the page source or by using the JMeter Proxy Recorder. The concise user manual says it best:

“To do this in JMeter, add an HTTP Request, and set the method to POST. You’ll need to know the names of the fields used by the form, and the target page. These can be found out by inspecting the code of the login page. [If this is difficult to do, you can use the JMeter Proxy Recorder to record the login sequence.] Set the path to the target of the submit button. Click the Add button twice and enter the username and password details. Sometimes the login form contains additional hidden fields. These will need to be added as well.”
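To make the manual’s instructions concrete, here is the same login POST sketched with Python’s standard library (the URL, field names, and hidden token are placeholders you would read from the real login page). The request is only constructed here, not sent:

```python
from urllib import parse, request

# Field names and target path are hypothetical; inspect the real login form.
fields = {
    "username": "tester",
    "password": "secret",
    "csrf_token": "hidden-value-from-page",  # hidden fields must be included too
}
body = parse.urlencode(fields).encode("utf-8")

req = request.Request("https://example.com/login", data=body, method="POST")
req.add_header("Content-Type", "application/x-www-form-urlencoded")

# request.urlopen(req) would actually submit the form.
print(req.get_method(), req.full_url)
```

This is exactly the knowledge JMeter asks you for: the target path, the POST method, and every form field, visible or hidden.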

Advanced test plans are supported as well, including actions such as handling user sessions with URL rewriting, using header managers, building database test plans, and defining SQL requests over JDBC. JMeter also supports per-thread cookies. The final component to configure in the test plan is a Listener, the element JMeter uses to store the results of your requests and output them after testing. JMeter creates an impressive set of HTML reports on test completion. So, does JMeter take home gold, silver, or bronze?

Pleasure & Pain

The items in this list mysteriously jump back and forth between the advantages and disadvantages columns depending on who is reading them. The JMeter UI may be “easy to use” if you’re clear about the POST method and HTTP request authentication (the quote from the user manual above may look propitious or ominous depending on your tech background). It may be a headache if you were planning to validate Angular and AJAX scripts: JMeter can’t do this, and you will have to add a tool or a plugin.* JMeter cannot simulate AJAX requests and cannot process .js files; each AJAX request must be added as an HTTP Sampler in JMeter.5 An engineer may love the open source code, but a CFO may see this as a second-level development complex, a new HR sinkhole, or a bulging budget for a QA now stocked with engineers.

In most top-ten lists of performance testing tools, one of the golden advantages regularly proclaimed is “no scripting needed.” But this is only true in the limited case that nothing significant changes in the UI from version to version, and something always changes. The reality today is that QA needs engineers to script testing.

Why JMeter is a Popular Choice

The advantages of JMeter are plentiful and explain its popularity. You can download the whole shebang free of charge from Apache, and you get the source code in the bundle; you can customize it to your needs and then contribute code back to the developer community.4 The developer community is vast, and there is bountiful support for JMeter online. JMeter is a Java app and runs on any platform. Reporting options are impressive and include charts and graphs. JMeter supports several data formats for report output, including JSON, and all popular protocols. Newer versions of JMeter support test plans including load, stress, functional, and distributed testing.

QA Full of Engineers

The only drawback within the intended scope is that JMeter cannot test JavaScript code directly, which means AngularJS, jQuery, Ajax, and all JS libraries. There are workarounds published by the JMeter community, but the app is not a browser and cannot run JS the way a browser does; anything emerging from JavaScript will require a Sampler or another tool to test. And finally, JMeter only tests web apps; it cannot test desktop apps. The overarching demand for testing tools in general is an extraordinary burden arising from the need for testing to keep pace with continuous integration. Accelerated development in Agile and DevOps has brought such pressure to bear on QA that many enterprises now staff QA with developers bearing the title QA Engineer.

Decline of Scripted Testing Apps

Like all testing tools, JMeter has its long list of advantages and disadvantages. JMeter is a tool designed by developers and intended for use by other developers. The provenance of this second-level development complex of QA engineers is the success of Agile and DevOps, which together increased development efficiency and created a bottleneck at QA’s gate. But that was an evolution under pressure, not a design by intelligence, and it was a short-term solution which has expired.

A Performance Solution that Requires No Scripting

A lighter and less technical solution is now needed to address the problem of a QA lab heavy with additional engineers. With Functionize, once you create a functional test, we provide all the performance data without any additional steps or scripting. While many enterprises today prepare for increasing complexity in testing by ramping up engineers in QA, Functionize takes a lighter approach. Any test suite created within the Functionize platform provides performance data without additional scripting, allowing engineers to focus on building and deploying the core product. Our patented machine learning methodologies, such as adaptive event analysis, make self-healing tests possible; there is no need to learn another scripting language, or to modify scripts in order to modify tests. Functionize learns your testing paradigm, and with every execution in the cloud, tests become increasingly resilient. Functionize provides performance data as a natural derivative of your regression suite, producing reports which are easily readable and immediately actionable, enabling businesses to make better decisions. Tune in again for the sequel to this article, in which we will provide a full description of the advantages of choosing Functionize as a total platform.


* Actually, there are workarounds for running JavaScript in JMeter using runtime functions and BSF Samplers.


  3. Apache JMeter User Manual

Continuous Testing – The Good, The Bad, and the Ugly (Sat, 03 Mar 2018)

What is Continuous Testing?

Test automation produces a set of failure/acceptance data points that correspond to product requirements. Continuous testing has a broader scope across more of the development cycle, focuses more on business risk, and provides more insight into the probability that a product is going to be shippable. It’s a shift in thinking, and a broadening of processes, in which the stakeholders change the driving question. In CT, it’s no longer sufficient to ask, late in the cycle, “Is testing done now?” For teams that can achieve it, it’s far better to get a confident answer to this question:

“With this latest cycle iteration, are we now at the point at which the release candidate has an acceptably low level of risk to the business?”

Continuous testing is a framework for running automated tests—as early as practicable and across the product delivery pipeline—in which the results of these tests quickly provide risk exposure feedback on a specific software release candidate.

The promise of continuous testing is faster delivery of higher quality software.

Generally, the goal is to achieve higher speed with higher quality by moving testing upstream and testing with a higher degree of frequency. It’s easy to ship a software product if testing is minimal, and it’s easy to get out good software if you’ve got a whole year to deliver a feature. Test early, test often, test exhaustively, and get the payoff in higher quality products that potentially release sooner.

The price for all of this? You’ll need to reconfigure your delivery pipeline. Full-bore continuous testing includes not only code coverage, functional quality, and compliance, but also impact analysis and post-release testing.

The Need for Continuous Testing

Changes in software development continue to increase stress on testing teams like never before. Also, the complexity of newer technologies and components presents more challenges in achieving test automation with conventional methods and tools.

Extensive, complex application architectures — software tools and technologies continue to become more complex, cloud-connected, distributed, and expansive with APIs and microservices. An ever-increasing number of combinations of innovations, application components, and protocols interact within a single event or transaction.

Frequent releases/continuous builds — DevOps and Agile continue a big push toward continuous delivery, and this has brought the industry to the point at which no small number of applications have release-ready builds many times per day. This is only possible when significant effort has been put into the product lifecycle to automate testing and assess the risk of failure. It also means that end-of-cycle testing must have a much shorter duration.

Managing risk — software is a primary business interface, so any application failure translates directly into a failure for the business. A “minor” glitch will have a serious negative impact if it significantly affects the user experience. For many software vendors and service providers, application integrity risks are now a critical concern for all business leaders.

How does CT differ from Testing Automation?

We can categorize the main differences between test automation and continuous testing under three headings: risk, breadth of coverage, and time.

Minimizing Business Risk

Today, most businesses have not only exposed many elements of internal applications to external end users, they have also built many types of additional software that extend and complement those internal applications. Airlines, for example, provide access to their previously internal booking systems. They also provide extensions to these systems so that customers can browse, estimate, and book all aspects of a vacation: flights, hotels, rentals, and extra activities. These integrations are proving to be quite innovative, but this also tends to increase the number of failure points.

Major software application failures have brought serious repercussions, to the extent that software-related risks are now high-profile items in many business financial filings. Recent statistics suggest that notable software failures result in an average 4% stock price decline—about a $2.5 billion reduction in total market capitalization. This is a direct hit to the bottom line, so business leaders are putting more pressure on their IT leaders to find a remedy.

Go back to the need for continuous testing: if your test cases haven’t been built to readily assess business risk, then the results won’t provide the feedback necessary to continually assess that risk. Most tests are designed to provide low-level detail on whether requirements and specifications have been met; such tests give no indication of how much risk the business would take on if the software were released today. Think about this: could your senior management intelligently decide to cancel a release according to the test results? If the answer is no, then your tests are out of alignment with your business risk assessment criteria.
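The contrast can be made concrete with a toy risk gate: instead of reporting raw pass/fail counts, weight each failing test by the business impact of the feature it covers. The feature names, weights, and threshold below are all invented for illustration:

```python
# Toy risk gate: each failed test carries the business weight of the
# feature it exercises; release only if total risk stays under a threshold.

FEATURE_WEIGHT = {"checkout": 10, "search": 4, "profile_page": 1}  # invented weights

def release_risk(results):
    """results: list of (feature, passed) tuples from the test run."""
    return sum(FEATURE_WEIGHT[feature] for feature, passed in results if not passed)

def release_decision(results, threshold=5):
    risk = release_risk(results)
    return ("SHIP" if risk <= threshold else "HOLD", risk)

run = [("checkout", True), ("search", False), ("profile_page", False)]
print(release_decision(run))  # ('SHIP', 5): two failures, but low-impact ones
```

A summary like ('SHIP', 5) is something senior management can act on; a bare count of two failed tests is not.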

Let’s be clear: This is not to suggest that granular testing isn’t valuable. The point here is that the software industry has a long way to go in preventing high-risk release candidates from being sent into the wild.

Broader Coverage

Even when a company manages to avoid the detriments of large-scale software failures, it remains true that a supposedly minor defect can cause major problems. If a user evaluation results in an unsatisfactory experience or fails to meet expectations, there is a real risk that the customer will consider your competitors. There is also the risk of damage to the brand if any user takes their complaints to the news media.

Merely knowing that a unit test fails or an interface test passes doesn’t tell you the extent to which recent app changes will affect the user experience. To maintain continuity and satisfaction for the user community, your tests must be sufficiently broad to detect application changes that adversely impact functionality on which users rely.

Accelerating the Delivery Pipelines

The speed at which organizations ship software has become a competitive differentiator, so a majority of companies are looking to DevOps, Agile, and other methodologies to optimize and accelerate delivery pipelines.

In its infancy, automated testing brought testing innovations to internal applications and systems that were built with conventional, waterfall development procedures and processes. Since these systems were fully under the control of the organization, everything was dev-complete and test-ready at the designated start of the testing phase. With the rise of Agile and DevOps, the expectation is forming in many companies that testing must start very soon after development begins. Otherwise, the user story itself won’t be tested; rather, it will be assumed to be “done-done” and forgotten because of the intensity that is typical of short-duration sprints (about two weeks).

Some highly-optimized DevOps teams are actually realizing continuous delivery with consistent success. These teams can often deliver releases every hour of the day—or more frequently. Feedback at each step in the process must be virtually instantaneous.

If quality isn’t a critical concern at your company—minimal disincentive for rolling back when defects are found in production—then it might be sufficient to quickly run some unit and smoke tests on the release. If, on the other hand, your management and your team have reached the level of frustration that drives you to minimize the risk of releasing defective software to customers, then you might be searching for a way to achieve solid risk mitigation.

For testing, there are a number of significant impacts:

  • To be effective in continuous delivery pipelines, testing has to become an integral activity for the entire development cycle—instead of continuing to be seen as a hygiene activity that occurs post-development.
  • As much as possible, tests should be built concurrently and be ready to execute very soon after the new functions or features are built.
  • The entire team should work together to analyze and determine which tests should be run at specific points in the delivery pipeline.
  • Each test suite should be configured to run fast enough to avoid any bottleneck in a particular stage in the software delivery pipeline.
  • Environment stabilization is important to prevent constant changes from raising false positives.
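The last two points can be sketched as a simple time-budget rule: each pipeline stage has a budget, and each suite runs at the earliest stage it fits. All durations and budgets below are invented for illustration:

```python
# Toy selector: run a test suite at the earliest pipeline stage whose
# time budget it fits. All numbers are illustrative.

STAGE_BUDGET_S = {"commit": 60, "integration": 600, "nightly": 14400}

SUITE_DURATION_S = {
    "unit": 45,
    "smoke": 50,
    "api_contract": 300,
    "full_regression": 7200,
}

def assign_stage(suite):
    # Dicts preserve insertion order, so stages are tried earliest-first.
    for stage, budget in STAGE_BUDGET_S.items():
        if SUITE_DURATION_S[suite] <= budget:
            return stage
    return "nightly"  # fallback: anything oversized runs overnight

plan = {suite: assign_stage(suite) for suite in SUITE_DURATION_S}
print(plan)
# {'unit': 'commit', 'smoke': 'commit', 'api_contract': 'integration',
#  'full_regression': 'nightly'}
```

Fast suites gate every commit without becoming a bottleneck, while the slow regression suite still runs on a schedule.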

Continuous Assessment

To adequately realize some of the benefits of CT, a cultural shift must first get underway. Cultural change begins with a change in thinking. It can be helpful to think of testing as a product readiness assessment.

One perspective on continuous testing is to view it as continuous assessment. With a high degree of frequency—all along the pipeline—development and QA staff should be constantly inspecting code and asking: Is it ready, yet? Is it better? Is it worse? While a few product companies may claim to operate well in CD/CT programs, it is wishful thinking for most development teams.

Minimizing Risk

Continuous testing, if achievable, can significantly minimize business risk. It’s important to go up a level and think of continuous testing in strategic terms. A primary strategic goal for any product company is to reduce business risk in releasing applications, such that—at minimum—new code won’t frustrate or alienate customers. Test automation is a tactical activity that contributes to overall continuous testing goals.

For continuous testing, the focus shouldn’t be on unit testing details, proper code formatting, or how many bugs were found. Though all of that is part of the pipeline, the most critical concern in CT is the risk to the business; technical risk is a lesser concern. The guiding questions should always be: Is the product release-ready? Will our customers maintain their high levels of satisfaction when they use the updated product?

Ready for more discussion on continuous testing? This is the first in a two-part series on continuous testing. In the next article, we’ll look at the challenges, scope, and pursuit of best-practices in continuous testing.

The post Continuous Testing – The Good, The Bad, and the Ugly appeared first on

Getting Past the Hype of Autonomous Testing Tue, 27 Feb 2018 21:04:05 +0000

In 2017, machine learning became a worldwide buzzword—and it now seems that all product offerings can only garner attention if they are touted as being capable of machine learning. Although artificial intelligence / machine learning (AI/ML) technology has been employed in the software development industry for at least two decades, we’ve come to the point at which machine learning is commonly marketed as a complete solution—rather than only one factor in a solution.

There is no shortage of folks who tout the promise of artificial intelligence and machine learning as all-inclusive, pointing us to the day in which we’ll watch a fully-intelligent software application autonomously and exhaustively perform testing on other software apps. But many industry leaders have serious doubts. We suspect that you, dear reader, also have some reservations.

To most, machine learning is a black box. The problem is that too many users—and many buyers!—are essentially clueless on how it works. Since they lack this understanding, they are incapable of properly evaluating its function or value. It may surprise you to know that too many product vendors don’t fully understand the ML features of their own products. They can’t demonstrate the pathways to the results, they gloss over the benefits, and they find themselves in the embarrassing situation of providing entirely unsatisfactory explanations to software and QA professionals.

Far, far away?

Compounding all of this is the reality that AI/ML technology is still a long way from simulating human levels of intelligence. We are nowhere near the dream of achieving a human-machine unification that futurists believe will help us realize our fullest potential.

In the past few years, various attempts at machine learning (ML) adoption have been shown to somewhat disrupt a number of products and services. But this disruption doesn’t equate to a valuable technological advancement. Here are only a few recent headlines that exhibit the increasing skepticism, tentative assent, and tempered enthusiasm throughout the industry:

  • Gartner dubs machine learning king of hype
  • Cutting Through The Machine Learning Hype
  • Do you need AI? Maybe. But you definitely need technology that works

The time is well-nigh for ML product vendors to buck up and prove their worth. Otherwise, let’s all simmer down and calmly resolve to navigate toward testing automation that truly adds value to software development efforts.

A lesson from the push for self-driving automobiles

In May of 2016, a “self-driving” Tesla Model S was involved in a fatal crash in Florida.  The car was traveling down a road in Autopilot mode, when a large tractor-trailer truck turned left in front of the vehicle. The Tesla continued underneath the trailer at a speed high enough that its entire roof was sheared off. Obviously, the “autonomous” Tesla Autopilot entirely failed to recognize a very large object, resulting in the death of Joshua Brown.

Soon after the incident, Tesla published a response—expressing condolences and outlining the relevant Tesla safety procedures. Most importantly, Tesla stated that Autopilot is disabled at first, that the driver must take action to enable this mode, and that the driver must expressly confirm understanding that the technology is provided as a beta-phase feature. After Autopilot is enabled, an on-screen warning declares that it is a driver-assist feature. The driver must keep a hand on the steering wheel at all times and maintain control of the vehicle.

There are clear analogies between this approximation to autonomous driving and the current state of software development and QA testing automation.

The Challenge of Testing Automation

For many years, testing tool vendors have made various promises about ever-higher degrees of test automation. For many months now, these and other vendors have been promising to bring machine learning to QA. But the facts don’t support the claims. Few such vendors can point to measurable business outcomes arising from their automation efforts.

Many testing platforms have outdated architectures

Well into the 21st century, the most common software testing products still rest on foundations of old technology. This is quite surprising, since many application and enterprise architectures continue to evolve. It is very rare, for example, to find any vendor or development team that is building or maintaining a client/server application—or releasing software on discrete, quarterly cycles. It would prove difficult to find a testing team that is given an entire quiet month of testing prior to product launch. Shoehorning new functionality into an old platform doesn’t provide a good solution, and often adds complexity that actually increases costs while decreasing efficiency.

Test scripts are difficult to maintain

If the application continues to change with continuous development, it becomes increasingly difficult to keep the test scripts synchronized. On non-trivial applications, many teams find that it can be easier and quicker to create new tests than to maintain existing ones. Not only does this bloat the test suite; more false positives will appear as the development team continues to press forward. Like the application code, the new scripts are susceptible to defects—and defects in the scripts are likely to cause additional false positives or interrupt test runs.
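One widely used mitigation is the page-object pattern: every locator lives in a single class, so a UI change means one edit instead of dozens. Here is a minimal Python sketch of the idea; the LoginPage class and the dict-backed FakeDriver are hypothetical stand-ins for illustration, not any particular framework’s API.

```python
class FakeDriver:
    """Stand-in for a real browser driver; maps locators to elements."""
    def __init__(self, elements):
        self.elements = elements

    def find(self, locator):
        return self.elements[locator]


class LoginPage:
    # Locators live in one place: a UI change means editing one line here,
    # not every test script that touches the login form.
    USERNAME = "input#username"
    PASSWORD = "input#password"
    SUBMIT = "button#login"

    def __init__(self, driver):
        self.driver = driver

    def submit_credentials(self, user, password):
        self.driver.find(self.USERNAME)["value"] = user
        self.driver.find(self.PASSWORD)["value"] = password
        return self.driver.find(self.SUBMIT)["on_click"]()


# A dozen tests can call submit_credentials(); none of them name a locator.
driver = FakeDriver({
    "input#username": {"value": ""},
    "input#password": {"value": ""},
    "button#login": {"on_click": lambda: "dashboard"},
})
page = LoginPage(driver)
print(page.submit_credentials("alice", "s3cret"))  # -> dashboard
```

The point of the sketch: when developers rename the submit button, only the three class constants need revision, and the rest of the suite keeps working.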

Software architectures are altogether different

The latest releases of most software products are drastically different from the architectures of even a few years ago. The technology mixture has also grown larger and increasingly complex. The industry continues to move swiftly away from client/server and mainframe systems toward cloud computing, APIs, microservices, and a vast universe of mobile applications and the Internet of Things (IoT).

At least two primary challenges face the community of testing professionals as we seek to get forward traction on automation:

  • It’s necessary to have a high level of technical expertise or business abstraction to test these technologies without getting into low-level details.
  • Different components of the application evolve at varying rates, and tend to create a process desynchronization.

Software development processes are quite different

Although many companies still maintain some waterfall development processes, there’s a clear trend toward short iterations and smaller delivery packages. Release frequency has compressed from quarterly to weekly or even daily. This compression puts great strain on testing teams that still need days or weeks to prepare the environment and the test data.

Ownership of quality assurance is shifting

In response to the need for shorter release cycles, many teams work to shift some of the testing upstream. Effectively, the developers take on more responsibility for ensuring quality code—reducing the burden on the testers and producing more reliable code further upstream—so the release has a chance of shipping on time. But developers typically lack the necessary skills, or sufficient time, to perform end-to-end testing.

Open-source tools are changing the landscape

The availability of open-source testing tools such as SoapUI and Selenium has been both beneficial and detrimental. Typically, an open-source testing tool is built to solve a particular problem for a specific type of user. Selenium, for example, is now a very popular tool for testing browser and web interfaces. Though Selenium is fast and agile, it doesn’t support comprehensive testing—across apps, databases, APIs, mainframes, and mobile interfaces. While it’s true that most applications today feature a web browser UI that requires regular testing, a browser or web API interface is only a small part of the many components in a complex business process. API testing with SoapUI has this same limitation.

Where do we go from here?

Clearly, software testing must improve. These challenges cannot be squarely addressed by continuing to use legacy tools and processes. Disruptive methodologies such as Continuous Integration and Delivery, DevOps, and Agile development are propagating across many industry segments. As this movement continues, software testing will become more central in making data-driven decisions when managing software releases. To continue making steady progress, organizations must acquire technologies that cultivate Continuous Testing. Otherwise, innovation will remain shackled to cumbersome, now-ineffective legacy testing tools.

Before bringing this article to a close, let’s consider the elements of a solid test automation strategy worth striving for in the near future.

  • Automation process design — It’s important to take care in designing test automation processes. As much as possible, think about how you can automate the entire pipeline. This includes the project timeline, estimation, testing schedules, and testing environments.
  • Automation architecture — Though scripting knowledge has historically been very important in automating any testing, it’s important to think hard about which automation framework is best for your environment, and how to get familiar with new scriptless frameworks.
  • Automating test case creation and execution — After deciding on the test automation framework, work can begin on building automation test cases. It’s important to prioritize key test cases.
  • Maintenance and monitoring — While test automation tools can provide excellent reports, it’s also important to cross-check them against the execution logs and to log defects into bug-tracking tools along with screenshots. Use what is learned in this phase to enhance the test cases and the test automation framework.

Final Thoughts

Many professionals fully realize the importance of software testing, but perhaps don’t have the time to stop and think of a way forward beyond conventional testing practices. Many companies, lured by the hype of how machine learning might boost software testing, have begun investing in test automation before conducting a thorough analysis of what is truly best.


How Functionize Stimulates Customer Experience Fri, 23 Feb 2018 21:54:10 +0000


Customer experience, in recent marketing hyperbole, is synonymous with quality. And notice that both aspire to imply good: good customer experience means good quality. Furthermore, quality software is a consequence of accurate testing. Naturally, then, customer experience improves when testing ensures intended performance. Customer experience is now so deeply entangled with quality and testing that enterprises place their own QA engineers in customer advocacy roles. These testers with Groovy developer skills spend much of their time on a customer’s site, identifying issues and advocating for solutions. Testing and quality are intimately related.

What exactly is the relationship? We talk about these subjects because there are apocalyptic quality failures which cost millions, and failure creates karmic problems, problems that ripple out across client relationships.

The costliest quality failures are discovered during a bad customer experience.

Although a quality failure at a customer site is stressful, it is also the most intensely motivating kind of failure, because it has the power to escalate the level of urgency in the delivery pipeline, in the full cycle of continuous integration, continuous testing, and continuous delivery. Presently there is an additional strain in the relationship between customer experience and quality assurance. The strain results from the new requirement to have engineers in what were previously simpler testing roles; this is the first unintended consequence of Devops.1 But because of the simultaneous requirement that testers assume customer advocacy roles, there is a disconnect at precisely the pivotal point previously occupied by the non-technical QA testing staff.

QA Engineers in Customer Advocacy Roles

If an engineer is testing a customer site with customer team members involved, and scripting tests using tools like Selenium and Coded UI or BDD code like Cucumber and Gherkin, the customer team is likely to feel lost in a random forest of jargon. Engineers staffed for testing roles are not ideal for this position, but who else can do the development work required to script tests? Now Functionize resolves the entire dilemma by returning the testing capability to non engineering human resources.

Functionize partners with non-technical personnel to intelligently replace engineers in QA. We accomplish this with patented new machine learning technology which learns test cases in such a way that no future scripting or editing of scripts is required. Functionize learns your user interface and actually creates new test cases with each new integration of code. Enabling non-technical staff to gate and release new software with confidence and accuracy heralds a new wave of intelligent, true automation testing. Let’s see how Functionize kills one of the most ancient and persistent bugs known to software development.

The Pesticide Paradox and Prejudice

In each cycle of regression testing a curious phenomenon emerges in which bugs become increasingly immune to scripted tests, in a manner analogous to E. coli developing immunity to penicillin! Let’s call such a bug a mutant. Each time developers commit a bug fix to a module, a subtle new error may be introduced which the scripted battery of tests is less likely to discover than in the previous cycle.2 The standard remedy nowadays is to edit existing test case scripts to achieve finer granularity, because the mutant is resistant to previous tests. In other words, presently the only method of fighting mutant bugs is for engineers to revise existing test scripts or write new ones. It is as if test cases are prejudiced; they are likely to detect only what they are coded to detect.

What if an intelligent ubertester could predict the appearance of mutants and present a line of new regression tests to its human companion tester without coding? That ubertester lives now in the form of Functionize. Functionize has a high IQ and actually reformulates its own tests to anticipate new mutant forms of old bugs in modules.

Functionize replaces engineers with a combination of human and machine intelligence: the perfect smart testing tool in the hands of intelligent non-technical testers who are more nearly on a technical par with customers and end users. This simplifies a Devops devolution which continues to complexify the testing world and distribute headaches unevenly among all involved team members.

How Functionize Mitigates Defect Clustering

Another source of inequality arising from bug behavior is known as defect clustering: bugs are not equally distributed across all modules in an application, but instead tend to lurk in groups, especially in newer, more recently coded modules and those with experimental functions. Bugs of a feather tend to huddle together; several in one module, while a majority of modules may have no bugs at all. These empirical anomalies in software testing are a permanent headache for human team members. But Functionize has unlimited endurance to sift pests from the infinite tedium of modules. Functionize never forgets a failure. It can repeat test cases and revise itself to adapt to changing app behavior. No scripting is required.

Functionize automatically creates new test cases! The perplexing annoyance of scripting new test cases on a Selenium dashboard after every code change from developers is fully relieved and eliminated by Functionize’s patented Adaptive Event Analysis. AEA is a new method of machine learning which recognizes changes in visual rendering and layout between code revisions, and actually adapts its own test cases to anticipate potential errors in new rollouts. Functionize is now literally revising the FURPS model.

How FURPS Relieves the QA Burden

Functionize extends the golden handshake to QA engineers. When was the last time you used a calculator that added two numbers incorrectly, or produced a division error? When the program does not change, quality is assured once and the game is over. QA is dismissed. We rarely enjoy this luxury in the innovative realm of software development, in which everything under the Sun is new and must be proven. Human testers strain to maintain comprehensive test regimes because of increasing complexity. However, replacing traditional testers with engineers is not the solution. We believe that human testers empowered by the intelligence of Functionize are the answer. Functionize further relieves the burden because it never forgets a test. Once learned, test cases are recalled and revised automatically to ensure that new code changes do not break pre-existing modules.

Complexity in today’s software demands equal complexity in quality assurance and testing. But scripted test cases written by QA engineers to create replayable tests turn out to be a paradox, because they constitute a second-level development complex: they install more developers in your QA department, rather than preparing existing testers for future complexity. It is an unintended and impracticable outcome of Devops. And this is the pivot point where we define the relationship of quality to testing: quality is determined by the customer experience, and it is guaranteed by Functionize.

Quality and Testing, an Inseparable Duo

We have arrived in a new world of complexity of software testing, one in which Big Data demands accelerated performance from both innovative machine learning methods and from the most recent hardware upgrades. QA engineering staff is only a temporary solution, a placeholder for Functionize and truly intelligent testing tools which can track ALL issues and never forget one. True FURPS quality assurance guarantees a pristine customer experience and Functionize revitalizes FURPS: Functionize, Usability, Reliability, Performance, and Supportability. Functionize’s intelligence keeps everyone in the quality loop. 




Why Testing Automation Hasn’t Reduced the QA Cycle Fri, 23 Feb 2018 00:29:33 +0000


Testing software is now an engineering concern. The culture of QA testing of new applications is presently devolving from straightforward validation and verification into a second-level development complex in which testers need software engineering skills. This is a problem because it is neither desirable nor efficient to limit testing capability to developers or engineers. The current situation in QA is a fuzzy Devops boundary where engineers script complicated test cases using a bevy of oddball tools like Cucumber and Jenkins the butler, with ad hoc languages like Gherkin. These scripted test cases do regression testing effectively, but at an extraordinary cost. So fragmented and disparate is the array of tools that Anaconda-like distribution packages have appeared with the guise and pretense of integrating them into one singularity. Amalgams of open source tools like Cypress claim to be end-to-end; they’re really a patchwork of tools targeting various phases of testing. How did software testing get so complicated?

Groovy and JavaScript coding are now also standard requirements for the BDD regression test scripting involved in the substantial task of building test cases. Although scripting produces an automated test, the scripting itself is a technical tedium rivaling other forms of development. The technical skill requirements of QA engineers are a cost and complexity overhead which most enterprises now strain to accommodate. Needed is a new wave of intelligent testers who implement smart tools which do not require scripting. Functionize supplies this resource today. Functionize is a truly smart platform capable of learning. Functionize empowers non-technical staff to generate comprehensive test suites without scripting, because Functionize learns how to test your application. Now you can staff QA with intelligent people who can apply true automation testing in the form of Functionize.

How Testing Automation Became so Complicated

But how did we cross this Rubicon? How did QA slip down this narrow rabbit hole? After the widespread adoption of Agile, QA was suddenly the slowest runner in the relay – the continuous delivery relay, that is. Suddenly, gating and release queued at QA’s door. The solution to one problem often reveals one or more new problems, and that is the case with Agile and QA. The success of Agile software development teams actually created new pressure in the gating and release management of software updates and revisions. In military-style hurry-up-and-wait fashion, Agile enabled rapid software construction and envisioned equally rapid gating, release, and delivery. However, QA testing, versioning, and deployment were not prepared to run at Agile speed, and a new set of workflow issues arose, a sort of bottleneck.

The Purpose of Devops and the era of QA Engineers

The point of Devops is to embrace and automate all phases of software development, ideally including the new implementation pipeline, which further encompasses continuous integration, continuous testing, and continuous deployment. Devops envisions boosting QA’s speed and efficiency to match or extend Agile by scripting test cases. This scripting is the new second-level development complex mentioned above; it is commonly called automation testing. The boost in speed originally looked feasible when developers could script tasks like deployment to virtual machines, load testing, and building in containers like Docker. Engineers can code unit testing and regression testing in builds on both server and client with Node.js. And in fact it does accelerate the testing procedures; that’s why this is a conundrum – it does work. The problem is that now you need engineers to test new code. This is actually the success and denouement of Devops.

But wait a minute… If Devops was supposed to integrate everything, why did testing become yet another development phase? Because widespread automation testing tools do not contain any intelligence. The vast majority of testing frameworks need developers to program them! Tools look smart to coders because coders know how to use them. That’s the unintended consequence. Tools like Selenium are great if you are a programmer or a developer. Let’s look at a great idea that is a technical failure.

MS Coded UI is supposed to record and replay tests. Undoubtedly the intention was to create a dashboard capable of testing a user interface by recording and replaying tests. Ideally this would be useful to non engineering testers. But there is a bug in the idea.  

You can’t replay the same old test if the code has changed, and there is no reason to test the code unless it changes. Therefore, Coded UI needs an engineer to edit test scripts before they can be reused in regression testing.

So, Coded UI works, but only for engineers. A non-tech tester can certainly record a test, as with Selenium or any other playback tool, but they can’t replay that test and rely on it after a code change. The script created by Test Builder will need revision, or the test will have to be recorded again. If that’s the case, then there is no automation in the system whatsoever. And that was the whole point, right? If tools need manual scripting, then we cannot call it automation testing. We need to put smart testing tools in the hands of intelligent testers.
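The brittleness is easy to see in miniature. In this illustrative Python sketch (the tuple-based “DOM” and the two lookup helpers are inventions for demonstration, not any real tool’s API), a locator recorded as an absolute path breaks after a harmless layout change, while a search keyed on a stable id survives:

```python
# A toy "DOM": each node is (tag, list_of_children).
dom_v1 = ("html", [("body", [("div", [("button#login", [])])])])
# After a release, developers wrap the button in an extra <form>:
dom_v2 = ("html", [("body", [("div", [("form", [("button#login", [])])])])])

def find_by_path(dom, path):
    """Follow an absolute index path, the way a naive recorder replays."""
    node = dom
    try:
        for index in path:
            node = node[1][index]
        return node[0]
    except (IndexError, TypeError):
        return None

def find_by_id(dom, wanted):
    """Search the whole tree for a stable id, ignoring layout."""
    tag, children = dom
    if tag == wanted:
        return tag
    for child in children:
        found = find_by_id(child, wanted)
        if found:
            return found
    return None

recorded_path = [0, 0, 0]                    # recorded against dom_v1
print(find_by_path(dom_v1, recorded_path))   # button#login
print(find_by_path(dom_v2, recorded_path))   # form  (the replay breaks)
print(find_by_id(dom_v2, "button#login"))    # button#login (still found)
```

Real recorders capture far more than one path, which is exactly why every code change sends an engineer back into the generated scripts.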

Software updates change things in subtle ways which even a brilliant coder cannot always anticipate. So the test needs to change. Here the core problem with Devops is revealed: Devops requires engineers to script test cases. Business people generally cannot script a test in Gherkin language even though it is supposedly a “business-facing” language. If a tester records a scenario with Selenium very likely the auto-generated script will need revision in subsequent test cycles. Those scripts can be edited and customized, but a developer with coding skills is required for the task. This looked good originally, or at least it looked like a reasonable solution, but it has now spiralled up into a cloud of complicated automation tools – effectively one automation tool for every technology! And this complexity grows every day.

Unintended Consequences

The monolithic unintended consequence of Devops was the creation of a new second level development complex, a complex of developers in QA. Fortunately, there is now a solution. Functionize solves the backlog of problems created unintentionally by the success of Agile. Functionize runs as a lightweight browser plugin which observes and learns as you test a user interface in your web app. Functionize learns from the assertions you create in the test cycle. But instead of generating scripts for engineers to edit later, Functionize knows how to update and revise its own code. Functionize removes the engineer from QA. We put smart testing in the hands of intelligent people, and liberate engineers to focus their creative energy on development.

Although Devops intended to reduce development cycles, it actually created a new development phase and installed it in QA. This is not aligned with business objectives. Now there is a truly intelligent alternative which solves the unintended consequence of Devops: Functionize ensures the integrity of testing but delivers us from the scripting quagmire. We do this in part with a novel patented technology of machine learning which our data scientist calls Adaptive Event Analysis.

How Adaptive Event Analysis shortens the QA Cycle

Functionize brings a new technique to intelligent automation testing. Adaptive Event Analysis is a self-healing function for test cases in which our machine learning based modules learn to self-correct by observing events and assertions in previous test cases and comparing them to new events in evolving scenarios. This breakthrough technology is an innovation in the field of machine learning as applied to automation testing of software.

Before Functionize, state-of-the-art testing systems suffered from an assumption of stationarity. A stationary process is a stochastic process in which parameters such as mean and variance do not change over time. However, evolving scenarios in a testing environment do not satisfy these conditions. Enter Functionize. Functionize’s AEA relies on building autoregressive integrated moving averages, which adapt to functional changes of a website. No longer is analysis done in a stationary manner. Functionize introduces the ability to dynamically adapt to a software platform.

In addition, Functionize builds Long Short-Term Memory (LSTM) models, a type of recurrent neural network, which are capable of forecasting test case events. Test anomalies can be easily identified as outliers in model simulation. Self-healing test cases now become possible.
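The paragraphs above describe Functionize’s proprietary models; the general principle of flagging anomalies as outliers against a forecast can be sketched with a much simpler stand-in. In this illustration, a moving-average forecast plays the role of the ARIMA or LSTM model, and the sample data and threshold are invented for the example:

```python
import statistics

def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` points."""
    return statistics.mean(history[-window:])

def is_anomaly(history, observed, window=3, threshold=3.0):
    """Flag `observed` if it sits more than `threshold` standard
    deviations away from the forecast built on `history`."""
    forecast = moving_average_forecast(history, window)
    spread = statistics.stdev(history[-window:])
    return abs(observed - forecast) > threshold * max(spread, 1e-9)

# Page-load times (ms) observed in previous test runs:
history = [210, 205, 215, 208, 212]
print(is_anomaly(history, 214))   # False: within normal variation
print(is_anomaly(history, 900))   # True: an outlier worth investigating
```

A real time-series model would also capture trends and seasonality, which is why a non-stationary technique matters; the outlier-versus-forecast decision shown here is the same in either case.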


End-to-End Testing Tools Fri, 16 Feb 2018 17:44:02 +0000


Software development in the year 2018 is mostly repetitive tasks, and the bulk of end-to-end testing tasks is now scripted. Combining myriad consoles and scripting languages to achieve CI/T/CD was until recently state-of-the-art automation testing. Ultimately, this craving to automate everything motivates the creation of end-to-end testing tools. In reality, there is no single tool which satisfies the craving. But why automate a task when it really needs intelligence? If a redundant process can be scripted then we can use machine learning code to truly automate it. To reach this peak, we must first understand the reigning development regime and the tools we currently grapple with. Before we can make an intelligent change we must map this imaginary realm in which developers script the operation of myriad other scripts.

Automation testing today is largely a developer’s task of writing programs to test other programs. This endeavor spawned hundreds of tools like Chai, Mocha, and Protractor. Today, an industry begging for a singular end-to-end solution to the piecemeal jumble of testing tools latches onto an amalgam like Cypress. But these supposed end-to-end testing tools really just mash all the Cucumbers and Gherkins into a distribution. They’re still the same tools, and you still have to learn three scripting languages. It is not a homogeneous tool, and it’s definitely not the solution. Functionize illustrates that intelligent software can completely replace both the redundant process and especially the scripting of test cases. In order to fully understand how, let’s explore today’s model of end-to-end testing by looking at the most popular tools and methods. First, here are the quintessential concepts.

When a developer adds or updates a module the new file must be copied to a server accessible to end users. This was once as simple as using an FTP client to send an HTML file to a server. The page went live instantly. But now we live in the Amazon era, wherein millions of dollars are packed into a single button click, careers are built and demolished by mouse hover events, and this previously simple task is now a “pipeline” of associated tasks in which many agents intervene to ensure everything functions precisely as intended. Enterprises are now burdened to staff QA with engineers. One recurring task is to assign the correct permissions to the new file and to make it executable. As we dissect this pipeline, which is now described variously as continuous integration, continuous testing, and continuous deployment, we see layer upon layer of increasing complexity. A hodgepodge of goofy names like Jenkins and Groovy are no less mystifying to spectators than the engineering practices they implement.

Jenkins scripts contain embedded Groovy code to automate the continuous integration of new modules. Bash scripts attached to code releases, called “hooks” in Git repository lingo, set permissions for the new file and perform other file management operations. When it comes to testing the new module, the setup may be scripted with Cucumber, which supports what is usually called Behavior Driven Development (BDD). Cucumber’s own scripting language is called Gherkin, and it is supposedly comprehensible to “business facing” team members. Decipherable may be a more appropriate word, but the prospect is spurious. Many of the testing frameworks include event recorders which create scripts as testers enter assertions. Coded UI is one such framework: its Test Builder writes scripts compatible with the .NET Framework, and testers can then use the Visual Studio Enterprise edition to modify test cases. The list of buzzwords continues throughout continuous deployment. We will have to delve into Docker and even get into Git if we are going to sort out this devolution before the road bends toward the abyss.
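To make the Gherkin idea concrete, here is a toy sketch of how a plain-language step such as “the user submits credentials” can bind to code. This is our own miniature model, not Cucumber’s actual implementation; all step text and names are illustrative. Each step definition pairs a pattern with a function, and the runner matches every scenario line against those patterns.

```javascript
// Toy step-definition registry in the spirit of Cucumber/Gherkin.
// (Illustrative sketch only -- not the cucumber-js API.)
const steps = [];
function defineStep(pattern, fn) { steps.push({ pattern, fn }); }

// Match a plain-language line against the registered patterns and run it.
function runLine(line, world) {
  for (const { pattern, fn } of steps) {
    const m = line.match(pattern);
    if (m) return fn(world, ...m.slice(1));
  }
  throw new Error(`undefined step: ${line}`);
}

defineStep(/^the user enters "(.+)" and "(.+)"$/, (world, user, pass) => {
  world.user = user;
  world.pass = pass;
});
defineStep(/^the user submits credentials$/, (world) => {
  world.loggedIn = world.user === 'alice' && world.pass === 'secret';
});

// A two-line "scenario" readable by business-facing team members:
const world = {};
runLine('the user enters "alice" and "secret"', world);
runLine('the user submits credentials', world);
console.log(world.loggedIn); // -> true
```

Real Cucumber layers feature files, reporting, and lifecycle hooks on top of exactly this kind of pattern-to-function dispatch.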

The Rise of Repos

All of these tools ultimately begin and end with the extraordinary rise of the versioning repository, which has become standard fare in software integration, testing, and delivery. GitHub enables the Agile team to collaborate on new versions of code, perform rollbacks, and track changes to their apps. The core of this functionality is a set of event-driven actions attached to each commit. These scripted events, called “hooks” in the Git realm, react to and automate every code commit. On each commit, Git checks the hooks directory to find and execute any attached scripts. Scripts can run before, during, and after the commit to establish correct file permissions and deploy code, for example. This is a brilliant method of standardizing deployment to ensure consistent compliance upon every event. Many end-to-end testing tools build on this repository strategy for sharing code and its supporting documentation and data.

All Your Cukes In One Basket?

End-to-end testing tools have the purpose of testing the user experience and accuracy of an application, with particular focus on UI elements. During e2e testing, data integrity must be verified, components and dependencies confirmed, and all discovered issues reported along the way. Although testing is straightforward in concept, tools which purportedly automate the process deliver limited success at a high overhead cost. The main source of this overhead is the fact that each testing tool is by nature limited to testing one target technology, but there are many technologies to test. The discussion begins with frameworks.

Jasmine and Mocha are two competing, popular JavaScript frameworks with a curiously asymmetric set of strengths and weaknesses which tends to suggest, “we need to use both of these, but at different times.” Jasmine has a built-in assertion library; Mocha does not, and typically uses Chai’s. On the other hand, Mocha has a command line interface for prototyping scripts, a nifty feature Jasmine lacks. Mocha is a Node.js based JavaScript framework which creates test coverage reports. Mocha and Jasmine both support asynchronous test cases in a variety of browsers. In this category we also find tools of varying facility and popularity such as Karma and QUnit. When the challenge of choosing a JavaScript test framework is over, it’s time to figure out which assertion library is right for your test cases and app.
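To see what such a framework does under the hood, here is a deliberately tiny sketch of the Mocha-style pattern: each `it` registers an (optionally async) test case, and the runner awaits them in sequence and collects results. This is a toy model only; real frameworks add timeouts, hooks, nesting, and reporters.

```javascript
// Minimal Mocha-style registration and async execution (toy model).
const tests = [];
function it(title, fn) { tests.push({ title, fn }); }

async function run() {
  const results = [];
  for (const t of tests) {
    try {
      await t.fn(); // works for sync or async test bodies
      results.push(`ok - ${t.title}`);
    } catch (e) {
      results.push(`fail - ${t.title}: ${e.message}`);
    }
  }
  return results;
}

it('resolves asynchronously', async () => {
  const value = await Promise.resolve(42);
  if (value !== 42) throw new Error('expected 42');
});

it('reports a failure', async () => { throw new Error('boom'); });

run().then((results) => results.forEach((r) => console.log(r)));
// -> ok - resolves asynchronously
// -> fail - reports a failure: boom
```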

Chai and Expect are competing assertion libraries to include in your JavaScript testing framework. These libraries support Behavior Driven Development (BDD) cycles and contain extensive functionality for catching issues in UIs. Chai’s assertion API supports several styles of assertion, including Should, Expect, and Assert. Chai is also extensible by way of a vast number of available plugins. A very basic assertion, using Chai’s low-level Assertion object, is scripted like this:

var Assertion = require('chai').Assertion;

var john = new Assertion('John Brown');
john.assert(
    john._obj === 'John Brown'
  , "expected #{this} to be 'John Brown'"
  , "expected #{this} to not be 'John Brown'"
);

Which leads smart customers to this new assertion: new Assertion('Functionize user!');

Bare Metal Versus Nonmetal

We are now in the middle of the automation testing abyss, at the point most developers face sooner or later: the realization that effectively all JavaScript based testing frameworks require Selenium. Here we have yet another Apache-licensed success story. The playback functionality of Selenium IDE enables testers to record, rewind, and replay test assertions, and it supports exporting tests to a wide variety of languages like Python and Groovy. There is also WebDriver, Selenium’s programmatic browser interface, and Selenium Grid, which enables running test cases on remote servers through virtual machines. Now another membrane to permeate is the choice between open source and paid testing frameworks. MS Coded UI falls toward the expensive end of the paid testing software spectrum, and there is nearly no community support online. As with most MS products, the people who use this one amount to a captive audience, the result of upper-echelon enterprise decisions.

Wrapping it all up:

The next step in our end-to-end testing tool journey is to choose a wrapper for Selenium. What does this mean? Generally it means that we need another tool, like Protractor or Nightwatch. Protractor is indeed a wrapper, because it actually contains Selenium WebDriver! And Protractor is also billed as an end-to-end test framework, especially for AngularJS applications. Finally, we will need some more libraries like Sinon and TestDouble, because Mocha really does not include these test doubles. And that brings us to the exciting conclusion of 101 testing tools (that is the number of tools, not the MOOC title). But there is still one monster tool which we must not overlook.
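What a test double buys you fits in a few lines. The sketch below is our own toy model of a Sinon-style stub, not Sinon’s actual API: it replaces a real dependency with a canned response and records how it was called, so the system under test can be exercised without touching the real service. All names here are hypothetical.

```javascript
// Toy stub factory: returns a function with a canned return value
// that also records every call's arguments.
function stub(returnValue) {
  const fn = (...args) => {
    fn.calls.push(args);
    return returnValue;
  };
  fn.calls = [];
  return fn;
}

// System under test depends on a payment gateway we don't want to hit.
function checkout(cart, charge) {
  const total = cart.reduce((sum, item) => sum + item.price, 0);
  return charge(total) ? 'paid' : 'declined';
}

const charge = stub(true); // canned success, no network involved
console.log(checkout([{ price: 5 }, { price: 7 }], charge)); // -> paid
console.log(charge.calls); // -> [ [ 12 ] ]
```

Sinon adds spies, mocks with expectations, and fake timers on top of this basic replace-and-record idea.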

And now for the denouement, now for the clincher: there is one tool which boldly promises to be truly end-to-end, and not just front-to-middle or off-to-one-side. Cypress claims to do all of the above, and without Selenium. It supports mocking and stubbing to boot. But there is one curious reality: Cypress is actually a collection of open source tools, one from each category described above, mashed together for your convenience into a huge distributable: like Anaconda is to Python, but perhaps on a smaller scale. And packaging relabeled freeware products together is all too common today. It begins to feel like Cypress’s whole existence orbits Selenium’s weak points. Their copy reads, “Cypress tests are only written in JavaScript,” as if a JavaScript coder can’t read “Selenese.” That is ad copy intended to persuade.

QA Developers?

Instead of moving toward intelligence in automation, testing now requires an overhead of scripting by QA Engineers with advanced degrees. Look at Jenkins, for example, a common skill requirement for Fortune 500 QA engineering team candidates. Jenkins is an automation server, running in Apache Tomcat servlet containers, which enables developers to script much of continuous integration. This trend toward engineering-level testers is not viable, because customer experience is ultimately defined by customers, who cannot likewise be required to have engineering skills to operate the application under test. Now we are in position to bring intelligence to replace scripting. Functionize brings a unique intelligence to replace the scripting that once encumbered engineering talent, allowing them to optimize their focus on product development.

The post End-to-End Testing Tools appeared first on

Software Testing Fundamentals Thu, 15 Feb 2018 17:21:53 +0000

The post Software Testing Fundamentals appeared first on

Thanks to the innovative, felicitous use of machine learning techniques, Functionize is the harbinger of a new era of testing automation which now transcends compatibility issues and relieves much of the pressure on enterprises to staff QA engineers who wield advanced degrees in computer science. Now it is very possible for nearly anyone to engage in verifying and validating a new software deployment because Functionize’s intelligence boosts everyone’s testing prowess; even marketing and customers can get involved. Functionize’s injection of intelligence into continuous testing was prophetic. Nowadays most enterprises recognize the need to efficiently release products, but until the rollout of Functionize, no testing automation enabled gating of releases with full confidence. How do we know exactly what the best companies are doing to create the best customer experience?

Reverse Engineering QA

Quality assurance is now relabeled, and to some extent reinvented, as customer experience. So profound is the dominance of this new perspective that, in a Hewlett Packard QA job description, the phrase “customer experience” actually appeared more often than “quality” or “testing.” The focus is the buyer, the end user, the customer. HP has a long tradition of high-quality, durable products, so it is reasonable to conjecture that the company’s QA methods are effective and efficient. Our study of software testing fundamentals may be well informed by analyzing those methods closely. To discover the QA methods used by the best companies today, we take the circuitous approach of dissecting actual job posts. From these we will learn exactly what is required of QA Engineers today, and by inference we will know the methods used within their continuous integration, continuous testing, and continuous deployment pipelines.

Once we discover the expertise and tools they require of team members, we can infer their methods. Tools are designed to accommodate a specific suite of automation testing techniques. From these tools we may distill the truth about the software testing fundamentals of today, those in actual use, rather than speculating from a theoretical or academic set of principles. For example, one common requirement of a QA engineer is experience with a continuous integration tool such as Jenkins. This is no small demand when you discover that Jenkins is really a scripting environment in its own right. Jenkins has a scripting console, from which engineers orchestrate the actions of a “master” and its “agents” to offload builds, and by way of complex scripting thereby “automate” a phase of the integration and delivery process. But is this true automation? Not anymore.

QA Developers?

Instead of moving toward intelligence in automation, testing now requires an overhead of scripting by QA Engineers with advanced degrees! Look at Jenkins, for example, a common skill requirement for Fortune 500 QA team candidates. Jenkins is an automation server, running in Apache Tomcat servlet containers, which enables developers to script many of the redundant processes of continuous integration and continuous delivery. But this means you need developers to carry the entire process from software revision to delivery. Where does traditional testing fit into this pipeline? It’s just too complicated. Right now top companies hire QA Engineers with advanced degrees; is so much technical skill necessary for testing? Do customers need this level of skill to use your product? If so, they will be scouring the internet for an alternative. But developers need Jenkins now, because the industry evolved toward complexity instead of intelligence! Testing is getting more difficult instead of easier. Testing should not require scripting. Now Functionize liberates developers from scripted testing.

And the setup can get even more complex when the plugin for the OOP language Groovy further enables developers to write Groovy code directly into the Jenkins console.1 Jenkins documentation includes sample scripts which show how to execute Groovy scripts on agents. But the Jenkins doc takes a surreal turn when explaining how to install the Chuck Norris Plugin, and it feels like the developers have gone too far down a rabbit hole. Can this really be the state of the art in software testing fundamentals? When Fortune 100 companies like Tesla require their new QA members to be “good at open source tools like Jenkins,” the answer is clearly, “yes!”2 The QA world has become this complicated. What are the other skill requirements of Fortune 100 companies’ QA teams?

Customer Centric QA

Many QA engineers now work primarily in customer support roles, as specified in job descriptions. QA managers of the past may be surprised to find their modern counterparts bearing a moniker which includes “customer assurance” as a variation of quality assurance. Technical support is now often remade as “technical customer support,” and responsibilities include dedicated, customer centric quality management. QA engineers today must possess diplomatic skills to represent a customer’s sentiments while analyzing a variety of input from developers and business members; they must command data collection tools to integrate and analyze the Big Data arising from Big Testing. Our modern QA engineer manifests customer advocacy and becomes the voice of the customer. But to perform this ambassadorial duty the QA member somehow needs a Master’s in engineering, along with certification in Lean Six Sigma!

Persistent Tradition

QA Engineers are still responsible for successful verification of Apps prior to deployment. And QA Engineers continue to create and execute tests both manual and automated to ensure product quality. These roles are the same on the surface but the mechanics have changed. Waterfall is replaced by Agile and Devops. Development of test cases now increasingly includes mobile apps and the complexity of mobile platforms. Functional system testing, integration and regression testing persist in the workflow of the QA engineer today. As illustrated with Jenkins above, one new form of exotica in QA today is the requirement to develop automated testing scripts for applications. Although rarely stated, this skill requires fundamentally the same technical knowledge as any other developer! And this extraordinary burden brings us into focus on intelligent testing at exactly the pivotal moment when it is most needed. Engineers are so valuable that they should be coding for the development of product, not scripting tests for the products of other engineers. Functionize democratizes testing automation so that everyone involved in business outcome can create incredibly resilient test cases – even without an advanced degree! Functionize frees your engineers to focus their time and expertise on product development.

The fundamentals of software testing will in principle always include the logging, tracking, and management of failures, along with reporting and resolution. However, the mechanics of performing those tasks may change in a way that conceals the core, just as a calculator conceals the fact that most people don’t know an algorithm for finding the square root of a number. In other words, QA engineers now need the ability to write Java or Python code, and this inevitably means the use of libraries and reusable code. This translates to a lot of coding on top of development. A strong comprehension of REST and APIs is often required. Tesla actually requires QA engineers to be proficient in the design of frameworks, not just their use to achieve some short term scripting automation. With Functionize, the problem of over-fitting engineers into testing roles is solved.

The New Intelligence in Testing

One way of reverse engineering the testing process is to read job ads posted by Fortune 100 companies, wherein we can observe by proxy the mechanics of their testing procedures. A surprising revelation from this approach is that QA is retitled CA, meaning Customer Assurance, to emphasize a new perspective and change the mindset. At the top of the CA’s list of responsibilities is customer advocacy. Functionize contains the testing intelligence to become the perfect partner in verification and validation of web apps because it is the only testing solution powered by Adaptive Event Analysis. Functionize makes it possible for customers and any non-technical team members to record and illustrate test cases and improve customer experience. This further relieves enterprises of the burden to staff QA with engineers. The intelligence of Functionize is the new wave in true automation testing. Functionize is a relief to developers, QA, customers, and even the human resources of an enterprise!



Mobile Web App Automation Testing: the first scalable point-and-click mobile solution Fri, 09 Feb 2018 16:46:06 +0000

The post Mobile Web App Automation Testing: the first scalable point-and-click mobile solution appeared first on

Functionize now extends its continuous QA platform of functional software testing to include mobile web app testing. This natural step expands the reach of Functionize’s cloud-powered automation testing to include myriad mobile devices and diverse display options. Functionize wields Google’s Nested Virtualization to execute test scenarios on all major mobile device types including iPhone and Samsung. The great benefit is that the advanced machine learning techniques we deploy to catch visual rendering errors now encompass the mobile device world. This progress further enhances QA engineers’ methods of ensuring the efficient flow of software development through the continuous integration, continuous testing, and continuous deployment pipeline. Let’s dive right into the details and get an inside picture of how a mobile test case works with Functionize.

First, Functionize is true automation testing which runs simultaneous emulations of test cases in all browsers on virtual machines that are dynamically scalable for load testing. Complex test scenarios run in Chrome and Firefox on hundreds of VMs in minutes and quickly catch unanticipated web element rendering errors. Functionize is 100% portable and capable of testing all web applications because it is a browser extension with no installation overhead and no compatibility issues. Now you can deploy all this power to automate the testing of your web applications on any mobile device as easily as on any desktop platform. And crucially, no scripting is required. How do we deploy this power to mobile app testing?

Functionize learns to act and react like your mobile users!

Functionize opens with your web app in your browser and actually observes and remembers your assertions and test gestures. After watching you develop test cases, Functionize uses diverse and sophisticated new machine learning techniques to actually learn and predict errors and failures, and to assist you in the creation of a perfect user experience by notifying you during the CI/CT/CD development cycle. Functionize notices when web elements move, when menus change, and even learns to recognize dynamic elements in order to anticipate your needs. You no longer need to explicitly code anomalies with scripts in various IDEs; Functionize does everything for you, and provides you with alerts when a problem needs attention.

Easy and Accurate Mobile Device Test Cases

Functionize runs a test case in the browser exactly as it runs on the mobile device. We use Google’s User-Agent Switcher extension to develop test cases for specific mobile devices. To record a test case for an iPhone 6, just select the phone from the menu. As we record the test, the browser mimics the phone and renders the page exactly as if it were running the iPhone’s mobile layout, including scaling the dimensions to the iPhone display. Meanwhile, Functionize applies its breakthrough techniques to watch and learn from your gestures. So, instead of having to manually simulate a mobile display, these diverse displays are scaled and rendered for us to fit any target device.
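The reason a user-agent switch works is that many sites choose their layout from the reported user-agent string (or from matching media queries). Here is a small self-contained sketch of that decision; the abbreviated UA strings and layout names are purely illustrative.

```javascript
// Pick a layout the way a UA-sniffing site would: mobile user-agents
// get the mobile layout, everything else gets the desktop layout.
const layouts = { mobile: 'single-column', desktop: 'two-column' };

function pickLayout(userAgent) {
  return /iPhone|iPad|Android|Mobile/.test(userAgent)
    ? layouts.mobile
    : layouts.desktop;
}

// Abbreviated user-agent strings for illustration:
const iphone = 'Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) Mobile/12A366';
const desktop = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)';

console.log(pickLayout(iphone));  // -> single-column
console.log(pickLayout(desktop)); // -> two-column
```

Switching the UA string in a desktop browser therefore triggers the same branch the real phone would, which is what makes desktop recording of mobile tests faithful.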

Features of Mobile App Testing

Functionize mobile testing makes possible the complete testing and evaluation of mobile devices running your app without the necessity of running tests on the individual devices themselves. This is because Functionize precisely simulates each device’s behavior in the context of your web application. Naturally, scaling and device dimensions are essential to this success. In addition to scaling, we build on Google’s Nested Virtualization to emulate the CPU and other hardware specifically for testing the devices which host your web app. Here also we realize the need for precision hardware virtualization.

Functionize’s mobile device testing uses multiple levels of virtualization. We start by implementing the test in a virtual machine and then layer in the virtualization of the specified device’s hardware using Google’s Nested Virtualization.

Up to the minute updates for new devices

Functionize continually rolls out support for new mobile devices. New tablets and phones in the Android and iPhone markets and many others are supported. We are currently delivering support for the new iPad and have completed a full range of internal testing for the device. We cooperate with manufacturers to anticipate the newest hardware because we know that your innovations are likewise intertwined with breakthrough technology. When a new device hits the market we are prepared with support for you to test and deploy your apps.

Breaking Out Devices

A common question about Functionize concerns browser support on newer iPhones, those running iOS 8 for example. Functionize testing is not dependent on the browser running in the mobile device; instead, it is device dependent.

Suppose you are running test cases to compare an iPhone running iOS 8 and the iPad. Functionize observes substantial differences in the page layouts. For this reason, we record test cases based on unique device hardware features so that we can rapidly scan and adjust the aesthetic and functional presentation elements of any web application.

The variation among devices leads us to the crucial question of how much tolerance or margin there is when testing more than one device with a single assertion. In other words, how do we know when a separate test is required for a unique feature of a given device? One answer is, it may turn out that different screen sizes result in the same layout; but if two different screen sizes result in different user experiences then a separate test case must be recorded.

When we find empirically that there is a substantial difference in layout between an iPhone 8 and an iPad, we need to develop a separate test scenario for each device. Figure 1 shows Functionize conducting a test scenario wherein Chrome’s Responsive View (in the Developer Tools menu) simulates the iPhone 8 display. Here you can see how crisply and cleanly Functionize harmonizes with the browser to enable accurate mobile app testing on a desktop platform. The iPhone 8 and iPad render with a significant difference in layout, suggesting that we break these devices out into separate test scenarios.

Figure 1. Rendering Test for iPhone 8

This means that, from one device to the next, it is primarily the rendering and layout of the device which is targeted for detecting visual rendering failures. In this way, Functionize is capable of running tests on all new hardware up to the minute of deployment and delivery. We constantly track firmware updates to guarantee the precision rendering that you require for the best possible user experience of your web applications.

Efficient Asynchronous Testing

Looking at Functionize’s control panel as you build test cases, you will notice that the browser icons indicate the current state of progress for the respective test cases, and that each of these is processed as an individual entity in parallel. This capability to manage hundreds of tests asynchronously on potentially thousands of virtual machines puts nearly unlimited capability in the hands of your QA engineer. Figure 2 shows Functionize in action, with one test currently in progress for iPhone devices.

Figure 2. Asynchronous Progress of Active Tests

Testing landscape view and display orientation

Yet another important question to be confronted empirically is: does the test experience change if you rotate the screen at various points during the test, switching from portrait to landscape view? The orientation of the device can be controlled via the test case settings in the Functionize UI.

Throttle tests

Tests can also be throttled based on bandwidth, to compare Wi-Fi and 3G performance for example; see Figure 3.

Figure 3. Throttle Tests by Bandwidth

Mouse Hover for Mobile Apps!

Intelligence is the basis of Functionize testing. One of the great intelligent features of the Functionize system is the ability to capture mouse hovers. Ordinarily this event is captured as a tap event; Functionize actually captures the mouse hover in a mobile browser app. In fact, Functionize automatically recognizes and converts desktop events into their mobile equivalents. This intelligent design ensures that while a tester’s experience is streamlined during test creation on a desktop, it remains true to the performance on the target mobile device under test.

Figure 4. Capturing Mouse Hovers

Breakthrough Mobile App Testing

Functionize heralds a new wave of intelligent testing, and the breadth of the field is now expanding to absorb the plethora of mobile devices which have long tortured testers with their confusing variety of hardware features. Continuous testing environments are even more streamlined. Compatibility issues no longer plague the process. Testers now operate in parity from desktop to mobile. QA engineers are free to test any device from one platform with the intelligence of Functionize.


Unified Integration Testing & Deployment Fri, 02 Feb 2018 22:30:54 +0000

The post Unified Integration Testing & Deployment appeared first on

Intelligent automation testing is the gatekeeper between continuous integration and deployment. Crucial deployment gating decisions pivot on testing. However, testing is often the slowest of the three runners in this relay. Enterprises now realize that true automation testing is the only methodology which can equalize this deficit so that continuous testing can actually keep pace with continuous integration and deployment. True automation testing is the new hub which smoothly and seamlessly receives incoming commits from developers, catches failures, and decides crucially to deploy or not. True automation is the evolution of testing software that learns and improves itself with each guided gesture of the QA engineer. Functionize is the only true automation testing platform which achieves this lofty goal.

The CI/CT/CD Continuum

An Agile team scripting a test suite using Coded UI, Protractor, and JUnit relies heavily on engineering, even becoming an endeavor of development in itself. But this is too slow to be competitive now; it is a component of Devops which we will explain shortly. Functionize needs no such scripting; Functionize remembers what you need, learns from your previous tests, and actually improves its own performance with each test run. Furthermore, Functionize catches, highlights, and notifies you automatically about crucial errors prior to deployment. You can thus be absolutely certain that new deployments are free of errors.

Less reliance on scripting means fewer human errors in the test cycle; Functionize never misses because it cannot forget a step. With Functionize learning from your testing assertions, you can spin up an integration, testing, and deployment cycle with Spinnaker and roll out new important software to your market in minutes. Spinnaker is convenient, but Functionize technology enables intelligent gating of releases. We are now talking about an automated gating and release of deployments. This is truly continuous testing, which fits brilliantly with integration and deployment. So let’s see how this is a whole strategy instead of a collection of components.

Devops Champion

An enterprise trend now gathering momentum is to unify software development and operation, which naturally includes verification and testing. The idealized panacea is to automate all aspects of software, including continuous integration, testing, and deployment. A goal Devops shares with Agile is to reduce the duration of each cycle; but realistically, the cycle length ultimately depends on the substance of the cycle. Furthermore, current methods of automation in testing involve a stilted, inflexible procedure of recording assertions and then coding an often bewildering set of scripts to rerun the test cycle. This may be the reason that Devops really must dissolve testing and design into each other: engineers are required for every hands-full gesture.1 Tools claiming to reduce scripting by providing an advanced API for remotely testing user interface automation may actually increase the amount of coding done by QA! While the goal of Devops may be to increase deployment frequency and achieve business outcomes, if the testing procedure lags, then only truly automated testing will invigorate the whole strategy. Enter Functionize.

True Automation Testing is Unscripted

Gated product release decision making is, for most enterprises, a hands-full endeavor, because with each commit an engineer must revisit a section of the recorded test which was customized with an anomaly or assertion and now needs revision again. You have a programming modification in your product, and now you have a programming change in your spurious, “automated” testing protocol; you might get the feeling that your testing is not really automatic.

Achieving true automation testing requires automated intelligence. Functionize delivers a new form of automation testing which updates itself via machine learning algorithms that actually learn to test your product and improve with time. Instead of needing an engineer to customize it, Functionize rewrites its own assertions to catch unanticipated input. Functionize can see when a page element is moved or renamed. And if the element is dynamic, such as an AngularJS ng-model, then Functionize learns to predict related outcomes. Functionize is not a recorder like the Test Builder; instead, Functionize actually watches and learns how to test your product. Instead of a scripting adversary, Functionize is a creative companion to facilitate fast product rollouts.
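To illustrate the idea of surviving a moved or renamed element, here is our own toy sketch, not Functionize’s actual algorithm: instead of relying on a single brittle selector, score every candidate element by how many recorded attributes still match, and pick the best survivor.

```javascript
// Toy "self-healing" locator: score candidates by attribute overlap
// with the recorded element, and pick the best match.
function findHealed(candidates, recorded) {
  let best = null;
  let bestScore = -1;
  for (const el of candidates) {
    let score = 0;
    for (const key of Object.keys(recorded)) {
      if (el[key] === recorded[key]) score++;
    }
    if (score > bestScore) {
      bestScore = score;
      best = el;
    }
  }
  return best;
}

// The login button was recorded with these attributes...
const recorded = { id: 'login', text: 'Sign in', tag: 'button' };

// ...but a new release renamed its id. A plain #login selector now fails.
const page = [
  { id: 'nav', text: 'Home', tag: 'a' },
  { id: 'login-btn', text: 'Sign in', tag: 'button' },
];

console.log(findHealed(page, recorded).id); // -> login-btn
```

A brittle recorder breaks at the renamed id; a scoring approach still finds the button because its text and tag survive the change.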

Hourly, Even Minutely

Devops conceptually supports hourly deployment of revised software products, but the substance to actually achieve this concept only arises with the advent of Functionize. We see a lot of hype about Devops enhancing Agile to clear the pipeline and deliver products with confidence in hours or even minutes. But these were unattainable ideals until the arrival of truly automated testing. Now we are crossing the Rubicon into a realm of testing where scripted test runs are deprecated, no longer competitive. This is the era of Functionize; it is an era in which QA rises to the challenge of gating and releasing to customers as fast as required. With Functionize, QA is the fastest runner in the relay. Functionize renders your design application self-healing with an intelligence which complements the intuition of testers. Teams now deploy with a new assurance that Functionize is a pure partner, always working in the background to increase awareness of issues. Instead of getting bogged down scripting a custom assertion and missing a catch on another one, QA is liberated to follow farther reaching insights all the way to the user. Now testing accelerates to meet the demand; CI, CT, and CD are in tandem.

Undoubtedly, Devops enhances Agile, but it is a bridge under stress, an engineer under duress. Consider load testing for deployment in a scenario prior to Functionize. A premier UK commercial bank suffered a failure across 3,000 ATMs after a load test failure, leaving customers with undefined transactions and, in many cases, without cash for a day. The cost of the non-compliance fines alone would inspire any Devops-based enterprise to reevaluate load testing. Functionize solves the problem of load testing large-scale deployments with calm elegance: load tests spin up quickly and then vanish from the virtual machines, producing only the essential report of errors required for a remedy.

When testing enterprise deployments in the cloud, the scope grows with the scale of the business, and a visual rendering error can have outsized consequences. A state health insurance provider rolled out an SPA enrollment form intended to accommodate a range of display options. Catastrophically, each code change caused the system to render pages unpredictably, to the befuddlement of users. The result was the opposite of the intended enrollment: thousands of customers were forced into Medicaid instead of their intended Affordable Care provider. The number of display options is simply too large for strictly human QA to verify. What is the solution?

Functionize can spin up hundreds or even thousands of virtual machines to emulate your production load and confirm visually accurate rendering across an unlimited range of device displays. This enables the confident release of changes to market in minutes. Functionize's machine learning algorithm sees changes in your page rendering and tells you exactly where a problem exists, so you can deploy instantly and with assurance.
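The underlying pattern, fan out many simulated users in parallel and keep only the failures, can be sketched generically. The code below is a self-contained toy using Python's standard thread pool; it is not Functionize's platform, and the failure rate and function names are invented for the example.

```python
# Generic sketch of parallel load generation (NOT Functionize's platform):
# fan out simulated "virtual users" and collect only the errors, mirroring
# the idea of a distilled report that contains just what needs fixing.
from concurrent.futures import ThreadPoolExecutor

def simulated_request(user_id):
    """Stand-in for one virtual user's request; a real test would hit the SUT."""
    if user_id % 100 == 0:  # pretend 1 in 100 requests fails
        raise RuntimeError(f"user {user_id}: checkout timeout")
    return "ok"

def run_load_test(virtual_users=500):
    errors = []
    with ThreadPoolExecutor(max_workers=50) as pool:
        futures = [pool.submit(simulated_request, u)
                   for u in range(1, virtual_users + 1)]
        for f in futures:
            try:
                f.result()
            except RuntimeError as exc:
                errors.append(str(exc))
    return errors  # only the failures survive into the report

print(len(run_load_test()))  # 5 failures out of 500 simulated users
```

A cloud-scale system replaces the thread pool with fleets of short-lived virtual machines, but the shape of the result is the same: successes are discarded and the report is just the error list.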

When leaders grasp that continuous deployment now requires QA engineers with skills virtually equal to developers, a great reckoning follows: solving one problem often subtly creates another. Actual implementation of continuous integration, testing, and deployment cannot work unless every team member possesses the engineering finesse to delicately polish every millimeter of the Devops pipeline all the way to the customer's device. That load is too heavy. Now the road ahead is clear, machine intelligence is the vehicle, and Functionize is your Transporter!

The post Unified Integration Testing & Deployment appeared first on
