The Future of Test Automation | Part 2 of 2 with Rebecca Karch

Yesterday we shared the first half of our two-part conversation with Rebecca Karch, QA advisor and former VP of Customer Success at TurnKey Solutions, about the evolving role of QA, especially as it relates to continuous delivery. The second half of our conversation covers:

  • Today’s test automation challenges
  • Trends that are shaping the future
  • The future of test automation
  • Skills that will prepare testers for the future

Today’s test automation challenges

For complex web-based software and services, which dominate the software industry, the single biggest challenge for test automation is object recognition. Whether tests are written by a programmer, built with open-source tooling, or run through a commercial test automation framework, reading and interpreting the Document Object Model (DOM) is critical so that each object can be described precisely enough to be used in an automated test. For services, being able to interpret, or import from, a Web Services Description Language (WSDL) file is essential. And for those focused on API testing, monitoring message flows from top-level APIs through to the system back-end, or validating side effects such as database updates, can be extremely challenging. Each of these tasks requires skills well beyond those of the typical tester. And each can change quickly as developers enhance the application, adding new objects with new functionality or updating object definitions and descriptions as they fix bugs.
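To make the object-recognition problem concrete, here is a minimal sketch in Python (standard library only; the HTML and the locator are invented for illustration) showing how a test locator tied to a single DOM attribute silently breaks when developers change that attribute:

```python
from html.parser import HTMLParser

class ObjectFinder(HTMLParser):
    """Collects elements whose attributes match a locator, the way a
    test tool must 'recognize' objects in the DOM."""
    def __init__(self, attr, value):
        super().__init__()
        self.attr, self.value = attr, value
        self.matches = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get(self.attr) == self.value:
            self.matches.append(tag)

def find(dom, attr, value):
    finder = ObjectFinder(attr, value)
    finder.feed(dom)
    return finder.matches

# Version 1 of the page: the automated test locates the button by id.
dom_v1 = '<form><button id="submit-btn">Save</button></form>'
print(find(dom_v1, "id", "submit-btn"))   # ['button']

# A developer renames the id while fixing a bug; the locator now fails,
# even though the button itself still works for real users.
dom_v2 = '<form><button id="save-btn">Save</button></form>'
print(find(dom_v2, "id", "submit-btn"))   # []
```

Real tools use far richer descriptions than a single attribute, but the failure mode is the same: the test's model of the DOM drifts out of sync with the application.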

Trends that are shaping the future

While I don’t have a crystal ball, I think that DevOps will continue to grow, and the need to bring together development, test, operations, and line-of-business stakeholders to foster continuous integration, testing, and the rapid delivery of software will become the norm. Organizations are moving away from the centralized QA organization, although the need for testers and the art of testing will, in my mind, never go away.

I also see a need for a new form of performance engineering to emerge, replacing traditional performance testing, as software must deliver consistently high performance across multiple platforms – including mobile and cloud platforms – with multiple OS environments and large numbers of users. With the growing popularity of the Internet of Things (IoT) and interconnected devices, security and usability join performance engineering in importance. The trend toward vast amounts of data moving seamlessly between devices, platforms, and OS environments is also growing.

The shift away from purely functional testing is underway, and the testing landscape must change with it. I think that companies that let their users do their testing for them will not be successful, because their customers are fed up. Companies will instead need to adopt more sophisticated tools that integrate and interoperate seamlessly across each segment of the SDLC. They can no longer afford to test manually except in one-off, exploratory-testing scenarios.

The future of test automation

As the testing landscape changes, I see a shift away from traditional test automation as a new class of testing frameworks emerges that is revolutionizing the way testing is executed. These frameworks autonomously drive all testing activities. Instead of manually testing the software or writing software to test software, developers and testers can now use their unique skills to train the test framework to do the testing for them. These tools go well beyond conventional test-recording tools to perform intuitive testing of highly complex web applications across multiple environments. I recently presented an overview of this exciting new technology at SQuAD, which represents the QA community in Denver.

Autonomous test solutions tackle the challenge of object recognition automatically, using artificial intelligence (AI) to continuously learn the DOM, WSDL, and/or APIs. They use machine learning to generate comprehensive tests that ensure coverage for these objects, in context with how the application is used. Likewise, users can upload existing tests, automatically triggering object recognition. The more tests that are created or uploaded, the more the framework learns about the application under test.
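As a toy illustration of the idea behind learned object recognition (this is not any vendor's actual algorithm; the pages, attributes, and scoring are invented), the sketch below falls back to matching an element by its remaining known attributes when the primary locator no longer finds an exact match:

```python
# Toy model: each DOM element is a dict of its attributes.
page_v1 = [
    {"tag": "button", "id": "submit-btn", "text": "Save", "class": "primary"},
    {"tag": "button", "id": "cancel-btn", "text": "Cancel", "class": "secondary"},
]
# The same page after a refactor renamed the id the test depends on.
page_v2 = [
    {"tag": "button", "id": "save-btn", "text": "Save", "class": "primary"},
    {"tag": "button", "id": "cancel-btn", "text": "Cancel", "class": "secondary"},
]

def locate(page, locator):
    """Try an exact match first; if that fails, 'heal' by picking the
    element that shares the most attributes with the last known description."""
    exact = [el for el in page if all(el.get(k) == v for k, v in locator.items())]
    if exact:
        return exact[0]
    # Healing step: score every element by attribute overlap with the locator.
    return max(page, key=lambda el: sum(el.get(k) == v for k, v in locator.items()))

locator = {"tag": "button", "id": "submit-btn", "text": "Save", "class": "primary"}
print(locate(page_v1, locator)["id"])  # submit-btn (exact match)
print(locate(page_v2, locator)["id"])  # save-btn  (healed via text and class)
```

The richer the learned description of each object, the more reliably a framework can recover when any single attribute changes.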

An autonomous test framework can also generate or upload application data, which determines the execution flow for each test. These frameworks can handle large amounts of data, satisfying the need for big-data coverage. Execution runs can be launched from a continuous integration tool and executed on a myriad of platforms and OS environments, which the framework can spin up on demand, reducing the hardware overhead needed for the test environment. Naturally, performance is measured as the tests execute.
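The notion that data determines execution flow is classic data-driven testing; a minimal sketch (the data rows, roles, and application stub here are all hypothetical) shows how each uploaded row selects the path a test takes:

```python
# Each data row drives one run of the test and selects its branch.
test_data = [
    {"role": "admin", "expect_settings": True},
    {"role": "viewer", "expect_settings": False},
]

def can_open_settings(role):
    # Stand-in for driving the real application under test.
    return role == "admin"

def run_suite(rows):
    results = []
    for row in rows:
        # The data row, not the test script, decides which behavior is exercised.
        ok = can_open_settings(row["role"]) == row["expect_settings"]
        results.append((row["role"], "pass" if ok else "fail"))
    return results

print(run_suite(test_data))  # [('admin', 'pass'), ('viewer', 'pass')]
```

Scaling this pattern to millions of generated rows is what turns ordinary data-driven testing into the big-data coverage described above.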

Moreover, these frameworks have onboard analytics and dashboards to display failure information, report run results by release, show performance trends, and detect areas where test coverage is weak. As the application changes, both the object recognition and the test cases can be automatically ‘healed’ when tests are rerun, and predictions can be made about future software weaknesses.

I see autonomous testing as being the future of software development, and it’s the only real innovation in the software testing space over the last 10-15 years.

Skills you should start developing now to be ready for the future

I would suggest that testers have a basic working knowledge of Selenium because of its ubiquity in the industry and because Selenium continues to be the plumbing beneath many testing tools. But I would caution testers against putting all their eggs in that basket, so to speak: I think the need to understand Selenium is short-term, and testers won’t need to be experts by any means.

Instead, to be ready for the future, I think testers need to spend their time understanding how their customers use the applications their company develops, how the applications and/or application modules interoperate with one another, and how the platforms on which they run can impact the user experience. Focusing on analyzing risk, understanding different usage taxonomies, and learning proper system-level test design methods (performance, usability, and security) is critical, because the user experience is what will drive success.

The single piece of advice I would give any QA executive is to have your testers become advocates for the user and to use an autonomous testing framework to drive the testing for you. Every executive has innovation among his or her performance-improvement goals. Combining innovation with today’s technology will ensure that your customers get software that works every time, protecting your company’s bottom line.

Sign Up Today

The Functionize platform is powered by our Adaptive Event Analysis™ technology which incorporates self-learning algorithms and machine learning in a cloud-based solution.