I recently had the pleasure of sitting down with Rebecca Karch, QA advisor and former VP of Customer Success at TurnKey Solutions, to discuss the evolving role of QA, especially as it relates to continuous delivery. In this two-part series, Becky explores the current state of test automation and how autonomous testing is poised to transform the marketplace.

Over the course of the two interviews, Becky will cover:

  • The evolving nature of software quality assurance
  • The current state of testing automation
  • Testing’s role in Continuous Delivery
  • Today’s test automation challenges
  • Trends that are shaping the future
  • The future of test automation
  • Skills that will prepare testers for the future

A brief background: Becky Karch has dedicated her entire career to QA/Test and has deep experience with test automation. She has run QA/Test teams for several startup companies and has also led large QA organizations. Becky’s career has included oversight of testing for many different types of software, including web-based, enterprise-level, client/server, and embedded systems. Over the past 30+ years, she has seen first-hand many of the challenges that companies face day-to-day, most significantly the high cost of developing and maintaining automated tests, which forces organizations to spend significant time and money and often drives them back to manual testing when automation efforts fail. That experience prompted her to transition from QA Director to Customer Success executive for companies that design, develop, and deliver test automation frameworks that are revolutionizing the software test industry. She works tirelessly to ensure that companies focus on testing the right things at the right time and find long-term success with the test automation tools they purchase.

The nature of QA and the constant evolution of software

The QA industry has evolved to a point where open source is king. The notion that “I can’t be the first person to have this problem; let me see if someone else has a solution” is fueled by the fact that most software is being released more rapidly than ever, and testers are typically under-skilled because of tight QA budgets. This dangerous combination has testers searching the internet for quick, free answers. I say dangerous because, whether they are automated code snippets or manual test flows, open source tests are not tailor-made for everyone’s application and often miss the mark. Despite this, Selenium, a “free” open-source test automation framework, has become extremely popular, especially over the past five years (a quick Google search for Selenium Tester jobs returns over 1 million hits). Yet the Selenium framework has significant limitations and takes a skilled, expensive resource to use well. The fact that software is evolving quickly, employing more sophisticated methods to cover more sophisticated technology platforms, further reduces the usefulness of open source solutions to the point where “free” is not really “free.” When I talk to testers and QA managers at events like StarEast/West, or at SQuAD (a Denver-based testing meetup group), most are using open source Selenium for the little automation they do, although manual testing still largely predominates.

The current state of test automation

In my opinion, test automation has changed very little over the past 10+ years. Two primary methods dominate: scripted/programmatic test development and record/playback automation using an automation tool.

First off, let me say that it makes no sense to me that testers are writing software to test software: the test software can have as many bugs as the software being tested, and writing it takes a lot of time and skill (those testers don’t come cheap). Yet this is the method that dominates the test automation landscape. Companies are hiring scores of offshore resources to do this cheaply, but that approach is fraught with additional problems: offshore resources often have trouble interpreting the real intention of the tests, and they are so far removed from the impact of software failures on a business that they have no real investment in the quality of the tests they write. Furthermore, as application software changes quickly, keeping those tests updated is no simple task; it’s often easier to throw away the old test and write a new one. To an offshore team, rewriting tests fuels their bottom line, which is good for them, but it increases cost unnecessarily.
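The point that test code can be just as buggy as the code it tests can be made concrete with a toy sketch (plain Python, invented for illustration; `apply_discount` and both test functions are hypothetical, not from any real project). Here the production code has a bug, but a test with its own bug still passes, so the defect ships anyway:

```python
def apply_discount(price, pct):
    """Production code under test: should reduce price by pct percent."""
    return price * (1 - pct)  # BUG: treats pct as a fraction, but callers pass a percent

def buggy_test():
    # Test code with its own bug: it feeds a fraction (0.10) instead of the
    # percent (10) that real callers use, so the broken code still "passes".
    return apply_discount(100.0, 0.10) == 90.0

def faithful_test():
    # A test that exercises the code the way callers actually do.
    return apply_discount(100.0, 10) == 90.0

print(buggy_test())     # True  -- green build, defect ships
print(faithful_test())  # False -- the defect only surfaces here
```

The buggy test and the buggy product code cancel each other out, which is exactly why writing more software to test software multiplies, rather than removes, opportunities for error.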

Record/playback technology is about 20 years old. While it offers a fast and easy way to record your tests, it works only for the exact paths that were recorded. When a screen, feature, or path through the application changes, the test becomes obsolete and new tests must be recorded. Companies subject to regulation (e.g., SOX or HIPAA) cannot simply update or overwrite the recording, which many frameworks allow. I have seen many organizations get tripped up because they spent too much time and too many resources organizing tests and cleaning up after each release, especially when new releases are coming at them rapidly.
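Why a recording goes obsolete can be sketched in a few lines of plain Python (a toy model, not any vendor’s actual tool; the locators and step format are invented for illustration). A recording is essentially a frozen list of (locator, action) steps, and playback breaks the moment the UI no longer matches it:

```python
# A "recording" is just a frozen sequence of (locator, action) steps.
recorded_steps = [
    ("id=login-btn", "click"),
    ("id=qty-field", "type:2"),
    ("id=checkout-btn", "click"),
]

def playback(steps, current_ui):
    """Replay recorded steps; fail on the first locator the UI no longer has."""
    for locator, action in steps:
        if locator not in current_ui:
            return f"FAILED: element {locator} not found"
    return "PASSED"

# The UI as it existed at recording time.
ui_v1 = {"id=login-btn", "id=qty-field", "id=checkout-btn"}
# The next release renames one element -- the recording is now obsolete.
ui_v2 = {"id=login-btn", "id=qty-field", "id=purchase-btn"}

print(playback(recorded_steps, ui_v1))  # PASSED
print(playback(recorded_steps, ui_v2))  # FAILED: element id=checkout-btn not found
```

One renamed button invalidates the whole recording, and for a regulated organization that cannot simply overwrite the old recording, every such change means re-recording and re-approving from scratch.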

There are a few other component-based and model-based methods in the marketplace, but these have gained little traction because they are confusing to use and provide no reasonable means of measuring test coverage.

Testing’s Role in Continuous Delivery

With the industry’s push towards DevOps and the rapid release/deployment of software, QA teams are under more pressure than ever to ensure the highest possible quality in the shortest possible time. QA organizations are having a hard time keeping up, and the burden of testing, or rather bug discovery, is unfortunately placed on the end user. In Agile practice, I hear from testers and their managers that they spend so much time automating and maintaining tests that developing software to test software becomes part of each sprint’s technical debt, creating an enormous backlog that slows down software delivery. This debt grows so large that organizations opt for manual testing just to get some level of coverage before they release the software.

Furthermore, testers spend the vast majority of their time on functional testing, focusing only on a few specific new features or changes. But spending all your time testing any single module’s functionality is dangerous, since the typical user experience spans many modules or applications that need to work together seamlessly, with adequate performance under heavy load. It’s sad to say that as consumers, we’ve all seen websites crash and online shopping carts that don’t work correctly, and we’ve had our personal information hacked or compromised in some way. These so-called ‘software glitches’ are not typically things that functional testing can find. But the hit to a company’s bottom line is real – lost or disillusioned customers have a long-term cost to a business. And all fingers point back to QA.
