Crowdsourcing has drawn a lot of attention as a way of predicting and mapping future coronavirus hotspots based on user-reported symptoms. However, it’s also an increasingly popular approach to software application testing, engineering, and quality assurance.
Software crowdsourcing is making inroads in the software industry, as it gives companies quicker and less costly ways of developing and testing apps that meet users’ real needs. Meanwhile, individual crowdsourced testers, also known as crowdtesters, are earning money through remote work, ranging from a few dollars for each detected bug to weekly salaries of $2,000 and up.
Is software crowdtesting a panacea, though? The answer is “No.”
In crowdsourcing, people can contribute to a collaborative platform over the Internet from wherever they are. It’s well known outside the software development field. As one example, there’s the Wikipedia online collaborative encyclopedia. A couple of others are Quora and Stack Exchange, web-based platforms where people ask and answer each other’s questions. Crowdsourcing has also made headlines lately as a way of predicting and mapping future COVID-19 hotspots based on user-reported symptoms.
In software development, crowdsourcing is used in design, requirements analysis, and coding. But as a business endeavor, crowdsourcing is becoming especially prevalent in testing and quality assurance. Businesses, as requesters, delegate tasks to outside individuals or groups, as providers, with the support of crowdsourcing platforms. The business pays a fee to the crowdsourcing platform, which then performs varying degrees of candidate vetting and task management. The crowdsourcing company also handles payments to testers.
The global crowdsourcing testing market is set to soar from $1.3 billion in 2019 to $2.0 billion in 2024, says a recent report from Markets & Markets. The analysts anticipate growth to be particularly strong in two areas: mobile testing, for making sure apps run smoothly on various phones and tablets; and location testing, for seeing to it that language and cultural nuances are well suited to local end users in various countries.
Without hiring extra in-house staff, a company can easily gain extra tech talent for projects at relatively low cost. This is true whether an organization needs an army of usability testers in 50 nations or a single expert in a specialized tech area, such as iOS app design.
Software crowdsourcing does have its downsides, though. “We use software crowdsourcing in our deployments, but while valuable, we use it sparingly, only for initial testing of new deployments where we don’t have the experience to run the basic tests,” notes David Johnson, CTO at Mulytic Labs. “We do not base our whole testing strategy around crowdsourcing as it presents too much risk and not enough consistency.”
Before drilling down into the pros and cons of software crowdtesting, here’s a quick look at the evolution of software crowdsourcing.
In a 2006 Wired article, Jeff Howe defined crowdsourcing as representing “the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call.”
Yet the concept predated the term. Linus Torvalds, originator of the Linux operating system, is widely credited as an early practitioner of crowdsourcing. When he was a 21-year-old student at the University of Helsinki in Finland way back in 1991, Torvalds put up a quick message on Usenet’s comp.os.minix newsgroup: “I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since April, and is starting to get ready. I’d like any feedback on things people like/dislike in Minix,” Torvalds wrote. He ultimately attracted so many developers that Linux 1.0 shipped just three years later.
IDC estimates that today, 24.2 million developers worldwide are contributing to open source projects, including Linux, LibreOffice, the GIMP photo editing tool, the GNU compiler collection, and many more. Open source developers are often motivated by factors such as the ability to learn new skills, gain recognition, and support a worthy cause. While contributors aren’t paid directly, many benefit financially, for example by contributing while working as paid employees of commercial software companies.
However, the paying side of crowdsourcing is a totally different animal from open source. It has itself become, well, crowded with competing platforms.
Many online labor markets use a model known as microtasking. Microtasks are intended to be done in a matter of minutes. Collectively, though, the microtasks add up to a solution for a more complex task.
Some microtasking platforms, such as Test IO, are designed for detecting software defects and pay crowdtesters on a “per bug” basis. Test IO’s testers can generally expect to receive anywhere from $5 for a “low severity bug” to $50 for a “critical bug.”
Other sites, such as UserTesting.com and TryMyUI, focus on testing usability, or on applications’ “user-friendliness.” UserTesting.com pledges to return usability tests within an hour. TryMyUI requires no previous experience in software testing and pays $10 per 20-minute online review.
You’ll also find individual sites that supply a wide range of crowdtesting services and are looking for testers. A recent glance at the project board on uTest’s site showed gigs involving bug testing, usability testing, mobile testing, PC game testing, and localization testing. uTest pays on a “per performance” basis, depending on the gig type. Some project needs are quite specific, such as Android testers with accessibility switch devices, or people with native Australian, New Zealand, or UK accents.
At the top end of the pay spectrum, testers specializing in cybersecurity have reportedly earned anywhere from $500 to tens of thousands of dollars per security bug detected, either directly from tech companies like Google, Facebook, or Mozilla or through cybersecurity crowdsourcing platforms like Bugcrowd and Synack.
Crowdtesting can also come into play in areas that include functional testing, for making sure that applications comply with company-specified requirements; compatibility testing, for testing software across various browsers and operating systems; regression testing, to find out whether a software change broke existing functionality; and connectivity testing, to check out issues related to network connectivity.
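To make the regression category above concrete, here is a minimal sketch of the kind of test a crowdtester might run or contribute, using Python's standard `unittest` module. The `slugify` function and the bug it guards against are hypothetical stand-ins, not from any real project:

```python
import unittest


def slugify(title: str) -> str:
    """Hypothetical function under test: turn a page title into a URL slug."""
    return "-".join(title.lower().split())


class SlugifyRegressionTests(unittest.TestCase):
    """Regression tests pin down behavior the software already gets right,
    so a later code change that breaks it fails loudly."""

    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_whitespace(self):
        # Guards against a (hypothetical) earlier bug in which repeated
        # spaces produced empty slug segments like "hello--world".
        self.assertEqual(slugify("Hello   World"), "hello-world")


if __name__ == "__main__":
    unittest.main()
```

The point of keeping such tests in the suite permanently is exactly the one the paragraph makes: they detect whether a new change broke existing functionality.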
In another model, companies including Topcoder, LeetCode, Codeforces, and CodeChef gamify crowdtesting by treating workers as contestants. Typically, after a client proposes a project, a coordinator – called a “copilot” at Topcoder – decomposes the project into tasks or “competitions.” The contests might revolve around requirements, architecture, UI design, implementation, or testing, for example, with each task lasting a matter of days.
Contestants offer competing solutions. From this pool, a winner and runner-up are chosen. A worker who completes the task and has their solution chosen is paid either a bounty award or an hourly wage.
Topcoder, a pioneer of the contest model, has reportedly awarded more than $25,000 per day to contestants. What’s more, after proving themselves in various skill sets in competitions, participants become eligible for Topcoder’s Gig Work engagements. Gig Work assignments cover the whole gamut of software disciplines; a recent search turned up job titles such as QA lead, testing lead, and cloud testing lead. Typically paying $1,000 to $3,200 a week for U.S. workers, these Gig Work jobs may be as short as a month and as long as a year.
These are independent consulting engagements. As with other independent contractors in today’s gig economy, the crowdsourced workers don’t receive the same benefits that full-time employees might expect, such as health insurance or vacation leave. Nor are they eligible for state unemployment assistance when a job terminates. Plus, there’s no guarantee that crowdsourced work provides the remote workers with a consistent income level. But freelancers are already familiar with that scenario.
What are the specific advantages of crowdtesting to software teams? Speed, cost effectiveness, and language testing top the list, according to some practitioners.
“Crowdsourcing allows you to hire a large number of people who can quickly test the software in diverse environments, with 24/7 coverage,” says Dan Kelly, founder and senior partner at The Negotiator Guru, a company specializing in sourcing, negotiating, and managing complex IT contracts.
For example, Microsoft worked with Wipro and crowdsourcing platform Topcoder to increase testing speeds for its Microsoft Teams product to match the development release cadence, executing 24-hour testing cycles for Teams every week. “Via on-demand testing on Topcoder, ramp-up and ramp-down was made simple and a worldwide community of testers continuously provided feedback and documented defects, while helping Microsoft achieve wider test coverage across more devices and operating systems,” according to a written statement from Topcoder.
How has crowdtesting gone at Mulytic Labs? “We’ve been able to get initial testing done very quickly with crowdsourcing and to find initial bugs that would have taken us more time and energy than desired,” Johnson says. At first, results from crowdtesters weren’t as detailed as the company wanted, but the company has since made changes to resolve that situation.
“When you crowdsource your testing, you don’t need to shell out a salary for an employee, something which can make a significant dent in a project’s budget,” remarks Tomasz Hanke, team leader at startnearshoring.com.
On the other hand, software crowdsourcing companies vary significantly in how much they pay gig workers and on what basis. You do get what you pay for, Kelly suggests.
“Compromised quality can lead to considerable costs, which can make it hard to justify savings with crowdsourced testing. And if you pay crowdtesters for each bug they find, they will not be looking in-depth at your software, but rather searching for significant bugs,” Kelly notes.
“Crowdtesting allows for access to diverse ideas and language in localization projects and can be much more effective than a translation-based approach,” Hanke observes.
Some software organizations are combining traditional language translation services with crowdsourced language testing. For instance, some years back, after translating an email app into 29 target languages, Acompli turned to Testlio to wrap up language testing.
Testlio, a crowdsourcer specializing in mobile apps, then sourced the appropriate language experts from its testing community. “Within a few short weeks, Acompli went from being English-only to 30 languages on both iOS and Android,” Testlio said in a statement. Acompli was later acquired by Microsoft, also a Testlio partner, which transformed the app into Outlook Mobile, now available in 63 languages.
Remember, though: crowdtesting is not a panacea. Here are three major disadvantages of crowdsourcing for software teams.
“Crowdsourcing suffers from the same commitment problem as open sourcing,” asserts Jay Andrew Allen, a technical content developer for Azure, AWS, and Dynamics with Microsoft.
“Participants may be committed at the start, but interest wanes as other tasks that are more important to them crowd out the crowdsourcing work.” Of course, some might argue that the same could be said for crowdtesters as for freelancers in any field, and that levels of long-term commitment between a company and workers aren’t decided by workers alone.
Depending on the approach you choose, you might experience a higher error rate with crowdtesters than with a dedicated test team, practitioners say. For one thing, it stands to reason that because crowdtesters are less familiar with the software, they’re also less likely to find bugs.
But Allen also points to the high turnover rates that he says are common in crowdtesting. “You run the risk that new participants will simply repeat the same mistakes – reporting false positives, not testing according to your org’s standards, etc. – that your previous crowdsourced testers already ran into. Since these participants aren’t a permanent part of your organization, the knowledge loss from month to month is sizeable,” he says.
One trick for error prevention that Mulytic Labs has learned is to incentivize provable bug discoveries. “By giving the ‘crowd’ an ability to benefit, either through recognition or more future work, we encourage more honest feedback,” says Johnson.
“We used crowdsourcing to get feedback on the UX/UI (user experience/user interface) for a web design company. I noticed that we have to be very specific when dealing with crowdtesters,” Kelly recalls.
“For example, asking questions like ‘Does the website look good?’ can produce vague results,” says Kelly. “But it’s a different story if you ask, ‘Are there any layout errors?’ ‘Are images/videos loading fast?’ ‘Is the text font/size readable on mobile?’”
Before you give any tester an assignment, be sure that you know what makes test cases fail. Our white paper gives you a solid starting point.