The problems you encounter with user acceptance testing aren't technical. They're all political. You can't solve all of the messes when things go wrong – but you can do quite a bit to prevent them.
Users are required to have intimate involvement with application development at only two points in a project: at the beginning, and at the very end. At the beginning of the project, the user has to explain what he wants the software to do, so that requirements can be created – whether that specification is as simple as, "I need a utility to back up these files" or a requirements document as long as some novels.
Agile methodologies encourage user participation throughout the development process, which can improve the likelihood of a positive result. Be that as it may, the other point where an end user must get involved is at the end of the project: user acceptance testing (UAT). That project phase has a lot of names, though my own working definition is "the last phase of the testing cycle, right before the team goes out for a celebratory beer."
Only a few challenges in "acceptable" acceptance testing are about the tests. The transition from "bits of code that might work" to an application ready to go into production is more political than technical. The Acceptance Testing phase is the last opportunity for someone outside the QA or development team to stick a thumb in the pie. You know whom I'm talking about: the people for whom the word "deadline" doesn't seem to apply ("I know I was supposed to identify problems last month, but gosh this is really important"); political shenanigans when management wants to take credit or cover their butts; consulting clients who are determined to find problems to avoid paying a time-based delivery bonus.
As a result, the celebratory beer is short lived if users respond, "This is wrong, and that is not right, and I really wanted this over here, and oh, by the way, we decided that we really didn't need this functionality after all, so could you take it out?"
Good beer should never be wasted, so let's see what we can do about this issue. (Note that I collected this input years ago, so I filed off my QA contacts’ names. However, none of their advice has changed.)
I seem to have a bullet point about correctly setting expectations in every article I write, but it's become obvious how much it matters for every QA endeavor. Nowhere does it matter more, however, than in defining up front what the user considers acceptable. That simply has to happen at the beginning of the project – or, failing that, at some point before you drop the completed application in the users' laps.
When developers create a requirements document, they think primarily about establishing what the application needs to do, and what it must accomplish before the software goes to production. However, defining the acceptance criteria early – as part of the requirements process – both helps you find hidden, unspoken requirements, and shows users what this testing stuff is all about.
As consultant David says, "End users discover something they don't like – whether it is a real problem or not, whether it is per spec or not. Then they promptly declare the entire product rubbish based on that single point observation. They have no accountability, but seemingly magical authority to send the entire project back to the drawing board, and at the very last stage of the game."
"An acceptance test is a type of contract between the users (or their representatives) and the producers, saying in essence 'If you can do this, then the product is acceptable,'" says a tester named Joe. "But as professional QA testers can attest, it's never quite that simple." All sorts of unstated assumptions are left out of the Acceptance Criteria, and can become the subject of negotiation near the end of the development cycle. Those can turn into political shenanigans, development bashing, or delaying tactics, if not handled well.
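Joe's "contract" framing can be made concrete by writing an acceptance criterion as an executable check. The sketch below is illustrative only: it assumes a hypothetical `backup_files()` utility (echoing the "back up these files" example earlier), and every name in it is invented for this example rather than taken from any real project.

```python
# A minimal sketch of an acceptance criterion as an executable check.
# backup_files() is a hypothetical stand-in for the product under test.
import shutil
import tempfile
from pathlib import Path


def backup_files(source: Path, dest: Path) -> int:
    """Hypothetical utility: copy every file under source into dest.

    Returns the number of files copied.
    """
    dest.mkdir(parents=True, exist_ok=True)
    count = 0
    for f in source.rglob("*"):
        if f.is_file():
            target = dest / f.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            count += 1
    return count


def test_backup_copies_every_file():
    """Contract: 'If every source file appears, intact, in the backup,
    the product is acceptable.'"""
    with tempfile.TemporaryDirectory() as tmp:
        source = Path(tmp) / "docs"
        dest = Path(tmp) / "backup"
        source.mkdir()
        (source / "report.txt").write_text("quarterly numbers")
        (source / "notes.txt").write_text("meeting notes")

        copied = backup_files(source, dest)

        assert copied == 2
        assert (dest / "report.txt").read_text() == "quarterly numbers"
        assert (dest / "notes.txt").read_text() == "meeting notes"


test_backup_copies_every_file()
```

The point of the exercise is the docstring, not the code: writing the criterion down as a check forces the unstated assumptions ("intact," "every file," "same names") into the open before the end of the cycle, where they otherwise become negotiation fodder.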
While it won't necessarily solve all the problems – I don’t think anything will – some QA professionals find that you can discover or prevent difficulties by creating a UAT team (even if that's only one person) to deal with the users. That person works closely with the customer, holds their hand, guides them, and cajoles them throughout the project to do what they need to do. That individual should be experienced, able to build relationships, and well respected. (A heart that’s pure and the strength of ten would not go amiss, either.)
It's also important for the liaison to deal with the right users. The business users who define the functional specs may not be the people who use the system. One tester described a project that had to go back to the drawing board; this time, the functional requirements were created by people who actually used the system. Imagine that.
In other situations, as QA professional Christian points out, managers show up at the beginning of the project to approve milestones and budgets. The managers let actual users or their representatives define requirements and ensure that budgets and timetables are on track. But then, says Christian, the managers show up again at UAT time because the project needs their go-ahead (such as to approve the use of people or machine resources). Suddenly, project management responsibilities are reshuffled. People who were previously uninvolved, or who had little to say as long as budgets were on track, are now confronted with the reality of the work and all its imperfections. "Here comes the political mess," Christian says.
Software developers, testers, and users can get so wrapped up in the process of creating an application that they lose sight of the effect of deploying it. Unfortunately, this may not become apparent until it's time for user acceptance testing.
Fritz, a project management consultant, explains, "A damn good reason for acceptance testing is to make sure the new functionality will actually enhance, not hinder, business operations. Until a workable product is demonstrated for evaluation, the users may find it hard to understand the ins-and-outs of how their operation will change once the software is deployed, which makes UAT a major risk management effort." In other words: Once the users discover how the new software affects their jobs, they may realize they don't want it after all.
Fritz sees this as a reason to adopt, if not the full Agile development set of methodologies, at least some of its underlying philosophy. The requirements-development-UAT cycle "demonstrates that the requirements are merely a starting point and the user feedback is essential to refine requirements prior to the next cycle," he says.
Stephen, a consultant, concurs. "We deliver early, pre-production code, for demonstration and further definition. We go back to the shop to re-tool around the aspects found during the latest discovery session. This can be repeated – based on the scope of the project – several times."
Whether you do this as an application prototype or as an Agile development cycle, developers and testers say that it makes a huge difference. "This process puts 90% of the acceptance testing up front, and allows us a way to challenge scope creep once the requirements and prototype have been signed off on," says Scott, an independent software vendor.
In the end, the question comes down to creating an application that adds value for the user. And that's exactly what makes UAT so messy. "The number one problem with UAT testing is that it is political. Some of the things the user may not like, or that negatively impact their workflow, are not actual functional errors. Therefore, the system may have passed every test run with flying colors and still be 'wrong,'" says Linda, a QA manager. "Overall, it doesn't matter if you produce a perfectly-balanced, perfectly priced, perfectly packaged dog food if the dog won't eat it."
One candidate to serve as an end-user liaison might be a Chief Quality Officer. Have you considered hiring someone for that role?