After 15 years of implementing software for membership organisations and charities, I’ve seen a curious phenomenon: projects that tick every specification box but still leave users frustrated. The culprit? We’ve been solving the wrong problem.

I once delivered a membership renewal system that met every single acceptance criterion. Three months post-launch, member complaints doubled. Technically, we’d built exactly what was specified. Practically, we’d missed what members actually needed. The system worked perfectly – for robots. For humans, it was a daily source of friction.

This experience taught me something uncomfortable: perfect specifications can create perfectly useless software.

The specification trap

We’ve been taught to worship the traceability loop: requirements flow into user stories, user stories become acceptance criteria, acceptance criteria generate test scripts, and test scripts validate functionality. It’s elegant, logical, and fundamentally flawed when applied to human-centred organisations.

The dirty secret of our industry is that perfect traceability can create an echo chamber that amplifies initial assumptions. Each stage of the loop references the previous one, creating what feels like validation but is actually just repetition. If your original requirements missed something important about how people actually work, that gap will be faithfully replicated through every subsequent stage.
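
To make the echo chamber concrete, here is a deliberately simplified sketch. The criterion, function, and data formats are hypothetical, invented purely for illustration; the point is that a test derived straight from an acceptance criterion can only ever confirm the assumption baked into that criterion:

```python
# Hypothetical acceptance criterion, carried faithfully through the loop:
# "Renewal succeeds when the member supplies a membership number in the
# form 'M-12345' and a 10-character payment reference."

def renew_membership(membership_number: str, payment_reference: str) -> bool:
    """Accept a renewal if both identifiers match the specified formats."""
    return membership_number.startswith("M-") and len(payment_reference) == 10

def test_renewal_meets_acceptance_criterion():
    # Derived directly from the criterion above, so it restates the
    # assumption rather than challenging it.
    assert renew_membership("M-10482", "PAY1234567")

# What no stage of the loop ever asks: do members actually know their
# membership number? Every stage passes, and the renewal still fails
# for humans.
```

The code itself is trivial; the shape of the problem is what matters. Every artefact downstream of the requirement inherits its blind spots.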

I’ve seen this countless times: a charity implements a new donor management system that perfectly captures all the data fields specified in discovery, but fails to account for the messy reality of how fundraising teams actually operate. The system passes every test because it does exactly what was asked. It fails users because what was asked wasn’t what was needed.

The 85% reality

In our recent UAT webinar, I shared a statistic that often surprises people: with excellent user acceptance testing, you can deliver roughly 85% of a project successfully and cut post-launch defects by a similar margin. People expect that number to be 100%. Why isn’t it?

That 15% gap represents something crucial: the difference between specifications and reality. No matter how thorough your discovery process, you cannot anticipate every edge case, every workflow variation, every individual preference that emerges when real people use real systems under real pressure.

The organisations that acknowledge this gap and plan for it do better than those that chase the impossible dream of perfect prediction. They build systems with flexibility. They budget time for post-launch refinement. They understand that going live isn’t the end of the development process – it’s the beginning of the optimisation phase.

But here’s the uncomfortable truth: if UAT isn’t done effectively, bugs that slip through to production are 6–15 times more expensive to fix than those caught during testing. This isn’t just about money – it’s about user confidence, stakeholder trust, and organisational reputation. The cost of getting UAT wrong extends far beyond the technology budget.

The human factor

The most important insight from my years as a technology partner, and now as a consultant helping organisations manage these implementations, is this: testing isn’t about finding bugs – it’s about discovering whether we’ve built something people actually want to use.

Technical people have their place in testing – they understand data flows, integration points, and system architecture. But they’re not the ultimate arbiters of success. If non-technical people are saying “yes, but it doesn’t work,” that feedback matters more than any technical specification.

I’ve learned to be suspicious of UAT processes dominated by technical staff. They tend to focus on what the system can do rather than what users need it to do. They test for conformance to specification rather than fitness for purpose. They find bugs efficiently but miss usability problems completely.

The most successful UAT processes I’ve witnessed involve a diverse group of actual users – people at different skill levels, with different roles, approaching the system with different expectations. They find things that technical testers miss: confusing language, illogical workflows, missing contextual information.

The expectation gap challenge

One of the most difficult conversations in any implementation happens when someone says: “Technically, it does what it’s supposed to do according to the acceptance criteria, but it doesn’t meet my expectations.”

This statement makes project managers uncomfortable because it feels like scope creep. It makes technology partners nervous because it challenges the sanctity of specifications. But it’s often the most honest feedback you’ll receive during UAT.

The traditional response is to dismiss this as a training issue or redirect it through change control processes. But what if we treated it as valuable intelligence about the gap between what we thought we were building and what users actually needed?

I’ve started encouraging clients to create space for “this works but it’s not what we hoped for” conversations. These discussions often reveal fundamental misunderstandings that won’t be fixed by bug reports or user training. They point to deeper issues with workflow design, information architecture, or user experience assumptions.

Reframing UAT as assumption validation

Perhaps it’s time to stop calling it User Acceptance Testing altogether. The phrase implies that users should accept what we’ve built, rather than validate whether we’ve built the right thing.

What if we approached UAT as assumption validation instead? Every specification contains assumptions about how people work, what they need, and how they prefer to interact with systems. UAT becomes an opportunity to test those assumptions against reality.

This reframing changes everything. Instead of asking “does the system do what the specification says?” we ask “do our assumptions about user needs hold up under real-world conditions?” Instead of measuring success by bugs found, we measure it by insights gained about the gap between designed workflows and actual workflows.

This approach requires different skills from the UAT team. Instead of just following test scripts, they need to observe, question, and provide feedback about whether the system supports or hinders their actual work patterns. They become user advocates rather than quality assurance assistants.
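
One lightweight way to run this in practice is to log assumptions and observations rather than pass/fail results. What follows is a sketch of the idea, not a prescribed format; the field names and example content are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AssumptionCheck:
    assumption: str   # what the specification took for granted
    observed: str     # what testers actually did or said
    holds: bool       # did the assumption survive contact with real users?
    follow_up: str    # design, workflow, or training action if it didn't

session = [
    AssumptionCheck(
        assumption="Fundraisers record donations one at a time",
        observed="The team batches a week of pledges from a spreadsheet",
        holds=False,
        follow_up="Explore bulk import before go-live",
    ),
    AssumptionCheck(
        assumption="Members renew from a desktop browser",
        observed="Most testers reached the renewal page from a phone",
        holds=False,
        follow_up="Review the mobile journey on real devices",
    ),
]

# Success is measured in insights gained, not bugs found.
gaps = [check for check in session if not check.holds]
print(f"{len(gaps)} assumption(s) failed validation")
```

The format itself is unimportant. What matters is that the unit of record is an assumption about how people work, not a test step.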

The testing culture challenge

Creating this mindset shift requires what we call “testing culture” – an organisational environment where questioning, experimentation, and honest feedback are valued over compliance and acceptance.

In traditional UAT, finding problems is seen as negative. In assumption validation culture, finding gaps between expectation and reality is valuable intelligence. Users who identify usability issues aren’t being difficult – they’re protecting the organisation from implementing systems that will create friction and resistance.

This cultural shift is particularly important in membership organisations and charities, where the people using the system often aren’t the people who procured it. Members, donors, volunteers, and beneficiaries rarely have input into technology decisions, but they experience the consequences daily. UAT might be their only opportunity to influence the final outcome.

Building testing culture means rewarding participation, taking feedback seriously, and being transparent about how user input influences system refinement. It means acknowledging that the people closest to the work often have insights that specifications miss.

Beyond bug hunting

The organisations that get the most value from UAT treat it as much more than a quality assurance exercise. They use it as an opportunity to build user ownership, transfer knowledge, and strengthen stakeholder relationships.

When users participate meaningfully in validating a system, they develop an understanding of how it works and why certain design decisions were made. They become advocates rather than resisters. They’re more likely to help colleagues adapt to new workflows and more willing to provide ongoing feedback for continuous improvement.

This transformation from passive recipients to active participants doesn’t happen automatically. It requires thoughtful design of the UAT process, clear communication about the value of user input, and visible action on the feedback received.

The way forward

After years of watching organisations struggle with the gap between what gets built and what gets used, I’ve concluded that our problem isn’t technical – it’s cultural. We’ve approached software implementation as a series of problems to be solved rather than a process of collaborative discovery.

The most successful implementations I’ve witnessed share common characteristics: they treat users as partners rather than subjects, they plan for iteration rather than perfection, and they value learning over compliance.

This doesn’t mean abandoning rigour or accepting sloppy specifications. It means building systems that acknowledge the complexity of human organisations and the inevitability of change. It means designing UAT processes that generate insights, not just defect reports.

Perfect specifications are the enemy of good solutions. Our job isn’t to build what was asked for – it’s to build what’s actually needed. And we can only discover what’s actually needed by involving real users in meaningful validation of our assumptions.

The 85% success rate isn’t a failure to achieve perfection – it’s an acknowledgment that perfection isn’t the goal. The goal is building systems that serve people, support missions, and adapt to changing needs. That requires humility, flexibility, and a willingness to admit that specifications, no matter how carefully crafted, are just educated guesses about human behaviour.

The organisations that embrace this uncertainty and plan for continuous refinement will build better systems and stronger stakeholder relationships. Those that chase the illusion of perfect prediction will continue to deliver technically correct solutions that leave users frustrated and resistant.

The choice is ours: we can keep building systems for specifications, or we can start building them for people.

This article is based on insights from a recent webinar by Hart Square, “Demystifying User Acceptance Testing (UAT): A complete guide”, featuring practical guidance drawn from implementations across 200+ membership and charity organisations. The full webinar is available to watch on demand.

Alan Perestrello
Managing Director, Hart Square