27 April 2026

User testing - and how we deploy it at Absurd

User testing is one of those things everyone says they do. Far fewer do it properly.


User testing is not about collecting reassurance. It is about reducing uncertainty in a way that makes decisions more reliable.

The difference usually comes down to methodology. Not tools, not budget, not even team size. Just how you approach learning from users.

Why methodology matters

Without a clear approach, user testing quickly turns into random feedback, leading questions, confirmation bias, and the familiar line that everybody seemed to like it.

That is not research. It is reassurance, or something close to it.

A structured methodology does two things. It makes your findings more reliable, and it makes your decisions more defensible.

That is what actually helps products move forward.

The core methodologies we use

At Absurd, we do not treat user testing as a single activity. We use a mix of methods depending on the stage of the product and the type of uncertainty we need to reduce.

1. Exploratory research

This is where most projects should start.

The goal is to understand behaviours, motivations, and problems. At this point, we are not trying to validate solutions. We are trying to understand reality before shaping one.

Methods often include:

  1. 1:1 user interviews
  2. Contextual inquiry, where we observe users in their environment
  3. Open-ended questioning designed to surface behaviour rather than opinion

The key rule is simple: do not talk about the product too early.

Many teams jump straight into asking what people think of an idea or interface. We focus first on how they currently solve the problem, what workarounds already exist, and where frustration or effort shows up.

That tells you far more than an early reaction to a design ever will.

2. Concept testing

Once there is a direction, we test ideas before committing to build.

The goal here is to reduce risk, and therefore unnecessary cost, before development starts.

Methods can include:

  1. Wireframes
  2. Clickable prototypes
  3. Journey walkthroughs

At this stage, we are looking for clarity, relevance, and immediate friction. If users hesitate, misread the intent, or struggle to move forward, that is a signal. It is not something to explain away.

Concept testing is valuable because it gives teams the chance to challenge an idea while it is still cheap to change.

3. Usability testing

This is where most people think user testing begins, but it is only one part of the picture.

The goal is to identify where users struggle with a product or service.

Methods often include:

  1. Task-based testing, where participants are asked to complete specific actions
  2. Moderated or unmoderated sessions
  3. Screen and behaviour observation

The important distinction is that we are not asking users what they think. We are watching what they do. That difference matters more than almost anything else.

There is also a common misconception that usability testing has to be expensive, involving recruitment agencies, lab hire, and specialist equipment every time. We do use labs where they are useful, but we are equally comfortable finding lighter-weight situations that still meet the research objective. Sometimes that means standing with an iPad in the centre of Manchester’s student village. Sometimes it means sitting in a hotel lobby asking customers for ten minutes of their time. The point is to match the method to the question, not inflate the process for its own sake.

4. Quantitative validation

Once patterns begin to emerge, we validate them at scale.

The goal is to confirm that the insights hold across a broader audience and are not just isolated observations.

Methods can include:

  1. Surveys
  2. Analytics review
  3. A/B testing

Qualitative research tells you why something is happening. Quantitative validation tells you how often it is happening and how widespread the pattern really is.

You need both if you want your insight to stand up to scrutiny.
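To make the quantitative side concrete, here is a minimal sketch of the kind of check that might sit behind a simple A/B validation: a two-proportion z-test comparing conversion rates between a control and a variant. The figures and function are illustrative assumptions for this post, not output from any real project.

```python
# Illustrative sketch only: two-proportion z-test for a simple A/B comparison.
# All numbers below are hypothetical.
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Return the z statistic and two-sided p-value for two conversion rates."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)      # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided, normal approximation
    return z, p_value

# Hypothetical example: 120/1000 conversions on the control, 150/1000 on the variant.
z, p = two_proportion_z(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p-value suggests the gap is unlikely to be chance
```

A check like this does not replace the qualitative work; it simply tells you whether the pattern you observed in sessions shows up reliably at scale.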

So how is this deployed at Absurd?

The methodology itself is not the hardest part. The real challenge is integrating research into live projects without slowing everything down.

Our approach is based on a few principles.

1. Research is continuous, not a phase

We do not treat research as something that happens neatly at the start, in the middle, or at the end. It is embedded into design and development as an ongoing input.

Small, consistent feedback loops are usually more effective than one large research sprint followed by silence.

2. We translate insights into impact

Raw research is not useful on its own.

We do not hand over recordings or notes and leave the rest to chance. We connect findings directly to product decisions, technical implications, and business outcomes.

If a finding does not change a decision, it is noise.

3. We prioritise ruthlessly

Not every insight deserves immediate action.

We map findings against likely user impact, development effort, and business priority. That keeps teams focused on what matters most and stops research from becoming a bottleneck to delivery.
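As a rough illustration of that mapping, here is a small sketch of one way findings could be scored against impact, effort, and business priority. The weighting and the example findings are hypothetical, not Absurd's actual model.

```python
# Illustrative sketch only: ranking research findings by impact, priority and effort.
from dataclasses import dataclass

@dataclass
class Finding:
    summary: str
    user_impact: int        # 1 (low) to 5 (high)
    business_priority: int  # 1 (low) to 5 (high)
    effort: int             # 1 (trivial) to 5 (major rework)

    @property
    def score(self) -> float:
        # Higher impact and priority raise the score; higher effort lowers it.
        return (self.user_impact * self.business_priority) / self.effort

findings = [
    Finding("Checkout error message is misread", 5, 4, 2),
    Finding("Onboarding copy feels long", 2, 2, 1),
    Finding("Search filters are hard to discover", 4, 5, 4),
]

for f in sorted(findings, key=lambda f: f.score, reverse=True):
    print(f"{f.score:>5.1f}  {f.summary}")
```

The exact formula matters less than the discipline: every finding gets weighed against the same criteria before it competes for delivery time.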

4. We manage expectations early

User testing often reveals complexity that stakeholders did not expect.

That is why we do not simply present findings. We frame them clearly. If an insight affects one part of the product, we explain what else that changes, what that means for scope, and how it may affect timelines or priorities. That alignment is what keeps projects moving, even when the learning is uncomfortable.

User testing is not about asking users what they want

It is about reducing uncertainty.

The teams that get the most value from it are not always the ones doing more research. They are the ones applying the right methodology at the right time, then translating what they learn into decisions in the right order.

That is the difference between insight and noise.