
How To Conduct Effective User Testing
TLDR
How to Conduct Effective User Testing Across the Web
If you’re not testing your site or app with real users, you’re guessing. And guesswork is expensive.
This guide breaks down how to run strategic, outcome-driven user tests across websites, web apps, and customer portals, from framing the right questions to observing real behaviour, spotting friction, and turning insight into action.
You’ll learn:
- Why analytics tells you what, but user testing tells you why
- How to ask smarter questions that drive design and revenue decisions
- What to test (and how) across different surfaces: marketing sites, SaaS apps, and customer portals
- How to recruit the right users (not just available ones)
- Why scenario realism makes or breaks your results
- How to turn feedback into fixes, quickly
- How to embed testing into your sprint cadence so it’s consistent, not chaotic
Bottom line: testing isn’t a UX ritual. It’s a competitive advantage. Done well, it shortens feedback loops, de-risks decisions, and helps you build digital experiences that actually perform.
• • •
Introduction
I’ve sat in too many rooms where “user feedback” meant the CEO asking their mate what they thought of the homepage.
Let’s be blunt: most teams don’t test. They ask. They guess. They ship. And they wonder why bounce rates climb, conversions stall, or onboarding flops.
Whether it’s a marketing website, customer portal, or web app, the principles hold: if you’re not observing real people trying to use what you’ve built, you’re building blind.
I’ve led product strategy for FTSE brands, global NGOs, and fast-moving scale-ups. And the biggest unlock, every time, is watching someone actually try to complete a task. It’s not about being surprised. It’s about being humbled. They don’t read your clever copy. They don’t click the shiny CTA. They do what makes sense to them. If it doesn’t work, that’s on us, not them.
When we rebuilt the National Extension College’s website, one round of moderated user testing surfaced a critical disconnect between their navigation model and how real users actually searched for solutions. Fixing that friction, plus a handful of targeted layout tweaks, dramatically improved content discovery and conversion quality. That’s what happens when you stop guessing.
If you want to conduct usability testing that actually improves outcomes, you need the right focus from the start. That means identifying clear goals, defining your target users, and running a pilot test to make sure your scenarios reflect real-life behaviours.
This article is a practical guide for leaders, designers and product owners who want to run a better user testing process, and build better web experiences. No fluff. No buzzwords. Just real ways to make your web presence work harder.
• • •
Not All Testing Is Created Equal
Feedback is Not Evidence. Opinions Are Not Insights.
Most “user testing” isn’t testing, it’s opinion-chasing in disguise. A quick round of stakeholder feedback. A few Slack messages. Someone’s mate saying the homepage “feels a bit busy.”
But none of that tells you how users behave, or where they get stuck.
Real insight doesn’t come from opinions. It comes from watching real people in real scenarios, using a thoughtful user testing method. It’s about user behaviour, not boardroom preferences. It answers the hard, specific questions:
Did the user complete the task? Where did they hesitate? What did they expect but not find?
Over the years, I’ve seen this gap play out again and again. Product teams obsess over scroll depth, but miss that no one understood the CTA. Marketing teams run A/B tests while a broken user interface erodes trust. And customer portals go live with strong NPS scores, even though half of users never return.
Here’s the truth:
Analytics tells you what happened.
User testing tells you why.
That difference is critical, especially across modern websites, SaaS tools, and internal portals.
- Your site might look great, but if users can’t find the contact link in three seconds, it’s broken.
- Your app might function flawlessly, but if users hesitate at every step, you’ve lost flow, and probably conversions.
- Your portal might check every compliance box, but if it doesn’t match mental models, it bakes friction into every login.
To get real insight, you need to conduct user testing that shows not just where the problems are, but why they exist. A successful user testing process doesn’t just confirm the obvious, it uncovers the mental shortcuts, habits, and expectations your users bring to every session.
That’s where user research comes in. And it starts by choosing the right testing method, not just heat-maps or analytics, but live observation and task-based validation.
Because sometimes, watching a single user fumble through your best-designed page gives you more clarity than a thousand data points.
That’s not failure. That’s your next opportunity.
• • •
Define What You’re Really Trying to Learn
Vague goals lead to vague insights.
One of the biggest reasons a user test fails? The team doesn’t actually know what they’re testing. They show a few screens, ask if people “get it,” and walk away with a bunch of polite feedback and no idea what to fix.
User testing isn’t a vibe check. It’s a way to validate assumptions and uncover friction, fast. But only if you’re specific about what you want to learn.
That means asking better questions of the test participants from the start.
Instead of “is this clear?” ask:
- Can a first-time visitor find pricing within three clicks?
- Can an existing customer log in and update their payment details without confusion?
- Does our new onboarding flow help users activate a feature in under five minutes?
These are sharp, measurable, outcome-tied questions. And they force clarity. On what matters. On who you’re testing with. On what success looks like.
Here’s how we approach it inside Ronins:
- Start with the business goal: “We need to increase qualified leads.”
- Translate it to a user task: “Can users reach the enquiry form from the homepage in under 15 seconds?”
- Then test that task in context, and observe what gets in their way.
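To make that translation concrete, here’s a minimal sketch of how a goal-to-task mapping could be captured as a lightweight test plan. The shape, field names, and example values are illustrative assumptions, not a prescribed format:

```typescript
// Illustrative sketch only: one way to pin a business goal to a testable
// user task. Field names and example values are hypothetical.

interface TestPlan {
  businessGoal: string;      // why we're testing at all
  userTask: string;          // the realistic task we'll observe
  successCriteria: string[]; // observable, measurable outcomes
  participants: string;      // who must be in the session
}

const enquiryFlowTest: TestPlan = {
  businessGoal: "Increase qualified leads",
  userTask: "Starting from the homepage, find and submit the enquiry form",
  successCriteria: [
    "Reaches the enquiry form in under 15 seconds",
    "Completes the form without asking for help",
  ],
  participants: "First-time visitors who match the target audience, not internal staff",
};
```

Writing it down like this forces the same clarity the list above demands: one goal, one task, explicit success criteria.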
When we restructured the navigation for Orbus Software, we weren’t asking if it looked clean. We were testing whether enterprise buyers could quickly find the product suite that matched their role. That insight didn’t just improve UX, it sharpened their value proposition and improved conversion quality.
The same applies whether you’re testing a hero layout or a customer portal’s billing flow. Vague testing gives you general feedback. Focused testing gives you insight you can act on next sprint.
So before you write a script or line up participants, ask yourself one thing:
What do I need to know that only a real user can show me?
• • •
Test the Right Things, the Right Way
Different surfaces. Different stakes. Different methods.
One-size-fits-all doesn’t work in user testing. The right approach depends entirely on what you’re testing, and why.
- A marketing site needs clarity and persuasion.
- A SaaS dashboard needs flow and comprehension.
- A customer portal needs trust, accessibility, and zero dead ends.
That’s why smart testing starts with segmentation, not just of users, but of interfaces.
Websites
Here, a usability test is about clarity, motivation, and conversion. Can users find what they came for? Does the message land? Is the CTA visible at the right moment, not just eventually?
When we launched the new Montreaux Homes site, we tested how prospective buyers searched for developments by location. What looked elegant to us was skippable noise to users in a hurry. The fix? Sharpen search, declutter copy, and move CTAs above the fold. Result: stronger funnel, higher lead quality.
Web Apps
In apps, it’s about confidence, control, and flow. Do users understand what to do next? Can they complete a task without explanation? Are you surfacing what matters at the right time?
For the Catch app, we tested complex features like AI-powered catch reports and session blueprints. But the win came from small insights: anglers didn’t always trust the AI, so we added user verification steps and made the system explain itself better. Trust went up. So did adoption.
Customer Portals
Portals demand frictionless functionality. Can users log in, update details, complete key workflows, and get out, fast? Errors here cost not just conversions, but support time, satisfaction scores, and retention.
Our work with Orbus revealed that different user groups navigated with totally different expectations: decision-makers skimmed, practitioners drilled down. We restructured flows to match. Bounce rates dropped. Time on task improved. And sales teams had a clearer case to pitch from.
Picking the Right Method
- Moderated testing gives you deep insight, ideal for prototypes, critical flows, or complex tasks.
- Unmoderated testing gives you speed and scale, perfect for fast iteration or A/B validation.
- Remote testing reflects real environments, but don’t skip the tech dry-run.
- In-person testing gives you tone, body language, and context, gold when friction is subtle.
Don’t get hung up on the tools; Maze, Lookback, UserTesting, and PlaybookUX are all useful. What matters most is clarity: what are you testing, for whom, and why now?
Because the method doesn’t matter if the goal is muddy. But if the question’s clear? The right approach to conducting user testing practically chooses itself.
• • •
Recruit Like It Matters
Five random users won’t cut it, unless they’re the right five.
Let’s get something clear: bad user testing starts with the wrong users. You can design the perfect script, test the right flows, even record beautiful highlight reels, but if you’re testing with people who’ll never use your product in the real world, your insights are worthless.
It’s not about sample size. It’s about relevance.
Yes, five users really can reveal 80% of the friction. But only if they reflect the behaviours, expectations, and context of your target audience.
Who You Test Changes What You Learn
- Building a B2B portal for finance teams? Don’t test with marketers.
- Targeting first-time buyers? Don’t recruit digital natives with perfect credit.
- Designing for busy parents? Don’t test with interns in a WeWork.
The Catch app is a great example. Some of our early testers were digital-savvy, high-frequency anglers. They flew through the onboarding, but their feedback was useless for understanding friction points facing casual, tech-anxious users. Once we switched to testing with real weekend anglers on older Android devices, the insights were instant: button size, contrast, and flow order all needed work.
Where to Find Real Users
- Your own audience: email lists, CRM segments, social channels
- B2B or niche: Leverage partners, communities, or ask your client-facing teams for intros
- Consumer-facing: Panels like UserTesting, PlaybookUX, or even guerrilla testing via paid social ads
- No budget: Internal proxy users (e.g. new hires, unrelated staff) are better than nothing, but flag the limits
Don’t Skip the Screener
A good screener is like a filter for relevance. It lets you hand-pick testers who actually reflect your personas, not just whoever clicks fastest.
Include:
- Tech confidence levels (to catch edge cases)
- Device and browser (especially for mobile-first or legacy compatibility)
- Experience with similar tools (helps avoid false positives from power users)
And always ask one open-ended question: “Tell us about the last time you did [related task].” That answer tells you way more than a checkbox ever will.
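As a sketch of what that looks like in practice, a screener can live as structured data so every round recruits against the same bar. The question wording and the disqualification rule below are assumptions for illustration only:

```typescript
// Minimal sketch of a screener definition covering the essentials above:
// tech confidence, device, experience with similar tools, and one
// open-ended question. Wording and rules are illustrative.

type ScreenerQuestion =
  | { kind: "choice"; prompt: string; options: string[]; disqualify?: string[] }
  | { kind: "open"; prompt: string }; // open-ended answers reveal real context

const portalScreener: ScreenerQuestion[] = [
  {
    kind: "choice",
    prompt: "How confident are you with new software?",
    options: ["Very", "Somewhat", "Not very"],
  },
  {
    kind: "choice",
    prompt: "Which device would you normally use for this?",
    options: ["Android phone", "iPhone", "Laptop/desktop"],
  },
  {
    kind: "choice",
    prompt: "How often do you use tools like this?",
    options: ["Daily", "Weekly", "Rarely", "Never"],
    disqualify: ["Daily"], // screen out power users to avoid false positives
  },
  { kind: "open", prompt: "Tell us about the last time you did [related task]." },
];
```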
Bonus Tip: Over-recruit
No-show rates are real. People get cold feet, forget, or flake. Always recruit more than you need, and if you’re running remote tests, build in a buffer.
Because when testing time comes, the worst-case scenario isn’t bugs. It’s silence.
• • •
Design Test Scenarios That Reflect Real Life
If the task isn’t real, neither is the feedback.
This is where most tests fall apart. The setup is clean, the participant is qualified, and then they’re handed a task that reads like it came from an internal Jira ticket:
“Navigate to the Solutions page, locate the knowledge hub, and access the tiered pricing structure via the top nav.”
Nobody talks like that. Nobody thinks like that. And your user certainly doesn’t wake up hoping to “navigate to the knowledge hub.”
The point of user testing isn’t to validate whether your interface can be used. It’s to reveal whether it makes sense in the context of the user’s real goal.
Good Testing Starts With Good Scenarios
Here’s what a strong test prompt looks like:
“You’re looking for a platform to help your business automate reporting. You land on this site, what’s the first thing you do?”
That’s realistic. Open-ended. Anchored in intent, not instructions.
Whether you’re testing a landing page or a subscription flow, scenarios should feel like the user wrote them, not your product manager.
Test for Goals, Not Steps
In the real world, users don’t follow your flow. They try to achieve an outcome. Your job is to watch what they do when they try.
Don’t say: “Click the CTA and complete checkout.”
Say: “You’ve found a product you like, try to buy it.”
That small shift turns testing from a checklist into a discovery process. You’ll learn where friction hides, where users hesitate, and where your language or layout isn’t doing the job.
Avoid Leading the Witness
No hints. No jargon. No safety nets.
The moment you tell someone “Use the search bar to find pricing,” you’ve already skipped past the insight: Would they have found pricing on their own? Would they even expect it to be under that label?
Design your tests like you’re setting up an experiment, one that uncovers what your users assume, expect, and miss.
Real World, Real Conditions
Context matters.
- If it’s a mobile site, test on mobile, not just resized desktop screens.
- If it’s used in high-pressure moments (like customer support portals), replicate that mindset.
- If your users are often distracted, test in their real environment, not a calm usability lab.
One reason we’ve had success testing portals and web apps is because we bake that context into every script. If a user’s logging in to submit an urgent form, we don’t ask them to “explore the dashboard.” We ask them to get something done, then watch what slows them down.
Because that’s the gap you’re really trying to close:
What you thought was clear… and what the user actually experiences.
• • •
From Observations to Decisions
Insights are only useful if they trigger action.
Watching five users struggle to complete a task is interesting. Changing the design so the next five fly through it? That’s valuable.
Too many teams treat user testing like a research phase. Something you do, write up, and file away. But the real power of testing shows up in how quickly you move from observation to implementation.
Start With a Fast Debrief
Right after testing, while it’s fresh, gather your team and answer three questions:
- Where did users hesitate?
- What did they expect that wasn’t there?
- What frustrated or surprised them?
You don’t need a 40-page report. You need a list of things to fix, and a rough sense of priority.
Categorise by Impact
Not all issues are equal. Some kill conversion. Others are just polish. Group them into three buckets:
- Critical blockers: things that stop users completing key tasks
- Major friction: issues that slow users down, cause frustration, or create support tickets
- Minor annoyances: cosmetic, optional, or edge cases
If a user can’t find the sign-up button, that’s a blocker. If they find it but hate the label, that’s friction. If they don’t like the font? File it under “later.”
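If you log findings as data, the three buckets translate directly into a sort order so blockers always surface first. A minimal sketch, with hypothetical field names and example entries drawn from the cases above:

```typescript
// Sketch: logging observed issues against the three buckets above and
// sorting so blockers surface first. Names and example data are illustrative.

type Severity = "blocker" | "friction" | "annoyance";

interface Issue {
  observation: string;  // what we saw users do
  severity: Severity;
  affectedTask: string; // which key task it touches
}

const severityRank: Record<Severity, number> = {
  blocker: 0,   // stops users completing key tasks
  friction: 1,  // slows them down or frustrates them
  annoyance: 2, // cosmetic, optional, edge cases
};

function triage(issues: Issue[]): Issue[] {
  // Blockers first, then friction, then polish.
  return [...issues].sort(
    (a, b) => severityRank[a.severity] - severityRank[b.severity]
  );
}

const findings = triage([
  { observation: "Disliked the button label wording", severity: "friction", affectedTask: "Sign-up" },
  { observation: "Could not find the sign-up button", severity: "blocker", affectedTask: "Sign-up" },
  { observation: "Commented on the font choice", severity: "annoyance", affectedTask: "Browsing" },
]);
```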
Show the Team, Not Just the Deck
The fastest way to create change? Show people the footage.
When stakeholders see users struggle, priorities shift. Debates disappear. Everyone aligns faster.
That’s why we often cut short highlight reels: raw clips of real people getting confused, stuck, or saying “I don’t trust this.” It builds empathy. And momentum.
Prioritise What You Can Actually Fix
Not everything needs a redesign. Often, a single label change, a button move, or a tighter heading hierarchy solves the problem. Quick wins first.
Then plan for the deeper issues: workflow changes, new layouts, content rewrites. Log them. Assign owners. And most importantly: track whether the next version performs better.
Create a Feedback Loop
Testing isn’t a one-and-done. The best teams bake it into their rhythm. Observe → Adjust → Re-test.
Every time we’ve tested a product, implemented changes, and looped back with users, the second round always feels smoother, because we’re not designing in a vacuum. We’re designing with real behaviour as our north star.
• • •
Operationalising User Testing
Make it a habit, not a heroic effort.
The biggest mistake? Treating user testing like a one-off milestone. Something you “tick off” before launch, and then forget until complaints start coming in.
The best-performing teams I’ve worked with treat testing like dev treats Git. It’s just part of the workflow.
Build It Into Your Sprint Rhythm
If you’re running Agile, you already have the cadence. Use it.
- Scope testable features or flows during planning
- Recruit and prep while design or dev builds
- Run lean usability tests mid-sprint or just after
- Feed prioritised insights into the next sprint
This isn’t theory, we’ve done it. At Ronins, we often structure micro-tests one sprint ahead, using Figma prototypes or staging environments. That way, feedback lands early, before time and cost compound.
Make Testing Visible in the Process
Add it to your workflow. Literally.
- Every epic should have a validation step
- Every launch should include a test-and-review window
- Every major feature should be tracked with: “How do we know this works for users?”
You wouldn’t ship without QA. So why ship without usability confirmation?
Train the Team to See What Matters
Testing doesn’t need to be the UX team’s private domain. Involve devs, PMs, even marketing leads. Get them to observe a session. Or watch a five-minute clip.
The more people see the struggle, the faster they’ll care about solving it. You shift from opinion-based debates to evidence-led delivery. Culture follows process.
Standardise the Essentials
You don’t need a “research department.” You need a lightweight playbook:
- A basic test script template
- A clear recruitment guide
- A Notion or Confluence space to log findings and recommendations
- A Slack or Teams channel for sharing clips and wins
Keep it frictionless. The easier it is to test, the more often it’ll happen.
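For the test script template, even something this small keeps sessions consistent. A hedged sketch of the shape we mean; fields and wording are illustrative, so adapt them to your own playbook:

```typescript
// Sketch of a reusable test script template, so every session asks the
// same questions the same way. Structure and example values are illustrative.

interface TestScript {
  intro: string;       // set the participant at ease, no leading hints
  scenario: string;    // realistic, goal-based prompt (see the scenarios section)
  tasks: string[];     // outcomes to attempt, not steps to follow
  followUps: string[]; // asked after tasks, never during
}

const exampleScript: TestScript = {
  intro: "We're testing the site, not you. Think aloud as you go.",
  scenario:
    "You're looking for a platform to help your business automate reporting.",
  tasks: [
    "Find out whether this product fits your needs",
    "Work out what it would cost you",
  ],
  followUps: [
    "What did you expect to find that wasn't there?",
    "Where did you hesitate, and why?",
  ],
};
```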
Know When to Dial It Up
You don’t need to test everything, every week. But you do need to test the moments that matter:
- New user flows
- High-impact pages
- Pricing, onboarding, checkout, retention journeys
- Anything that drives revenue or trust
These deserve the spotlight. So give them a cadence: quarterly, monthly, or around major releases.
Because the truth is: user needs evolve. And without testing, your product won’t.
• • •
Final Word: You Are Not the User
You can be brilliant. But you’ll still be wrong sometimes.
Let’s wrap where this started: with the biggest lie in digital.
“We know our users.”
No, you know of them. You know what they’ve clicked, what they’ve said, where they’ve dropped off. But unless you’ve sat quietly and watched them try, fumble, hesitate, misread, overthink, you’re working with guesses, not truths.
And I get it. You’re close to the product. You care deeply. But that’s the problem.
You know too much. You’ve seen the Figma files. You’ve sat in the roadmap meetings. You’ve heard all the justifications.
Your user hasn’t.
They land cold. Scanning. Multitasking. Distracted. And if your site, app or portal doesn’t help them move, clearly, confidently, without unnecessary effort, they bounce. Not because they’re impatient. Because they’re human.
That’s what user testing gives you: not data, but clarity. A direct window into what real people need, expect, and miss.
So if you take one thing from this guide, let it be this:
- Watch someone use what you’ve built.
- Let it humble you.
- Then go fix it.
That’s how websites convert.
That’s how SaaS sticks.
That’s how great products stay great.
And if you want help making that happen? You know where to find me.
Ready to stop guessing?
At Ronins, we help founders and digital leaders build websites, web apps and customer portals that actually work, because they’re shaped by what real users do, not what internal teams assume.
If you want to run smarter user tests, fix the right things, and build web experiences that drive results, let’s talk.
• • •
Top 10 Sources for User Testing Across the Web
- Nielsen Norman Group – Why You Only Need to Test with 5 Users https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
- UserTesting – The Complete Guide to Usability Testing https://www.usertesting.com/resources/ebooks/usability-testing-guide
- Maze – How to Run Website Usability Testing: Benefits & Best Practices https://maze.co/guides/usability-testing/
- Shopify UX Blog – User Testing: What It Is, Why It Matters, and How to Do It https://www.shopify.com/partners/blog/user-testing
- Google UX Playbook for Retail https://www.thinkwithgoogle.com/intl/en-gb/marketing-strategies/app-and-mobile/ux-playbook-retail/
- Baymard Institute – Cart Abandonment Research & UX Benchmarks https://baymard.com/lists/cart-abandonment-rate
- Forrester Research – The ROI of UX Design (Cited across industry; source report: “The Six Steps For Justifying Better UX”)
- Hotjar – What is Usability Testing? Methods, Examples, and Best Practices https://www.hotjar.com/usability-testing/
- Bitovi – 10 Best Practices for Usability Testing in Agile Teams https://www.bitovi.com/blog/10-best-practices-for-usability-testing
- W3C/WAI – Involving Users in Web Projects for Better, More Inclusive Design https://www.w3.org/WAI/test-evaluate/involving-users/