We all want our users to be able to easily find what they’re looking for on our websites and apps. Ensuring they start down the right path from the outset gives them the best chance of success. Click tests are a fast, simple and effective way to measure where your users place that first step; however, to get the most out of a click test, it’s fundamental both to prepare the tasks and to interpret the results correctly. So with that, let’s go over some best practices for running your own click test.
Writing tasks for click tests
Preparing the tasks you give to testers is crucial. To avoid pitfalls that might skew the test results, follow these tips:
- Ensure tasks use language your users would understand, not terms that are overly technical or specific to your company.
- Tasks should provide test participants with a problem to solve that will simulate a natural interaction with your website. They should resemble scenarios rather than simply words to look for.
- Scenarios should be short and contextualise why the task is being performed. For example: “You are holding a dinner party this Saturday and you want to find out what you need to prepare vegetable lasagna”, rather than: “Find a vegetable lasagna recipe on this site”.
- Tasks should be action-based, specific, measurable, realistic and relevant.
- To be most effective, tasks should represent the most common user goals and the most important conversion goals for your business.
- Avoid giving away the answer in the question. For example, asking users where they would register for an account on an interface that has a button labelled “register” may cause participants to simply focus on similarly phrased items.
When preparing a series of tasks using test sets, please follow these additional tips:
- Start with a simple task to build confidence.
- Make the tasks independent from each other. Present the tasks in random order to avoid bias.
- Prepare a maximum of 10 tasks per test set but ensure that they cover all the areas that you want to investigate.
A summary and 10 example tasks are available here in PDF format.
Interpreting click test results
Click test results consist of heat and click maps showing where users clicked, along with response times. Below are some simple recommendations to help you make the most of this data.
- Collect a minimum of 20 quality responses, as it’s difficult to draw conclusions with fewer. You can order more responses after the fact if you feel a larger sample size is required.
- Identify the users’ top choices and, for each one, write down the number of clicks and the average response time.
- Report the number of clicks for each choice as a percentage of the total quality responses (this will allow you to compare results with tests that have a different total).
- Check whether each choice is a suitable path to task success or not; if some users choose the search icon (magnifying glass) or the search box, report it as neutral (neither a suitable nor an unsuitable path).
A good result is when the percentage of unsuitable paths is below 20% of the total. Of course, further improvements can be sought, both in terms of reducing the percentage of unsuitable paths and cutting down the response times of suitable paths. The percentage of users choosing a search icon or search box shouldn’t be more than roughly 35% when testing the desktop version of a website. If the percentage is higher, it could mean that users find it difficult to identify suitable options on the interface. When testing on mobile phones, where there are generally fewer interface options, it’s acceptable if this percentage is exceeded.
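To make the arithmetic concrete, here is a minimal sketch of how the tallying and thresholds above could be computed. The response data and category names are hypothetical, invented for illustration:

```python
from collections import Counter

# Hypothetical first-click responses: each entry is the category of the
# element a tester clicked first — "suitable" (a correct path),
# "neutral" (search icon or search box), or "unsuitable" (anything else).
responses = (
    ["suitable"] * 15 +
    ["neutral"] * 3 +
    ["unsuitable"] * 2
)

counts = Counter(responses)
total = len(responses)  # should be at least 20 quality responses

# Report each category as a percentage of the total quality responses,
# so results are comparable across tests with different totals.
percentages = {cat: 100 * n / total for cat, n in counts.items()}

# Thresholds from the article: unsuitable paths below 20% of the total,
# and (on desktop) no more than roughly 35% choosing search.
unsuitable_ok = percentages.get("unsuitable", 0) < 20
search_ok = percentages.get("neutral", 0) <= 35

print(percentages)               # {'suitable': 75.0, 'neutral': 15.0, 'unsuitable': 10.0}
print(unsuitable_ok, search_ok)  # True True
```

In this made-up sample, 10% of testers took an unsuitable path and 15% went to search, so both targets are met.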
If the 20% target stated above is not reached, you could prepare a new mockup or wireframe, modify it in accordance with the test results, and run a new click test. An open card sorting test can also provide helpful input before testing again.
Further examples of first click tests with analysis and recommendations can be downloaded here in PDF format.