  1. UsabilityHub

    New design, new dashboard, new features

    It’s been a busy start to the year in the UsabilityHub office, and today we’re releasing a collection of updates we’ve been working on for you.

    The most obvious thing you’ll notice is a tidy new design. The new sidebar helps you jump between tests faster, filter tests, and view your sets, all in one place.

    We’ve also made some improvements to our handling of test variations and sets, making it easier to group your tests, recruit your own testers, and control who sees which tests.

    Variations now allow you to choose meaningful names for variation sets (e.g. “Homepage options”) and for each variation within them (e.g. “Long form content”, “Big header image”, “Emphasise social proof”), instead of simple numbered variations.


    You can now duplicate tests, making it easier to re-run tests on new designs. This is particularly handy for UX consultants who often run the same tests for their clients. Just duplicate an old test and switch out the image.

     


    As always, we’d love to hear your feedback, so get in touch if you have any suggestions for us: support@usabilityhub.com

  2. UsabilityHub

    Getting the most out of first click testing

    We all want our users to be able to easily find what they’re looking for on our websites and apps. Ensuring they start down the right path from the outset gives them the best chance of success. Click tests are a fast, simple and effective way to measure where your users place that first step; however, to get the most out of a click test, it’s essential to both prepare the tasks and interpret the results correctly. With that in mind, let’s go over some best practices for running your own click test.


    Writing tasks for click tests

    The preparation of the tasks given to testers is crucial. To avoid pitfalls that might skew the test results, follow these tips:

    • Ensure tasks use the language your users would use and understand, not language that’s overly technical or specific to your company.
    • Tasks should provide test participants with a problem to solve that will simulate a natural interaction with your website. They should resemble scenarios rather than simply words to look for.
    • Scenarios should be short and contextualise why the task is being performed. For example: “You are holding a dinner party this Saturday and you want to find out what you need to prepare vegetable lasagna”, rather than: “Find a vegetable lasagna recipe on this site”.
    • Tasks should be action-based, specific, measurable, realistic and relevant.
    • To be most effective, tasks should represent the most common user goals and the most important conversion goals for your business.
    • Avoid giving away the answer in the question. For example, asking users where they would register for an account on an interface that has a button labelled “register” may cause participants to simply focus on similarly phrased items.

    When preparing a series of tasks using test sets, please follow these additional tips:

    • Start with a simple task to build confidence.
    • Make the tasks independent of each other. Present the tasks in random order to avoid bias.
    • Prepare a maximum of 10 tasks per test set, but ensure that they cover all the areas you want to investigate.

    A summary and 10 example tasks are available here in PDF format.


    Interpreting click test results

    Click test results consist of heat maps and click maps showing where users clicked, along with response times. Below are some simple recommendations to help you make the most of this data.

    Prepare

    Have a minimum of 20 quality responses, as it’s difficult to draw conclusions with fewer. You can order more responses after the fact if you feel a greater sample size is required. See what the users’ top choices are and, for each one, write down the number of clicks and the average response time. Report the number of clicks for each choice as a percentage of the total quality responses (this will allow you to compare results across tests with different totals). Check whether each choice is a suitable path to task success or not; if some users choose the search icon (magnifying glass) or the search box, report it as neutral (neither a suitable nor an unsuitable path).
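
    If you prefer to tally the numbers with a script, here is a minimal sketch in Python (not anything UsabilityHub provides) of summarising click data into per-choice percentages and average response times. The field names and example responses are purely illustrative.

    from collections import defaultdict

    # Each quality response: the choice the tester clicked and their response time in seconds.
    # These records are made-up examples, standing in for your exported results.
    responses = [
        {"choice": "Recipes nav link", "time": 3.2},
        {"choice": "Recipes nav link", "time": 2.7},
        {"choice": "Search box", "time": 5.1},
        {"choice": "Blog nav link", "time": 4.4},
    ]

    clicks = defaultdict(list)
    for r in responses:
        clicks[r["choice"]].append(r["time"])

    total = len(responses)
    for choice, times in sorted(clicks.items(), key=lambda kv: -len(kv[1])):
        share = 100 * len(times) / total    # percentage of total quality responses
        avg_time = sum(times) / len(times)  # average response time for this choice
        print(f"{choice}: {len(times)} clicks ({share:.0f}%), avg {avg_time:.1f}s")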

    Analyze

    A good result is when the percentage of unsuitable paths is below 20% of the total. Of course, further improvements can be sought, both in terms of reducing the percentage of unsuitable paths and cutting down the response times of suitable paths. The percentage of users choosing a search icon or search box shouldn’t be more than roughly 35% when testing the desktop version of a website. If the percentage is higher, it could mean that it is difficult for users to identify suitable options on the interface. When testing on mobile phones, where there are generally fewer interface options, it’s acceptable if this percentage is exceeded.
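
    Once each choice has been classified, the rules of thumb above can be checked mechanically. This is a rough sketch in the same spirit as the summary script earlier; the classifications and percentages are illustrative assumptions, not real results, and the 20%/35% cut-offs are simply the guidelines from this post.

    # Hypothetical per-choice summary: percentage of quality responses and a
    # manual classification of whether the choice is a suitable path.
    choice_summary = {
        "Recipes nav link": (55, "suitable"),
        "Search box": (25, "neutral"),
        "Blog nav link": (20, "unsuitable"),
    }

    unsuitable = sum(pct for pct, label in choice_summary.values() if label == "unsuitable")
    search = sum(pct for pct, label in choice_summary.values() if label == "neutral")

    print(f"Unsuitable paths: {unsuitable}% -> " + ("good result" if unsuitable < 20 else "consider iterating"))
    print(f"Search fallback: {search}% -> " + ("fine" if search <= 35 else "options may be hard to find on desktop"))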

    Iterate

    If the 20% target stated above is not reached, you could prepare a new mockup or wireframe, modify it in accordance with the test results, and run a new click test. An open card sorting test can also provide helpful input to incorporate before testing again.

    Further examples of first click tests with analysis and recommendations can be downloaded here in PDF format.

  3. UsabilityHub

    Redirect testers after test set

    You can now provide a redirect link where we’ll send your testers after completing a test set.

    The simplest way to use this is to direct all testers to the same page on your website, where you can thank them and give them a discount code or any other reward.

    For example:

    http://www.your-website.com/thankyou.html

    You can also use redirect links to track your testers, or send them customised thank-you pages. To do this, you’ll need to send each tester a different link to the test set, with a ref parameter to identify them.

    For example, invite your testers to:

    https://usabilityhub.com/do/s/a1324c?ref=tom
    https://usabilityhub.com/do/s/a1324c?ref=kate
    https://usabilityhub.com/do/s/a1324c?ref=raymond

    Then set up a redirect link that uses this value by placing “{{ref}}” in the redirect link wherever you want the value to be used:

    http://www.your-website.com/thankyou.html?username={{ref}}
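
    If you’re generating invite links for many testers, a short script keeps things consistent. Here is a minimal sketch in Python; UsabilityHub performs the {{ref}} substitution for you when it redirects, so the local replace below is only there to preview the thank-you URL each tester would land on. The tester names match the examples above.

    from urllib.parse import urlencode

    TEST_SET_URL = "https://usabilityhub.com/do/s/a1324c"
    REDIRECT_TEMPLATE = "http://www.your-website.com/thankyou.html?username={{ref}}"

    for ref in ["tom", "kate", "raymond"]:
        invite = f"{TEST_SET_URL}?{urlencode({'ref': ref})}"  # link you send to the tester
        redirect = REDIRECT_TEMPLATE.replace("{{ref}}", ref)  # where they end up afterwards
        print(invite, "->", redirect)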

    The exact same redirect behaviour is available on all individual tests, as well as test sets.

    Need any help setting up your redirects? Just drop us a line at support@usabilityhub.com.

  4. UsabilityHub

    Five-Second Tips: Good Test Instructions #2 — Be clear, sufficient and appropriate

    Good test instructions will strike the proper balance of brevity and detail, reduce the likelihood of bias, and adequately “set the table” for the participant to provide useful feedback.

    The researcher or test planner is solely responsible for making sure that the test instructions are:

    Clear: Each sentence should represent a single concept or idea. Words should be chosen carefully and tested repeatedly, so that they are likely to be understood the same way by all who read them.

    Concise: Instructions should not come across as lengthy. For the remote unmoderated tests you can run at usabilityhub.com, instructions should be limited to only 1-2 very short sentences.

    Sufficient:  Instructions should establish a realistic expectation about what is going to be presented, and what is expected of the participant.

    Appropriate for the test format: A previous post noted that the test format is important for ensuring that the data is appropriate and useful. Proper instructions will indicate the format in such a way that participants will understand what types of data they will be asked to provide.

    The goal of the memory dump test is to determine which element(s) stand out in a design. For this type of test, a simple statement about the process that is to occur is all that is required:

     “You will have five seconds to view the image. Afterwards, you’ll be asked a few short questions.”

    “After viewing a design for five seconds, be prepared to tell us what you recall about it.”

    However, this approach will not work for attitudinal tests, which contain opinion-centered questions that require the participant to recall, consider and comment on the design as a whole entity. For this test format, setting the correct expectation in the instructions means putting the participant in the proper frame of mind for delivering opinion-based responses:

     “You will have five seconds to view the image. Afterwards, you’ll be asked a few short questions about your general reaction to the design.”

    “You’ll see a screen for a few seconds – pay attention to the general aesthetic and visual appeal of the design.”

    Instructions for target identification tests are a little trickier, and they reiterate the critical point that it’s all about the research goal(s) and focusing on the one thing per test that you want to learn. Because the focus is on the participant’s ability to comment on one or more specific elements, it’s sometimes necessary to reference the target in the instructions, so that the participant has enough time to view and consider it.

    For example, if the goal is to learn about a website’s navigation bar, it is probably better to reference the bar in the instructions, so that the participant will seek out and focus attention on the bar for the full five seconds.

    But, be careful — if the goal is to know whether a specific target is easily spotted in a design, it’s better to not reference it in the instructions.

  5. UsabilityHub

    Five-Second Tips: Use an image that fits the screen

    Imagine being asked to give your opinion on a book after reading only the first sentence, or to rate a movie based solely on the opening credits, or to describe Van Gogh’s “Starry Night” after only a glimpse at the painting’s top left corner. In effect, this is what you ask test participants to do when you use an image that requires scrolling (horizontal or vertical).

    In any situation involving images, a person can’t comment meaningfully on your design if (s)he can’t see it in its entirety. The more you require the participant to scroll an image, the greater the likelihood of getting non-responses to your test questions.

    While image scrolling has a negative impact in all test formats, the greatest effect is in target identification tests and in mixed format tests that include target identification questions. In most instances (unless the test instructions indicate otherwise), participants are inclined to internalize specific elements of a design first. The process of scrolling an image takes focus away from the image and places it on the actions required to scroll.

    Researchers always have to be thoughtful about how the image is presented within the test. When testing in person, it’s a simple matter of knowing which monitor will display the test image and customizing the image to fit the screen resolution. When using an unmoderated testing tool (like this one), you’ll need to use an image that is unlikely to require scrolling at the most common screen resolutions (an internet search for “browser display statistics” should provide a number of current resources to reference as you consider the technologies your test takers might be using).

    In order to make it fit, you’ll most likely need to do some image resizing. Cropping the image will preserve the detail necessary to provide a realistic view of the design, but provides only a segment of the image at the cost of losing the context of the image as a whole. Scaling will retain the entirety of the image, but could degrade the image detail so much that the image becomes too distracting to elicit meaningful feedback. (If one or the other does not suffice, combining the techniques may provide an acceptable balance of preserving context and presenting discernible detail.)
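
    If you’re preparing test images in bulk, the scaling step can be scripted. Here is a minimal sketch using the Pillow library; the target size and filenames are assumptions, so adjust them for the resolutions your participants are actually likely to use.

    from PIL import Image

    MAX_SIZE = (1280, 700)  # leaves headroom for browser chrome on a 1366x768 display

    img = Image.open("homepage-mockup.png")
    img.thumbnail(MAX_SIZE)  # scales down in place, preserving aspect ratio
    img.save("homepage-mockup-fitted.png")
    print("Fitted size:", img.size)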

    As always, the main concern is to make sure that the participant is put in the best position possible to provide meaningful feedback about the design.

  6. UsabilityHub

    Five-Second Tips: Good Test Instructions #1 – Beware of the context statement

    Many of the tests I’ve seen at UsabilityHub ask participants to put themselves in a specific context for answering questions:

    “Imagine that you . . .”
    “Pretend that you . . .”

    Context statements like these can add realistic “color” to the instructions and ground participants in a situation of relative familiarity. However, they can also confuse or introduce negative bias in your participants before the test has even begun.

    Consider the instructions for a test about the design of a poster advertisement:

    “Imagine that you are standing on the street and a bus drives past. You see the following advertisement on the back.”

    Most (if not all) people can relate to seeing advertisements in real-world contexts, so including a context statement certainly does no harm, and may actually assist in putting the respondent in a better frame of mind for answering test questions.

    However, using a context statement can be counter-productive when it represents an unnecessary or unrealistic context:

    An unnecessary context adds no value to, or practical assistance with, the participant’s ability to answer test questions. Statements like “Imagine that you are looking at the following web page while browsing” or “Imagine that you are searching for information about mobile plans and you come across this site” will likely do little to help produce useful answers about the design you’re testing, especially when you’re looking for responses about a specific aspect of your design.

    An unrealistic context tries to place a participant into a situation to which (s)he cannot relate. Consider this example: “Imagine that you work for a bank and you’re researching software vendors.”  Unless you’re sure that your test sample consists solely of bank employees and/or software purchasers, this type of context statement can put your test at risk for indifference (“I’ll just skip this test”) or hostility (“How can you expect me to answer questions about this?”).

    Additionally, context statements rarely work well as instructions in and of themselves. Supplementing a context statement with an indication of test format can help alleviate any potential negative impact of an unrealistic context. “Imagine that you’re researching software vendors for your bank” becomes more accessible (and less easily dismissed) when combined with “You’ll be asked for your opinion about the website’s color scheme.”

  7. UsabilityHub

    Five-Second Tips: When NOT to run a five second test

    Many researchers unfortunately seem to view the five-second test as a quick-and-dirty substitute for other types of research. As a planning strategy, the rule of thumb should be:

    The five-second test is the wrong choice for anything requiring more than five seconds’ worth of exposure in order to provide a meaningful answer.

    Some common misuses of the five-second test include:

    • Testing for usability issues. This should go without saying, but you’d be surprised how many tests ask users to identify interaction flaws or predict errors. You can’t test for usability simply by looking — participants need to interact with an actual working system or prototype in a formal usability test.
    • Requiring participants to read text in order to answer questions. Reading anything more than a tagline or slogan involves higher-level cognitive processes that are better served by techniques that are not subject to a time limit. If the user has to read copy to get the data you need, you’re using the wrong test.
    • Comparing designs. Tests that contain multiple visual elements simply put too much demand on the participant to allow for a meaningful comparison within five seconds. (Fortunately, since the publication of my book, UsabilityHub has added Comparison Test functionality to more effectively and efficiently perform comparison tests, although this tool is limited in that it won’t tell you why a participant prefers one design over another.)
    • Predicting future behavior. Just as you likely wouldn’t make a purchasing decision based on a five-second sales pitch, questions like “Would you hire this company?” or “Would you sign up for this service?” are unfair to the participant. Remember that five-second testing is designed to gauge what the viewer can perceive and recall within a very short amount of time – participants will not be in a position to answer questions like these.

    Five-second tests offer convenience and cost-effectiveness, but those factors should never supersede the need to get meaningful data to inform your design decisions. Before testing, always be sure you’re using the right tool for the job.

  8. UsabilityHub

    Five-Second Tips: Focus — and use the appropriate test format

    There are a number of different formats for five-second tests, each of which has its own set of guidelines, limitations and opportunities for customization. Knowing what you want to achieve will help determine which format to use, keep the test focused, and assist in the crafting of the remaining test components.

    Memory Dump tests help confirm whether specific design elements stand out in support of the business goals. Basically, they consist of a series of “what do you remember” questions:

    • “What do you remember most about the page you saw?”
    • “What else do you remember?”

    With no targeted questions guiding the responses (as in other test types), responses can be expected to follow a pattern of specific-to-general. However, it is important that the test instructions set the proper expectation before the test image is shown – i.e., that participants should remember as much as they can about what they see, and that they will be asked to document the things they remember most vividly.

    Target Identification tests focus on specific “targets” in a design. Questions in this type of test directly challenge a respondent’s recall of one or more targets:

    • “Where was the phone number for contacting the call center located?”
    • “Between what hours is phone support available?”

    The researcher can learn not just whether a target is noticeable, but also whether specific aspects of that target are memorable. The chances of getting useful results using this format are increased when the test is focused on a singular target.

    Attitudinal tests focus on what people like, believe, or perceive about a design. They are very much in line with traditional survey methodology, in that the data will reveal matters of opinion, such as aesthetic appeal and/or emotional response. Typical questions are:

    • “What word(s) would you use to describe . . . ?”
    • “Rate the overall visual appeal of the design on a scale of 1-10 . . .”

    As with other types of surveys, care must be taken with respect to formation of instructions and questions, so as to minimize bias and the reporting of false data.

    A Mixed Format test uses components of more than one of the other test formats. Results yielded in a mixed test are likely to be – well, mixed, so these tests need to be designed very carefully. If the memory of a design is sharpest when the first question is asked, it makes sense to ask a target identification question first, followed by other types of questions. Useful data can be expected for the first 1-2 questions, followed by some likely drop-off as more questions are added. However, in keeping with good research practice, better results will be obtained by creating separate tests for each individual question.

  9. UsabilityHub

    Five-Second Tips: An Introduction

    Spend a little time taking some random tests on this site, and you’ll see that people are using the Five-Second Test to evaluate all sorts of designs in all sorts of ways. This has given design and UX pros a valuable addition to their testing toolkit, but has also introduced a lot of room for error in how tests are designed.

    The earliest instances of the test as a UX technique can be traced back about 15 years, to the collaborative efforts of Jared Spool, Christine Perfetti, and Tom Tullis. Their original guidelines limited the test’s use to a fairly narrow set of use cases, and always within a controlled and moderated environment.

    I became a UsabilityHub customer in 2008 and quickly became an enthusiastic advocate of the site. Creating and distributing Five-Second, Click and Navigation tests became my preferred way of settling inter-office squabbles (I called them ‘bar bets’) before putting designs in front of users for more formal testing – and accumulating Karma points made me feel as though I was returning the favor for all of the feedback I’d received for my own work.

    However, as I started taking the tests of others, it quickly became clear that something was amiss. In test after test, I found myself reluctantly answering questions with “I don’t know” responses – which led me to suspect that my own tests were probably not as good as they could be. Looking for answers, I found precious little information available about getting the most out of the technique using online, unmoderated sites and tools.

    Convinced there was a better way, I set out to examine the method more closely and devise a set of guidelines for testers, which resulted in my book, The UX Five Second Rules. In upcoming posts, I will offer tips and tricks for creating great online five-second tests, discuss what types of tests are (and are not) conducive to testing in this way, and present specific strategies for designing tests that provide useful and usable data.

    Posts will come every few weeks, so check back often. The Five-Second Test offers a wealth of UX data, if you know what and how to ask.