Settle design debates with data

Make confident decisions
by testing designs with
real users.

Get started now

Paul Doncaster

A graduate of Bentley University’s HFID master’s program, Paul Doncaster has spent his career working on highly complex UX projects in the domains of course technology, legal, and intellectual property.

He has written and spoken on many UX topics, including designing for emotional response, online readability, designing for tablet users in the legal domain — and of course, the Five-Second Test.

Twitter: @UX5SecondRules

The UX Five Second Rules is available at the Elsevier Publishing Online Store and from other online retailers.

  1. UsabilityHub

    Five-Second Tips: Good Test Instructions #2 — Be clear, sufficient and appropriate

    Good test instructions will strike the proper balance of brevity and detail, reduce the likelihood of bias, and adequately “set the table” for the participant to provide useful feedback.

    The researcher or test planner is solely responsible for making sure that the test instructions are:

    Clear: Each sentence should represent a single concept or idea. Words should be chosen carefully and tested repeatedly, so that they are likely to be understood the same way by all who read them.

Concise: Instructions should never feel lengthy. For the remote unmoderated tests you can run on a site like this one, instructions should be limited to only 1-2 very short sentences.

Sufficient: Instructions should establish a realistic expectation about what is going to be presented, and what is expected of the participant.

    Appropriate for the test format: A previous post noted that the test format is important for ensuring that the data is appropriate and useful. Proper instructions will indicate the format in such a way that participants will understand what types of data they will be asked to provide.

    The goal of the memory dump test is to determine which element(s) stand out in a design. For this type of test, a simple statement about the process that is to occur is all that is required:

     “You will have five seconds to view the image. Afterwards, you’ll be asked a few short questions.”

    “After viewing a design for five seconds, be prepared to tell us what you recall about it.”

    However, this approach will not work for attitudinal tests, which contain opinion-centered questions that require the participant to recall, consider and comment on the design as a whole entity. For this test format, setting the correct expectation in the instructions means putting the participant in the proper frame of mind for delivering opinion-based responses:

     “You will have five seconds to view the image. Afterwards, you’ll be asked a few short questions about your general reaction to the design.”

 “You’ll see a screen for a few seconds – pay attention to the general aesthetic and visual appeal of the design.”

Instructions for target identification tests are a little trickier, and they reiterate the critical point that it’s all about the research goal(s) and focusing on the one thing per test that you want to learn. Because the focus is on the participant’s ability to comment on one or more specific elements, it’s sometimes necessary to reference the target in the instructions, so that the participant has enough time to view and consider it.

    For example, if the goal is to learn about a website’s navigation bar, it is probably better to reference the bar in the instructions, so that the participant will seek out and focus attention on the bar for the full five seconds.

    But, be careful — if the goal is to know whether a specific target is easily spotted in a design, it’s better to not reference it in the instructions.

  2. UsabilityHub

    Five-Second Tips: Use an image that fits the screen

    Imagine being asked to give your opinion on a book after reading only the first sentence, or to rate a movie based solely on the opening credits, or to describe Van Gogh’s “Starry Night” after only a glimpse at the painting’s top left corner. In effect, this is what you ask test participants to do when you use an image that requires scrolling (horizontal or vertical).

    In any situation involving images, a person can’t comment meaningfully on your design if (s)he can’t see it in its entirety. The more you require the participant to scroll an image, the greater the likelihood of getting non-responses to your test questions.

While image scrolling has a negative impact in all test formats, the greatest effect is in target identification tests and in mixed format tests that include target identification questions. In most instances (unless the test instructions indicate otherwise), participants are inclined to internalize specific elements of a design first. The process of scrolling an image takes focus away from the image and places it on the actions required to scroll.

Researchers always have to be thoughtful about how the image is presented within the test. When testing in person, it’s a simple matter of knowing which monitor will display the test image and customizing the image to fit the screen resolution. When using an unmoderated testing tool (like this one), you’ll need to use an image that is unlikely to require scrolling at the most common screen resolutions (an internet search on “browser display statistics” should provide a number of current resources to reference as you consider the technologies of those who might be taking your tests).

To make the image fit, you’ll most likely need to do some resizing. Cropping will preserve the detail necessary to provide a realistic view of the design, but presents only a segment of the image, at the cost of losing the context of the image as a whole. Scaling will retain the entirety of the image, but could degrade the detail so much that the image becomes too indistinct to elicit meaningful feedback. (If one technique alone does not suffice, combining the two may provide an acceptable balance of preserving context and presenting discernible detail.)
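    The scaling step is just aspect-ratio arithmetic. As a minimal sketch (the 1366×768 target viewport below is an assumed example for illustration, not a recommendation — check current browser display statistics for your audience):

    ```python
    def fit_within(width, height, max_width, max_height):
        """Scale (width, height) down to fit inside (max_width, max_height),
        preserving aspect ratio. Never upscale an image that already fits."""
        scale = min(max_width / width, max_height / height, 1.0)
        return round(width * scale), round(height * scale)

    # A hypothetical 2400x1600 mockup scaled to fit an assumed 1366x768 viewport:
    print(fit_within(2400, 1600, 1366, 768))  # -> (1152, 768)

    # An image that already fits is left untouched:
    print(fit_within(800, 600, 1366, 768))    # -> (800, 600)
    ```

    Most image editors (and libraries such as Pillow) apply this same constrain-to-bounds logic when you ask them to resize while keeping proportions.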

As always, the main concern is to make sure that the participant is put in the best position possible to provide meaningful feedback about the design.

  3. UsabilityHub

    Five-Second Tips: Good Test Instructions #1 – Beware of the context statement

Many of the tests I’ve seen at UsabilityHub ask participants to put themselves in a specific context for answering questions:

    “Imagine that you . . .”
    “Pretend that you . . .”

    Context statements like these can add realistic “color” to the instructions and ground participants in a situation of relative familiarity. However, they can also confuse or introduce negative bias in your participants before the test has even begun.

    Consider the instructions for a test about the design of a poster advertisement:

    “Imagine that you are standing on the street and a bus drives past. You see the following advertisement on the back.”

    Most (if not all) people can relate to seeing advertisements in real-world contexts, so including a context statement certainly does no harm, and may actually assist in putting the respondent in a better frame of mind for answering test questions.

    However, using a context statement can be counter-productive when it represents an unnecessary or unrealistic context:

    An unnecessary context adds no value to, or practical assistance with, the participant’s ability to answer test questions. Statements like “Imagine that you are looking at the following web page while browsing” or “Imagine that you are searching for information about mobile plans and you come across this site” will likely do little to help produce useful answers about the design you’re testing, especially when you’re looking for responses about a specific aspect of your design.

    An unrealistic context tries to place a participant into a situation to which (s)he cannot relate. Consider this example: “Imagine that you work for a bank and you’re researching software vendors.”  Unless you’re sure that your test sample consists solely of bank employees and/or software purchasers, this type of context statement can put your test at risk for indifference (“I’ll just skip this test”) or hostility (“How can you expect me to answer questions about this?”).

    Additionally, context statements rarely work well as instructions in and of themselves. Supplementing a context statement with an indication of test format can help alleviate any potential negative impact of an unrealistic context. “Imagine that you’re researching software vendors for your bank” becomes more accessible (and less easily dismissed) when combined with “You’ll be asked for your opinion about the website’s color scheme.”

  4. UsabilityHub

    Five-Second Tips: When NOT to run a five second test

    Many researchers unfortunately seem to view the five-second test as a quick-and-dirty substitute for other types of research. As a planning strategy, the rule of thumb should be:

    The five-second test is the wrong choice for anything requiring more than five seconds’ worth of exposure in order to provide a meaningful answer.

    Some common misuses of the five-second test include:

    • Testing for usability issues. This should go without saying, but you’d be surprised how many tests ask users to identify interaction flaws or predict errors. You can’t test for usability simply by looking — participants need to interact with an actual working system or prototype in a formal usability test.
    • Requiring reading text to answer questions. Reading anything more than a tagline or slogan involves higher level cognitive processes that are better served by techniques that are not subject to a time limit. If the user has to read copy to get the data you need, you’re using the wrong test.
    • Comparing designs. Tests that contain multiple visual elements simply put too much demand on the participant to allow for a meaningful comparison within five seconds. (Fortunately, since the publication of my book, UsabilityHub has added Comparison Test functionality to more effectively and efficiently perform comparison tests, although this tool is limited in that it won’t tell you why a participant prefers one design over another.)
• Predicting future behavior. Just as you likely wouldn’t make a purchasing decision based on a five-second sales pitch, questions like “Would you hire this company?” or “Would you sign up for this service?” are unfair to the participant. Remember that five-second testing is designed to gauge what the viewer can perceive and recall within a very short amount of time – participants will not be in a position to answer questions like these.

Five-second tests offer convenience and cost-effectiveness, but those factors should never supersede the need to get meaningful data to inform your design decisions. Before testing, always be sure you’re using the right tool for the right job.

  5. UsabilityHub

    Five-Second Tips: Focus — and use the appropriate test format

    There are a number of different formats for five-second tests, each of which has its own set of guidelines, limitations and opportunities for customization. Knowing what you want to achieve will help determine which format to use, keep the test focused, and assist in the crafting of the remaining test components.

Memory Dump tests help confirm whether specific design elements stand out in support of the business goals. Basically, they consist of a series of “what do you remember” questions:

    • “What do you remember most about the page you saw?”
    • “What else do you remember?”

With no targeted questions guiding the responses (as in other test types), responses can be expected to follow a pattern of specific-to-general. However, it is important that the test instructions set the proper expectation before the test image is shown – i.e., that participants should remember as much as they can about what they see, and that they will be asked to document the things they remember most vividly.

    Target Identification tests focus on specific “targets” in a design. Questions in this type of test directly challenge a respondent’s recall of one or more targets:

    • “Where was the phone number for contacting the call center located?”
    • “Between what hours is phone support available?”

    The researcher can learn not just whether a target is noticeable, but also whether specific aspects of that target are memorable. The chances of getting useful results using this format are increased when the test is focused on a singular target.

Attitudinal tests focus on what people like, believe, or perceive about a design. They are very much in line with traditional survey methodology, in that the data will reveal matters of opinion, such as aesthetic appeal and/or emotional response. Typical questions are:

    • “What word(s) would you use to describe . . . ?”
    • “Rate the overall visual appeal of the design on a scale of 1-10 . . .”

    As with other types of surveys, care must be taken with respect to formation of instructions and questions, so as to minimize bias and the reporting of false data.

A Mixed Format test uses components of more than one of the other test formats. Results yielded in a mixed test are likely to be – well, mixed, so these tests need to be constructed very carefully. If the memory of a design is sharpest when the first question is asked, it makes sense to ask a target identification question first, followed by other types of questions. Useful data can be expected for the first 1-2 questions, with some likely drop-off as more questions are added. However, in keeping with good research practice, better results will be obtained by creating separate tests for each individual question.

  6. UsabilityHub

    Five-Second Tips: An Introduction

    Spend a little time taking some random tests on this site, and you’ll see that people are using the Five-Second Test to evaluate all sorts of designs in all sorts of ways. This has given design and UX pros a valuable addition to their testing toolkit, but has also introduced a lot of room for error in how tests are designed.

The earliest instances of the test as a UX technique can be traced back about 15 years, to the collaborative efforts of Jared Spool, Christine Perfetti, and Tom Tullis. Their original guidelines limited the test’s use to a fairly narrow set of use cases, and always within a controlled and moderated environment.

    I became a UsabilityHub customer in 2008 and quickly became an enthusiastic advocate of the site. Creating and distributing Five-Second, Click and Navigation tests became my preferred way of settling inter-office squabbles (I called them ‘bar bets’) before putting designs in front of users for more formal testing – and accumulating Karma points made me feel as though I was returning the favor for all of the feedback I’d received for my own work.

    However, as I started taking the tests of others, it quickly became clear that something was amiss. In test after test, I found myself reluctantly answering questions with “I don’t know” responses – which led me to suspect that my own tests were probably not as good as they could be. Looking for answers, I found precious little information available about getting the most out of the technique using online, unmoderated sites and tools.

    Convinced there was a better way, I set out to examine the method more closely and devise a set of guidelines for testers, which resulted in my book, The UX Five Second Rules. In upcoming posts, I will offer tips and tricks for creating great online five-second tests, discuss what types of tests are (and are not) conducive to testing in this way, and present specific strategies for designing tests that provide useful and usable data.

    Posts will come every few weeks, so check back often. The Five-Second Test offers a wealth of UX data, if you know what and how to ask.