  1. UsabilityHub

    Top 10 Usability Testing Influencers to Follow on Twitter


    Usability testing is always in flux: innovative test tools, new methodologies, new devices, new prototyping platforms, even whole new generations of users mean that the usability testing landscape is constantly shifting. That’s what makes it fun, right? But it can also be a challenge staying abreast of such a fluid field of expertise.

    That’s where these 10 usability testing influencers come in handy. By following these experts on social media, testers will find industry updates, news, tips, techniques and a whole lot of usability testing debate at their fingertips. From Jared Spool’s sage advice on everything usability and UX related, through Susan Weinschenk’s considered research, to Steve Krug’s news and event updates, these 10 usability testing influencers cover all the bases for test professionals keen to stay at the cutting edge of usability.

    1. MeasuringU @MeasuringU

    Jeff Sauro and the rest of the team behind MeasuringU curate a fine collection of usability and UX tweets, harvested both from the MeasuringU blog and other sources. You’ll find plenty of in-depth research, links to real-world case studies and customer experience insights.

    2. Jared Spool @jmspool

    Everyone involved in UX and usability has heard of Jared Spool. Keynote speaker, author, founder at UIE and hilarious social media sharer, Jared runs a must-follow Twitter feed that condenses his 15 years of experience into 140 characters daily.

    3. Patrick Neeman @usabilitycounts

    The face of the blog Usability Counts, ‘digital prophet’ Patrick Neeman is full of good advice and salty opinions, both of which he shares liberally on Twitter. Patrick discusses everything from usability testing with prototypes to the job market for UXers – it’s no surprise he has over 100,000 followers. Plus, if you ever wanted to see ‘social media explained in donuts’, Patrick is your man.

    4. Usability Hub @UsabilityHub

    A light-hearted yet savvy look at usability from the team here at Usability Hub. They share articles that go to the heart of user testing methodology, as well as commentary from industry insiders, expert tips on usability testing and useful resources for usability testers of all stripes.

    5. Nielsen Norman Group @NNgroup

    The granddaddies of usability and UX, the NN Group team are a go-to reference for original research, guidance and thought on usability. You’ll get daily updates on Jakob Nielsen’s and Don Norman’s words of wisdom, compelling videos and event shares. Everything a user tester needs.

    6. Steve Krug @skrug

    The author of ‘Don’t Make me Think’ runs a great Twitter feed replete with UI design tips, thoughts on user behavior and industry updates. Steve also shares links from his blog Advanced Common Sense, which covers web usability.

    7. Justin Mifsud @justinmifsud

    Usability expert Justin is passionate about one thing above all – web usability. His blog, Usability Geek, focuses on bridging the gap between theory and practice in usability, and his Twitter feed is a mash-up of original content, musings on testing methodologies, interface geek stuff and UX articles.

    8. Susan Weinschenk @thebrainlady

    Susan, or ‘the brain lady’, is well known for her fascinating insights into usability as a behavioral scientist. As the author of a slew of UX/usability books and Founder of The Team W, Susan is well versed in combining brain science with product usability.

    9. Tema Frank @temafrank

    We love Tema Frank’s feed! Sharing actual usability errors and customer service clangers, Tema brings humor to serious usability. She’s an expert in understanding customers and actually founded the world’s first company to do “omnichannel” customer experience testing, Web Mystery Shoppers Inc (way back in 2001).

    10. Luke Wroblewski @lukew

    Granted, Luke Wroblewski is technically a product guy (Product Director at Google, no less); but that doesn’t stop him sharing awesome pearls of wisdom on UI usability. In addition to his revealing YouTube videos and blog posts, Luke runs a Twitter feed full of real life examples of the usability impact of design decisions.


    This was a guest post from Justinmind.

    Justinmind is a prototyping tool that allows you to prototype web and mobile apps so you can visualize and test your software solution before writing a single line of code.

  2. UsabilityHub

    New design, new dashboard, new features

    It’s been a busy start to the year in the UsabilityHub office, and today we’re releasing a collection of updates we’ve been working on for you.

    The most obvious thing you’ll notice is a tidy new design. The new sidebar helps you jump between tests faster, filter tests, and view your sets, all in one place.

    We’ve also made some improvements to our handling of test variations and sets, making it easier to group your tests, recruit your own testers, and control who sees which tests.

    Variations now allow you to choose meaningful names for variation sets (e.g. “Homepage options”) and for each variation within a set (e.g. “Long form content”, “Big header image”, “Emphasise social proof”), instead of simple numbered variations.


    You can now duplicate tests, making it easier to re-run tests on new designs. This is particularly handy for UX consultants who often run the same tests for their clients. Just duplicate an old test and switch out the image.



    As always, we’d love to hear your feedback, so get in touch if you have any suggestions for us: support@usabilityhub.com

  3. UsabilityHub

    Getting the most out of first click testing

    We all want our users to be able to easily find what they’re looking for on our websites and apps. Ensuring they start down the right path from the outset gives them the best chance of success. Click tests are a fast, simple and effective way to measure where your users place that first step; however, to get the most out of a click test, it’s fundamental to both prepare the tasks and interpret the results correctly. So with that, let’s go over some best practices when running your own click test.


    Writing tasks for click tests

    The preparation of the tasks given to testers is crucial. To avoid pitfalls that might skew the test results, follow these tips:

    • Ensure tasks employ the language your users would use and understand, not language that’s overly technical or specific to your company.
    • Tasks should provide test participants with a problem to solve that will simulate a natural interaction with your website. They should resemble scenarios rather than simply words to look for.
    • Scenarios should be short and contextualise why the task is being performed. For example: “You are holding a dinner party this Saturday and you want to find out what you need to prepare vegetable lasagna”, rather than: “Find a vegetable lasagna recipe on this site”.
    • Tasks should be: action based, specific, measurable, realistic and relevant.
    • To be most effective, tasks should represent the most common user goals and the most important conversion goals for your business.
    • Avoid giving away the answer in the question. For example, asking users where they would register for an account on an interface that has a button labelled “register” may cause participants to simply focus on similarly phrased items.

    When preparing a series of tasks using test sets, please follow these additional tips:

    • Start with a simple task to build confidence.
    • Make the tasks independent of each other. Present the tasks in random order to avoid bias.
    • Prepare a maximum of 10 tasks per test set but ensure that they cover all the areas that you want to investigate.

    A summary and 10 example tasks are available here in PDF format.


    Interpreting click test results

    Click test results consist of heat maps and click maps showing where users clicked, along with response times. Below are some simple recommendations to help you make the most of this data.

    Prepare

    Have a minimum of 20 quality responses, as it’s difficult to draw conclusions with fewer. You can order more responses after the fact if you feel a greater sample size is required. Then:

    • See what the users’ top choices are, and for each one write down the number of clicks and the average response time.
    • Report the number of clicks for each choice as a percentage of the total quality responses (this will allow you to compare results with tests that have a different total).
    • Check whether each choice is a suitable path to task success or not; if some users choose the search icon (magnifying glass) or the search box, report it as neutral (neither a suitable nor an unsuitable path).

    Analyze

    A good result is when the percentage of unsuitable paths is below 20% of the total. Of course, further improvements can be sought, both in reducing the percentage of unsuitable paths and in cutting down the response times of suitable paths. The percentage of users choosing a search icon or search box shouldn’t be more than roughly 35% when testing the desktop version of a website. If the percentage is higher, it could mean that it is difficult for users to identify suitable options on the interface. When testing on mobile phones, where there are generally fewer interface options, it’s acceptable if this percentage is exceeded.
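
    To make these percentages concrete, here’s a minimal sketch in Python of the tallying described above. The click regions, their classifications and the sample counts are hypothetical placeholders; substitute the choices you observe on your own click map.

    from collections import Counter

    # Each response is the region of the design a tester clicked first.
    # These labels and their classifications are hypothetical examples.
    clicks = ["nav_recipes"] * 14 + ["search_box"] * 4 + ["footer_blog"] * 2

    classification = {
        "nav_recipes": "suitable",
        "search_box": "neutral",      # search icon/box counts as neutral
        "footer_blog": "unsuitable",
    }

    total = len(clicks)  # aim for at least 20 quality responses
    counts = Counter(classification[c] for c in clicks)

    for category in ("suitable", "neutral", "unsuitable"):
        pct = 100 * counts[category] / total
        print(f"{category}: {counts[category]}/{total} ({pct:.0f}%)")

    # Rules of thumb from above, for a desktop test:
    print("unsuitable below 20%:", 100 * counts["unsuitable"] / total < 20)
    print("neutral (search) within ~35%:", 100 * counts["neutral"] / total <= 35)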

    Iterate

    If the 20% target stated above is not reached, you could prepare a new mockup or wireframe, modify it in line with the test results, and run a new click test. An open card sorting test can also provide helpful input to incorporate before testing again.

    Further examples of first click tests with analysis and recommendations can be downloaded here in PDF format.

  4. UsabilityHub

    Redirect testers after test set

    You can now provide a redirect link where we’ll send your testers after completing a test set.

    The simple way to use this is to direct all testers to the same page on your website where you can thank them and give them a discount code, or any other reward.

    For example:

    http://www.your-website.com/thankyou.html

    You can also use redirect links to track your testers, or send them customised thank-you pages. To do this, you’ll need to send each tester a different link to take the set of tests with a ref parameter to identify them.

    For example, invite your testers to:

    https://usabilityhub.com/do/s/a1324c?ref=tom
    https://usabilityhub.com/do/s/a1324c?ref=kate
    https://usabilityhub.com/do/s/a1324c?ref=raymond

    And set up a redirect link which uses this value by placing “{{ref}}” in the redirect link wherever you want the value to be used:

    http://www.your-website.com/thankyou.html?username={{ref}}

    The exact same redirect behaviour is available on all individual tests, as well as test sets.
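
    By way of illustration, here’s a minimal sketch in Python that builds the per-tester invite links and previews the thank-you URLs the redirect will resolve to; the set ID, domain and tester names are the example values from above, and the {{ref}} substitution itself is performed by UsabilityHub on completion.

    from urllib.parse import quote

    SET_URL = "https://usabilityhub.com/do/s/a1324c"  # example test set from above
    REDIRECT = "http://www.your-website.com/thankyou.html?username={{ref}}"

    for ref in ["tom", "kate", "raymond"]:
        invite = f"{SET_URL}?ref={quote(ref)}"
        # UsabilityHub replaces {{ref}} with the tester's ref value on redirect
        landing = REDIRECT.replace("{{ref}}", quote(ref))
        print(f"{invite} -> {landing}")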

    Need any help setting up your redirects? Just drop us a line at support@usabilityhub.com.

  5. UsabilityHub

    Five-Second Tips: Good Test Instructions #2 — Be clear, sufficient and appropriate

    Good test instructions will strike the proper balance of brevity and detail, reduce the likelihood of bias, and adequately “set the table” for the participant to provide useful feedback.

    The researcher or test planner is solely responsible for making sure that the test instructions are:

    Clear: Each sentence should represent a single concept or idea. Words should be chosen carefully and tested repeatedly, so that they are likely to be understood the same way by all who read them.

    Concise: Instructions should not come across as lengthy. For the remote unmoderated tests you can do at usabilityhub.com, instructions should be limited to only 1-2 very short sentences.

    Sufficient:  Instructions should establish a realistic expectation about what is going to be presented, and what is expected of the participant.

    Appropriate for the test format: A previous post noted that the test format is important for ensuring that the data is appropriate and useful. Proper instructions will indicate the format in such a way that participants will understand what types of data they will be asked to provide.

    The goal of the memory dump test is to determine which element(s) stand out in a design. For this type of test, a simple statement about the process that is to occur is all that is required:

     “You will have five seconds to view the image. Afterwards, you’ll be asked a few short questions.”

    “After viewing a design for five seconds, be prepared to tell us what you recall about it.”

    However, this approach will not work for attitudinal tests, which contain opinion-centered questions that require the participant to recall, consider and comment on the design as a whole entity. For this test format, setting the correct expectation in the instructions means putting the participant in the proper frame of mind for delivering opinion-based responses:

     “You will have five seconds to view the image. Afterwards, you’ll be asked a few short questions about your general reaction to the design.”

     “You’ll see a screen for a few seconds – pay attention to the general aesthetic and visual appeal of the design.”

    Instructions for target identification tests are a little trickier, and reiterate the critical point that it’s all about the research goal(s) and focusing on the one thing per test that you want to learn. Because the focus is on the participant’s ability to comment on one or more specific elements, it’s sometimes necessary to reference the target in the instructions, so that the participant has enough time to view and consider it.

    For example, if the goal is to learn about a website’s navigation bar, it is probably better to reference the bar in the instructions, so that the participant will seek out and focus attention on the bar for the full five seconds.

    But, be careful — if the goal is to know whether a specific target is easily spotted in a design, it’s better to not reference it in the instructions.

  6. UsabilityHub

    Five-Second Tips: Use an image that fits the screen

    Imagine being asked to give your opinion on a book after reading only the first sentence, or to rate a movie based solely on the opening credits, or to describe Van Gogh’s “Starry Night” after only a glimpse at the painting’s top left corner. In effect, this is what you ask test participants to do when you use an image that requires scrolling (horizontal or vertical).

    In any situation involving images, a person can’t comment meaningfully on your design if (s)he can’t see it in its entirety. The more you require the participant to scroll an image, the greater the likelihood of getting non-responses to your test questions.

    While image scrolling has a negative impact in all test formats, the greatest effect is in target identification tests and in mixed format tests that include target identification questions. In most instances (unless the test instructions indicate otherwise), participants are inclined to internalize specific elements of a design first. The process of scrolling an image takes focus away from the image and places it on the actions required to scroll.

    Researchers always have to be thoughtful about how the image is presented within the test. When testing in person, it’s a simple matter of knowing what monitor will display the test image and customizing the image to fit the screen resolution. When using an unmoderated testing tool (like this one), you’ll need to use an image that is unlikely to require scrolling at the most common screen resolutions (an internet search for “browser display statistics” should turn up a number of current resources to consult as you consider the devices your testers might be using).

    In order to make it fit, you’ll most likely need to do some image resizing. Cropping the image will preserve the detail necessary to provide a realistic view of the design, but shows only a segment of it, at the cost of losing the context of the image as a whole. Scaling will retain the entirety of the image, but could degrade the detail so much that it becomes difficult to elicit meaningful feedback. (If one or the other does not suffice, combining the techniques may provide an acceptable balance of preserving context and presenting discernible detail.)
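
    As a rough illustration of the two techniques, here’s a minimal sketch using the Pillow imaging library; the 1366x768 target resolution and the file names are assumptions for the example.

    from PIL import Image  # pip install Pillow

    TARGET = (1366, 768)            # assumed common screen resolution
    img = Image.open("design.png")  # hypothetical mockup file

    # Scaling: keeps the whole design, trading away some detail.
    scaled = img.copy()
    scaled.thumbnail(TARGET)        # resizes in place, preserving aspect ratio
    scaled.save("design-scaled.png")

    # Cropping: keeps full detail for one region, losing surrounding context.
    box = (0, 0, min(img.width, TARGET[0]), min(img.height, TARGET[1]))
    img.crop(box).save("design-cropped.png")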

    As always, the main concern is to make sure that the participant is put in the best position possible to provide meaningful feedback about the design.

  7. UsabilityHub

    Five-Second Tips: Good Test Instructions #1 – Beware of the context statement

    Many of the tests I’ve seen at UsabilityHub ask participants to put themselves in a specific context for answering questions:

    “Imagine that you . . .”
    “Pretend that you . . .”

    Context statements like these can add realistic “color” to the instructions and ground participants in a situation of relative familiarity. However, they can also confuse or introduce negative bias in your participants before the test has even begun.

    Consider the instructions for a test about the design of a poster advertisement:

    “Imagine that you are standing on the street and a bus drives past. You see the following advertisement on the back.”

    Most (if not all) people can relate to seeing advertisements in real-world contexts, so including a context statement certainly does no harm, and may actually assist in putting the respondent in a better frame of mind for answering test questions.

    However, using a context statement can be counter-productive when it represents an unnecessary or unrealistic context:

    An unnecessary context adds no value to, or practical assistance with, the participant’s ability to answer test questions. Statements like “Imagine that you are looking at the following web page while browsing” or “Imagine that you are searching for information about mobile plans and you come across this site” will likely do little to help produce useful answers about the design you’re testing, especially when you’re looking for responses about a specific aspect of your design.

    An unrealistic context tries to place a participant into a situation to which (s)he cannot relate. Consider this example: “Imagine that you work for a bank and you’re researching software vendors.”  Unless you’re sure that your test sample consists solely of bank employees and/or software purchasers, this type of context statement can put your test at risk for indifference (“I’ll just skip this test”) or hostility (“How can you expect me to answer questions about this?”).

    Additionally, context statements rarely work well as instructions in and of themselves. Supplementing a context statement with an indication of test format can help alleviate any potential negative impact of an unrealistic context. “Imagine that you’re researching software vendors for your bank” becomes more accessible (and less easily dismissed) when combined with “You’ll be asked for your opinion about the website’s color scheme.”

  8. UsabilityHub

    Five-Second Tips: When NOT to run a five second test

    Many researchers unfortunately seem to view the five-second test as a quick-and-dirty substitute for other types of research. As a planning strategy, the rule of thumb should be:

    The five-second test is the wrong choice for anything requiring more than five seconds’ worth of exposure in order to provide a meaningful answer.

    Some common misuses of the five-second test include:

    • Testing for usability issues. This should go without saying, but you’d be surprised how many tests ask users to identify interaction flaws or predict errors. You can’t test for usability simply by looking — participants need to interact with an actual working system or prototype in a formal usability test.
    • Requiring participants to read text to answer questions. Reading anything more than a tagline or slogan involves higher-level cognitive processes that are better served by techniques that are not subject to a time limit. If the user has to read copy to get the data you need, you’re using the wrong test.
    • Comparing designs. Tests that contain multiple visual elements simply put too much demand on the participant to allow for a meaningful comparison within five seconds. (Fortunately, since the publication of my book, UsabilityHub has added Comparison Test functionality to more effectively and efficiently perform comparison tests, although this tool is limited in that it won’t tell you why a participant prefers one design over another.)
    • Predicting future behavior. Just as you likely wouldn’t make a purchasing decision based on a five-second sales pitch, questions like “Would you hire this company?” or “Would you sign up for this service?” are unfair to the participant. Remember that five-second testing is designed to gauge what the viewer can perceive and recall within a very short amount of time – participants will not be in a position to answer questions like these.

    Five-second tests offer convenience and cost-effectiveness, but those factors should never supersede the need to get meaningful data to inform your design decisions. Before testing, always be sure you’re using the right tool for the right job.