usability testing – UX Mastery
https://uxmastery.com – The online learning community for human-centred designers

Getting Started with Popular Guerrilla UX Research Methods
https://uxmastery.com/popular-guerrilla-ux-research-methods/ – Fri, 03 Nov 2017

Amanda’s last article covered how to “guerrilla-ise” traditional UX research methods to fit into a short timeline, and when it makes the most sense to use them.

Now, she’s back to walk us through some of the most popular guerrilla methods—live intercepts, remote and unmoderated studies, and low fidelity prototypes. She covers pros, cons and tips to help you make the most of your guerrilla research sessions.

The post Getting Started with Popular Guerrilla UX Research Methods appeared first on UX Mastery.

In my last article, I talked about how you can “guerrilla-ise” traditional UX research methods to fit into a short timeline, and when it makes the most sense to use them.

This time, I’ll walk you through some of the most popular guerrilla UX research methods: live intercepts, remote and unmoderated studies, and low fidelity prototypes.

I’ll cover pros, cons and tips to help you make the most of your guerrilla research sessions.

Conducting research in public

The go-to guerrilla technique is often to skip the formal participant recruitment process and ask members of the public to take part in your research sessions. Live intercepts are often used as shortened versions of usability tests or interviews.

Getting started

Setting up is easy—all you need is a public space where you can start asking people for a few minutes to give you feedback. A cafe or shopping centre usually works well. 

This is a great way to get lots of feedback quickly, but approaching people takes a little courage and some getting used to.

I find it helps to put up a sign that publicises the incentive you’re offering and, if possible, shows identifying information like a company logo. This small bit of credibility makes people feel more comfortable.

Make sure you have a script prepared for approaching people. You don’t need to stick to it every time, but do mention where you work or who your client is, what your goal is, the time commitment, and the compensation on offer.

Try something like:

Hi, I’m [firstname] and I’m working for [x company] today. We’re trying to get some feedback on [our new feature]. If you have about [x minutes] to chat, I can offer you a [gift card/incentive].

Be friendly, but not pushy. Give people the chance to opt out or come back later. Pro tip: I always bring a piece of paper with time slots printed on it so that people can sign up for a later time.

The location you choose has a major impact on how many people you talk to and the quality of your results. Here are some tips for picking a good spot:

  • Pick a public place where there will be a high volume of people and make sure you get permission to be there. Aim to be visible but not in the way. A table next to the entrance works well.
  • Try to pick a place that you think your target audience will be. For instance, if you’re interested in talking to lawyers, pick a coffee shop near a big law office.
  • Look for stable wi-fi and plentiful wall plugs.
  • Regardless of where you choose, stake out the location ahead of the research session so you can plan accordingly.

A few limitations

There’s no doubt that intercepting people in public is a great way to get a high volume of participants quickly. Talking to the general population, however, is best reserved for situations when you have a product or service that doesn’t require specific knowledge, contexts, or outlooks.

If you’re doing a usability test, you could argue that whatever you build should be easy enough for anyone to figure out, so you can still get feedback. Just be aware that you may miss out on valuable insights that are specific to your target audience.

Let’s say you’re working on a piece of tax software. The risk is that you end up talking to someone whose spouse handles all the finances, or miss a labelling error that only tax accountants would know to report.

To avoid this, I always recommend asking a few identifying questions at the beginning of each session so you can analyse results appropriately. You don’t always need to screen people out, but you can choose how to prioritise their feedback in the analysis stage.

Context also matters. If you usability test a rideshare app on a laptop in a coffee shop, but most people will use the app on their phones on a crowded street, you may get misleading feedback.

Watch for bias when user-testing in a cafe. Photo via Unsplash

You should also be aware that you may run into bias by intercepting all your participants from one location. Think about it: the people visiting an upscale coffee shop in a business centre on a weekday are likely to be pretty different from the people stopping at a gas station for coffee in the middle of the night. Again, try to choose your intercept location based on your target audience and consider going to a few locations to get variety.

Keep in mind that only a certain type of person is going to respond positively and take the time to give you feedback. Most people will be caught off guard, and may be suspicious or unsure what to expect. You won’t have much time to give participants context or build rapport, so be especially conscious of making them feel comfortable.

Some final tips:

  • Set expectations clearly. Tell participants right away how long you’ll talk to them and how you’ll compensate them for their time. Be clear about what questions you’ll ask or tasks you’ll present and what they need to do.
  • Pay extra attention to participant comfort. Give them the option to leave at any time and put extra emphasis on the fact that you’re there to gather feedback, not judge them or their abilities. Try to record the sessions rather than taking notes the whole time, so you can make eye contact and read body language.
  • Remember standard rules of research: don’t lead participants, get comfortable with silence, and ask questions that participants can easily answer. Be extra careful asking about sensitive topics such as health or money. In fact, I don’t recommend intercepting people if you need to talk about very sensitive topics.

Remote and unmoderated studies

Taking the researcher out of the session is another proven way to reduce the time and cost of research. This is achieved through running remote and unmoderated research sessions.

Getting started

Traditional research assumes that a researcher is directly conducting sessions with participants, or moderating the sessions. Unmoderated research just means that the participants respond without the researcher present. Common methods include diary studies, surveys or trying out predetermined tasks in a prototype.

The core benefit is that people can participate simultaneously so you can collect many responses in a short amount of time. It’s often easier to recruit too, because there are no geographic limitations and participants don’t have to be available at a specific time.

You plan unmoderated research much like you do moderated research: set your research goal, select an appropriate method to answer your open questions, determine participants, and craft your research plan. The difference in unmoderated sessions is that you need to be especially careful about setting expectations and providing clear directions, because you won’t be there during the session. Trial runs are especially important in unmoderated sessions to catch unclear wording and confusing tasks.

You can also conduct remote research, which means that you’re not physically in the same place as your participant. You can use video conferencing tools to see each other’s faces and share screens. Remote sessions are planned in a similar vein to in-person sessions, but you can often reach a broader set of people when there are no geographic limits.

A few limitations

Any time you conduct sessions remotely or choose unmoderated methods, you run the risk of missing out on observing context or reading body language. With unmoderated sessions, you can’t dig deeper when someone has an interesting piece of feedback. That’s still better than not collecting data, but you should take it into consideration when you’re analysing your data and drawing conclusions.

Low fidelity prototypes

If you want to invest less effort upfront, and iterate quickly, low fidelity prototypes are a good option.

In this scenario, you forgo fully functional prototypes or live sites/applications and instead use digitally linked wireframes or static images.

You can even use paper prototypes, where you sketch a screen on paper and simulate the interaction by switching out which piece of paper is shown.

Getting started

Low fidelity prototypes, especially paper, are less time consuming to make than digital prototypes, which makes them inexpensive to produce and easy to iterate. This sort of rapid cycling is especially useful when you’re in the very early conceptual stages and trying to sort out gut reactions.

You run a usability test with a low fidelity prototype just like you would run any other usability test. You come up with tasks and scenarios that cover your key questions, recruit participants, and observe as people perform those tasks.

A few limitations

For this guerrilla technique, you have to be especially careful to ask participants to think aloud and not lead or bias them, because there can be a huge gap in their expectations and yours. For paper prototypes in particular, a moderator must be present to simulate the interactions. I recommend in-person sessions for any sort of test with low fidelity prototypes.

Keep in mind that you can get false feedback from low-fidelity wireframe testing. It can be difficult for participants to imagine what would really happen, and they may get stuck on particular elements or give falsely positive feedback based on what they imagine. Take this into consideration when analysing the results, and be sure that you conduct multiple rounds of iterative research and include high-fidelity prototypes or full beta tests in your long-term research plan.

Wrapping up

When in doubt about the results of any guerrilla research test, I recommend running another study to see if you get the same results.

You can execute the exact same test plan, or even try to answer the same question with a complementary method. If you arrive at similar conclusions, you can feel more confident; if not, you’ll know that you need to keep digging. When you’re researching guerrilla style, you can always find more time to head back to the jungle for more sessions.
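If both rounds were task-based usability tests, one rough way to sanity-check whether two success rates genuinely differ (rather than just wobble at small sample sizes) is a two-proportion z-test. This is only a sketch with made-up numbers – guerrilla samples are usually too small for firm statistical conclusions:

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two observed task success rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Round 1: 6 of 8 participants completed the task; round 2: 5 of 8.
z = two_proportion_z(6, 8, 5, 8)
print(round(z, 2))  # 0.54 – well under 1.96, so no evidence the rounds differ
```

With |z| below roughly 1.96 the difference isn’t significant at the 95% level, which at these sample sizes mostly tells you not to over-interpret a small change between rounds.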

Take a look at my previous article for tips on reducing scope, and the best times to use guerrilla methods. Happy researching!


Going Guerrilla: How to Fit UX Research into Any Timeframe
https://uxmastery.com/guerrilla-ux-research/ – Thu, 19 Oct 2017
As more and more companies realise the value of UX research, “guerrilla” methods have become a popular way to squeeze research into limited budgets and short timelines. Those of us working in agile sprints often have even less dedicated time for research.

When I say guerrilla research, I don’t mean go bananas or conduct jungle warfare research. Guerrilla research is really just a way to say that you’ve taken a regular UX research method and altered it to reduce time and cost.

To do so, you often end up reducing scope and/or rigour. The key to successful guerrilla research is to strike the right balance to hit time and budget goals, but still be rigorous enough to gather valuable feedback.

Read on for a framework for reducing any research method and an overview of the best time to use guerrilla tactics.

If you’re looking for practical advice on using guerrilla research methods, take a look at my second article: Getting Started with Popular Guerrilla UX Research Methods

Crafting your guerrilla plan

You can “guerrilla-ise” any UX research method, and there’s almost never one single correct way to do so. That said, qualitative techniques like usability tests and interviews lend themselves especially well to guerrilla-isation.

The easiest way I’ve found to plan guerrilla research is to start by determining how you’d do the research if you had your desired time and budget. Then work backwards to find the elements you can adjust to make it work for the situation. The first place I look to cut is the scope of the research question.

Let’s say your team is working on a new healthcare application and wants to assess the usability of the entire onboarding process. That’s an excellent goal, but pretty broad. Perhaps you could focus your study just on the first few steps of the signup process, but not the follow-up tutorial, or vice versa.

Once you’ve narrowed down your key research goals, you can start looking at what sorts of methods will answer your questions. The process for choosing a research method is the same, regardless of whether you’re trying to go guerrilla or not. For a great summary of choosing a method, take a look at Christian Rohrer’s excellent summary on NNG’s blog or this UX planet article.

Besides narrowing the scope of your research goal, think about the details that make up a study. This includes questions such as:

  • What do you need to build or demonstrate?
  • How many sessions or participants do you need?
  • How will you recruit them?
  • What’s the context of the studies?

Then you can take a look at all those elements, identify where your biggest time and money costs are, and prioritise elements to shift.

Reducing scope

Let’s say, for example, that you determine the ideal way to test the onboarding flow of your new app is to conduct 10 one-hour usability sessions of the fully functional prototype. The tests will take place in a lab and you’ll have a participant-recruitment firm find participants that represent your main persona.

There are many ways you could shift to reduce time and costs in this example.

You could:

  • Run test sessions remotely instead of in a lab
  • Reduce the number of sessions overall
  • Run unmoderated studies
  • Build a simpler wireframe or paper prototype
  • Recruit participants on social media
  • Intercept people in a public location
  • Or a combination of these methods

To decide what to alter, look at what will have the biggest impact on time, budget, and validity of your results.

For example, if working with a recruiting firm will be time consuming and expensive, you’ll want to look for alternative ways to recruit. Intercepting people in public is what many of us envision when we think of guerrilla research. You could do that, or you could also find participants on social media or live-intercept them from a site or web app.

You may even decide to combine multiple guerrilla-ising techniques, such as conducting fewer sessions and doing so remotely, or showing a simple prototype to people who you intercept.

Just remember, you don’t want to reduce time and effort so much that you bias your results. For instance, if you’re doing shorter sessions or recruiting informally, you may want to keep the same overall number of sessions so you have a reasonable sample size.
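On the question of what counts as a reasonable sample size, the classic problem-discovery model from Nielsen and Landauer gives a rough feel. It assumes every usability problem has the same chance p of showing up in any one session (p ≈ 0.31 is the figure often quoted for simple products – an assumption, not a guarantee):

```python
def share_of_problems_found(n, p=0.31):
    """Expected proportion of usability problems surfaced by n sessions,
    assuming each session independently reveals any given problem
    with probability p."""
    return 1 - (1 - p) ** n

for n in (3, 5, 8):
    print(n, round(share_of_problems_found(n), 2))  # 3 -> 0.67, 5 -> 0.84, 8 -> 0.95
```

Five sessions surfacing roughly 84% of problems is where the well-known “test with five users” heuristic comes from; cutting your session count in half costs more insight than it might seem.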

Best uses for guerrilla research

So, when is the best scenario to use guerrilla tactics in your research?

  • You have a general consumer-facing product which requires no previous experience or specialty knowledge OR you can easily recruit your target participants
  • You want to gather general first-impressions and see if people understand your product’s value
  • You want to see if people can perform very specific tasks without prior knowledge
  • You can get some value out of the sessions and the alternative is no research at all

And when should you avoid guerrilla methods?

  • When you’ll be researching sensitive topics such as health, money, sex, or relationships
  • When you need participants to have very specific domain knowledge
  • When the context in which someone will use your product will greatly impact their usage and you can’t talk to people in context
  • When you have the time or budget to do more rigorous research!

Guerrilla research is a great way to fit investigation into any timeframe or budget. One of its real beauties is that you can conduct multiple, iterative rounds of research to ensure you’re building the right things and doing so well.

If you have the luxury of conducting more rigorous research, take advantage, but know that guerrilla research is always a better option than no research at all.

Read the next article on getting started with common guerrilla techniques.

How to Turn UX Research Into Results
https://uxmastery.com/how-to-turn-ux-research-into-results/ – Wed, 31 May 2017
We’ve all known researchers who “throw their results over the fence” and hope their recommendations will get implemented, with little result. Talk about futility! Luckily, with a little preparation, it’s a straightforward process to turn your research insights into real results.

To move from your research findings to product changes, you should set yourself two main goals.

First, to effectively communicate your findings to help your audience process them and focus on next steps.

Second, to follow through by proactively working with stakeholders to decide which issues will be addressed and by whom, injecting yourself into the design process whenever possible. This follow-through is critical to your success.

Let’s look at an end-to-end process for embracing these two main goals.

Effectively communicating your findings

Finding focus

When you have important study results, it’s exciting to share the results with your team and stakeholders. Most likely, you’ll be presenting a lot of information, which means it could take them a while to process it and figure out how to proceed. If your audience gets lost in details, there’s a high risk they’ll tune out.

The more you can help them focus and stay engaged, the more likely you are to get results. You might even consider having a designer or product owner work with you on the presentation to help ensure your results are presented effectively – especially if your associates were involved in the research process.

Engaging with your colleagues and stakeholders

You should plan to present your results in person – whether it’s a casual or formal setting – rather than simply writing up a report and sending it around. This way, your co-workers are more likely to absorb and address your findings.

You could present formally to your company’s leadership team if the research will inform a key business decision. Or gather around a computer with your agile teammates to share results that inform specific design iterations. Either way, if you’re presenting – especially if you allow for questions and discussion – you’re engaging with your audience. Your points are getting across and design decisions will be informed.

Why presentations matter

Here are a few ways your presentation can help your team focus on what to do with the findings:

  • Prioritise your findings (Critical, High, Medium, Low). This helps your audience focus on what’s most important and plan what should be done first, second and so on. An issue that causes someone to fail at an important task, for example, would be rated critical. On the other hand, a cosmetic or spelling issue would be rated low. Take both the severity and frequency of each issue into consideration when rating it, and remember to define your rating scale – Usability.gov has a good example. Other options are to use a three-question process diagram, a UX integration matrix (great for agile), or the simple but effective MoSCoW method.
  • Develop empathy by sharing stories. We love to hear stories, and admire those among us who can tell the best ones. In the sterile, fact-filled workplace, stories can inspire, illuminate and help us empathise with those we’re designing for. Share the journeys your participants experienced, the challenges they need to overcome. Use a sprinkling of drama to illustrate the stakes involved; understanding the implications will help moderate the conversations and support UX decisions moving forward.
  • Illustrate consequences and benefits. Your leadership team will be interested if they know they will lose money, customers, or both if they don’t address certain design issues. Be as concrete as you can, using numbers from analytics and online studies when possible to make points. For example, you might be able to use analytics to show users getting to a key page, and then dropping off. This is even more effective if you can show via an online study that one version of a button, for example, is effective all the time, whereas the other one is not understood.
  • Provide design recommendations. Try to strike a balance between too vague and too prescriptive. You want your recommendations to be specific and offer guidance about how an interaction should be designed, without actually designing it. For example, you could say “consider changing the link label to match users’ expectations” or “consider making the next step in the process more obvious from this screen.” These are specific enough to give direction and serve as a jumping off point for designers.
  • Suggest next steps. It can help stakeholders to see this in writing, especially if they’re not used to working with a UX team. For example:
    • Meet to review and prioritise the findings.
    • Schedule the work to be done.
    • Assign the work to designers.

Presentations are an important first step, but your job as a researcher doesn’t end there. Consider your presentation an introduction to the issues that were found, and a jumping-off point for informing design plans.

The proactive follow through

You’ve communicated the issues. Now it’s time to dig in and get results.

Getting your priorities straight

Start by scheduling a discussion with your product manager – and possibly a representative each from the development and design teams – to prioritise the results, and put them on the product roadmap. It can be useful to take your user research findings – especially from a larger study – and group them together into themes, or projects.

Next, rate the projects on a grid with two axes. For example:

  • how much of a problem it is for customers could display vertically; and
  • how much effort it would be to design or redesign it (small, medium and large) could display horizontally.

Placing cards or sticky notes that represent the projects along these axes helps you see which work would yield the most value.

Then compare this mapping to what’s currently on the product roadmap and determine where your latest projects fit into the overall plans. Consider that it often makes more sense to fix what’s broken in the existing product – especially if there are big problems – than to work on building new features. Conducting this and additional planning efforts together will ensure everyone is on the same page.
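The impact/effort grid can be mimicked in a few lines for a quick first pass – the projects and 1–5 scores below are made up for illustration, and the real value of the exercise is the discussion around the wall, not the arithmetic:

```python
# Each project: (name, customer impact 1-5, design/build effort 1-5)
projects = [
    ("Fix checkout error message", 5, 1),
    ("Redesign onboarding tutorial", 4, 4),
    ("Update footer links", 1, 1),
]

# High-impact, low-effort work floats to the top.
ranked = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)
for name, impact, effort in ranked:
    print(f"{name}: impact {impact}, effort {effort}")
```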

Working with your design team

Once it’s time for design work, participate in workshops and other design activities to represent the product’s users and ensure their needs are understood. In addition to contributing to the activities at hand, your role is to keep users’ goals and design issues top of mind.

Since the focus of the workshop – or any design activity – early on is solving design problems, it could be useful to post the design problems and/or goals around the room, along with user quotes and stories. A few copies of complete study findings in the room, plus any persona descriptions, are useful references. The workshop to address design problems could be handled several ways – storyboarding solutions, drawing and discussing mockups, brainstorming. But the goal is to agree on problems you’re trying to solve, and come up with possible solutions to solve them.

As the design team comes up with solutions, remember to iteratively test them with users. It’s useful for designers to get regular feedback to determine whether they’re improving their designs, and to get answers to new design questions that arise throughout the process. All of this helps designers understand users and their issues and concerns.

Achieving your end game

One key to getting your results implemented is simply remembering to consider stakeholders’ goals and big picture success throughout the research and design process. The best way to do this is to include them in the research planning – and in the research observations – to make sure you’re addressing their concerns all along. When presenting, explain how the results you are suggesting will help them meet their design and business goals.

Always remember that as the researcher you hold knowledge about your users that others don’t. Representing them from the presentation through the next design iteration is one key to your product’s success.

How do you make sure your hard-won research insights make it through to design? Leave a comment or share in our forums.


Transcript: Ask the UXperts: Usability Testing — with Cindy McCracken
https://uxmastery.com/transcript-usability-testing/ – Thu, 01 Dec 2016

If you missed our session with Cindy McCracken on Usability Testing, fear not – here is a full transcript plus a bonus list of handy resources.
Yesterday, by popular demand, we hosted a session in our Slack channel on the subject of usability testing. Our UXpert was Cindy McCracken and she did a fantastic job.

As well as answering questions and handing out valuable advice, Cindy compiled this handy list of resources:

Reading List

  • Tools for all kinds of testing
  • User Research Techniques
  • Guidelines for prioritising study findings

If you didn’t make the session because you didn’t know about it, make sure you join our community to get updates of upcoming sessions. If you have follow up questions for Cindy, you can ask them over on our community forums.

If you’re interested in seeing what we discussed, or you want to revisit your own questions, here is a full transcript of the chat.

Transcript

hawk
2016-11-21 17:27
Session starts at 3 pm Wednesday 30 November PDT (or 10 am Thursday 1 December AEST)
You can use the command /tz help to get time zone conversion assistance here on @slackbot3

hawk
2016-11-29 22:55
The beginner’s guide to usability testing: http://uxmastery.com/beginners-guide-to-usability-testing/

jacqui_dow5
2016-11-30 21:56
I love usability testing!

hawk
2016-11-30 22:27
For anyone killing time before the session: http://uxmas.com/

hawk
2016-11-30 22:57
Quick overview of how these things go down:

hawk
2016-11-30 22:58
– I’ll introduce @cindy.mccracken
– Cindy will give an overview of the topic and run through some definitions
– We’ll throw it open to you for questions

hawk
2016-11-30 22:59
If things get crazy, I’ll queue your questions in a back channel and Cindy will answer them as she gets through them

jasmine
2016-11-30 23:00
Hey just heard that there’s a session happening today? How do I join?

jonny.bennett
2016-11-30 23:00
you’re in it @jasmine!

hawk
2016-11-30 23:00
We’re just about to kick off so your timing is perfect :slightly_smiling_face:

jasmine
2016-11-30 23:00
Oh! Haha, this is my first time. :smile:

lukcha
2016-11-30 23:01
Welcome @jasmine!

jasmine
2016-11-30 23:01
Thanks! Looking forward to it! :slightly_smiling_face:

hawk
2016-11-30 23:01
Ok, show time!

lukcha
2016-11-30 23:01
We’re all in for a treat. :slightly_smiling_face:

hawk
2016-11-30 23:01
First up, a huge thanks for @cindy.mccracken for your time today – we really appreciate it

hawk
2016-11-30 23:01
And thanks to the rest of you for joining us :slightly_smiling_face:

hawk
2016-11-30 23:02
Cindy recently published a great beginner’s guide on usability testing for us – you can find it pinned in this channel

hawk
2016-11-30 23:02
The formal intro: Cindy McCracken has worked in UX more than 10 years and is in her element when planning studies, conducting research, and analysing data.

hawk
2016-11-30 23:02
Currently consulting with User-View, Inc., focused on UX in the medical and financial fields, she has worked as a senior user researcher at Fidelity Investments, BB&T and iContact. Cindy earned a master’s degree in information science from the University of North Carolina-Chapel Hill.

hawk
2016-11-30 23:03
When she’s not working, you can find her hanging out with her 9-year-old daughter, reading historical fiction, or winding down in a yoga class.

hawk
2016-11-30 23:03
We asked her to come today to talk usability testing because it’s something that you (our community) requested.

hawk
2016-11-30 23:03
So @cindy.mccracken – over to you for an intro to the topic

cindy.mccracken
2016-11-30 23:03
Thanks for the intro, @hawk!

cindy.mccracken
2016-11-30 23:04
Hi everyone! I’m excited to have a conversation today about the details of usability testing.

cindy.mccracken
2016-11-30 23:04
I did my first usability tests for a non-profit when I took over for their webmaster in 2003.

cindy.mccracken
2016-11-30 23:04
I read the book “Designing Web Usability” by Jakob Nielsen, conducted tests, and was hooked.

cindy.mccracken
2016-11-30 23:05
So usability testing became a focus of my career.

cindy.mccracken
2016-11-30 23:05
Usability testing is a very useful technique for learning how well your designs work for people, but the more you get into it, you realize there’s always room for improvement.

cindy.mccracken
2016-11-30 23:05
You can always get better results.

cindy.mccracken
2016-11-30 23:05
There’s a lot to getting the right participants, writing effective tasks, and observing while remaining neutral.

cindy.mccracken
2016-11-30 23:06
The majority of usability testing is called formative testing – where you’re mainly concerned with learning how your product could be improved.

cindy.mccracken
2016-11-30 23:06
You come up with goals to test, and write realistic tasks for participants to get answers to your questions.

cindy.mccracken
2016-11-30 23:07
Of course, there are a lot of variations on the traditional in-person study.

cindy.mccracken
2016-11-30 23:07
For example, remote moderated testing. That gives you a lot more options for recruiting.

cindy.mccracken
2016-11-30 23:07
And then there’s the testing of mobile apps, which is very important these days.

cindy.mccracken
2016-11-30 23:08
And remote unmoderated testing – where you set up tasks in an online tool like UserZoom for hundreds of participants.

cindy.mccracken
2016-11-30 23:08
But for all types of usability testing, you need five things:

cindy.mccracken
2016-11-30 23:08
a design to test, participants, a test plan, a moderator (or test tool), and findings

cindy.mccracken
2016-11-30 23:09
For the design, remember to test throughout the process – from paper sketches to high-fidelity prototypes. That lets you catch issues early.

cindy.mccracken
2016-11-30 23:10
For recruiting, consider who your target users are and where they are, then ask screener questions if you need to be specific

cindy.mccracken
2016-11-30 23:10
The test plan should include all the details of the study, such as the goals, session information (such as how observers can log in), all the tasks, etc.

cindy.mccracken
2016-11-30 23:11
When moderating, of course follow the test plan. Remember to stay neutral in words and actions; make participants feel comfortable; and remind people to talk if they’re not so you can learn.

cindy.mccracken
2016-11-30 23:12
Finally, with findings and presentations, prioritize your recommendations (low, medium, high), and definitely meet with stakeholders in person

cindy.mccracken
2016-11-30 23:12
In person is important so you can make sure the results are being interpreted correctly, and also so you can discuss the results in the context of business goals, etc.

cindy.mccracken
2016-11-30 23:13
Remember that in usability testing you are finding problems, not solutions.

cindy.mccracken
2016-11-30 23:13
So don’t get into talking about solutions at your debrief session.

nat
2016-11-30 23:14
There are many types of testing that you can do, do you have a key few you tend to go for first?

cindy.mccracken
2016-11-30 23:14
These are all high-level ideas. I’m curious why you’re here today … what you’re interested in.

cindy.mccracken
2016-11-30 23:14
Yes, I think moderated testing – either in-person or remote – is the best first way to get qualitative feedback

cindy.mccracken
2016-11-30 23:14
You can get great qualitative results about “why” your designs are and aren’t working well.

cindy.mccracken
2016-11-30 23:15
You can do a cafe study to be even quicker.

jacqui_dow5
2016-11-30 23:15
Do you find the findings in remote testing and in person testing give you the same validity?

cindy.mccracken
2016-11-30 23:15
If you do a cafe study, it should be very brief – like 3-5 minutes.

cindy.mccracken
2016-11-30 23:15
Just ask the one or two most important tasks.

cindy.mccracken
2016-11-30 23:16
I have found remote testing to be very useful … a lot of times it’s been the only way I could get the right participants.

jacqui_dow5
2016-11-30 23:16
We are having the same

cindy.mccracken
2016-11-30 23:16
The main thing you miss is seeing people’s expressions. Try to make sure they’re talking a lot.

cindy.mccracken
2016-11-30 23:17
You can try to get people to share their faces with their webcams, but sometimes that can feel invasive or awkward

jacqui_dow5
2016-11-30 23:17
We’ve had it where multiple people have been doing the test at once, should we try and prevent this?

cindy.mccracken
2016-11-30 23:17
Yeah, remote is a great way to get more access to people. The incentive can be lower too.

cindy.mccracken
2016-11-30 23:17
You mean two or three people at a computer?

jacqui_dow5
2016-11-30 23:18
Yes! We had one a few weeks ago where we think there were 4 people in the same room around the PC

cindy.mccracken
2016-11-30 23:18
It can just get confusing … but I’ve had that happen too when it was a very important person who wanted others involved.

mpcnat
2016-11-30 23:18
How do you handle usability testing of a mobile app that is being tested by a remote user, when due to logistics and comms issues the facilitator is an account manager from your own company? What tips do you suggest to get some success out of the process?

jacqui_dow5
2016-11-30 23:18
I was worried if one was a ‘boss’ it may influence the others

cindy.mccracken
2016-11-30 23:18
You can try to get them to do it separately if it makes sense

cindy.mccracken
2016-11-30 23:18
Or if it doesn’t, ask that just one person be in control and talking.

lynne
2016-11-30 23:19
Can you explain what you mean by a cafe study? I can make a guess, but it’s the first time I’ve heard this term…

cindy.mccracken
2016-11-30 23:19
so … the remote testing is in person, but the AM is the facilitator?

cindy.mccracken
2016-11-30 23:19
oh wait, remote

cindy.mccracken
2016-11-30 23:19
how is it being done remotely? what method?

cindy.mccracken
2016-11-30 23:20
@jacqui_dow5 – that’s why if they could do it separately it would be better.

cindy.mccracken
2016-11-30 23:20
it’s like in a focus group … where one person might take over

hawk
2016-11-30 23:20
Note: I’ll acknowledge questions that have been queued with a :grey_question:

cindy.mccracken
2016-11-30 23:21
@lynne – a cafe study is where you set up in a cafeteria, or at a mall or somewhere with your design.

cindy.mccracken
2016-11-30 23:21
You have an intro and a task or two for people.

jacqui_dow5
2016-11-30 23:21
Is a cafe study the same as guerrilla testing?

cindy.mccracken
2016-11-30 23:21
Then you have people do the task on your computer or device, and pay them with a small gift card or something.

cindy.mccracken
2016-11-30 23:22
It’s just usually to get quick feedback on a particular interaction.

cindy.mccracken
2016-11-30 23:22
yes

kaydeecarr
2016-11-30 23:22
What is your technique for analyzing the data after you’ve done the testing?

cindy.mccracken
2016-11-30 23:22
@kaydeecarr – I spend time making sure I take clean notes that are organized and will be easy to analyze.

cindy.mccracken
2016-11-30 23:22
columns for participants / rows for questions.

cindy.mccracken
2016-11-30 23:23
Then I go through and add rows to count the way people behaved (accomplish task, etc.)

cindy.mccracken
2016-11-30 23:23
and add those (this is all in excel)

cindy.mccracken
2016-11-30 23:23
Also, just keeping track of big issues related to goals as we go through the test sessions.

cindy.mccracken
2016-11-30 23:23
Then at the end go through to find data to back up those findings.

cindy.mccracken
2016-11-30 23:24
but I go through everything in case I missed something.
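
Cindy’s spreadsheet layout (participants as columns, tasks and questions as rows, with added rows that count how people behaved) can be mirrored in a few lines of Python. This is an illustrative sketch only – the task names and outcome codes below are made-up placeholders, not data from the session:

```python
from collections import Counter

# Notes grid: task -> outcome per participant.
# All task names and outcome codes are illustrative placeholders.
notes = {
    "find_product":   {"P1": "pass", "P2": "pass",   "P3": "fail"},
    "compare_prices": {"P1": "fail", "P2": "assist", "P3": "fail"},
    "checkout":       {"P1": "pass", "P2": "pass",   "P3": "pass"},
}

# Mirror the added "count" rows: tally how participants behaved on each task.
for task, outcomes in notes.items():
    tally = Counter(outcomes.values())
    print(task, dict(tally))
```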

cindy.mccracken
2016-11-30 23:25
@mpcnat How do you handle usability testing of a mobile app that is being tested by a remote user, when due to logistics and comms issues the facilitator is an account manager from your own company? What tips do you suggest to get some success out of the process? (remote participant, local account manager facilitator)

cindy.mccracken
2016-11-30 23:25
just repeating this question.

cindy.mccracken
2016-11-30 23:25
what is the process? can you see the device?

cindy.mccracken
2016-11-30 23:26
are they using webex on an ipad? something like that?

cindy.mccracken
2016-11-30 23:26
or holding their hands in front of a laptop’s webcam?

cindy.mccracken
2016-11-30 23:26
@mpcnat – what issues / concerns are you noticing with the facilitator being an AM?

jacqui_dow5
2016-11-30 23:26
I’ve seen examples of where people attach a makeshift camera holder to a mobile to record the screen? But it would rely on the user making this and having a camera!

nik
2016-11-30 23:27
Is there a big difference between reports/presentations that are based on moderated VS unmoderated tests? It’s my impression that most insights from unmoderated tests come from direct quotes, whereas moderated tests include a lot of other factors, such as facial expressions, body language etc. And how does this affect the conclusions of a study? It seems to me that moderated tests will depend more on “personal interpretation”, which can be more difficult to support scientifically.

cindy.mccracken
2016-11-30 23:27
Huh! That sounds interesting.

lukcha
2016-11-30 23:28
Mr Tappy!

cindy.mccracken
2016-11-30 23:28
@nik – In unmoderated tests, using UserZoom or Loop 11, I think most of the results are quantitative

mpcnat
2016-11-30 23:28
It’s a prototype app on a mobile device that the AM has in front of the customer. They have a facilitation script to assist them, that they take notes against whilst the customer performs tasks. The issue is that an AM isn’t familiar with usability testing, but is great with the relationship, and they feel out of their depth. So do we cut down the tasks to test? But then that means we lose insight with that customer, who we only see on an irregular basis

cindy.mccracken
2016-11-30 23:28
You would do them if you wanted greater certainty that certain things were problems, or to compare two different designs quantitatively.

cindy.mccracken
2016-11-30 23:29
So I would expect the results to be more charts, that sort of thing.

cindy.mccracken
2016-11-30 23:30
@nik – I don’t know if I’m answering your question. But in moderated testing, you still have findings – like whether people could complete tasks, where they got stuck, etc. So you have real facts to report. In addition to frustrations.

cindy.mccracken
2016-11-30 23:31
@mpcnat – Is it in person then? It would help if someone else were taking notes so the AM could focus on facilitating.

jacqui_dow5
2016-11-30 23:31
I have used whatusersdo for unmoderated it was good as the users had a webcam so you got a similar video to moderated testing

cindy.mccracken
2016-11-30 23:31
It gets to be a lot to juggle.

cindy.mccracken
2016-11-30 23:32
Even better, have the AM introduce them to a UXer and have the UXer conduct the test. It can get overwhelming.

cindy.mccracken
2016-11-30 23:32
@jacqui_dow5 – Very cool! I’m going to take a look at whatusersdo. Thanks!

mpcnat
2016-11-30 23:32
yep it’s in person, sometimes there is someone else taking notes, but the feedback from the AM is “can’t you just give me the top 10 things you want to see, there is too much to watch out for”, which I get but since we may only get one go at it with them, it seems like we miss out, the script has tasks and also things to watch for (reactions) from the user on particular screens, this is where they feel overloaded.

cindy.mccracken
2016-11-30 23:33
Have you tried http://usertesting.com? You can create a mobile test for their testers to do on mobile.

hawk
2016-11-30 23:33
If you’re new to these sessions, you can jump in with questions at any time – they don’t have to be in context.

kaydeecarr
2016-11-30 23:33
Do you have any suggestions on what to do when users ask you questions? Like “how do I get to the homepage”?

robbin
2016-11-30 23:34
What’s a good way to practice writing up some questions? I’m new, so my worry is that I’m accidentally going to ask some leading questions.

cindy.mccracken
2016-11-30 23:34
@mpcnat – Yeah, that’s definitely a lot to manage. When others take notes, and he doesn’t have to worry about writing down all the reactions, that doesn’t help?

cindy.mccracken
2016-11-30 23:35
@kaydeecarr – You can remind them that you’re there to see how they would do things if they were in their own environment. Ask where they think it would be … or where they would expect to find it

mpcnat
2016-11-30 23:35
It helps but sometimes it’s an AM by themselves on a site in the back of remote QLD :slightly_smiling_face: Thanks for the answers

lynne
2016-11-30 23:35
@mpcnat – can you get the AM to do a practice run with other people in the office? Might make them feel more comfortable with the task.

cindy.mccracken
2016-11-30 23:35
@robbin – definitely don’t ask users what they want … they don’t know.

mpcnat
2016-11-30 23:35
@lynne yep that’s how we are going forward with it. Thanks

cindy.mccracken
2016-11-30 23:36
Know what the important tasks are for people to accomplish … and write tasks around that. Be specific – like “find a vacuum cleaner that gets good ratings and is in a particular price range.”

cindy.mccracken
2016-11-30 23:36
What questions are you worried might be leading?

cindy.mccracken
2016-11-30 23:37
also you can follow up each task with a rating question – like how easy or difficult was that task (on a scale)?

robbin
2016-11-30 23:37
When I was practicing with the UX team, I was really tempted to ask things like, “But how do you think you would go back to the homepage?” and when I listened in, I also heard things like, “What do you think this button would do? Go back to the homepage?”

cindy.mccracken
2016-11-30 23:38
Try to keep them focused on accomplishing the tasks. If they’re looking at a button and not commenting, you can say, “What are you thinking?”

cindy.mccracken
2016-11-30 23:39
You’re right – that was leading. :slightly_smiling_face:

cindy.mccracken
2016-11-30 23:39
That is tempting … just try to stay calm and think before you speak. It’s hard to have restraint. I know.

nik
2016-11-30 23:39
You said “prioritize your recommendations”. What do you base this type of evaluation on? Especially, as an external consultant. Do you need access to certain data (e.g. visitors and clicks on specific websites or similar)?

robbin
2016-11-30 23:40
That’s what I thought :smile: I also wondered – how do you help the user feel a little more at ease? When I watch people, sometimes they feel really sheepish and I’m unsure if they’re being truthful. I’ve also been the tester before, so I’m used to saying exactly what’s on my mind because I personally don’t feel weird about it.

cindy.mccracken
2016-11-30 23:40
@nik – that’s a good question. I’ve always had a guide as to what makes something a critical issue vs. high or low, etc.

cindy.mccracken
2016-11-30 23:41
After this session, I can come up with something for you.

cindy.mccracken
2016-11-30 23:41
@robbin – Just try to relax and be friendly, and have a little chitchat at first. And definitely start with easy background questions.

cindy.mccracken
2016-11-30 23:41
Maybe offer them a drink. something like that.

cindy.mccracken
2016-11-30 23:42
@robbin – why don’t you think they’re being truthful?

cindy.mccracken
2016-11-30 23:42
do you think they’re trying to please you?

cindy.mccracken
2016-11-30 23:42
(that was leading)

robbin
2016-11-30 23:42
Hahah! Yeah, I think so – they sort of keep looking at me as if to ask, “Was that right?”

nik
2016-11-30 23:42
@cindy.mccracken That would be very interesting to see! I find myself struggling a bit with this and sometimes spend too much time on issues that might not be very critical

cindy.mccracken
2016-11-30 23:43
@robbin – Yes, that does happen! People think there are right answers. So you can definitely remind people that there really are no right or wrong answers.

cindy.mccracken
2016-11-30 23:43
… that you’re testing the software, not them.

hawk
2016-11-30 23:43
@cindy.mccracken I’d be interested in hearing about some ways to present your findings

cindy.mccracken
2016-11-30 23:43
You want to make sure the design is going to work for them, so you want their open and honest feedback. Really.

cindy.mccracken
2016-11-30 23:44
@hawk – I have presented results in several ways. One way that didn’t work well in my environments was a long report.

cindy.mccracken
2016-11-30 23:44
I don’t recommend that.

cindy.mccracken
2016-11-30 23:45
I tend to come up with a template for a PowerPoint report, and use that. I love using images and things like callouts to point out findings in the interface.

cindy.mccracken
2016-11-30 23:45
Sometimes color coding helps people see good vs. bad findings too.

cindy.mccracken
2016-11-30 23:45
I’ve tried bulleted lists in email, which can sort of work for agile.

cindy.mccracken
2016-11-30 23:45
But no matter what, it’s critical to present results in person, even if it’s in a casual setting.

jacqui_dow5
2016-11-30 23:45
We’ve had an issue where we are redesigning a system. Our current users have some preconceptions (we are hard to use, so things are long-winded), so when they test the new software it throws them when something is easy to do, and they’ve started doubting themselves, thinking we are tricking them! Any way of handling this?

cindy.mccracken
2016-11-30 23:46
You want to make sure everyone’s interpreting correctly and is on the same page about the findings – and agree on how they’ll proceed.

cindy.mccracken
2016-11-30 23:47
@jacqui_dow5 – Tricking them? I’ve never heard of that. I would just try to explain at the beginning of the test that you’ve heard issues and you’re redesigning because of them.

cindy.mccracken
2016-11-30 23:47
Maybe that will help their expectations? Or did you already try that?

jacqui_dow5
2016-11-30 23:48
Yes one comment was along the lines of ‘it’s a lot harder normally, I feel like I need to do more’

hawk
2016-11-30 23:48
There is 10 mins left in the session. If you’re sitting on a question, now is the time to ask!

jacqui_dow5
2016-11-30 23:48
We explain at the start why we are doing it and that this will be new and improved on what they’re used to!

nik
2016-11-30 23:49
Is it important to include “good findings” as well as “bad findings”? Or is it good enough to focus on problems in a report?

cindy.mccracken
2016-11-30 23:49
@jacqui_dow5 – Interesting – take note of that; it could be a good quote. Oh – which I forgot to mention … I love quotes in presentations! They can really be effective at conveying attitudes.

cindy.mccracken
2016-11-30 23:49
@nik – I actually always try to do both. Definitely.

cindy.mccracken
2016-11-30 23:49
The designers have worked hard on the product, and also they just need to know what IS working well so they keep those things.

cindy.mccracken
2016-11-30 23:49
It shouldn’t be all doom and gloom.

nik
2016-11-30 23:51
@cindy.mccracken That makes sense, I’ll keep that in mind. I guess including some bright sides will also make the team more inclined to want to involve you in a project again some other time :slightly_smiling_face:

bleke
2016-11-30 23:51
@cindy.mccracken Steve Krug talks a lot about getting decision makers to view the sessions. Is it something you normally do?

cindy.mccracken
2016-11-30 23:51
@jacqui_dow5 – Huh. Maybe follow up their “I need to do more” with a question – “Why is that?” – or just get them to elaborate so you can learn more from it.

cindy.mccracken
2016-11-30 23:51
@bleke – I’m glad you brought that up!

jacqui_dow5
2016-11-30 23:52
Yes, good idea! It completely threw up in the test!

cindy.mccracken
2016-11-30 23:52
Yes, definitely have observers – as many as possible. It really does help with buy-in and general understanding of the problems when people observe.

jacqui_dow5
2016-11-30 23:52
Threw us*

cindy.mccracken
2016-11-30 23:52
You can simply share your screen and have people log in to the screen-share tool. Have them ask questions of the note-taker or through a different channel so participants aren’t distracted.

cindy.mccracken
2016-11-30 23:53
@jacqui_dow5 – I wondered what that meant.

jacqui_dow5
2016-11-30 23:53
Sorry it’s getting late here!!

cindy.mccracken
2016-11-30 23:53
@bleke – Definitely encourage people to come to as many sessions as they can attend.

cindy.mccracken
2016-11-30 23:53
@jacqui_dow5 – That’s OK! I’m glad you came!

bleke
2016-11-30 23:54
@cindy.mccracken Ok, thanks! :slightly_smiling_face:

lynne
2016-11-30 23:54
I recently attempted some usability tests in a classroom setting, with multiple high school students using our site at once. It was very hard to manage – we got some good data, but I’m wondering if you have any suggestions on best practices for a session like this?

cindy.mccracken
2016-11-30 23:54
Re: observers – have a few minutes at the end where you ask observers if they have questions, and then ask them of the participant.

cindy.mccracken
2016-11-30 23:54
You don’t want to be interrupted by observers.

cindy.mccracken
2016-11-30 23:54
@lynne – was there a reason for multiple students using the site at once?

cindy.mccracken
2016-11-30 23:55
Like, is that how they normally use it?

nik
2016-11-30 23:55
One last question from me: Often management views usability testing as a box that should be ticked at the end of product development. Any advice for how to convince management that it should be done as early as possible in a project?

nik
2016-11-30 23:56
I’m hoping there’s one Magic Argument that always works

cindy.mccracken
2016-11-30 23:56
@lynne – It is hard to capture exactly who’s saying what. But if it’s necessary for multiple to be using, maybe focus on their interactions and how they’re doing it.

lynne
2016-11-30 23:56
Yes, it is used by schools in class. But the main reason was because we had an opportunity to do so and it’s hard to get into schools so we tried to make the best of it.

frankenvision
2016-11-30 23:57
Is it ok to make changes after a single usability test? If they are obvious blunders?

cindy.mccracken
2016-11-30 23:57
@nik – hmmm… I wish that too! More or less: do usability testing early and as often as possible – even with very simple, affordable studies – to prove how it works and how important it is.

cindy.mccracken
2016-11-30 23:57
In other words, show don’t tell.

cindy.mccracken
2016-11-30 23:57
That worked really well for me at one company where they weren’t convinced of the value until we proved it.

cindy.mccracken
2016-11-30 23:58
If not that, find some good case studies to share …

cindy.mccracken
2016-11-30 23:58
And a lot of data that shows starting early is a best practice. If you wait too late, you can’t do anything with the data.

cindy.mccracken
2016-11-30 23:59
@frankenvision – Usually you will read that you shouldn’t make changes to design … but I think it’s OK if it’s something that’s not working but should have been, and you’re having to talk around it … that sort of thing.

cindy.mccracken
2016-11-30 23:59
That’s not a design change then; it’s fixing a bug.

robbin
2016-12-01 00:00
(Just wanted to say thank you for doing this! This is super helpful!)

cindy.mccracken
2016-12-01 00:00
@lynne – so you were trying to get a lot of kids’ feedback at once to make the most of your time? Here’s one idea … you could have them use the program individually .. and observe as much as you can. Then have a focus group of sorts where they discuss.

cindy.mccracken
2016-12-01 00:00
It’s a thought!

cindy.mccracken
2016-12-01 00:00
Thanks, @robbin! I’ve really enjoyed it.

frankenvision
2016-12-01 00:00
Ok thanks – how many questions should we use for remote usability tests? I think they’re supposed to take 15min for users to complete…

frankenvision
2016-12-01 00:01
What’s a good question to measure how trustworthy a company is?

cindy.mccracken
2016-12-01 00:01
It might depend on how long it will take to do the tasks. I’d say I’ve seen about 10 tasks on average. Best idea: test it with some people to see how long it takes before sending it out.

lynne
2016-12-01 00:01
That’s kind of what we did, they each had a computer, we gave them tasks to do, and discussed as a group at the end. They were a bit shy about the discussion part.

cindy.mccracken
2016-12-01 00:02
One idea for shyness that I like is first having them write down answers to your questions.

lynne
2016-12-01 00:02
The biggest challenge was trying to observe 20 different people at once. I couldn’t figure out if it was better to choose one or two students to focus on, or roam around the room.

cindy.mccracken
2016-12-01 00:02
Then, having thought it through first, people tend to be more confident.

lynne
2016-12-01 00:02
Nice idea about writing it down – i’ll try that next time!

cindy.mccracken
2016-12-01 00:03
great!

cindy.mccracken
2016-12-01 00:03
@lynne – maybe if that happens again, it could help if you had multiple observers??

cindy.mccracken
2016-12-01 00:03
That’s a tough one.

hawk
2016-12-01 00:04
Ok all – after this answer that’s a wrap

frankenvision
2016-12-01 00:04
Is it best practice to run a single test to see how it runs or run 3-5 remote usability tests first go?

hawk
2016-12-01 00:04
Remember that if you have follow up questions you can ask at http://community.uxmastery.com

hawk
2016-12-01 00:04
Someone is there pretty much around the clock

seyonwind
2016-12-01 00:04
Thank you for all the usability advice @cindy.mccracken!
As always, thank you @hawk for hosting :slightly_smiling_face:

cindy.mccracken
2016-12-01 00:05
Thanks everyone! I really enjoyed this.

jacqui_dow5
2016-12-01 00:05
Thank you so much guys! This has been great!

hawk
2016-12-01 00:05
Thanks so much again for your time @cindy.mccracken – you rocked it!

lukcha
2016-12-01 00:05
This has been a great session – thanks so much for your advice and tips @cindy.mccracken :slightly_smiling_face:

cindy.mccracken
2016-12-01 00:05
Of course. And thank you for hosting.

lynne
2016-12-01 00:05
Thanks @cindy.mccracken and @hawk!

hawk
2016-12-01 00:05
And thanks for all the great questions

hawk
2016-12-01 00:05
Have a great morning/afternoon/evening/night all

cindy.mccracken
2016-12-01 00:05
Indeed.

nik
2016-12-01 00:06
Thanks a lot @cindy.mccracken . Good night

nat
2016-12-01 00:08
Thanks!

lukcha
2016-12-01 00:12
Thanks everyone!

The post Transcript: Ask the UXperts: <em>Usability Testing</em> — with Cindy McCracken appeared first on UX Mastery.

How to Conduct Usability Testing from Start to Finish
https://uxmastery.com/beginners-guide-to-usability-testing/
Tue, 29 Nov 2016

Usability testing is a critical part of the user-centred design process, and it comes in many forms: from casual cafeteria studies to formal lab testing, remote online task-based studies and more. Whether you’re new to this part of UX research or just need a refresher, Cindy McCracken walks us through the essentials of effective usability tests.

You are not your users. But if you can find your users and learn from them as you design, you’ll be able to create a better product.

Usability testing comes in many forms: casual cafeteria studies, formal lab testing, remote online task-based studies and more. However you choose to carry out your testing, you’ll need to go through these five phases:

  • Prepare your product or design to test
  • Find your participants
  • Write a test plan
  • Take on the role of moderator
  • Present your findings

That’s it. A usability test can be as basic as approaching strangers at Starbucks and asking them to use an app. Or it can be as involved as an online study with participants responding on a mobile phone.

Usability testing can be as simple as listening to people as they use a prototype of your app for a few minutes in a cafeteria.

Usability testing is effective because you can watch potential users of your product to see what works well and what needs to be improved. It’s not about getting participants to tell you what needs adjusting. It’s about observing them in action, listening to their needs and concerns, and considering what might make the experience work better for them.  

Early on, usability tests in computer science were conducted primarily in academia or large companies such as Bell Labs, Sun, HP, AT&T, Apple and Microsoft. The practice of usability testing grew in the mid-1980s with the start of the modern usability profession, and books and articles popularised the method. With the explosion of digital products, it’s continued to gain popularity because it’s considered one of the best ways to get input from real users.

A common mistake in usability testing is conducting a study too late in the design process. If you wait until right before your product is released, you won’t have the time or money to fix any issues – and you’ll have wasted a lot of effort developing your product the wrong way. Instead, conduct several small-scale tests throughout the cycle, starting as early as paper sketches.

Create a design or product to test

How do you decide what to test? Start by testing what you’re working on.

  • Do you have any questions about how your design will work in practice, such as a particular interaction or workflow? Those are perfect.
  • Are you wondering what users notice on your home page? Or what they would do first? This is a great time to ask.
  • Planning to redesign a website or app? Test the current version to understand what’s not working so you can improve upon any issues.

Once you know what you’d like to test, come up with a set of goals for your study. Be as specific as possible, because you’ll use the goals to come up with the particular study tasks. A goal can be broad, such as “Can people navigate through the website to find the products they need?” or specific, such as “Do people notice the link to learn more about a particular product on this page?”

Sometimes a paper sketch is enough to get you started with testing.

You also need to figure out how to represent your designs for the study. If you’re studying a current app or website, you can simply use that. For early design ideas, you can use a paper “prototype” made from pencil sketches or designed through software such as PowerPoint.

If you’re farther along in your ideas and want something more representative of the interactions, you can create an interactive prototype using a tool such as Balsamiq or Axure. Whatever you create, make sure it will allow participants to perform the tasks you want to test.

Find your participants

When thinking about participants, consider who will be using your product and how you can reach those people.  

If you have an app that targets hikers, for example, you could post your request on a Facebook page for hikers. If your website targets high school English teachers, you could send out a request for participants in educational newsletters or websites. If you have more money, hire a recruiting firm to find people for you (don’t forget to provide screener questions to find the right people). If you have no money, reach out to friends and family members and ask if they know anyone who meets your criteria.

Screeners like this one help you connect with the right participants.

Be prepared: participant recruiting is often one of the lengthier parts of any usability study, and should be one of the first things you put into action. This way, as you’re working on other parts – like writing your tasks and questions – the recruitment process will progress concurrently.

You might also wonder how many participants you will need. Usability expert Jakob Nielsen says testing five people will catch 85% of the usability issues with a design – and that you can catch the remaining 15% of issues with 15 users. Ideally then, you should test with five users, make improvements, test with five more, make improvements, and test with five more. (As a general rule, recruit at least one more participant than you need, because typically one person will not show up.)
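Nielsen’s numbers come from a simple model: if each participant independently uncovers a given problem with probability L (his empirical average was about 31%), the share of problems found by n participants is 1 − (1 − L)^n. Here is a quick sketch of that arithmetic – note the 31% rate is an average across Nielsen’s studies, not a universal constant:

```python
def share_of_problems_found(n, discovery_rate=0.31):
    """Expected fraction of usability problems uncovered by n participants,
    assuming each participant independently reveals a given problem with
    probability `discovery_rate` (Nielsen's empirical average)."""
    return 1 - (1 - discovery_rate) ** n

# Five participants find roughly 85% of problems; fifteen find nearly all.
for n in (1, 5, 15):
    print(n, round(share_of_problems_found(n), 2))
```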

No matter who you’re testing, you’ll want to offer some sort of incentive, such as cash or a gift card, for participants’ time. The going rate differs in different parts of the world. Generally, you should offer more if the test is in person (because participants have to travel to get there), and less if it’s remote, run through a service such as WebEx. Audiences that are hard to reach – such as doctors or other busy and highly trained professionals – will require more compensation.

Write a test plan

To keep yourself organised, you need a test plan, even if it’s a casual study. The plan will make it easy to communicate with stakeholders and design team members who may want input into the usability test and, of course, keep yourself on track during the actual study days. This is a place for you to list out all the details of the study. Here are potential study plan sections:

  • Study goals: The goals should be agreed upon with any stakeholders, and they are important for creating tasks.
  • Session information: This is a list of session times and participants. You can also include any information about how stakeholders and designers can log into sessions to observe. For example, you can share – and record – sessions using WebEx or GoToMeeting.
  • Background information and non-disclosure information: Write a script to explain the purpose of the study to participants; tell them you’re testing the software, not them; let them know if you’ll be recording the sessions; and make sure they understand not to share what they see during the study (having participants sign a non-disclosure agreement as well is a best practice). Ask them to please think aloud throughout the study so you can understand their thoughts.
  • Tasks and questions: Start by asking participants a couple of background questions to warm them up. Then ask them to perform tasks that you want to test. For example, to learn how well a person was able to navigate to a type of product, you could have them start on the home page and say, “You are here to buy a fire alarm. Where would you go to do that?” Also consider any follow-up questions you might want to ask, such as “How easy or difficult was that task?” and provide a rating scale.
  • Conclusion: At the end of the study, you can ask any observers if there are questions for the participant, and ask if the participant has anything else they’d like to say.

It might help to start your test plan with a template.

Take on the role of moderator

It’s your job as moderator – the one leading usability sessions – to make sure the sessions go well and the team gets the information they need to improve their designs. You need to make participants feel comfortable while making sure they proceed through the tasks, and while minimising or managing any technical difficulties and observer issues. And stay neutral. You can do this!

The test plan is your guide. Conducting a pilot study – or test run – the day before the actual sessions start also helps your performance as a moderator because you get to practice working through all the aspects of the test.

Observe and listen

As you go through the study with participants, remember that it’s your job to be quiet and listen; let the participants do the talking. That’s how you and your team will learn. Be prepared to ask “why?” or say “Tell me more about that” to get participants to elaborate on their thoughts. Keep your questions and body language neutral, and avoid leading participants to respond a certain way.

During the sessions, someone will need to take notes. Ideally, you’ll have a separate note-taker so you can focus on leading sessions. If not, you’ll need to do this while moderating. Either way, set up a note-taking sheet in a spreadsheet tool (I use Excel) to simplify the process both now and when analysing the data. One organised way to do this is to have each column represent a participant, and each row a question or task. Learn more about writing effective research observations.
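The layout described above – one column per participant, one row per task or question – is easy to pre-build before sessions start. A minimal sketch using Python’s csv module (the task and participant names are placeholders, not from a real study):

```python
import csv

# Placeholder tasks and participants -- substitute your own study plan.
tasks = [
    "Warm-up: how often do you shop online?",
    "Task 1: buy a fire alarm",
    "Task 1 follow-up: ease rating (1-5)",
]
participants = ["P1", "P2", "P3", "P4", "P5"]

with open("notes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Task / Question"] + participants)    # one column per participant
    for task in tasks:
        writer.writerow([task] + [""] * len(participants))  # blank cells to fill in live
```

Opening the resulting file in Excel or Google Sheets gives you a ready-made grid, so during sessions the note-taker only fills cells rather than building structure on the fly.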

In addition to taking notes, plan to record the sessions using a tool such as WebEx or Camtasia as a backup, just in case you miss something. You’ll find a useful list of tools here.

Make sure you prepare for things to go wrong (and something always does). Consider the following:

  • Some participants will be a few minutes late. If they are, but you still want to use them, what are the lowest priority tasks or questions that you will cut out?
  • The prototype software could stop working or have a bug. Try to have a backup – such as paper screenshots – if you think this is a possibility.
  • In a remote study, some participants will have difficulty using the video conferencing tool. Know in advance how the screen looks to them, what they should do, and common things that can go wrong so you can guide them through the experience.

Remote testing

If you’re conducting a remote unmoderated study, a remote tool – such as UserZoom or Loop11 – takes the place of the moderator. Because a facilitator isn’t present, your written introduction needs to set the tone and provide background information about the study; the tasks must be presented effectively; and users must be kept on track. It’s also important to test remote studies as thoroughly as possible before launching them, to prevent technical difficulties.

Present your findings

As you’re going through your sessions, it’s a good idea to jot down themes you notice, especially if they’re related to the study’s goals. Consider talking with observers after each session or at the end of each day to get a sense of their main learnings. Once the sessions are over, comb through your notes to look for more answers to the study’s stated goals, and count how many participants acted certain ways and made certain types of comments.  Determine the best way to communicate this information to help stakeholders.
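Tallying how many participants hit each theme is easy to script once your notes are coded. A minimal sketch with collections.Counter (the observation codes are invented for illustration):

```python
from collections import Counter

# Invented coded observations, one list per participant session.
sessions = [
    ["missed search box", "used main nav"],
    ["used main nav", "confused by 'Solutions' label"],
    ["missed search box", "used main nav"],
]

# Flatten all sessions and count how often each coded theme appears.
theme_counts = Counter(code for session in sessions for code in session)

for theme, count in theme_counts.most_common():
    print(f"{count}/{len(sessions)} participants: {theme}")
```

Reporting findings as "2 of 3 participants missed the search box" is more persuasive to stakeholders than an unquantified observation.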

Callouts are useful to draw attention to users’ quotes or points in the presentation of results.

Consider these methods of documenting your findings:

  • If your audience is an agile team that needs to start acting on the information right away, an email with a bulleted list of findings may be all you need. If you can pair the email with a quick chat with team members, the team will process the information better.
  • A PowerPoint presentation can be a great way to document your findings, including screenshots with callouts, and graphs to help make your points stand out. You can even include links to video clips that illustrate your points well.
  • If you’re in a more academic environment, or your peers will read a report, write up a formal report document. Don’t forget to include images to illustrate your findings.

Where to next?

These resources can serve as excellent references on usability testing:

  • Don’t Make Me Think and Rocket Surgery Made Easy by Steve Krug
  • Observing the User Experience by Mike Kuniavsky
  • Handbook of Usability Testing by Jeffrey Rubin and Dana Chisnell

Once you master the basics of usability testing, you can expand into other types of testing such as:

  • Remote moderated testing (same as lab testing, only your participants are somewhere else, and you communicate through WebEx or a similar tool and a phone)
  • Remote unmoderated testing (usability testing with hundreds of people through a tool such as UserZoom or Loop11)
  • A/B testing (testing two designs against each other to see which performs better)
  • Competitive testing (pitting your design against your competitors’ designs)
  • Benchmark studies (testing your site or app’s progress over time)

Usability testing is a critical part of the user-centered design process because it allows you to see what’s working and what’s not with your designs. Challenge yourself to get more out of your sessions by using at least one new idea from this article during your next – or first – round of testing.

The post How to Conduct Usability Testing from Start to Finish appeared first on UX Mastery.

]]>
https://uxmastery.com/beginners-guide-to-usability-testing/feed/ 5 49367
How to Run an Unmoderated Remote Usability Test (URUT) https://uxmastery.com/how-to-run-an-unmoderated-remote-usability-test-urut/ https://uxmastery.com/how-to-run-an-unmoderated-remote-usability-test-urut/#comments Tue, 06 Oct 2015 21:14:48 +0000 http://uxmastery.com/?p=29600 We published this article, in which Chris Gray explains how unmoderated remote usability testing (URUT) works, a while back, but we're so excited to be including a new animated video that we've decided to republish it.

Enjoy!

The post How to Run an Unmoderated Remote Usability Test (URUT) appeared first on UX Mastery.

]]>
As UXers, we practise in exciting times. 

Design is in demand, and the tech sector is at the forefront of business innovation. It is also a time where we have access to a huge number of tools and techniques that enable us to innovate and adapt our practice for a broad range of scenarios.

Usability testing is a cornerstone of UX practice: perfect for evaluating the designs we create, flexible for collecting a range of information about customers, and easy to combine with other techniques. Usability testing is a technique where representative participants undertake tasks on an interface or product. The tasks typically reflect the most common and important activities, and participants’ behaviour is observed to identify any issues that inhibit task completion.

Usability testing is a super flexible technique that allows for the assessment of a variety of aspects of an interface including the broad product concept, interaction design, visual design, content, labels, calls-to-action, search and information architecture. It is a proven technique for evaluating products, and in some organisations is used as a pre-launch requirement.

However, in-person usability testing also has some downsides:

  • It’s relatively time consuming; a lab-based study is typically completed with between 5 and 12 participants. Assuming each session takes 1 hour, with one facilitator running the sessions this would take between 1 and 3 days.
  • Recruiting participants to attend the sessions takes time and effort; via a recruitment agency it would take a minimum of a week to locate people for a round of testing.
  • Due to the time-intensive nature and cost of in-person usability testing, most studies are conducted with relatively small samples (i.e. less than 10). While a small sample is often adequate for exploring usability and iterating a product, some stakeholders have less confidence in these small sample sizes. This is often due to exposure to quantitative market research where samples in excess of 500 people are common.
  • They are conducted in an artificial environment. In-person tests are often lab-based or in a corporate setting that may not reflect real-world use of the product.

One of the ways these downsides can be overcome is the use of unmoderated remote usability test (URUT).

Let’s take a look at some of the basics of running URUTs.

What is URUT?

URUT is a technique that evaluates the usability of an interface or product; that is, the ease of use, efficiency and satisfaction customers have with the interface. It is similar to in-person usability testing; however, participants complete tasks in their own environment without a facilitator present. The tasks are pre-determined and are presented to the participant via an online testing platform.

There are two broad methods for URUT, each capturing participant behaviour in a different way, dictated by the technology platform:

  • URUT utilising video recordings of participants interacting with interfaces. These studies are more qualitative in nature, with participants thinking aloud during the recording to provide insight.

  • URUT where behaviour is captured via click-stream data, run more like a survey. These studies are more quantitative in nature, because larger sample sizes are practical and the systems automate tracking of user behaviour.

Both methods are designed to evaluate the usability of a product, and both have strengths and weaknesses. Video-based sessions require more time to identify the findings and lend themselves to smaller samples; however, by listening to participants and observing their behaviour, more information can be collected regarding the design. Click-stream methods allow for larger sample sizes and tend to be faster to complete due to the automation of data collection.

Note that some tools support both methods; click stream for large samples and video is collected for a subset of the sample to be able to explore specific aspects of the design in more detail. More on the tools below.

When to use URUT?

Common scenarios where URUT is valuable include:

  • Obtaining a large sample and/or a high degree of confidence is required: A small sample of in-person usability tests may be all that is required from a design perspective, but if your stakeholders are used to seeing large samples and buy-in with a small sample is difficult, then using big numbers may be simpler than trying to convince them of the value of the small sample. Further, where a new design is critical for an organisation or will have a substantial impact, the confidence gained from a large-sample study can be valuable.
  • Where the audience is geographically dispersed or hard to access: The audiences for some products are geographically spread and can be hard to access without travelling great distances – imagine a health case management system for remote communities in the Kimberley. Also consider trying to access time-poor senior executives: they may be able to complete a 15-minute online study late at night at a time convenient for them, but not during the day or in a specific location.
  • Where speed is critical: Everyone working in the digital industry will have worked on a project with tight timelines, or one that is running behind schedule. Also, in today’s Agile workplaces, getting usability testing conducted quickly may be the only option. An URUT study can be run in its entirety in a couple of days, whereas a typical in-person study would take more than a week, if not longer.
  • Where a specific environment is critical: Some products will be used in environments that cannot be replicated in a lab, or where their context of use is critical – for example, an app used outdoors in snow-bound towns.
  • Where budgets are tight: Running six usability tests with a video-recording technique, especially where the sample is fairly generic, can be inexpensive.
  • In cases where you need to compare two or more products or interfaces: URUT is perfect for benchmarking studies comparing either competitor products or different iterations of your product. The ability to capture large sample sizes means that statistically significant differences between interfaces can be identified.

URUT tends to be less appropriate for more exploratory-style usability testing, because it is not possible to change tasks midstream or ask impromptu questions. Click-stream tools tend to provide lots of data on what is happening, but less insight into why the behaviour is occurring. Video-based studies can be frustrating when there is a core question you would love to ask but hadn’t planned for. For early-stage, low-fidelity prototypes, in-person usability testing tends to be preferable because the facilitator can provide more context for participants regarding the intended functionality of the interface.

How to run an URUT

Before you start testing, you need to fully understand why the research is being conducted. Like all UX research techniques, this comes back to defining the objectives of the study. All good research requires a clear understanding of:

  1. The objectives of the project.
  2. Identification of the research questions, which spell out how we will explore the objective.

For example:

Research objective: Evaluate the effectiveness of the booking process
Research questions:
  • Do participants understand the field labels?
  • Do error messages support participants to progress?

Exploring these objectives and research questions with stakeholders at the outset will help with designing the study and provide a reference point for subsequent discussions. Spending the time up front to get this right will save time down the track and help ensure a successful study.

Audience

In order to run an URUT, it is important to identify who will complete the study. Ideally the sample will be representative of the product’s audience. There are a number of options for sourcing participants:

  1. Emailing the study to a database of existing customers. This assumes that you have customers.
  2. An intercept can be run on a website with existing customers. That is, a pop-up on your site invites people to participate in the study. An advantage of this approach is that the sample is likely to be representative.
  3. A panel is another option, especially when you don’t have an existing customer base. A panel is a database of people who have indicated that they would like to participate in research. Usually panel databases can be segmented to target a specific audience; however, you typically pay for the convenience. Some URUT tools have integrated participant filtering, which can be used to improve the representativeness of the sample.
  4. Social media can be another means to locate a sample, especially for organisations with an engaged following. With social media it is important to ensure that the sample is representative of your audience.

Offering some form of incentive, such as a gift voucher prize, may be required to motivate participants to complete the study. Audiences that are more engaged with the organisation tend to require smaller incentives, and those that are less engaged a greater incentive.

Tasks

It is crucial to get the tasks right for URUT. It needs to be very clear to participants what is required of them. Provide enough detail for the participant to complete the task on their own, and include any information they would require to complete it. For example, if a task requires credit card details, providing fictitious card details will be necessary.

Avoid adding extraneous information to a task, which may confuse participants. Also avoid clues that tell the participant what to do – for example, avoid including the wording of a call-to-action in the task, which will give the answer away.

And finally, ensure that the interface allows participants to actually complete the task, and makes it clear to them when they have done so. In a prototype this may require adding specific content. An example task: Imagine you have decided to stay in Cairns for the first week in September. Use this site to reserve accommodation and pay.

Include questions

It is recommended that survey questions be provided as part of a study.

  • Include closed questions after each individual task to measure ease of task completion. This will provide insight on which tasks are harder to complete than others. Also including open-ended questions will allow participants to describe their experience and any issues they encounter.
  • Questions can also be provided after the test as a whole, to allow an overall assessment of the experience. This could include metrics such as customer satisfaction with the product, Net Promoter Score and System Usability Scale, which can be used to benchmark the product over time and against competitors. Again open-ended questions should be used to allow participants to provide feedback and to understand why issues are occurring.
  • Questions can also be included with the intention of profiling participants. These can be helpful to understand the audience and/or to check that the sample matches a known audience.
  • Finally, questions can be used to understand whether participants have understood a task. This can be especially valuable on content sites. For example if you were testing the Australian Tax Office website, the task could be to find the tax rate for a given salary and then follow up with a question to ask what the rate is.
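If you include the System Usability Scale mentioned above, its scoring is standard: ten items rated 1–5, where odd-numbered (positively worded) items contribute (rating − 1), even-numbered (negatively worded) items contribute (5 − rating), and the sum is multiplied by 2.5 to give a 0–100 score. A sketch:

```python
def sus_score(ratings):
    """Score a completed System Usability Scale questionnaire.

    `ratings` is the ten 1-5 responses in question order. Odd-numbered
    items are positively worded, even-numbered items negatively worded.
    """
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based
                for i, r in enumerate(ratings))
    return total * 2.5  # scale the 0-40 raw total to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # → 85.0
```

Scoring it yourself (rather than eyeballing averages of raw item ratings) keeps your benchmark comparable across rounds and against published SUS norms.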

Test assets

What are you actually testing, and how will the URUT tool and participants access the interface? Consider how you are going to set up the URUT tool and the prototype or interface being tested. The responsiveness of the interface you are testing can impact participants’ experience of the product. It is important to make sure that the participant doesn’t need to do any set-up at their end; barriers to completing the study will reduce completion rates. Try to ensure that the interface can be accessed from any computer or device the participant may be using.

Piloting

Testing the study with either a subset of participants or in a preview mode will allow issues with the prototype, technology, tasks or questions to be ironed out. Piloting the study protects against wasting a sample you are paying for, or using up a small, limited sample.

Tools

There are a number of different tools out there, and more are coming onto the market all the time. It is recommended that you explore some of the different options before running a study. Some tools support video recordings of participants; others track click-stream data. A tool like UserZoom collects both video and click-stream data.

Fieldwork

While the survey is in the field, it is important to monitor the data and be available to offer help to participants. Monitoring the data will ensure that everything is working as planned and that you are receiving the data you need to meet your study objectives. Being available via email or phone helps manage the relationship with customers and provides help where it is required.

Analysis

Once you have collected your results, it is analysis time. To begin with, look at some overarching metrics such as overall task completion and customer satisfaction. These can be calculated automatically in tools that measure click-stream data, like UserZoom. This will provide an overall feel for the effectiveness of the product. For video-based tools you will need to watch the sessions and note whether the participants were able to complete each of the tasks.

With an overall feel for the product, look into the individual tasks and identify those that are causing issues. Next you need to find out why. With video-based tools, watch the video of specific tasks and observe behaviour to identify the elements of the interface that are causing the issues. For click-stream services, focus on the pages visited during each task to identify where the issues occurred (i.e. which screens). Also review the open-ended feedback.

Tips for running URUT

Choose the testing platform after you have identified the objectives of the study. It is crucial to select a tool that is fit for purpose and will support your study objectives. Some platforms do not support specific technologies, such as Flash, and have limitations in the way they measure user behaviour. As an example, I worked on a study recently that was evaluating a single-page app. In order to measure user interaction, we needed our developers to insert additional code, because the tool tracked the URL, which did not change as users navigated between different content.

Set clear expectations for participants. Obtaining useful data is dependent on participants understanding what is expected of them. Setting clear expectations up front (during recruitment and at the start of the survey) about what participants are required to do and why the study is being conducted will help ensure success.

Remember that participants won’t receive any assistance during the study. It is crucial to ensure that tasks are clear and user-friendly, and that help is available. Consider how much assistance the URUT tool makes available to participants during the study.

Avoid bias. While all bias cannot be avoided, it is important to remove as much as possible. Randomise the order of tasks, so that learning the interface during the study will not systematically influence performance on later tasks. Task wording can also introduce bias. As discussed, pay attention to task wording to ensure the tasks effectively test the product.
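Most URUT platforms can randomise task order for you; if yours can't, generating a per-participant order up front is trivial. A minimal sketch (the task wording and participant IDs are hypothetical):

```python
import random

tasks = [  # hypothetical task list
    "Reserve accommodation for the first week of September",
    "Find the cancellation policy",
    "Change the booking to two guests",
]

def task_order(participant_id):
    """Deterministic per-participant shuffle, so learning effects
    aren't concentrated on the same 'late' tasks for everyone."""
    rng = random.Random(participant_id)  # seed by participant for reproducibility
    order = tasks[:]                     # copy; keep the master list intact
    rng.shuffle(order)
    return order
```

Seeding by participant ID means the order can be regenerated during analysis, so per-task results can still be lined up across participants.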

Keep participants engaged: Avoid participants quitting your study. Participants are more likely to complete the study if they feel like their feedback is valuable, if the tasks are interesting and the study isn’t too long.

Case study

A large corporate was about to implement a significant change to their site. Multiple rounds of in-person usability testing had been conducted and indicated that the new design would be a success. Due to the scale of the change, the organisation wanted a high degree of confidence that the new design would enhance the experience. We ran a study which involved benchmarking the task completion rates, perceived ease of use and advocacy on the live site. We then repeated these on a prototype of the new design. By utilising larger sample sizes, we had tight confidence intervals on core metrics that provided an accurate picture of the performance of the new design in comparison to the old.
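The "tight confidence intervals" come straight from sample size. A minimal sketch using the normal approximation for a completion-rate proportion (for very small samples a Wilson or exact interval is more appropriate):

```python
import math

def completion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a task completion rate,
    using the normal approximation, clipped to [0, 1]."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# The same 80% completion rate, very different certainty:
print(completion_ci(8, 10))     # small in-person sample: wide interval
print(completion_ci(160, 200))  # URUT-scale sample: tight interval
```

With 200 participants the interval is roughly ±5.5 percentage points, compared with ±25 at n = 10 – which is exactly why the large-sample benchmark gave stakeholders confidence that differences between the old and new designs were real.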

Wrap-up

URUT is a technique that can offer quick, inexpensive and robust usability testing. Of particular value is the ability to use the technique for benchmarking and context-sensitive studies. It is a great tool to have in your bag of research techniques, and can be a great complement to in-person methods. Exploring the different tools on offer and experimenting with the technique is the best way to learn and develop expertise.

Make it clear what is expected of participants, keep your research objectives in mind, and avoid bias. Good luck!

The post How to Run an Unmoderated Remote Usability Test (URUT) appeared first on UX Mastery.

]]>
https://uxmastery.com/how-to-run-an-unmoderated-remote-usability-test-urut/feed/ 3 29600
How to Test an Information Architecture https://uxmastery.com/testing-information-architecture/ https://uxmastery.com/testing-information-architecture/#respond Thu, 03 Jul 2014 22:37:19 +0000 http://uxmastery.com/?p=16061 Usability Testing is a vital part of the Information Architecture process.

This excerpt from "A Practical Guide to Information Architecture" will help you understand when to test, how testing works, what to prepare, and how to interpret your results.

The post How to Test an Information Architecture appeared first on UX Mastery.

]]>
This is an excerpt from A Practical Guide to Information Architecture, 2nd Edition by Donna Spencer, published by UX Mastery.

So, you’re over the main hump of your Information Architecture (IA) design process. You’re happy with the IA you’ve come up with, it fits your content, and should work well for the users.

Before you continue, it’s a great idea to test that assumption. Instead of just thinking and hoping it will work for the users, make sure it actually will.

This is called usability testing. It basically involves putting a draft of something in front of people, asking them to use it to do things they’d normally do, and checking it works for them. When you perform a usability test on something before you start to build it, you can find out what works, what doesn’t work, and what you need to fix. It lets you see things that aren’t going to work and make changes before it’s too late.

Of course, usability testing can be done for anything. Although it’s used a lot for software and websites, I’ve heard of retail outlets setting up test stores specifically to check changes to store layout.

For your project, you probably won’t be doing usability testing on anything quite so large. You’ll want to test your draft IA – your groupings and labelling. Eventually these will form the main way people find information or do tasks, so it’s important to get them right, and to know they’re right.

Usability testing at this point won’t check everything you need to check for your final project. You’ll want to do it again more thoroughly when you’ve designed the navigation, page layouts and content. However, the more things you try to test at once, the harder it is to figure out what aspect wasn’t working – was the label obscure, or was the page so busy that people couldn’t see the navigation bar at all? Usability testing just your IA lets you know that your groups and labels are working well. The other reason for testing your IA is that navigation and content have to work with it, so it’s best to find any mistakes before you start working on these.

Usability testing isn’t about checking whether the people can use your website. It is about checking that your website lets them do what they need to do. It’s a subtle but important difference, and one to keep in mind when you’re testing. You are testing your work, not people’s abilities.

What you Want to Learn

Before you start testing your IA, think about what you want to get out of your test. This will help you decide how to run it and who to involve.

The main thing you’ll be trying to learn is that your groups are sensible and your labels are good. You may want to check that your overall approach is okay (e.g. if you’ve used an audience-based classification scheme, that people expect to see your information in that way and understand the audience groups). Or you may not want to test the IA as a whole, but instead dive deep and just check a part of the IA that was hard to design or where stakeholders couldn’t agree on the approach.

When to Test

One of the biggest advantages of the approach I’m about to describe is that it’s very easy to do. It doesn’t take a lot of preparation and you get results quickly. Depending on your situation, you could set up a test one morning and have results that afternoon. It really is that easy to quickly test your IA. It’s great to test quickly, make some changes and test again. So you really can test as soon as you have a draft IA you’re happy with.

If you didn’t get a chance to do good user research earlier in your project (and I know this happens for all sorts of reasons), you could take this opportunity to test your IA and gather some extra research at the same time. Even if you undertook some research early on, chances are some things came up in between that you wish you knew more about, or assumptions that you’d like to follow up. Combine an IA test with some simple research to help you make better decisions later in the project.

You could time your test with something else. (It takes so little time that it’s easy to slot in with other activities.) If you’re going to be communicating with your audience in some other way – perhaps a stakeholder or staff meeting to gather requirements for another part of the site – you could include a simple test of the IA alongside this.

How it Works

This type of testing is quite simple. You’re going to ask people how they would do a particular task or look for particular information using your new IA.

For example, if I wanted to test the IA of my conference website, I could ask people to find out some key things like how much it costs, what events are on particular days, and what they’ll get out of the conference. I’m not really asking them to find the answers, as the content won’t be available yet, but asking them where they would look for the information. I show them the IA step-by-step and ask them to indicate where they’d look – it’s that easy.

This testing method works best for a hierarchy pattern (see chapter 16), simple database structures and combination hierarchy/databases (particularly for checking the top couple of levels). If you’ve used the subsites structure, you may want to test from the top levels, or test one or more of the subsites. It’s also great for testing the focused entry points structure, as you can see which way different people approach information.

It’s unlikely to work for most wiki structures – they don’t really have an IA to test. You can test these later when you have some content ready (see chapter 24 for more about this).

It can work well for the top couple of levels of a catalog, but not so much for the deeper levels – especially ones that contain a lot of products.

Preparation

Before you start, you need three things:

1. A Draft IA

The first thing you’ll need is your draft IA.

It’s okay if your IA is still in draft form. This is a perfect way to check what parts of the IA work and what parts don’t. You don’t have to test just one draft IA – you could try out different versions or different approaches.

It’s also okay for your draft IA to list things in more than one place if you’re unsure where you’ll put them. This helps people feel like they’ve found the ‘right answer’, and gives you good information about the paths people will follow.

2. Scenarios

The second thing you’ll need is a set of tasks or a list of things you know people may need to look for. In usability testing we call these scenarios, and they represent what people will do and look for during the test.

When you’re writing these out, make sure they use the terminology people use and provide some sort of background or context to make sense of the task. In particular, you should avoid using the same terms as you have in your IA. If you do, it just becomes a treasure hunt for the exact word in your scenario.

For example:

  • If you were designing a furniture website don’t say “You need a new bookcase”, because people will start looking explicitly for bookcases. Say “You’ve just moved house and have books in boxes everywhere” instead, which gets them thinking about what they’d do about that problem.
  • If you have an intranet, don’t say “Find the maternity leave policy”. Instead, say something like “You’ve just found out you’re going to have a new baby. What are you entitled to?”.
  • On your accounting software, don’t say “Raise a receivable invoice”. Try something like “You just delivered a report to your client and it’s time to ask them for money”.

Using these more realistic scenarios helps ensure people think about the task they may need to do, rather than just hunting for the exact word you’re giving them.

There’s no right number of scenarios to use. You want enough to cover a good proportion of your IA (or the part you are most interested in exploring) and a good proportion of the tasks people will do (so you can test that the main tasks are easy to complete).

As people go through the scenarios they’ll see parts of the IA, and remember where they saw things. This is okay, as people do learn and remember in real life. But after a while people start to focus more on remembering where they saw something than thinking about where they would look for it.

I’ve found the best approach to balancing these two issues is to have enough scenarios to cover your content and normal tasks, and enough participants so they don’t complete every scenario – they just do enough to keep the activity fresh.

It’s also good to randomise your scenarios (if you write them out on index cards as I describe below, just shuffle them) so each person does them in a different order. This will make sure your test isn’t biased by the order.
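If you're managing scenarios digitally rather than on index cards, a few lines of code can do the shuffling and dealing for you. Here's a minimal Python sketch; the scenario letters, participant names and cards-per-person count are placeholders for illustration, not prescribed by the method:

```python
import random

def assign_scenarios(scenarios, participants, per_person):
    """Deal a shuffled subset of scenarios to each participant, so no one
    works through every card and each person sees a fresh order."""
    assignments = {}
    for person in participants:
        deck = scenarios[:]   # copy, so each participant gets an independent shuffle
        random.shuffle(deck)  # randomise the order per participant
        assignments[person] = deck[:per_person]
    return assignments

# Eight scenarios (A-H), three participants, five scenarios each
plan = assign_scenarios(list("ABCDEFGH"), ["P1", "P2", "P3"], per_person=5)
```

Because each participant sees only a subset, every scenario still gets covered across the group while no individual session drags on long enough for the "remembering where I saw it" effect to dominate.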

3. Participants

The third thing you’ll need is a group of people who can give a small amount of time to be involved. As we discussed in chapter 6, the participants should be people who will be using your information.

The most important thing to note for this type of usability testing is that it only takes a small amount of time and commitment. Even a couple of minutes of input from people will give you some very valuable data. So the way you arrange your participants will depend a lot on your project situation. For example, when I do intranet work, I do face-to-face tests and basically arrange to see people at their desks. For website work, I’ll set up an online test and invite people via email or via the website in question. If you can keep it low effort for people, it will be much easier to get them involved.

For more information about getting people involved, see the section on recruiting participants in chapter 6.

Method

The two main ways of running this type of usability test are face-to-face or online (yes, you can do both). The advantage of running face-to-face testing is you can talk to people. Just like other user research activities, you can ask people why they made particular decisions and what terms meant for them. This type of feedback helps you understand why your IA is working (or not working), instead of just letting you know whether it is working.

Of course, it isn’t always easy working face-to-face, which is where online testing can be particularly useful. The main advantage of online testing is you can involve people you wouldn’t otherwise be able to meet face-to-face. And you can often involve more people than you would ever reach face-to-face, giving you a lot of useful information. Each method will involve different preparation.

On paper

Figure 19 – 1. Scenario cards and hierarchy cards


If you decide on face-to-face testing, you’ll need to prepare your IA and your scenarios. My favourite way of doing this is with a set of index cards.

For the IA:

  • Write the top level categories on an index card. If you have more categories than will fit on one card, just continue on another. Write large enough that you can read it at arm’s length. Number each category 1, 2, 3, 4, 5, etc.
  • For each top level category, write down the second level categories in the same way. Again, if they don’t fit on one card, just continue on another. It can be handy to use a different coloured card (so you can handle them more easily) but it’s by no means essential. Again number these, this time using the number that represents the category above, followed by a number for the current level (e.g. 1.1, 1.2, 1.3).
  • Continue for all categories and all levels.
  • For the scenarios, write each scenario on one card (it can be handy to use a different coloured card for these too). I also usually label these as A-Z in the corner of the card. (The numbering and lettering system helps with record keeping and analysis.)
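If your draft IA already lives in a spreadsheet or outline, you can generate the card numbers automatically instead of working them out by hand. This is one possible sketch in Python of the same numbering scheme; the conference-site categories are made up for illustration:

```python
def number_cards(ia, prefix=""):
    """Walk a nested dict of categories and emit (number, label) pairs
    using the 1, 1.1, 1.2.1 numbering scheme described above."""
    cards = []
    for i, (label, children) in enumerate(ia.items(), start=1):
        number = f"{prefix}.{i}" if prefix else str(i)
        cards.append((number, label))
        if children:  # recurse into sub-categories
            cards.extend(number_cards(children, number))
    return cards

# A made-up conference-site IA for illustration
ia = {"About": {}, "Programme": {"Workshops": {}, "Talks": {}}, "Register": {}}
# number_cards(ia) → [("1", "About"), ("2", "Programme"),
#                     ("2.1", "Workshops"), ("2.2", "Talks"), ("3", "Register")]
```

The numbers line up with the cards you write out, so whichever way you record the IA, the paths you note down during testing stay consistent.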

Come up with a way to explain the exercise. This helps you introduce the activity efficiently and helps people understand what you’re about to ask. I usually say something like:

“Thanks for agreeing to help us out today. We’ve been working on improving [whatever your site is] and would like to check that what we’ve come up with is sensible for the people who will have to use it.

I’m going to ask you to tell me where you would look for particular information. On this set of cards I have a list of things people do with this [site]. I’m going to show you one of these, and then show you a list of words that may end up being navigation on the [site]. I’ll ask you to tell me where you would look first then show you another level and again ask you to tell me where you’d look. If it doesn’t look right we’ll go back, but we won’t do more than two choices – after all, it’s not a scavenger hunt. Don’t worry if this sounds strange – once we’ve done one, you’ll see what I mean. And if there are tasks on those cards (the scenarios) you wouldn’t do, or that don’t make sense, just skip them.”

Bundle up your cards (plus some blanks and a marker) and you’re ready to go.

Running the Test

When you’re with a participant, you pretty much run through the test the same way you described in your introduction. So first show them a scenario (or read it out if you like), then show them the top level card. Ask them to choose a group. For that group, show them the next level card, and so on until there is nowhere further to go.

If they choose a group and feel as if they’ve made the wrong choice (usually this happens because they don’t see anything that helps at the next level), go back one level and ask them to choose again. But just as you outlined in the introduction, only do this twice. After all, you want to know where they would look, not get them to hunt down the ‘answer’ to the scenario.

Run through the scenarios you have planned for this participant. If you feel like they are trying to remember where they saw answers instead of thinking about what they’re looking for, that’s a good time to wrap up.

If your participant needed to go backwards at any step, you may like to ask them what happened. Ask if they remember why they chose the particular group and what they thought would be in it. Be very careful not to make them think they’ve made a ‘mistake’ – remember, you’re checking how good the IA is, not how good the participant is. But by asking you’ll learn very useful information about what people think groups are about and how they expected to look for information.

When you’re finished, thank the participant for their help and let them know what happens next.

When I’m using this method to test an IA, I sometimes notice a consistent problem in the IA – usually a label that just doesn’t make sense. That’s why I carry spare index cards and a marker. Rather than continue testing something I know isn’t working, I’ll change the label (by writing a new card) and continue testing to see if the new label works any better.

Recording Results

As you work, record the participant’s answers (this is why we put numbers and letters on the index cards). I like to have someone with me taking notes, as it can be tricky juggling cards and writing down selections (the test moves pretty quickly). All you need to do is write down the path for each scenario.

For example:

  • A: 1, 1.2, 1.2.1 (no), 1.2, 1.2.6 (happy)
  • B: 7, 7.6, 7.6.5

A helper can also write down the comments people make as they do the test, which are usually both interesting and useful.

After the test, I usually record the results in a big spreadsheet. I put the scenarios across the top, and the IA down the sides. Then I simply go back through all the results and tally where people looked for each scenario. Because I let people look in two places, I usually mark first choices with a big X and second choices with a small x.

It’s a simple process, but it very quickly shows you patterns. For some scenarios, you may find there was a consistent approach. For some, there may be less consistency. Sometimes you’ll find consistent answers that were quite different to what you thought would be the ‘right’ one.
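The same tally can be automated if you've captured results electronically. The sketch below is just one way to do it in Python, assuming each observation has been reduced to a scenario letter, a final location number and whether the participant had to backtrack; the big-X/small-x convention mirrors the spreadsheet described above:

```python
from collections import defaultdict

def tally(observations):
    """Cross-tabulate scenarios against final locations: 'X' counts
    first-attempt destinations, 'x' counts destinations reached after
    backtracking once.

    observations: iterable of (scenario, final_location, backtracked) tuples.
    """
    grid = defaultdict(lambda: defaultdict(lambda: {"X": 0, "x": 0}))
    for scenario, location, backtracked in observations:
        mark = "x" if backtracked else "X"
        grid[scenario][location][mark] += 1
    return grid

obs = [
    ("A", "1.2.6", True),   # participant backtracked, then ended at 1.2.6
    ("A", "1.2.6", True),
    ("B", "7.6.5", False),  # went straight there on the first choice
]
grid = tally(obs)
# grid["A"]["1.2.6"] → {"X": 0, "x": 2}
```

Reading down a scenario's row then shows at a glance whether people converged on one location or scattered across several.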

Online

Preparation using an online tool will vary, depending on the tool. At the time of writing I know of two online tools that focus specifically on IA testing:

Both let you test the IA as a hierarchical tree. I usually create an IA before I know anything about navigation (something we discuss in chapter 23) so the tree approach works well for me, but it would be easy to mock-up a navigation approach and test with a bit more context.

Both follow the same idea as the face-to-face test. You upload a hierarchy (your IA) and a set of scenarios, write an introduction text, and send it out via email or include a link to it on your website.

Figure 19 – 2. Treejack (https://optimalworkshop.com/treejack.htm)

Figure 19 – 3. The results from a Treejack test

Interpreting Results

It’s usually pretty easy to interpret the results of this activity. The spreadsheet I described above, or the outputs from the tools, show a fairly clear picture of what worked and what didn’t. When interpreting the results, think not only about what happened but why it happened.

First, think about whether the overall approach worked well. Did the test show you’ve chosen a good basic approach to your IA (particularly if you’ve chosen something like an audience or task-based classification scheme)?

Identify the scenarios where people looked in the same place you thought the content would be. You can probably be confident this will be a good location for that content.

For the parts that worked well, think about what made them work well and check that they worked well because the IA is suitable. You need to be a bit cynical here and make sure the scenarios didn’t work well because you used leading language and people word-matched, or because they always did that scenario fifth and had learned the IA.

When the results weren’t what you expected, think about what happened:

  • Did everyone think the information would be in a different place to where you thought you’d put it? If so, consider putting the information there. You may need to tweak the IA a little to fit it.
  • Were some of your labels just not clear enough, or did people interpret them in a way different to how you intended? You may need to revisit the labels, the placement of content within them, or the categories themselves.

When you’ve finished analysing what worked and what didn’t, go back to the process we discussed in the last chapter. Tweak the IA (or toss it out entirely and take a new approach) and test it again. Keep doing this until you’re happy you have something that’s going to work.

Download your copy of the complete ebook “A Practical Guide to Information Architecture: 2nd Edition” now from the UX Mastery store. Available in mobi, ePub and PDF formats.

The post How to Test an Information Architecture appeared first on UX Mastery.

Transcript: Ask the UXperts—Usability Testing with Gerry Gaffney (18 June 2014)

“Ask the UXperts” is a casual chat-based session featuring a different expert each time. It is One Hour. One Expert. All your Questions Answered.

In today’s session we featured the wonderful Gerry Gaffney and we discussed Usability Testing. Here’s a transcript of the session.
Today I hosted an interesting addition to our series of Ask the UXperts sessions. If you haven’t heard of these sessions before,  Ask the UXperts can be summarised as One Expert. One Hour. All your questions answered. And that was certainly the case today.

Our expert was the wonderful Gerry Gaffney, and we discussed Usability Testing.

Gerry is the company founder and lead consultant at infodesign.com.au and has lectured undergraduate and postgraduate students in various universities on usability and user experience design. He is passionate about the need to improve the quality of the relationship between humans and the products we use and runs the User Experience Podcast uxpod.com. Gerry was the ultimate expert today, not only answering questions, but asking them in return. The chat was informal and relaxed, but a lot was covered.

Timezones are the bane of my life and they mean that someone always misses out, so if that was you this time (or if you’d just like to revisit what we discussed or check out some of the resources mentioned), you’ll find the full transcript below. If you’d like further information about future sessions, make sure you join our community where I’ll keep you in the loop.

For those of you that would like to see what we talked about today, here is a full transcript for your perusal. Grab a coffee, it’s a long one…

HAWK
Gerry Gaffney: Hey!
 
A tweet will go out shortly and people will start joining us
HAWK
Justin: If you want Gerry’s undivided attention on Usability Testing, now would be a good time to take advantage of that ;)
 
I’m about to undertake my first usability test
Gerry G.
do you do usability testing?
 
awesome!
Justin T.
I’m 6 months into a gig – part time
Gerry G.
are you excited or nervous about doing it?
Justin T.
I’ve created a prototype and I’m going to host it through Dropbox.
 
Not particularly nervous. More about the knowledge requirement
Gerry G.
is it one-on-one, or remote testing?
Justin T.
Remote
 
remote through a closed FB group and in person
 
Trying out the guerilla testing as the client doesn’t have the budget for professional testing
Gerry G.
For the remote will it be moderated (i.e. will you be online as people complete tasks) or unmoderated?
 
Budget is often the issue!
Justin T.
Still to be worked out
Gerry G.
I think though that the single most powerful way of selling usability testing is having someone in the business observe a test in person
Justin T.
There will have to be some sort of script that the users will just read vs a chat system of sorts using free tools
Gerry G.
Whatever works is my rule of thumb.
Justin T.
yeah – This is one of the first cases of testing being involved in a ‘design’ project
 
I’m all about getting things done and learning from it
Gerry G.
Make sure you do a pilot because you’ll likely find something you’ve overlooked.
 
At least, I always overlook something!
Justin T.
yeah – I’m going to test and run a complete session in house first. Polish then set loose
Gerry G.
Hi Keith T. Justin T and I are just chatting about remote testing.
Gerry G.
“Polish then set loose”. Love it!
Keith T.
Thanks
Justin T.
Are you aware of any software to ‘heatmap’ or track through clicks of an Axure prototye?
Gerry G.
Gosh, I don’t know, Justin T.
Justin T.
reading your review materials on your website Gerry
Keith T.
LOL – Just trying to finish off a last minute call
Gerry G.
Have you had a look at the book “Beyond the Usability Lab”?
Keith T.
I’ll be with you as soon as I can
Justin T.
seems like it must have been someone else asking the question before me
 
not yet
Justin T.
I usually ask the question (of the internet) then build it
Gerry G.
In that book the authors talk about specific tools and compare them.
Justin T.
I made the first ‘paper prototype’ as a clickable pdf, then was told it must be clickable – so I learnt Axure and made one
HAWK
Justin Thor Hoyer: Ask the UXperts next week is with a guy that knows all about heat mapping, so he may know the answer to that one
Justin T.
cool
 
ahh, thanks
Gerry G.
Axure is great. There are so many tools out there now. My most recent prototype was using HTML/CSS and Twitter’s Bootstrap framework
 
But because I’m not really technical it took longer than a “real programmer” would do it in.
 
Thanks Hawk, that’s cool, we’re just chatting really… Happy for you to make it more formal at your leisure!
Justin T.
drag and drop is definitely the future. Especially as we want to have the speed of paper, but the interactivity of the web
 
Is there an agenda for this session?
Gerry G.
I still use paper for the early stuff. It’s just so quick and valuable to get feedback early
HAWK
We don’t need to make it official at all, if you guys are happy. All I usually do is ask the UXpert to give us a brief bio and an intro to the subject. Then I throw it open to the group to ask questions. If things get busy, I queue them for Gerry.
Justin T.
cool
Gerry G.

Sure! I’ve been running usability tests for more than 15 years now.

I do UX work in commercial and government, and occasionally lecture (less so now).

I have a company called Information & Design and I run the User Experience podcast (uxpod.com)

HAWK
And a brief intro to UT would be useful for the transcript
Gerry G.
Usability testing is probably the thing that taught me more than anything else I’ve done!
 
Usability testing is a technique for ensuring that the intended users of a system can carry out the intended tasks efficiently, effectively and satisfactorily.
 
that’s a very old definition I wrote maybe 10 years ago.
 
There are lots and lots of variants on the technique.
HAWK
Great definition though. KISS.
Gerry G.
And you can be as informal or formal as you like.
HAWK
Do you have a preference, or does it depend on the audience?
Gerry G.
There are hardly any really serious mistakes you can make, in my opinion
 
Depends on the circumstances, for me
 
I like to go for the lowest cost option
 
that will do the job effectively
 
and the lowest resolution appropriate for the tasks. So that might be paper, might be a full-fledged prototype, might be somewhere in between
Justin T.
what ‘package’ do you give the client as a summary of results of the testing? Documentation, Video, or just a one pager
HAWK
For those of you that just joined, this is an informal session and you are welcome to jump in with questions at any time. You don’t have to wait for a gap – I’ll queue questions if necessary
Gerry G.
Depends on the client. In a very formal environment, a very formal report listing issues and recommendations and describing the methodology
 
For a client I know well, something much more punchy
 
And if I’m in an Agile development process, might be a one-pager and briefing to the team
 
My preference is short & sweet
 
rather than a big document that sends everyone to sleep.
Justin T.
Minimum viable in – most important change out
Gerry G.
exactly
 
I think it can be a mistake to either over-engineer or under-engineer your report.
Justin T.
How long did it take you to specialise in your career from ‘All UX’ to usability testing?
Gerry G.
Justin Thor Hoyer, actually it was kind of the other way around
 
Usability testing was one of the first things I did.
 
It was a real eye-opener for me, as well.
Justin T.
Gerry Gaffney:
 
nice ux pod site
Gerry G.
Justin Thor Hoyer, thanks!
Justin T.
how so?
Gerry G.
Well, before that I’d only informally watched people using technology, so I knew there were problems but I didn’t really understand the extent of how bad things were for most people
 
Ordinary people were unable to use technology that was supposedly designed to help them.
 
And people at the time I think were more likely to blame themselves
Justin T.
What’s the most interesting project you’ve worked on recently?
Gerry G.
We’d hear a lot of “I’m no good with computers”
Justin T.
It’s all just a mental model and relatable context isn’t it.
Gerry G.
Justin Thor Hoyer, most interesting project is I just finished the design of a new Jury Management System
 
That involved staff screens on desktop, forms, letters and mobile-first for jurors
Justin T.
thinking about taking on the election process next? we need it
Gerry G.
Justin Thor Hoyer: Whitney Quesenbery in US has started a “Center for Civic Design” and she’s an expert on election UX
 
It’s her thing.
 
Might be worth looking her up: wqusability
Justin T.
someone should own it
Gerry G.
Yeah :>)
Justin T.
it’s an interesting thing. What problem is worth solving vs what problem can you solve today?
HAWK
Gerry Gaffney: So I have a question. You said that there are ‘hardly any’ serious mistakes that a person could make. Can you tell us what one of those mistakes would be?
Gerry G.
Justin Thor Hoyer: that’s right
 
HAWK: OK, the number 1 is… (drum roll)
 
testing with the wrong participants.
 
I’ve had quite a few projects where people claimed to have tested the product but it turned out to be colleagues or people who were experts or otherwise inappropriate.
 
The key mantra is “representative users”
Kory
have you ever done ux testing where you used virtual reality goggles?
Gerry G.
Um, no. I’ve done a little bit of testing with eye-trackers, but not enough to really talk about.
 
Kory, have you got such a project? Sounds like fun!
Amando
Any rule of thumb for the quantity of representative users?
Gerry G.
@AM the quantity of users is a real political question in many areas. here’s my take:
Kory
have a project coming up related to the auto industry & we need to simulate a driving experience
Gerry G.
If you’re testing a working prototype, around 6 people is *generally* good enough to identify the majority of issues
 
However, if you have wildly different user groups (say an airline booking site that has to support novices and expert travel agents) you may need to do more.
 
Kory: I’ve used Monash Uni’s driving simulator at MUARC (Monash Uni Accident Research Centre).
 
that might be worth looking into…
 
But even testing with 1 user is good
 
And for very early design there’s a fine line between testing that’s summative and testing that’s formative
 
So summative or evaluative testing will be about: Is this product working as we expect?
 
and formative will be about: What can I learn from my users to help me design this product?
 
I find in reality that slipping between the two “modes” can be useful
 
although there are times you want to do one or the other.
Kory
Good research article on how many users to test with: http://www.simplifyinginterfaces.com/wp-content…
Gerry G.
Kory: I know there’s lots of research in driving and UX, both from a safety perspective and from in-car technology perspective
Amando
At what stage do you allow rep users to test? I assume you use a prototype tool? What tool do you use for wire framing prior to design prototyping?
Baldur K.
Any pointers in user testing using paper mock-ups?
Gerry G.
@Amando I get representative users from day 1, if at all possible
Kory
We need to do this testing in-house. Looking into driving simulators.
Gerry G.
Amando: @Baldur for prototypes, I use different tools
 
Axure, Balsamiq
 
Sometimes just clickable PDFs made in Powerpoint
 
I love OmniGraffle
 
Also HTML/CSS
 
To answer @Baldur K, testing with paper…
Todd
proto.io is a great tool to make high interaction fidelity prototypes
Justin T.
I know it’s a print tool, but to me Indesign is much better/faster than omnigraffle.. Thoughts?
Gerry G.
I think paper is a great way to go. A lot of people say otherwise, but my experience is that it’s quick, it’s cheap and it’s really effective.
Todd
full iOS app simulation. no coding
 
i use it to test with users and use reflector to mirror on my screen so they can use the app on their own phone, naturally
Gerry G.
Paper allows you to do cool things like mockups of physical devices. I’ve given people “phones” that are just foam board with screens made of paper covered in acetate and they buy right into the experience and give really good feedback because it’s obviously exploratory and not a “final” version.
 
I haven’t used proto.io
 
We’re kind of spoiled for choice.
Todd
totally.
 
more my week it seems
Gerry G.
I find I’m learning a new tool at least every 6 months.
 
which on the upside is great
 
and on the downside is a pain :>)
 
I really enjoyed making an IVR mockup using Twitter’s bootstrap framework
 
But to me, the real value in usability testing is not the tools…
 
It’s observing what I call “real people” interacting with our stuff
Todd
the more motion design becomes important, the more that prototyping interactive animation becomes important to get feedback on how the app feels.
Gerry G.
It’s so powerful to see the assumptions you’ve made being validated or disproved
 
Todd, absolutely, and you can’t do those sorts of interactions with paper
Baldur K.
That’s great idea ( the foam phone ) … definitely gives them more “natural” sense of using the product
Gerry G.
Dan Saffer spoke about those sorts of interactive designs, and how you can’t do them on paper, in http://uxpod.com/microinteractions-an-interview…
 
Baldur Kristjánsson: yeah, the foam phone is awesome
 
Same for tablet, I’ve done an Android app using that technique and it got really good feedback
 
One thing about physical mockups….
 
is that I was really really surprised, on several projects, about how much they impress management. It’s like, suddenly they realise, “Oh, so *that’s* what these guys are doing! Now I get it!”
 
So great for getting buy-in
 
Kory, have you chosen a driving simulator yet?
Gerry G.
Another technique I really like is “RITE” –
 
rapid iterative testing & evaluation. Whereby you do a prototype, test it, refine it immediately and test next day with next user/s
Justin T.
Gerry G.
There’s an article on RITE here: http://www.usabilityprofessionals.org/uxmagazin…
 
Do people want to talk about a particular aspect or problem? (Otherwise I can just do stream-of-consciousness…)
 
OK, so I’ll go on to some general thoughts and people can jump in with questions or whatever…
HAWK
I have another question… what are the pros and cons of remote vs one on one testing?
Justin T.
Process > Case studies > Trends… whatever works for you
Gerry G.
I think having a formal plan is good.
 
HAWK: I think you can’t beat one-on-one for depth
 
But remote of course allows you to a) get to people who you couldn’t otherwise (I did some testing with mining engineers and used this)
 
and b) you can get a lot more participants if you go for unmoderated
 
For example, I recently tested a tree structure using Optimal Workshop’s “Treejack”
 
really easy to set up, and you can send out invitations to dozens or hundreds of people and voila, you’ve tested your categorisation scheme
 
(You still have to fix it of course!)
 
I think a key problem with remote is if people need to do setup
 
For example, if people have to install something or enable something, you can almost guarantee you will have some problems.
 
So there’s definitely a place for remote – see the book “Beyond the Usability Lab” for great info
Justin T.
Could you not have a setup (front page) then click to proceed, and a ‘Thanks for participating – now click here to claim your prize’ type scenario?
 
not so in depth, but doable
Gerry G.
Justin Thor Hoyer: indeed you can
 
Although it’s hard to know what’s *actually* going on.
 
So it’s great for tests where you want to know where people went, or what task completion rates were
 
but you miss out on the one-on-one interactions
Justin T.
yeah – heat map and tracking links…
 
so do both (time and funds allowing)
Gerry G.
http://uxpod.com/beyond-the-usability-lab-an-in… is worth a read/listen on remote testing
 
Justin Thor Hoyer:
 
On any given project, I might mix both methods, if I have both the need and the resources.
Yasmine
What online tools/software do you use to help with remote usability testing?
Gerry G.
I guess I tend to feel that one-on-one is so much more valuable that it tends to be my favourite
Justin T.
yeah – I know (I’m excitable)
Gerry G.
Yasmine: I’ve been fairly simple in my approach – Treejack, which I already mentioned, but “GoToMeeting” or any tool that allows screen-share is great if you want to do remote *moderated* testing
Justin T.
Thanks heaps Gerry – gotta run. I’ve followed you on Twitter and I’ll check out the podcasts.
Gerry G.
That way you have someone in a remote location but you can observe their interaction in real time and talk to them about what they’re doing.
 
Cheers @Justin T
Yasmine
Gerry – Thanks for the response, came in late, and can’t see the conversation that had happened before I came in.
Gerry G.
Ah, I know that @HAWK is putting up a transcript later
HAWK
Yasmine: I’ll post a full transcript up on uxmastery later today so you can see what you missed
Yasmine
I appreciate it!
Gerry G.
any other specific questions?
 
Want to talk about planning, scripts, questionnaires, tasks, etc…
 
disasters…
George
What do you think about a short “screening” pre chat with candidates to check they’re right for the full testing session?
 
Is that always necessary / useful / essential / difficult for user recruiters?
Gerry G.
Hi George, screening is important. Depends on the circumstance, but I firstly use an agency to recruit whenever possible
 
Not always necessary to use recruiters
 
But it saves an awful lot of time and effort, and you don’t end up getting people’s cousins!
George
What have you found to be the strongest alternatives to using recruiters? (Apologies if covering previous ground!)
Gerry G.
George: well, connections can be good, but if you’re not careful you can end up with non-representative users – and that’s a real no-no in my opinion
 
Well, non-representative can sometimes be OK, but if you test with experts you may well get a false sense of security
 
you think your product is OK but in the real world it bombs
George
Yes, that makes sense.
Gerry G.
But you can’t always use recruiters
Todd
Gerry, …, I need to head out, but thanks so much for your insights. btw, here’s the link to http://www.proto.io to check out for creating such high interaction fidelity prototypes.
Gerry G.
for example, I needed some fairly senior judges recently, and an agency wasn’t going to get them
George
And I would imagine that successful relationships with recruiters depend on good briefs to them. Any tips on that?
Gerry G.
Todd: thanks!
 
George: for the brief…
 
1. Write down the key criteria (might be age, might be employment area, usage of a particular service or whatever), make a table of those things, make sure your client is across it and then send it off to the recruiter.
George
Cool, thanks.
Gerry G.
Make sure you have a phone conversation with the recruiter, and also review any script they are using to recruit
 
And then I normally have a questionnaire during testing that validates whether we got the right people
Amando
If developing a native mobile app, do you prototype both Android and Windows for testing?
Gerry G.
Generally I’ve found that if you recruit 10 people, 9 will be really useful and one will be weird but interesting!
George
Thanks, Gerry.
Gerry G.
Amando: I wouldn’t (generally) see the need to test on multiple platforms to a *great* extent
 
For example…
Harlequin
Funny, I’m in the middle of finding a new opportunity…trying to make yourself that one interesting person =)
Gerry G.
Our jury management service is HTML5/CSS and nominally runs on any mobile device. I used an Android (rooted) to record, so that was my first choice
 
Harlequin :>)
Matthew M.
I love creating highlight reels from usability testing sessions (for internal use of course). I always make sure I include one of the “weird” ones ;-)
Gerry G.
But I also tested on iPad a few times.
 
as a kind of backup.
 
Matthew Magain, love highlight videos!
 
But I try to make sure that I can put my hand on my heart and say “here is representative footage”
 
Sometimes the weird ones are just too weird
 
Incidentally, Tedesco & Tranquada have a new book called “The Moderator’s Survival Guide” which talks about a lot of the things that can happen!
Matthew M.
Indeed! With great power comes great responsibility. Responsible curation can be challenging!
Gerry G.
They’ll be on uxpod.com in a few weeks.
Harlequin
Q: If you’re coming into a company that hasn’t had any designer as its lead UX/UI person, what’s a good way to sell yourself with your “here’s how I can help you” speech?
Gerry G.
Harlequin: I think the key thing is to do something useful.
 
That might be helping to review a screen, or something fairly mundane, but demonstrated value is more powerful than any spiel
Harlequin
Noting that this is also a product I would not have seen until said interview…it’s not public yet…
Gerry G.
A client talked about this recently on: http://uxpod.com/ux-a-client-perspective/
Harlequin
I do have examples of previous work for sure…a portfolio and stories behind them are important…
 
I’ll check it out, thx
Gerry G.
Harlequin: and another powerful thing is offer to run a test… say with only 1 or 2 people, and have key staff observe
 
there’s nothing as powerful, in my opinion, as having people observe real humans interacting with their product
 
That’s the real fun in usability testing
Harlequin
As in a “what I’ve seen so far” with their product? A quick, non-rude feedback session?
Gerry G.
It can be tedious setting up and planning and doing the number crunching afterwards…
 
Harlequin: as long as they respect you, then yes, but there’s also a danger if they see it as being in any way an attack
 
Not that you’d attack them of course!
 
But it can put people on the defensive.
Harlequin
Yeah, keeping it on the down-low for sure :)
Gerry G.
What about “this is a great product, let’s test it with a few users”?
 
Do it really really cheaply so nobody can object on budget
danielle.v
Watching users struggle does make a big impact. I’m finding it hard to improve even small things such as horrendous error messages or instructions. The IT or project team feel like they make sense and don’t understand why it would be confusing to Joe Bloggs.
Harlequin
I like that…
Gerry G.
I mean, you can run a test with a couple of hundred dollars TOPS.
 
unless you need to recruit nuclear engineers
 
I think we’re about at wrap-up time – any last-minute comments?
Harlequin
Adding you on Twitter, realized I saw this in a retweet… lol
HAWK
That was a really interesting chat, thanks so much for your time Gerry.
Matthew M.
Thanks Gerry, thanks all!

The post Transcript: <em>Ask the UXperts</em>—Usability Testing with Gerry Gaffney appeared first on UX Mastery.
