Beyond the Usability Lab
Conducting Large-scale Online User Experience Studies
- 1st Edition - December 21, 2009
- Authors: Bill Albert, Tom Tullis, Donna Tedesco
- Language: English
- Paperback ISBN: 978-0-12-374892-8
- eBook ISBN: 978-0-08-095385-4
Usability testing and user experience research typically take place in a controlled lab with small groups. While this type of testing is essential to user experience design, more companies are also looking to test large sample sizes so they can compare data across specific user populations and see how experiences differ across user groups. But few usability professionals have experience in setting up these studies, analyzing the data, and presenting it in effective ways. Online usability testing offers the solution by allowing testers to elicit feedback simultaneously from thousands of users. Beyond the Usability Lab offers tried and tested methodologies for conducting online usability studies. It gives practitioners the guidance they need to collect a wealth of data through cost-effective, efficient, and reliable practices. The reader will develop a solid understanding of the capabilities of online usability testing, when it is (and is not) appropriate to use, and the various types of online usability testing techniques.
- The first guide for conducting large-scale user experience research using the internet
- Presents how to conduct online tests with thousands of participants, from start to finish
- Outlines essential tips for online studies to ensure cost-efficient and reliable results
Usability specialists, user experience researchers, market researchers, information architects, interaction designers, business analysts, and managers who are looking to learn about the capabilities of, and gain experience with, online usability testing
1. Introduction to online usability methods
   a. What is online usability, and how it differs from traditional usability methods
   b. Examples of different types of online usability studies
   c. Pros and cons of online and non-online methods
   d. When to use (and not use) online methods
   e. Combining online studies with lab testing

Chapter 1 provides an overview of online usability testing. Special attention will be paid to how it differs from traditional usability methods (including remote testing). There will be an in-depth discussion of the pros and cons of online testing, and when to use and not use online methods. We will provide real-world examples to highlight the value of this method. We will also discuss ways to complement traditional usability testing with online testing. Our intention is that the reader will be in a position to determine whether an online usability study is appropriate for their organization.

2. Planning your study
   a. Study goals
   b. Budgets and timeline
   c. Technology options
   d. Participant recruiting and panels
   e. Sample size
   f. Panel options
   g. Sampling strategy
   h. Study duration
   i. Participant compensation

Chapter 2 focuses on all the activities and decisions that need to take place prior to actually putting the survey together. The first three activities (goals, budgets/timelines, and technology options) are all essential to accurately scope an online study. The next part of this chapter focuses on finding the right number of targeted participants. This includes a discussion of research panels, sample size determination, and sampling strategies. The chapter will conclude with a discussion of estimating study duration and participant compensation.

3. Designing your study
   a. Introducing the survey
   b. Screener questions
   c. Starter questions
   d. Constructing tasks
   e. Post-task questions and metrics
   f. Post-session questions and metrics
   g. Branching
   h. Progress indicators and navigation
   i. Speed traps
   j. Question types

Chapter 3 is devoted to developing the study design. The first part of the chapter (topics a through f) covers the various sections that are typically included in an online usability study. For each section, we will review best practices and common pitfalls. We want to give the reader the confidence to put together an effective online study. The last part of this chapter (topics g through j) deals with common techniques that are used in various parts of a study, such as branching, navigation, speed traps, and question types.

4. Launching your study
   a. Piloting and validating
   b. Timing the launch
   c. Phased launches
   d. Monitoring results

Chapter 4 deals with issues around launching an online study: all the activities that happen after a study has been developed until the final data are available. This chapter discusses how to set up a pilot test and validate the study, how to time a launch to maximize participation and quality results, and phased launches. The chapter concludes with a discussion of how to monitor results, including both participation rates and data quality.

5. Data preparation
   a. Fraudulent participants
   b. Consistency checks
   c. Data reliability
   d. Outliers
   e. Recoding variables

Chapter 5 will help the reader prepare their data for the analysis stage. Several important activities must take place prior to data analysis to ensure valid results. Topics in this chapter include how to identify fraudulent participants, how to run consistency checks on participant responses, and how to identify outliers that may need to be removed from the analysis. The chapter will conclude with a brief discussion of how to recode variables so they will be most useful in the analysis stage.

6. Data analysis and presentation
   a. Verbatim responses
   b. Task-based metrics
   c. Segmentation analysis
   d. Post-session analysis
   e. Behavioral data
   f. Combining data
   g. Identifying usability issues
   h. Presentation tips

Chapter 6 covers all the information the reader will need to know about analyzing and presenting data derived from an online study. Each section of this chapter covers one type of data that is typically captured in an online study. Verbatim analysis focuses on how to derive meaningful and reliable findings from open-ended responses. Task-based metrics include success, completion times, and ease-of-use ratings. Segmentation analysis includes ways to identify how distinct groups performed and reacted differently. Post-session analysis involves looking at metrics such as SUS scores, overall satisfaction and expectations, and ease-of-use ratings. Behavioral data analysis includes metrics such as click paths, page views, and time spent on each page. Combining data from more than one metric is a very important step in analysis. Methods for identifying usability issues from all the data will be described and examples given. This chapter will be very practically oriented, giving step-by-step directions on how to perform each type of analysis. Many examples will demonstrate different ways to present the results.

7. Building your own online study
   a. Approaches to creating your own online study
   b. Presenting tasks and prototypes
   c. Capturing task completion status
   d. Capturing task time data
   e. Capturing self-reported data
   f. Examples

Chapter 7 shows readers how to create relatively simple online studies themselves. Approaches to presenting tasks and prototypes will be described, as will techniques for collecting task success, times, and various kinds of self-reported data, including rating scales, open-ended questions, and the System Usability Scale (SUS). While some examples of HTML and JavaScript will be shown, we will describe them in such a way that even someone new to those technologies could understand and use them. Complete examples will be shown that readers could easily adapt. Code samples will also be provided on a companion website.

8. Online solutions
   a. Keynote
   b. RelevantView
   c. UserZoom
   d. MindCanvas
   e. SurveyMonkey
   f. OpinionLab
   g. ACSI
   h. Others

Chapter 8 reviews the common online tools that can be used for running online testing. While the "Do-It-Yourself" reader may want to use the techniques described in Chapter 7, others may want to use a commercial tool like those described in this chapter. Most of the chapter will be devoted to the tools used most often to collect behavioral data, such as Keynote, RelevantView, and UserZoom. There will also be a discussion of online tools that do not collect performance data, such as SurveyMonkey, ACSI, and OpinionLab. Comparisons of the tools, including what kinds of data can be collected with each, will be included. The chapter will conclude with a brief discussion of other possible solutions, such as agencies that specialize in online testing. Readers will also be referred to our companion website to keep up with updates and emerging software solutions.

9. Ten tips for a successful online study
   a. Planning for metrics
   b. Deciding on the right tool
   c. Choosing the right participants
   d. Writing clear tasks
   e. Piloting your study
   f. Checking data
   g. Comparing to other data sources
   h. Being creative with the data
   i. Allowing enough time for analysis
   j. Presenting only the top-line results

Chapter 9 provides a summary of some of the key points made throughout the book, in the form of the top ten tips that someone should know when conducting their own online study. These tips will be very practical in nature.
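Two of the measures the chapters above return to repeatedly (task time data from Chapter 7 and the SUS scores analyzed in Chapter 6) can be illustrated with a short sketch. This is not code from the book; the function names (`createTaskTimer`, `susScore`) are illustrative, though the SUS scoring rule itself is the standard one: ten items rated 1 to 5, odd items contribute (rating − 1), even items contribute (5 − rating), and the sum is multiplied by 2.5 to yield a 0–100 score.

```javascript
// Illustrative sketch: timing a task and scoring the System Usability
// Scale (SUS). Function names are hypothetical, not from the book.

// Returns a simple timer for measuring task duration in seconds.
function createTaskTimer() {
  let startMs = null;
  return {
    start() { startMs = Date.now(); },
    // Returns elapsed seconds since start() was called.
    stop() { return (Date.now() - startMs) / 1000; }
  };
}

// Standard SUS scoring: 10 ratings on a 1-5 scale. Odd-numbered items
// (indexes 0, 2, ...) contribute (rating - 1); even-numbered items
// contribute (5 - rating). The sum is scaled by 2.5 to give 0-100.
function susScore(ratings) {
  if (ratings.length !== 10) {
    throw new Error("SUS requires exactly 10 ratings");
  }
  const sum = ratings.reduce(
    (acc, r, i) => acc + (i % 2 === 0 ? r - 1 : 5 - r),
    0
  );
  return sum * 2.5;
}
```

For example, the "ideal" response pattern (5 on every odd item, 1 on every even item) scores 100, while a uniform response of 3 on every item scores 50.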
- No. of pages: 328
- Language: English
- Edition: 1
- Published: December 21, 2009
- Imprint: Morgan Kaufmann
- Paperback ISBN: 9780123748928
- eBook ISBN: 9780080953854
Bill Albert
William (Bill) Albert is Senior Vice President and Global Head of Customer Development at Mach49, a growth incubator for global businesses. Prior to joining Mach49, Bill was Executive Director of the Bentley University User Experience Center (UXC) for almost 13 years. He was also Director of User Experience at Fidelity Investments, Senior User Interface Researcher at Lycos, and Post-Doctoral Researcher at Nissan Cambridge Basic Research. He has more than twenty years of experience in user experience research, design, and strategy. Bill has published and presented his research at more than 50 national and international conferences, and has published in many peer-reviewed academic journals within the fields of User Experience, Usability, and Human-Computer Interaction. In 2010 he co-authored (with Tom Tullis and Donna Tedesco) "Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies," published by Elsevier/Morgan Kaufmann.
Affiliations and expertise
Director, Design and Usability Center, Bentley University, USA
Tom Tullis
Thomas S. (Tom) Tullis retired as Vice President of User Experience Research at Fidelity Investments in 2017. Tom was also an Adjunct Professor in Human Factors in Information Design at Bentley University since 2004. He joined Fidelity in 1993 and was instrumental in the development of the company’s User Research department, whose facilities include state-of-the-art Usability Labs. Prior to joining Fidelity, he held positions at Canon Information Systems, McDonnell Douglas, Unisys Corporation, and Bell Laboratories. He and Fidelity’s usability team have been featured in a number of publications, including Newsweek, Business 2.0, Money, The Boston Globe, The Wall Street Journal, and The New York Times.
Affiliations and expertise
Senior Vice President of User Experience, Fidelity Investments, USA
Donna Tedesco
Donna Tedesco is a Senior User Experience Specialist with over ten years of user research experience. She has published and presented at local, national, and international conferences, and is co-author with Bill Albert and Tom Tullis of the book "Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies." Donna received a BS in Engineering Psychology/Human Factors from Tufts University School of Engineering and an MS in Human Factors in Information Design from Bentley University.
Affiliations and expertise
Senior Usability Specialist