HCI International 2016
Toronto, Canada, 17 - 22 July 2016
The Westin Harbour Castle Hotel

T18: How to Test Software with Users

Tuesday, 19 July 2016, 14:00 - 17:30

Anna Wichansky (short bio)
Oracle, USA


Objectives:

The objective of this tutorial is to teach you basic methods for conducting summative usability testing of software that are relatively quick and statistically sound. Such testing is conducted when the software is in beta or near-production release. The methods reflect real-world laboratory and remote testing processes used in industry today, and are adapted from ISO/IEC 25062:2006, the Common Industry Format (CIF) for usability test reports. It is recommended that this course be taken after the related tutorial, How to Create User Requirements for Software, which is also offered at HCI International 2016.

Content:

  1. Introduction: objectives, key takeaways, instructor’s bio, agenda
    1. What is usability testing: ISO definition, key elements, benefits
    2. Formative vs. summative usability testing and reasons for each type
    3. Special domains: mobile apps, medical products
  2. Usability test planning:
    1. Specifying product goals: context of use, stakeholders
    2. Identifying users: target groups, selection criteria
    3. Specifying tasks: goals, task validity, examples of task step click paths
    4. Core performance & satisfaction metrics: use of "before" data and expert benchmarks
    5. Importance of controls and repeatability of method; reliability
    6. Moderator script
    7. Participant guide
    8. Audio/video recording, automatic data collection
    9. Test design and data analysis plan
    10. Group exercise 1: Instructor-led review of a sample test plan
  3. Participant recruitment:
    1. Sourcing users
    2. Sample sizes
    3. Screening questionnaires
    4. Scheduling tips
  4. Conducting a usability test:
    1. Definitions of discount usability testing and customer feedback sessions
    2. Test set-up: test environment, materials, product readiness
    3. Moderator introduction: test objectives, informed consent
    4. Group exercise 2: Instructor-led review of sample moderator script
    5. Running the participant: importance of consistency within and between participants
    6. Ending the test: participant closure, payment
    7. Role-play exercise 3: Instructor leads small groups of students playing common testing roles in two tasks, using a released website and prepared materials, followed by group discussion
  5. Data collection and recording methods:
    1. Online screen recording
    2. Video and audio recording
    3. Automatic performance data collection
    4. Manual methods
    5. Surveys
  6. Analyzing usability test data: comparison to goals, benchmarks, "before" data; use of quotes
  7. Effective reporting strategies for usability test results: for developers, stakeholders, customers
  8. Tips & Tricks for effective usability testing:
    1. Ethical considerations
    2. Providing assists
    3. Managing stakeholder expectations and perceptions
    4. Correcting design problems
    5. Special considerations for mobile apps
  9. Public domain usability testing resources
  10. Questions & Answers

Benefits:

This course teaches how to measure software product usability in both quantitative and qualitative terms. It provides effective strategies and standards for planning and scheduling usability tests, collecting and analyzing the data, and reporting to customers, developers and other stakeholders. The hands-on exercises give a “feel” for what it is like to conduct a real test. The instructor brings a wealth of industrial research experience to coach you through a sample test and answer your questions about specific challenges of real-world usability testing.
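As a concrete illustration of what "measuring usability in quantitative terms" produces, the short Python sketch below computes the three metric classes that CIF-style summative reports typically include: effectiveness (task completion rate, here with a 95% Wilson score interval), efficiency (task time), and satisfaction. It is not part of the course materials; the task data and the wilson_interval helper are hypothetical, for illustration only.

    # Illustrative sketch only: hypothetical data, not tutorial material.
    from math import sqrt
    from statistics import mean

    # One task's results from 10 hypothetical participants
    completed    = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]                        # 1 = task completed
    task_times   = [214, 187, 320, 198, 240, 175, 305, 222, 190, 210]    # seconds
    satisfaction = [6, 5, 3, 6, 7, 5, 2, 6, 6, 5]                        # 1-7 post-task rating

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score interval for a binomial completion rate."""
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return centre - half, centre + half

    n = len(completed)
    rate = sum(completed) / n
    low, high = wilson_interval(sum(completed), n)

    print(f"Completion rate: {rate:.0%} (95% CI {low:.0%}-{high:.0%})")  # effectiveness
    print(f"Mean task time: {mean(task_times):.0f} s")                   # efficiency
    print(f"Mean satisfaction: {mean(satisfaction):.1f} / 7")            # satisfaction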

Target Audience:

Intermediate-level UX research professionals, software designers, product managers, marketing managers, and consultants will benefit most. This course assumes students are familiar with user requirements development and formative testing, and are ready to go to the next level of usability assessment methodology.


Bio Sketch of Presenter:

Anna Wichansky, Ph.D., CPE, is an applied experimental psychologist who specializes in the study of how users interact with new technology. She holds an M.S. and a Ph.D. in human factors from Tufts University, Medford, Massachusetts, USA, and an A.B. in psychology from Harvard University, Cambridge, Massachusetts, USA. She has researched, developed, and tested user interfaces for transportation, telecommunications, space exploration, electronic instrumentation, computer hardware, software, graphics, and media products. She holds a patent for a remote control for interactive television. She worked at the U.S. Department of Transportation Research and Special Programs Administration, Bell Laboratories, Hewlett-Packard, and Silicon Graphics, where she founded the Customer Research and Usability group. At Oracle, she founded and directed the Corporate Usability Labs and the Advanced User Interface Research group, and is currently Senior Director of Applications User Experience. Anna is a Fellow of the Human Factors and Ergonomics Society and director emerita of the Board of Certification in Professional Ergonomics. She serves on the editorial board of the international scientific journal Ergonomics and is a frequent presenter at HCII, ACM SIGCHI, and HFES annual meetings.
