Measurement is important, but statistics can be intimidating. In this Practical Statistics for User Experience (UX) course we present approachable concepts and lots of examples for generating statistical solutions to common questions in user research. The presentation includes many graphical representations and a "What test do I use?" decision tree.
Is Design A more usable than Design B? Do more users convert on the new design? Is our Net Promoter Score statistically better than last year?
Learn to use and interpret the right statistical tests on small and large samples of user data using just Excel. We will cover:
This course is course number E 60.2 in a comprehensive User Experience (UX) curriculum currently under development at The Online User eXperience Institute (OUXI).
In this lecture we'll cover:
Included in this course is a lite version of the Usability Statistics Package, which will allow you to follow along with the video lectures.
We continue to review the normal distribution as a statistical concept, its properties, and the empirical rule, and see how it applies to UX data.
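The empirical rule says roughly 68%, 95%, and 99.7% of normally distributed values fall within 1, 2, and 3 standard deviations of the mean. Although this course uses Excel, the same figures can be verified with a short Python sketch using the standard library's `NormalDist`:

```python
from statistics import NormalDist

# Standard normal distribution (mean 0, standard deviation 1)
z = NormalDist(mu=0, sigma=1)

# Empirical rule: proportion of values within 1, 2, and 3 standard deviations
for k in (1, 2, 3):
    coverage = z.cdf(k) - z.cdf(-k)
    print(f"Within {k} SD: {coverage:.1%}")  # 68.3%, 95.4%, 99.7%
```

Because any normal distribution is just a scaled and shifted standard normal, the same percentages hold whatever the mean and standard deviation of your data.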
Reviews the normal distribution and introduces the concept of confidence intervals.
Confidence intervals tell us the plausible range of the unknown user-population average or proportion that we estimate from our sample data. We can use confidence intervals on both small and large sample sizes.
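The course computes these intervals in Excel, but the arithmetic behind a t-based confidence interval for a mean is simple enough to sketch in a few lines of Python. The SUS scores below are hypothetical, and the critical t value is hard-coded from a t table because Python's standard library has no t distribution:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical SUS scores from 12 usability-test participants
scores = [70, 65, 80, 77.5, 60, 82.5, 75, 67.5, 72.5, 85, 62.5, 78]

n = len(scores)
m = mean(scores)
se = stdev(scores) / sqrt(n)          # standard error of the mean

# Critical t value for 95% confidence with n-1 = 11 degrees of freedom
# (looked up in a t table)
t_crit = 2.201

lower, upper = m - t_crit * se, m + t_crit * se
print(f"95% CI for the mean SUS score: {lower:.1f} to {upper:.1f}")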
We continue to work through the concept of confidence intervals and calculate them using an Excel calculator. The lecture points to parts of the companion book (available for free with this course) that provide more practice and show how to use the R statistics package.
More practice with confidence intervals, including generating confidence intervals around binary (yes/no) data. We review a method called the Adjusted-Wald interval, which generates accurate intervals even for very small sample sizes.
Probably one of the most useful things you can compute is the binomial confidence interval, and we get plenty of practice computing it. You can use the free online calculator at http://www.measuringusability.com/wald.htm to get the same results as the Excel calculator.
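For readers who want to see the formula rather than a calculator, here is a sketch of the Adjusted-Wald interval in Python. It adds roughly two successes and two failures (exactly z²/2 of each) to the counts before computing an ordinary Wald interval, which is what keeps coverage accurate at small sample sizes; the 4-of-5 example data is hypothetical:

```python
from math import sqrt

def adjusted_wald(successes, n, z=1.96):
    """Adjusted-Wald binomial confidence interval.

    Adds z^2/2 successes and z^2/2 failures before computing a
    standard Wald interval, which keeps coverage accurate even
    for very small sample sizes.
    """
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical example: 4 of 5 users completed the task
low, high = adjusted_wald(4, 5)
print(f"95% CI for completion rate: {low:.1%} to {high:.1%}")
```

Note how wide the interval is: with only 5 users, the plausible completion rate runs from roughly a third to nearly everyone, which is exactly the honesty a confidence interval buys you.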
Task time tends to be positively skewed and requires special treatment to generate more accurate confidence intervals. We will cover the log transformation, the geometric mean, and how to report time-on-task averages.
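The log-transform approach is easy to see in code: take the log of each time, average the logs, then exponentiate back, which yields the geometric mean. A small Python sketch with hypothetical (and typically skewed) task times:

```python
from math import log, exp
from statistics import mean

# Hypothetical task times in seconds (positively skewed, as is typical)
times = [40, 55, 60, 70, 85, 95, 130, 210]

# Log-transform, average, then exponentiate back: the geometric mean
geo_mean = exp(mean(log(t) for t in times))

print(f"Arithmetic mean: {mean(times):.0f}s, geometric mean: {geo_mean:.0f}s")
```

The geometric mean comes out below the arithmetic mean because the one slow 210-second time pulls the arithmetic mean up; that is why the geometric mean is the better summary of a typical user's time.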
We will review the concept of confidence intervals, cover 10 things you need to remember about them, and do a few more practice exercises to get you comfortable with this valuable method.
This is where the rubber meets the road. In this lecture we introduce the concept of how sample means fluctuate and how we are able to determine whether the difference between means or proportions is statistically significant.
The lecture includes detailed animations and minimal formulas so you can grasp the important concepts of hypothesis testing and the central limit theorem as applied to comparing two means.
Rejecting the Null
Understanding the somewhat backward logic of Null Hypothesis Significance Testing (NHST)
Just because there is a statistical difference doesn't mean the difference is always meaningful. We revisit our friend the confidence interval to understand how large a difference we can expect with our sample size.
Practice comparing two means in the Excel Calculator.
More practice comparing two means using the Excel calculator and interpreting p-values, confidence intervals and statistical significance.
Free online calculators are also available:
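The comparison of two means can also be sketched outside Excel. The Python example below computes Welch's t statistic (which does not assume equal variances) on hypothetical SUS scores for two designs; because the standard library has no t distribution, the p-value here uses a normal approximation, which is reasonable for larger samples but slightly liberal for small ones (the course materials use the exact t distribution):

```python
from math import sqrt, erfc
from statistics import mean, variance

# Hypothetical SUS scores for two designs
design_a = [75, 80, 62.5, 90, 70, 85, 77.5, 72.5, 82.5, 67.5, 88, 74]
design_b = [65, 70, 55, 72.5, 60, 77.5, 62.5, 68, 58, 75, 66, 61]

na, nb = len(design_a), len(design_b)
ma, mb = mean(design_a), mean(design_b)
va, vb = variance(design_a), variance(design_b)   # sample variances

# Welch's t statistic: difference in means over its standard error
se = sqrt(va / na + vb / nb)
t = (ma - mb) / se

# Two-sided p-value via the normal approximation
p = erfc(abs(t) / sqrt(2))

print(f"t = {t:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) means a difference this large would rarely arise from sampling fluctuation alone, so we conclude the designs genuinely differ.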
A very common calculation is comparing two binary variables (convert/didn't convert, purchase/didn't purchase), as is done in A/B testing. We show how this works for small and large sample sizes and how to interpret the results.
Free online calculators are also available:
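The mechanics of the two-proportion comparison can be sketched in Python as well. This is the standard two-proportion z test on hypothetical A/B conversion counts (the course also covers an adjustment that works better for small samples):

```python
from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """Standard two-proportion z test: pooled standard error,
    z statistic, and a two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))         # two-sided
    return z, p_value

# Hypothetical A/B test: 40 of 500 converted on A, 60 of 500 on B
z, p = two_proportion_z(40, 500, 60, 500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Here the 8% versus 12% conversion rates produce a p-value below 0.05, so the difference would be declared statistically significant at the conventional level.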
A review of the hypothesis testing framework:
Type I and Type II errors
Rejecting and Failing to Reject the Null Hypothesis
Some final thoughts and where to get more information and resources to help make better decisions with data.
Go to MeasuringUsability.com and contact Jeff Sauro with any questions.
Jeff is a Six-Sigma trained statistical analyst and a pioneer in quantifying the user experience. He specializes in making statistical concepts understandable and actionable. Jeff has published over fifteen peer-reviewed research articles and regularly presents tutorials and papers at the leading Human-Computer Interaction conferences: CHI, UPA, HCII, and HFES. He is the author of four books, including Quantifying the User Experience: Practical Statistics for User Research (Morgan Kaufmann).
He has worked for GE, Intuit, PeopleSoft and Oracle and has consulted with dozens of Fortune 500 companies.
Jeff received his master's in Learning, Design and Technology from Stanford University with a concentration in statistical concepts. Prior to Stanford, he received a B.S. in Information Management & Technology and a B.S. in Television, Radio and Film from Syracuse University. While at Syracuse he completed a two-year thesis study on web usability.