In this video, we will analyze the results of your user surveys.
Tool used in the video demo: SurveyMonkey's sample size calculator (linked in the teacher's notes)
Make meaning of your data
- Get rid of bad data
- Calculate the means
- Make comparisons
- Categorize open-ended responses
In addition to writing
good survey questions,
0:00
how do you make sure that your
results will be meaningful?
0:03
First, you'll need to calculate
the proper sample size.
0:07
SurveyMonkey has a useful sample
size calculator to do this.
0:11
The link is in the teacher's notes for
your reference.
0:15
Here is the SurveyMonkey
sample size calculator.
0:19
The survey that we designed in the last
section was meant to represent the people
0:25
who have at least saved a custom
T-shirt design, all 1000 of them.
0:30
In order to be 95% confident in
our results, as shown right here,
0:35
and if we feel comfortable
with a margin of error of 5%,
0:40
then our sample size should
be at least 278 people.
0:46
This means you should have at least
278 people complete your survey.
0:51
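If you want to check the calculator's math, here is a minimal sketch of the standard sample-size formula (a z-score calculation with a finite population correction) that reproduces the 278 figure; the function and parameter names are mine, not SurveyMonkey's:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Standard sample-size formula with finite population correction.

    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative assumption about how varied responses will be.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # size for an unlimited population
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite population correction

print(sample_size(1000))  # 278 -- the same figure the calculator shows
```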
Once your data is in, take a few simple
steps to make meaning of your data.
0:57
First, get rid of bad data.
1:03
If your survey provides an incentive,
1:06
some people may provide bogus
responses just to get that incentive.
1:08
You'll need to discard responses
from those participants.
1:13
Common red flags are nonsensical
open-ended responses, patterning,
1:16
which can look like providing
the same answers to all questions, or
1:21
unrealistically fast survey completion.
1:26
I've provided a link to a source
describing the behaviors to watch for.
1:29
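As a rough illustration of that screening step, here is a sketch that flags two of those red flags programmatically; the record fields and the 60-second threshold are assumptions made up for the example, not rules from the video:

```python
# Hypothetical response records; the fields and values are illustrative only.
responses = [
    {"id": 1, "answers": [5, 4, 4, 5, 3], "seconds": 210, "comment": "Checkout was smooth."},
    {"id": 2, "answers": [3, 3, 3, 3, 3], "seconds": 45, "comment": "asdf"},
]

def is_suspect(r, min_seconds=60):
    straight_lined = len(set(r["answers"])) == 1  # same answer to every question
    too_fast = r["seconds"] < min_seconds         # unrealistically fast completion
    return straight_lined or too_fast

clean = [r for r in responses if not is_suspect(r)]
print([r["id"] for r in clean])  # [1] -- respondent 2 is discarded
```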
Second, calculate the means.
1:34
Take all the Likert scale questions,
assign a numerical value to each option,
1:36
for example, very satisfied would be 5 and
very dissatisfied would be 1.
1:42
With that in mind, you'll be able to
calculate a mean for each question.
1:47
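A small sketch of that scoring step, using the 1-to-5 mapping from the video (the exact label wordings in the dictionary are assumptions):

```python
# Likert labels mapped to numbers: very satisfied = 5 ... very dissatisfied = 1.
LIKERT = {
    "Very dissatisfied": 1,
    "Somewhat dissatisfied": 2,
    "Neither satisfied nor dissatisfied": 3,
    "Somewhat satisfied": 4,
    "Very satisfied": 5,
}

# One question's responses (made-up data).
answers = ["Very satisfied", "Somewhat satisfied",
           "Neither satisfied nor dissatisfied", "Very satisfied"]
scores = [LIKERT[a] for a in answers]
print(sum(scores) / len(scores))  # 4.25
```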
Third, make comparisons.
1:51
Sometimes it can be hard to know
if a satisfaction score of four, for
1:53
example, is good or not.
1:57
This is where it helps to start
tracking your data over time so
1:59
that seeing the scores go up and
down begins to have meaning.
2:03
If you have data from a similar service,
2:07
comparing those scores
can be useful as well.
2:09
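For instance, once you have a mean per survey wave, a comparison over time can be as simple as this sketch (the wave labels and scores are invented for illustration):

```python
# Hypothetical mean satisfaction scores from three survey waves.
waves = [("Q1", 3.8), ("Q2", 4.0), ("Q3", 4.1)]

previous = None
for label, score in waves:
    change = "" if previous is None else f" ({score - previous:+.1f} vs. previous wave)"
    print(f"{label}: {score:.1f}{change}")
    previous = score
```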
Fourth, categorize open-ended responses.
2:12
Just like we did with our
usability test findings,
2:16
group similar responses together
until you see a pattern.
2:19
You can use an automated text analysis
tool to help you do this at scale.
2:23
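Here is a toy sketch of keyword-based grouping; real text-analysis tools use far more sophisticated methods, and the categories and keywords below are invented for the example:

```python
# Toy keyword buckets; invented for illustration only.
CATEGORIES = {
    "shipping": ("ship", "delivery", "arrive"),
    "design tool": ("editor", "design", "template"),
    "pricing": ("price", "cost", "expensive"),
}

def categorize(comment):
    text = comment.lower()
    matched = {cat for cat, words in CATEGORIES.items()
               if any(w in text for w in words)}
    return matched or {"other"}

print(categorize("The design editor was easy, but shipping took too long."))
# {'design tool', 'shipping'} -- set order may vary
```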
That's all for
our discussion about surveys, which
2:28
also completes our course
on evaluating design.
2:32
We've covered a wide range of topics.
2:35
To understand qualitative methods,
2:38
we created our very own
usability study for Amazon.com.
2:41
For our quantitative methods lesson, we
learned about the basics of A/B testing and
2:46
then went on to create
our own user survey.
2:52
Following this course, I hope you feel
equipped to bring a critical eye to your
2:55
designs and to evaluate what
you and your team have come up with.
3:00
I've provided a link to
a list of other great UX
3:04
research resources if you want
to learn more about this topic.
3:07
Good luck.
3:11