SIGIA-L: testing yourself
From: AD (adillon_at_indiana.edu)
Date: Thu Apr 19 2001 - 11:22:56 EDT
I love this discussion about the role of IAs as testers and the value of
testing your own designs (and I tend to agree with Jess McMullin). If I may be
permitted to contribute as one of the academics here (ouch - those digs are
beginning to hurt; maybe our failure to live in the 'real world' makes us
soft :) there are a couple of points I'd like to raise that have been hinted
at in previous posts.
1. The major problem with testing yourself is objectivity. However much we
try, we humans are not well equipped to be unbiased, rational, and detached
when we have an investment in the process. It's that emotional response again
(the one we never seem to want to discuss when we talk about interaction and
usability). Evaluators need to observe, note, and then consider the results in
order to maximize the benefits of the evaluation. When it is your own design
being used, you are not observing, noting, and considering in a clear-headed,
unbiased fashion (despite what you think); you are probably thinking ahead
about what to change, or worse, explaining away what you see in terms of some
characteristic of the user (they are not typical, they are novices, they are
not taking this seriously, etc.). This may or may not have serious
implications for your evaluation, but you will not know either way - hence the
recommendation to have others do it.
2. Evaluation is not just common sense. If it were, we would all do it well
most of the time (and even common sense has flaws!). There are several
usability methods in regular use, though I suspect this discussion is
concentrating on user-based tests. To be blunt: many of the user tests I
have seen performed on commercial products are so obviously flawed to my
eyes that I am never surprised when new problems emerge upon release. I
consider it the responsibility of good UE folks to minimize such
'surprises', and good evaluators are able to do this.
3. But isn't some test better than no test? Often yes, but not always. If
the test is poorly conducted and gives a false sense of confidence in the
results, you may face a lot of difficulty explaining to a client later why
your evaluation failed to find some crucial problems. Most of us assume
that even a flawed test will at least catch the major problems, but I would
not be so quick to conclude this. Five users testing a site is usually better
than none, but five users in a poorly designed test can mislead as much as
inform. Back to point 2.
4. So if you cannot get UE folks involved, should you run the test yourself?
Despite all I have said above, probably yes. BUT you really need to check
constantly what you are doing: develop a method and a script, stick to your
procedures, follow a predetermined plan, watch for your own reactions, and
give yourself some breathing space after the test to revisit the data in a
considered frame of mind. I have seen 'trained' UE folks conduct worse
evaluations than you can imagine, so I know that while good training helps,
attitude contributes a lot to the process. However, attitude is not a
replacement for method. As I teach people: there should be no surprises in
running the test, but the results should always contain a surprise for you.
If not, are you sure you tested it correctly?
This is most timely: I am writing about the role of usability in IA
generally for my latest column in the ASIST Bulletin. Thanks for all the
msgs to date.
PS - it was good to see so many of you at the CHI conference.