SIGIA-L Mail Archives: Re: SIGIA-L: testing yourself
From: Chris Farnum (crfarnum_at_yahoo.com)
Date: Fri Apr 20 2001 - 12:52:04 EDT
I understand your concerns about having designers do test facilitation.
However, as someone who has committed that particular sin numerous times I
can tell you that I've found it very helpful to do both.
It's great to have direct exposure to real users, and it often gives
me ideas for IA designs I wouldn't otherwise have had. It also gives me
the opportunity to ask follow-up questions that help me improve my designs.
I think it's also important to remember that early in a project, user
interviews aren't necessarily done to test a design, but to find out more
about how the target audience thinks and works. This may involve a
combination of both measurable techniques (like card sorting) and
qualitative exploration (open ended interview questions). These kinds of
sessions give me the chance to ask research questions that will be valuable
when I do start making decisions about an IA approach.
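Card sorting is the "measurable" end of that mix because its results can be tallied. A common way to do this (a minimal sketch, not part of the original post) is a co-occurrence matrix: for each pair of cards, count how many participants placed them in the same group. The card names and sort data below are hypothetical.

```python
from itertools import combinations

def co_occurrence(sorts, cards):
    """Count, for each pair of cards, how many participants
    placed the pair in the same group."""
    index = {card: i for i, card in enumerate(cards)}
    n = len(cards)
    matrix = [[0] * n for _ in range(n)]
    for groups in sorts:                # one participant's sort
        for group in groups:            # one pile of cards
            for a, b in combinations(group, 2):
                i, j = index[a], index[b]
                matrix[i][j] += 1       # symmetric counts
                matrix[j][i] += 1
    return matrix

cards = ["login", "password", "pricing", "plans"]
sorts = [
    [["login", "password"], ["pricing", "plans"]],   # participant 1
    [["login", "password", "pricing"], ["plans"]],   # participant 2
]
m = co_occurrence(sorts, cards)
print(m[0][1])  # 2 -- both participants grouped login with password
```

High counts suggest cards the audience sees as related, which is exactly the kind of evidence that can later anchor an IA grouping decision.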
Other ways to minimize the potential bias:
-Work with one or more partners who can keep you honest while conducting the test.
It can be really helpful to have two people trading roles between
observing/note-taking and facilitating. It helps minimize the bias and
helps one make it through a marathon of 2-3 days of continuous testing.
-I agree that getting experienced UE folks involved is extremely valuable.
If you are lucky enough (as I was) to have the advice and guidance of
usability/IA geniuses like Keith Instone and Larry Rusinsky at hand for
helping to develop and analyze your tests, you can avoid many of the worst
kinds of bias.
Chris Farnum, Information Architect
Date: Thu, 19 Apr 2001 09:22:56 -0600
From: AD <adillon_at_indiana.edu>
Subject: SIGIA-L: testing yourself
I love this discussion about the role of IA as testers and the value of
testing your own designs (and tend to agree with Jess McMullin). If I may be
permitted to contribute as one of the academics here (ouch- those digs are
beginning to hurt - maybe our failure to live in the 'real world' makes us
soft :) there are a couple of points I'd like to raise that have been hinted
at and pointed to in previous posts.
1-The major problem with testing yourself comes from concerns with
objectivity. Despite how much we try, as humans we are not well equipped to
be unbiased, rational and detached when we have an investment in the
process. It's that emotional response again (the one we never seem to want
to discuss when we talk about interaction and usability). Evaluators need to
observe, note and then consider the results in order to maximise the
benefits of the evaluation. When it is your design that is being used, you
are not observing, noting and considering in a clear-headed, unbiased
fashion (despite what you think) but are probably thinking ahead about how
to change it, or worse -- you are explaining away what you see in terms of some
characteristic of the user (they are not typical, they are novices, they are
not taking this seriously etc.). This may or may not have serious
implications for your evaluation but you will not know that - hence the
recommendation to have others do it.
2- Evaluation is not just common sense. If it were, we would all do it well
most of the time (and even common sense has flaws!). There are several usability
methods that tend to get used, though I suspect this discussion is
concentrating on user-based tests. To be blunt - many of the user tests I
have seen performed on commercial products are so obviously flawed to my
eyes that I am never surprised when new problems emerge upon release. I
consider it the responsibility of good UE folks to minimize such
'surprises', and good evaluators are able to do this.
3- But isn't some test better than no test? Often yes, but not always. If
the test is poorly conducted and gives a false sense of confidence in the
results, you may face a lot of difficulty explaining to a client later why
your evaluation failed to find some crucial problems. Most of us anticipate
that even a flawed test will at least get the major problems but I would not
be so quick to conclude this. Five users testing a site is usually better
than none, but five users in a poorly designed test can mislead as much as
inform. Back to point 2.
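The arithmetic behind the "five users" claim helps here. Under the simple problem-discovery model often attributed to Nielsen and Landauer, a problem each user hits with probability p is found at least once in n sessions with probability 1 - (1 - p)^n. The sketch below (mine, not from the post) shows why five users look good for frequent problems but weak for rarer ones -- and the model assumes a well-run test in the first place:

```python
def prob_found(p, n):
    """Probability that a problem with per-user discovery
    rate p is observed at least once across n users."""
    return 1 - (1 - p) ** n

# Frequent problem (p = 0.31, the average rate Nielsen and
# Landauer reported) vs. a rarer one (p = 0.10), five users each.
print(round(prob_found(0.31, 5), 2))  # 0.84
print(round(prob_found(0.10, 5), 2))  # 0.41
```

So even in the best case, five users leave a sizable chance of missing less common problems; in a poorly designed test, p itself drops and the false confidence grows.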
4. So if you cannot get UE folks involved, should you run the test yourself?
Despite all I have said above, probably 'yes'. BUT you really need to
constantly check what you are doing, develop a method and a script, stick to
procedures, put in place a predetermined plan, watch for your own reactions,
and give yourself some breathing space after the test to revisit the data in
a considered frame of mind. I have seen 'trained' UE folks conduct worse
evaluations than you can imagine, so I know that while good training helps,
attitude contributes a lot to the process. However, attitude is not a
replacement for method. As I teach people, there should be no surprises in
running the test, but the results should always contain a surprise for you.
If not, are you sure you tested it correctly?
This is most timely, I am writing about the role of usability in IA
generally for my latest column in the ASIST bulletin. Thanks for all the
msgs to date.
ps - and it was good to see so many of you all at the CHI conference.
This archive was generated by hypermail 2.1.2 : Sun Nov 23 2003 - 22:54:37 EST