Re: SIGIA-L: testing yourself
From: Bonnie Becker Ramsey (rginteractive_at_earthlink.net)
Date: Fri Apr 20 2001 - 10:19:47 EDT
As a designer, I tend to agree that I shouldn't do my own testing.
Bonnie Becker Ramsey
----- Original Message -----
From: Chris Farnum
Sent: 4/20/2001 12:52:04 PM
Subject: Re: SIGIA-L: testing yourself
I understand your concerns about having designers doing testing.
However, as someone who has committed that particular sin numerous times, I
can tell you that I've found it very helpful to do both.
It's great to have the direct exposure with real users, and it often gives
me ideas for IA designs I wouldn't have otherwise had. It also gives me
the opportunity to ask follow-up questions that help me to improve my designs.
I think it's also important to remember that early in a project, user
interviews aren't necessarily done to test a design, but to find out more
about how the target audience thinks and works. This may involve a
combination of both measurable techniques (like card sorting) and
qualitative exploration (open-ended interview questions). These
sessions give me the chance to ask research questions that will be useful
when I do start making decisions about an IA approach.
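A quick aside on the "measurable" side of card sorting: results from several participants are commonly tallied into pairwise co-occurrence counts (how many people grouped two cards together), which can then feed into cluster analysis. Here is a minimal sketch in Python; the card names and groupings are invented purely for illustration.

```python
from itertools import combinations
from collections import Counter

# Hypothetical card-sort results: one list of groups per participant.
# Card labels and groupings are made up for this example.
sorts = [
    [["home", "sitemap"], ["contact", "about"]],
    [["home", "about"], ["contact", "sitemap"]],
    [["home", "sitemap"], ["about"], ["contact"]],
]

def cooccurrence(sorts):
    """Count how many participants placed each pair of cards in the same group."""
    counts = Counter()
    for groups in sorts:
        for group in groups:
            # Sort so each pair has a canonical (a, b) ordering.
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

counts = cooccurrence(sorts)
print(counts[("home", "sitemap")])  # 2 of 3 participants grouped these together
```

Pairs with high counts are candidates to live together in the IA; pairs participants never grouped suggest categories to keep apart.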
Other ways to minimize the potential bias:
-Work with one or more partners to conduct the test who can keep you honest.
It can be really helpful to have two people trading roles between
observing/note-taking and facilitating. It helps minimize the bias and
helps one make it through a marathon of 2-3 days of continuous testing.
-I agree that getting experienced UE folks involved is extremely valuable.
If you are lucky enough (as I was) to have the advice and guidance of
usability/IA geniuses like Keith Instone and Larry Rusinsky at hand,
helping to develop and analyze your tests, you can avoid many of the worst
kinds of bias.
Chris Farnum, Information Architect
Date: Thu, 19 Apr 2001 09:22:56 -0600
From: AD adillon_at_indiana.edu
Subject: SIGIA-L: testing yourself
I love this discussion about the role of IAs as testers and the value of
testing your own designs (and tend to agree with Jess McMullin). If I may be
permitted to contribute as one of the academics here (ouch - those digs are
beginning to hurt - maybe our failure to live in the 'real world' makes us
soft :) there are a couple of points I'd like to raise that have been hinted
at and pointed to in previous posts.
1- The major problem with testing yourself comes from concerns with
objectivity. Despite how much we try, as humans we are not well equipped to
be unbiased, rational and detached when we have an investment in the
process. It's that emotional response again (the one we never seem to want
to discuss when we talk about interaction and usability). Evaluators need to
observe, note and then consider the results in order to maximise the
benefits of the evaluation. When it is your design that is being used, you
are not observing, noting and considering in a clear-headed, unbiased
fashion (despite what you think) but are probably thinking ahead about what
to change, or worse -- you are explaining away what you see in terms of some
characteristic of the user (they are not typical, they are novices, they are
not taking this seriously, etc.). This may or may not have serious
implications for your evaluation, but you will not know that - hence the
recommendation to have others do it.
2- Evaluation is not just common sense. If it was, we would all do it well
most of the time (even common sense has flaws!). There are several
evaluation methods that tend to get used, though I suspect this discussion is
concentrating on user-based tests. To be blunt - many of the user tests I
have seen performed on commercial products are so obviously flawed to my
eyes that I am never surprised when new problems emerge upon release. I
consider it the responsibility of good UE folks to minimize such
'surprises', and good evaluators are able to do this.
3- But isn't some test better than no test? Often yes, but not always. If
the test is poorly conducted and gives a false sense of confidence in the
results, you may face a lot of difficulty explaining to a client later why
your evaluation failed to find some crucial problems. Most of us assume
that even a flawed test will at least catch the major problems, but I would
not be so quick to conclude this. Five users testing a site is better
than none, but five users in a poorly designed test can mislead as much as
they inform. Back to point 2.
4- So if you cannot get UE folks involved, should you run the test yourself?
Despite all I have said above, probably 'yes'. BUT you really need to
constantly check what you are doing: develop a method and a script, stick to
your procedures, put in place a predetermined plan, watch for your own biases,
and give yourself some breathing space after the test to revisit the data in
a considered frame of mind. I have seen 'trained' UE folks conduct worse
evaluations than you can imagine, so I know that while good training helps,
attitude contributes a lot to the process. However, attitude is not a
replacement for method. As I teach people, there should be no surprises in
running the test, but the results should always contain a surprise for you.
If not, are you sure you tested it correctly?
This is most timely, as I am writing about the role of usability in IA
generally for my latest column in the ASIST Bulletin. Thanks for all the
msgs to date.
ps - and it was good to see so many of you all at the CHI conference.
--- RavynGrae Interactive
--- 410 952 8287
This archive was generated by hypermail 2.1.2 : Sun Nov 23 2003 - 22:54:37 EST