May 1, 2009
The Misinformed Misleading the
Uninformed -- A Bit About Blind Listening Tests
Let's say I handed you a glass of wine or a soft drink
and asked you to taste it, then tell me whether or not it was good. Assuming you could
trust that I wasn't going to poison you, we'd both know that it would be absurd
if you replied, "I can't possibly tell you whether it's good or not without
first knowing who made it, what it is, and how much it costs." In audio reviewing,
though, this happens all the time.
Not too long ago, The Abso!ute Sound's editor,
Robert Harley, wrote an editorial, "The Blind (Mis-)Leading the Blind," which
has been reprinted on the magazine's website. Read it and you'll see that it is yet another article
frowning on the use of blind tests in the reviewing of audio components. In particular, Harley
says, "The answer is that blind listening tests fundamentally distort the listening
process and are worthless in determining the audibility of a certain phenomenon." I
feel he's wrong, and although Harley encourages readers to respond in the magazine's
forum, it seems more fitting that I write about it here.
Blind testing refers to the practice of concealing from the
reviewer the identity of the product under test, in order to eliminate the bias associated
with knowing the product's make, model, price, appearance, etc. Blind testing is
commonplace in everything from wine tasting to medical experiments; in the scientific
community, it's the only way to produce results that can be accepted
with any degree of credibility. You'd think it would be common among audio reviewers
as well, but that's not the case -- a situation reinforced by the kind of article
written by Harley. Unfortunately, such articles are, in my opinion, examples of the
misinformed misleading the uninformed.
Blind testing is a good way to reduce bias and make a more
honest assessment, and it has been used time and again to improve the audio products we use
today. Blind tests are at the core of the decades' worth of research into loudspeaker
design done at Canada's National Research Council (NRC). The NRC researchers knew
that for their results to be credible within the scientific community and as meaningful as
possible, they had to eliminate bias, and blind testing was the only way to do
so. Many of the companies -- Axiom, Energy, Mirage, Paradigm, PSB, Revel, etc. -- that
participated in the NRC's research achieved great success as a result, and they use blind
testing in their own research and product development. Such firms know that
researchers and reviewers aren't the only ones susceptible to bias -- everyone in a
company, especially designers, can succumb to it and thus skew the results. Presumably,
Robert Harley has reviewed products made by some of these companies. One has to ask: If
blind testing is suitable for them, why not for him?
Probably the most eye-opening take on this comes from
Harman International's Sean Olive, who recently wrote about it in a blog entry titled
"The Dishonesty of Sighted Listening Tests." Harman performed
tests to see if there was a disparity in results between their blind and sighted tests,
which Olive summed up as follows: "The psychological biases in the sighted tests were
sufficiently strong that listeners were largely unresponsive to real changes in the sound
quality caused by acoustical interactions between the loudspeaker, its position in the
room, and the program material. In other words, sighted tests produce dishonest and
unreliable measurements of how the product truly sounds." The idea that sighted
tests, not blind tests, are highly unreliable should send shockwaves through the reviewing
community, and lay articles like Harley's flat on their backs. It should also have
consumers eyeing with suspicion all product reviews based on sighted listening.
I'm biased toward blind listening tests because
I know they work. I've participated in blind listening tests at the NRC, as well as
at some of the manufacturers mentioned above. Some reviewers and I have also set up blind
experiments in my listening room to help us assess the performance of certain products. I
find blind listening actually easier than sighted listening because I don't have to
concern myself with anything about the product other than its sound. Blind listening
allows me to better focus on that sound. What's more, there's rarely a case
where I can't hear differences in the sound, which runs counter to
Harley's argument that blind testing distorts the listening process.
A component such as the Classé Audio CAP-2100
integrated amplifier helps make the testing of source components and cables easy, because
you can match the levels of the various inputs and turn off any inputs you're not
using. Provided the listener has not set up the system, and can't see which source is
hooked up to which input, the listener can then "blindly" select those
inputs/sources and assess only what is actually heard.
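For readers who like to see the mechanics, the procedure above can be sketched in a few lines of Python. This is only an illustration of the protocol -- a helper secretly shuffles which source feeds which input, the listener picks by input label alone, and the mapping is revealed afterward; the function and source names are my own, not anything from Classé:

```python
import random

def run_blind_trials(sources, n_trials=10, seed=None):
    """Simulate blind source selection: the source-to-input mapping is
    hidden from the listener, who chooses by input label only; the source
    behind each pick is revealed only after the trials are recorded."""
    rng = random.Random(seed)
    results = []
    for trial in range(1, n_trials + 1):
        mapping = list(sources)
        rng.shuffle(mapping)                      # hidden source-to-input assignment
        labels = [f"Input {i + 1}" for i in range(len(mapping))]
        pick = rng.choice(labels)                 # stand-in for the listener's preference
        results.append((trial, pick, mapping[labels.index(pick)]))
    return results

# Hypothetical source names, purely for illustration.
for trial, label, source in run_blind_trials(["Source A", "Source B"], n_trials=3, seed=1):
    print(f"Trial {trial}: picked {label}, which was {source}")
```

The point of the shuffle on every trial is that a consistent preference for one source, rather than one input position, is the only way a listener can score above chance.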
That said, these days I'm part of a
minority among audio reviewers. For a long time, I've wondered: Why are so many
reviewers dismissive of blind testing and reluctant to have any part in it, particularly
when it can be shown to be highly effective? After more than 13 years of reviewing, and of
seeing what goes on in the reviewing community, I think it comes down to two things: a
lack of knowledge, and fear.
From what I can tell, those who dismiss blind testing have
never actually participated in a well-designed blind listening test. When I hear reviewers
dress the practice down, their knowledge seems to come from what they "know" of
or presume about blind testing, not from what they themselves have experienced of it.
Therefore, I have to assume that they simply lack a clear understanding of how a blind
test works. Perhaps if they had this knowledge, they'd know that, in a well-designed
blind test, it's quite easy to distinguish between products, provided there's
something to distinguish them.
Then there's fear. It's not farfetched to
think that some reviewers' "golden ears" may not seem so golden if
it's disclosed to readers that they have trouble arriving at the same conclusions
under blind conditions that they do in sighted tests. Right now, reviewers are operating
like card dealers who not only have the odds of the house on their side, but also the
ability to see the cards before they're dealt. I'm pretty sure that reviewers
who are unsure of their ability to hear with only their ears have a vested interest in
keeping blind tests away from their work, lest the world find out what their ears are
really made of.
I believe that if those opposed to blind listening were
privy to a well-set-up blind test that allowed them to listen at their leisure the way
they do in sighted tests, the only difference being that they wouldn't know the
identity of the product they were listening to, the results might surprise them. I also
believe that if blind testing were relied on more than sighted testing, it would make for
fairer reviews and more useful results. After all, I'm sure that all reviewers --
even Robert Harley -- without knowing anything more about it, can tell you whether or not
they like the taste of a drink. Why can't they do the same with audio gear?
The downside: Although I believe in blind testing and the
good it can bring, it's not always practical to do, which is why you don't see
much of it in SoundStage! Network reviews. Next month, I'll talk about the challenges
involved in actually conducting blind tests, and what we're attempting to do to
overcome those challenges so that we can institute more such tests for future GoodSound!
reviews.
. . . Doug Schneider