The Pain and Gain of Taxonomy User Testing

As a taxonomy consultant, I always recommend (rather, urge with great gravitas) that my clients reserve some time and budget for adequate user testing. As they say, the proof is in the pudding: there’s nothing better than quantitative data to tell you whether you’ve built a structure that really resonates with your core audiences and facilitates their tasks. Creating a taxonomy without testing puts a lot of faith in guessing – albeit usually pretty good guessing, based on industry experience and knowledge of best practices, if you have a good taxonomist.

Having done user testing a few times on taxonomies I’ve built, I compare the feeling to what I imagine it’s like to be an actor watching yourself in a film.

Imagine you are Scarlett Johansson (I often do), watching yourself in a film… You cringe at the sight of yourself on the big screen, pick apart your performance, thinking “I could have done it this way…”, hear the audience laugh (or not) at the jokes… And the pressure of the movie’s success weighs heavily – will it be a bomb? Will the production company go berserk? Will I ever work again?

Ok, so taxonomy user testing is not quite that dramatic, but there are parallels. Watching users navigate a taxonomy I’ve built is always teeth-clenching for me – I am constantly thinking, why are they clicking there? Are they blind? I should have gone with my first idea for that label… The client is going to freak when they find out that one of their star products is unfindable… Will I ever work again?

Of course, that’s the whole point of user testing – to prevent the taxonomy from bombing.

I am always surprised at what I discover in these tests. Categories or labels that I thought were no-brainers can turn out to be black holes of findability. Often this is related to how the taxonomy plays out as a whole in users’ eyes, rather than the specifics of a particular category. When we build taxonomies, we can get very focused on individual labels and categories and neglect the interplay between terms across the entire structure. The “stickiness” (or lack thereof) of a particular concept or label can severely affect the performance of other, seemingly unrelated categories – a ripple effect of sorts.

For example, a recent test on a toy taxonomy had an interesting ripple effect… One of the labels in the taxonomy included the word “pet”. That term turned out to be so sticky that users looked in this category for any product that even remotely resembled an animal, regardless of how well the category actually fit the product.

User testing also tends to enlighten stakeholders about the dangers of using internal terminology or organizing principles on customer-facing sites. Sometimes clients are very reluctant to change categories or terms that they have been using for a long time or that reflect how they understand their product line internally. They believe customers think the same way, and they can’t commit to change until hard data proves that users don’t care, don’t understand, or are just plain wrong about particular terms.

I am a big fan of Steve Krug’s testing motto: test early and test often. From the taxonomist’s point of view, user testing can be a mix of cringe-worthy moments and fist-pumping “I knew it”s. But there is nothing better than getting real users to point out the flaws and successes in your taxonomy.

For more on taxonomy usability testing, you can download our recent webinar on the topic from our site.