UXesSeo Monday, 21 December 2009

User tests in usability are costly but have a high and irreplaceable ROI in terms of data quantity and/or quality. This is what we tell our clients. Now, when they ask for a cheap, quick and still valuable user test plan (don't they?!), we may give in to the temptation of

  • using ordinary test monitors with no usability background, 
  • reducing the number of participants to the key minimum of 5 users (see Nielsen), 
  • taking notes instead of recording the think-aloud session,
  • working with several test monitors running simultaneous tests.
This is the perfect combination for not reaching the supposed 85% of usability problems identified with 5 users (according to Nielsen). Why?

The test monitor should have at least an average usability background, because he has to be able to ask the participant to think aloud about why he hesitates here or there, whether he saw that option, whether he would prefer the interface to be adapted in this or that way, etc. A first problem-interpretation phase can start during the test itself. A usability professional costs more but brings key value to the test.

On the other hand, reducing the number of participants is not problematic by default. It depends on the type of test plan (repetitive or not), the target audience (one single profile or not), the interface homogeneity (many different interfaces or not), etc. The problem arises when this is combined with the other risk factors below.
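For reference, Nielsen's 85% figure comes from the Nielsen-Landauer model found(n) = 1 − (1 − λ)^n, where λ is the probability that a single user uncovers any given problem (about 0.31 in their original data). A minimal sketch of that curve:

```python
# Nielsen & Landauer model: share of usability problems found by n users.
# lambda_ is the per-user detection probability (0.31 in their original data).
def problems_found(n, lambda_=0.31):
    return 1 - (1 - lambda_) ** n

for n in (1, 3, 5, 10):
    print(n, round(problems_found(n), 2))
# With lambda_ = 0.31, 5 users give ~0.84, i.e. the famous "85%".
```

Note that the risk factors discussed here all lower the effective λ, so the same 5 users detect noticeably fewer problems than the model promises.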

Actually, if you also choose to avoid recording the test and ask the test monitor to take paper notes instead, you will miss a lot of other key information. First of all, you cannot take notes and still pay sufficient attention to the participant's body language and mouse movements. Furthermore, people naturally filter what they write down, and this is dangerous when combined with few participants and multiple monitors.

Now, if you also split the test plan across several test monitors in order to save time, each monitor will still filter what he decides to write down. A single monitor might decide to write down something he had dropped in a previous test, just because he heard that same comment for the third time. If he only monitors one or two tests, he may drop a comment that is actually repeated across the participants - and would be worth a note. The person in charge of merging all the notes and extracting the main user-problem trends may also miss key data that was mentioned only once instead of several times. You can still organize a post-test brainstorming with all the monitors, but each one will naturally focus on what he wrote down before.

Do you still expect to reach 85% problem detection with 5 users in such conditions? I once experienced a test plan run by two monitors: one taking paper notes, the other recording video/audio. Many times, at the note-merging stage, the paper-note monitor admitted that he had missed key data that the video/audio monitor had captured when rewatching the recording. The video/audio monitor also asked the participants more useful and detailed questions during the test scenarios.

So take care when assuring the client that he will still get the same result quality with a test plan made quicker and cheaper by those four means!

PS: Did J. Nielsen read my mind when writing the just-published article "Anybody can do usability"?! Now you already know my opinion about that assertion. What is yours?

UXesSeo Sunday, 13 December 2009

Did you ever wonder what that small search field below a few search results in Google is?

This is the "secondary search field". For e-commerce sites, it brings some advantages:
  1. user attention is caught by a layout break: a form field in the middle of the result list
  2. a form field immediately calls for action, which is especially interesting for e-commerce
  3. a (well-performing) search feature is the quickest way to find a product in an e-store; this shortcut to the product also attracts the potential customer
According to Google itself (Google blog announcement of March 2008), it is based on the high potential match rate between the result and the query, and on the kind of website targeted (width, PageRank, etc.).

Now the question is: why does this search field not always appear for the same query and the same target site? For instance, a query on 'bebo' (the social network) does not always show that search field (depending on the computer on which I tested). For the same query, it sometimes shows, sometimes not. Any idea which user settings could be involved in this behaviour?

Anyway, this feature, which has existed for nearly two years now in Google UK, only rarely shows up in Google BE. Did it lack the expected success in the UK? I found no update about that on the Google blog, nor elsewhere...

UXesSeo Saturday, 12 December 2009

In the era of the real-time internet, everyone tweets interesting articles for the benefit of their community. The 'retweet' options are, in my view, so overused that we end up receiving the same piece of information a dozen times via subscriptions to different accounts.

Cleaning up the noise made by personal tweets ("I bought a new dog", "I found my Christmas menu", etc.) can be done more or less accurately if you equip yourself with the right gadgets. But where nothing, or very little, is done is in cleaning up duplicate tweets. The Greasemonkey add-on does not really work, and Twitter, as usual, lets others do the refinement work (isn't that too risky, by the way? How long before a competitor integrates the features that are currently external to Twitter and not always 100% compatible?).
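To illustrate what such duplicate cleaning could look like (a hypothetical sketch of my own, not how any existing add-on actually works), one could normalize each tweet's text - stripping RT prefixes, @mentions and URLs - and keep only the first occurrence:

```python
import re

def normalize(tweet):
    """Strip a leading 'RT @user:' prefix, URLs and case differences,
    so that retweets of the same article compare equal."""
    text = re.sub(r'^(RT\s+)?@\w+:?\s*', '', tweet)   # drop RT/@mention prefix
    text = re.sub(r'https?://\S+', '', text)          # drop shortened URLs
    return re.sub(r'\s+', ' ', text).strip().lower()  # collapse whitespace

def dedupe(timeline):
    """Keep only the first tweet of each duplicate group, in order."""
    seen, kept = set(), []
    for tweet in timeline:
        key = normalize(tweet)
        if key not in seen:
            seen.add(key)
            kept.append(tweet)
    return kept
```

Of course, real retweets vary more than this (added comments, different URL shorteners for the same article), which is exactly why a naive client-side filter feels so fragile compared with a feature integrated into Twitter itself.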