March 13, 2012

Usability 2.0: Using Data to Drive Usability Findings

Paul Newbury


Paul Newbury explains how to get the best value out of your data

Traditional usability sessions often consist of tens of people looking at a website in a staged environment. It is now imperative that businesses learn how the wealth of data available online, including huge sources of social media data, not only provides a robust source of usability evidence with a very significant sample size, but also gives insight into how user groups, and even individuals, prefer to interact with an online brand.

Breaking the tradition

Historically, it has been considered essential for all new design work or website changes to be tested in a laboratory environment, with a group of individuals (often paid) recruited to give feedback on proposed changes and, on occasion, to make their own suggestions as to what could change.

Whilst this approach provides valuable and in-depth insight, it is often based on a small sample size and comes at significant financial cost. At the same time, there is an abundance of online data, sourced from real visitors to the site, which can be used to supplement usability testing when approached in the right manner.

Advanced analytics strategies

With advances in implementation techniques and supporting analytics tools, large amounts of information can be captured that would previously have been the preserve of usability sessions. By implementing analytics strategies in the correct manner, this type of data is now available within most incumbent analytics solutions, and the sample size is proportionate to the volume of traffic received by your websites. Furthermore, correctly designing and implementing A:B and multivariate tests online allows comparative data to be gathered, again with a significant sample size, for existing and proposed website changes.

Some example implementations we have completed at Yard Associates have supported this process by providing:

Form field interactions

By recording an event each time a user changes a form field value, analysis can be performed on individual forms to identify the fields most clearly causing conversion issues.
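As an illustration, the sketch below shows one way such events could be captured in the browser. The trackEvent function is a hypothetical stand-in for whichever analytics tool's custom-event call is actually in place.

```typescript
// Minimal sketch of form-field interaction tracking.
// trackEvent is a hypothetical wrapper around the analytics tool in use.
declare function trackEvent(category: string, action: string, label: string): void;

function instrumentForm(form: HTMLFormElement): void {
  // Fire an event whenever the user changes a field, recording which field
  // it was, so abandonment can later be broken down field by field.
  form.addEventListener("change", (event: Event) => {
    const field = event.target as HTMLInputElement | HTMLSelectElement | HTMLTextAreaElement | null;
    if (field && field.name) {
      trackEvent("form-interaction", form.id || "unnamed-form", field.name);
    }
  });
}

document.querySelectorAll<HTMLFormElement>("form").forEach(instrumentForm);
```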

Page location analysis

By implementing tags to identify key areas of the page, analysis can be performed to show exactly where customers are interacting with the site (not just where their eyes are drawn).
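One possible approach, sketched below, is to mark key regions of the page with a data-region attribute and report the region of every click; both the attribute name and the trackEvent helper are assumptions rather than a prescribed implementation.

```typescript
// Minimal sketch of page-location analysis via tagged page regions.
// Assumes key areas carry a data-region attribute (e.g. "hero", "sidebar").
declare function trackEvent(category: string, action: string, label: string): void;

document.addEventListener("click", (event: MouseEvent) => {
  const target = event.target as HTMLElement | null;
  // Walk up from the clicked element to the nearest tagged region.
  const region = target?.closest<HTMLElement>("[data-region]");
  if (region) {
    trackEvent("page-location", window.location.pathname, region.dataset.region ?? "unknown");
  }
});
```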

Banner/Content A:B and MVT testing

By capturing the data in the correct manner and implementing tags thoroughly on the site, rudimentary A:B and MVT testing can be performed without the need for specialist tools. Furthermore, different versions can be tested and correlated against all on-site interactions (including purchase, registration, sign-ups, etc.) to give a well-formed view of the actual success of each variant.
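A rudimentary A:B split of this kind could look something like the sketch below: the variant is assigned once per visitor, persisted in a cookie, and reported with the analytics data so it can be correlated against later conversions. The cookie name and trackEvent helper are illustrative assumptions.

```typescript
// Minimal sketch of a home-grown A:B split without a specialist tool.
declare function trackEvent(category: string, action: string, label: string): void;

function getOrAssignVariant(): "A" | "B" {
  // Reuse a previously assigned variant so the visitor sees a consistent experience.
  const match = document.cookie.match(/(?:^|; )ab_variant=([AB])/);
  if (match) {
    return match[1] as "A" | "B";
  }
  const variant: "A" | "B" = Math.random() < 0.5 ? "A" : "B";
  document.cookie = `ab_variant=${variant}; path=/; max-age=${60 * 60 * 24 * 30}`;
  return variant;
}

const variant = getOrAssignVariant();
document.body.classList.add(`variant-${variant}`); // CSS shows or hides each banner version
trackEvent("ab-test", "homepage-banner", variant); // recorded so conversions can be segmented by variant
```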

Click-tracking, heat-mapping and data overlays

The majority of analytics tools in the marketplace provide an element of this functionality, and often data capture and configuration are the only real barriers to this insight. Implementing this data capture allows further analysis of customer interactions, including the ability to highlight clicks on elements that weren't clickable, again based on trends across a significant sample size.
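The sketch below shows the kind of raw click capture this relies on: each click is recorded with its page coordinates and whether it landed on something interactive, so 'dead' clicks can be highlighted later. As before, trackEvent is a hypothetical stand-in for the analytics tool's custom-event call.

```typescript
// Minimal sketch of raw click capture for heat-mapping and dead-click analysis.
declare function trackEvent(category: string, action: string, label: string): void;

document.addEventListener("click", (event: MouseEvent) => {
  const target = event.target as HTMLElement | null;
  // Treat the click as "clickable" if it landed on or inside an interactive element.
  const clickable = Boolean(target?.closest("a, button, input, select, textarea, [onclick]"));
  const label = `${Math.round(event.pageX)},${Math.round(event.pageY)},${clickable ? "clickable" : "dead"}`;
  trackEvent("click-map", window.location.pathname, label);
});
```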

This list is not exhaustive; however, it gives a flavour of some of the classic usability information that can be obtained from existing toolsets and databases. The key is simply to implement the necessary analytics capabilities so that this data flows into your incumbent analytics tools.

Using customer visit recording & replays

Further to the use of analytics for this purpose, a number of products have become available to allow the capture and replay of an entire visit. This approach offers a similar but generally more detailed view of the online visitor data, and allows the capture of (amongst other things):

• A timed view of the scroll position of the page (a minimal sketch of this kind of capture follows this list).

• A view of the form field entries and the order in which they were completed.

• The ability to choose customers within a segment based on behaviour and witness the visit-steps leading to that behaviour.
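To illustrate the first point, the sketch below samples the scroll position with timestamps and flushes the buffer when the visitor leaves. The collection endpoint (/collect/scroll) is a hypothetical example; real replay products capture far more detail, but the principle is the same.

```typescript
// Minimal sketch of timed scroll-position capture for later replay.
type ScrollSample = { t: number; y: number };

const samples: ScrollSample[] = [];

window.addEventListener("scroll", () => {
  // Record the vertical scroll offset with a timestamp.
  samples.push({ t: Date.now(), y: window.scrollY });
});

// Flush the buffer to a (hypothetical) collection endpoint when the visitor leaves.
window.addEventListener("pagehide", () => {
  if (samples.length > 0) {
    navigator.sendBeacon("/collect/scroll", JSON.stringify(samples));
  }
});
```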

Utilising this level of detail provides almost all of the insight of a traditional usability session, but over a much larger sample size. Of course, the ability to replay these sessions typically requires a significant amount of data capture and storage, so this option is often a costly one, although the insight it provides can be vital.

Social media & other referring data

In addition to the onsite data, a large body of social media data typically exists about a brand and its online presence, which supplements the onsite statistics by providing voice-of-customer feedback in a digital, collectible form. This data does more than provide additional feedback on a company's website: it also gives valuable insight into the preferences and requirements of individuals who are potential future visitors or customers and, used in the right context, could support a very personalised experience for any of them who lands on the website.

At its simplest, this could be an awareness that users referred to the site via Twitter campaigns will arrive with a specific expectation, so landing pages and messaging can be tailored to match it. Similarly, links sent to individuals via Direct Message on Twitter already carry the context of the conversation between that individual and the brand, and so could point to pages optimised to continue that conversation on the website itself as efficiently as possible. It is in this content personalisation, where the site itself is tailored to match individual or group expectations and preferences, that a business truly needs to put large volumes of data to work in order to surface the available usability findings.
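At its most basic, such tailoring could look like the sketch below, which checks the referrer or a campaign parameter and swaps the landing message accordingly. The utm_source parameter and the hero-message element are assumptions made for illustration, not a prescribed implementation.

```typescript
// Minimal sketch of referrer-aware landing content for Twitter campaign traffic.
const params = new URLSearchParams(window.location.search);
const source = params.get("utm_source") ?? "";
const referrer = document.referrer;

const hero = document.getElementById("hero-message");
if (hero && (source === "twitter" || referrer.includes("t.co"))) {
  // Match the landing message to the expectation set by the campaign.
  hero.textContent = "Welcome from Twitter: here is the offer you came to see.";
}
```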

The drawbacks

Whilst the focus of this article has been to highlight alternatives to usability testing, that is not to say there is no longer a place for more traditional usability studies. There are clearly situations where even A:B testing on a live customer base can be risky, not least if the results lead to lost sales or opportunities. Additionally, for new designs or functionality, data capture and screengrabs mean that early designs could be 'leaked' to the marketplace with little effort.

The key is to ensure that the available data is used to its full potential and supplements more traditional methods. That way, designs themselves can be based on analysed data and findings, websites can be tailored to meet the specific expectations of customers, and usability studies themselves will prove more efficient, since many of the required findings will already be known.

Paul Newbury is Director at Yard Digital