Wrong pricing problem – identified by Website Monitoring

Date: 5th October 2010
Author: Deri Jones

As website performance testers, we spend a lot of our meetings in detailed, in-depth discussion – helping eCommerce organisations get meaningful performance metrics for User Journeys that make sense both at a business level and at the under-the-bonnet technology level – and applying that insight to the complexities of software or infrastructure tweaks.

But today’s meeting was a nice simple one: quick in and out.

That’s despite the business severity of the issue – after all, a pricing problem can quickly turn into an ongoing profit problem if it isn’t contained.

The company in question have been using our dynamic User Journey monitoring for about a month, and have already demonstrated the ROI internally: seeing how user experience on the site varies 24/7, and timing marketing campaigns, where possible, to avoid overloading already busy periods on the website.

But better than that, the reason for the big smiles around the table today was that we’d managed to nail down one of those niggling, sporadic problems that had lurked in the shadows for months without ever really being identified or tracked down.

The Call Centre had logged occasional complaints – some customers had reported symptoms, but no one had ever been able to reproduce or pin down the specific problem – so the tech team could hardly be expected to resolve it.  Like a ghost ship it would appear through the fog now and again and then disappear once more.  Most people had never seen it. Those who had seen it had not seen it often, and maybe not recently. Everybody had kind of learned to ignore it.  Lots of folk didn’t even believe it was real.

The problem was exposed through one of the dynamic User Journeys we’d set up as part of their website monitoring. The journey acts like a regular site visitor: it finds a product by navigating through the menus, making a random choice of category, subcategory and final product – right down to size and colour – and then places that product in the basket. Five or six pages altogether, a different route and a different product every time, running every 5 minutes, night and day.
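To make the idea concrete, here is a minimal sketch of that kind of randomised journey, written with Playwright in Python. It is not SciVisum’s actual tooling, and the URL and CSS selectors (BASE_URL, a.category-link, .price and so on) are placeholders standing in for whatever the real site uses.

```python
import random
from playwright.sync_api import sync_playwright

BASE_URL = "https://shop.example.com"   # placeholder site

def pick_random(page, selector):
    """Follow a randomly chosen link matching the selector, like a browsing visitor."""
    links = page.query_selector_all(selector)
    if not links:
        raise RuntimeError(f"No links found for {selector!r}")
    random.choice(links).click()
    page.wait_for_load_state("networkidle")

def run_journey():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(BASE_URL)

        # Wander through the menus: category -> subcategory -> product.
        pick_random(page, "a.category-link")
        pick_random(page, "a.subcategory-link")
        pick_random(page, "a.product-link")

        # Note the price shown on the product page, then add to basket.
        product_price = page.inner_text(".price")
        page.click("button.add-to-basket")
        page.goto(BASE_URL + "/basket")
        basket_price = page.inner_text(".basket-item .price")

        browser.close()
        return product_price, basket_price

if __name__ == "__main__":
    print(run_journey())
```

Each run picks a fresh route through the catalogue; a scheduler kicking the script off every five minutes gives the round-the-clock coverage described above.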

Our monitoring showed that now and again the price of the product on the product page was not the price shown in the basket!  It could vary by as much as a fiver (five British pounds).  Immediately the client remembered the occasional complaints from the past – maybe they were real after all!

Sure enough they were. We scripted our journey to raise a specific error whenever the price mismatch was spotted, and it soon became apparent that, although sporadic, the error was happening rather too often – around six to twelve errors a day. Against a sampling rate of one journey every 5 minutes (288 journeys a day), that’s roughly 2–4% of runs failing: a few percent too many.
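As an illustration (not the actual SciVisum check), the mismatch test itself can be as simple as parsing the two displayed prices and flagging any difference – the hypothetical helper below assumes the prices arrive as strings such as "£24.99".

```python
import re

def parse_price(text: str) -> float:
    """Pull a numeric price out of a display string like '£24.99'."""
    match = re.search(r"(\d+(?:\.\d{1,2})?)", text.replace(",", ""))
    if not match:
        raise ValueError(f"Could not parse a price from {text!r}")
    return float(match.group(1))

def check_prices(product_price: str, basket_price: str, tolerance: float = 0.0):
    """Raise a descriptive error if the basket price differs from the product page price."""
    shown = parse_price(product_price)
    charged = parse_price(basket_price)
    if abs(shown - charged) > tolerance:
        raise AssertionError(
            f"Price mismatch: product page showed £{shown:.2f}, "
            f"basket showed £{charged:.2f} (difference £{abs(shown - charged):.2f})"
        )

# Example: a £5 discrepancy like the one the journey kept catching.
# check_prices("£24.99", "£29.99")  -> AssertionError
```

Hooked onto the end of each journey run (e.g. check_prices(*run_journey()) in the sketch above), every failure becomes a timestamped error event that the monitoring can count and alert on.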

The client’s tech team were then able to dig down and work out the root cause, now that there were plenty of samples for them to look at and see what was happening.

It turns out that the root cause is something I talked about last week in this blog – the issue of caching web content.

Last time it was holiday websites and the pleasures and pains of web caching; today’s client was in a different sector, but again the tech team were using a cached product list and price list during the day, updated infrequently from their master backend merchandising systems.

If a price changed on the master system, the user wouldn’t know, because the cached product pages didn’t reflect the change.  But they would find out when the product was put in the basket: as part of a pre-order check, the master database was queried for availability info, and the price would suddenly change on screen.
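Here is a toy sketch of that mechanism, under the assumption that product pages read from an infrequently refreshed price cache while the basket’s pre-order check goes straight to the master database – the data sources and refresh interval are made up for illustration.

```python
import time

# Hypothetical stand-ins for the two data sources involved.
MASTER_DB = {"SKU-123": 29.99}          # merchandising system: always current
price_cache = {"SKU-123": 24.99}        # web tier cache: refreshed infrequently
CACHE_REFRESH_SECONDS = 6 * 60 * 60     # e.g. refreshed a few times a day
_last_refresh = time.monotonic()

def product_page_price(sku: str) -> float:
    """What the product page shows: the cached price, which may be stale."""
    global _last_refresh
    if time.monotonic() - _last_refresh > CACHE_REFRESH_SECONDS:
        price_cache.update(MASTER_DB)   # infrequent bulk refresh
        _last_refresh = time.monotonic()
    return price_cache[sku]

def basket_price(sku: str) -> float:
    """What the basket shows: the pre-order check hits the master database directly."""
    return MASTER_DB[sku]               # availability check returns the live price too

# Between refreshes, the two sources can disagree by whatever the price moved:
print(product_page_price("SKU-123"), basket_price("SKU-123"))   # 24.99 vs 29.99
```

The mismatch only exists in the window between a price change on the master system and the next cache refresh, which is exactly why it looked so sporadic from the outside.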

Sporadic error detection is a hallmark of the dynamic User Journey approach we use at SciVisum when measuring a site: the tech team quickly gain confidence that a problem is real and needs looking at once they can see the frequency at which it occurs.  Our monitoring tools provide the extra level of detailed evidence that really saves the tech team time: the server ID, to show which server in the farm (or all of them) is producing the problem; the actual raw HTML and AJAX calls of all the pages that contain the problem; which products are affected; and the date and time, etc.
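For illustration only – this is not SciVisum’s actual data model – an error record carrying that kind of evidence might look something like this, with all field names invented for the sketch:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PriceMismatchEvent:
    """One captured failure, with the evidence a tech team needs to investigate it."""
    timestamp: datetime
    product_sku: str
    product_page_price: float
    basket_price: float
    server_id: str                       # which server in the farm served the bad page
    page_html: str                       # raw HTML of the offending pages
    ajax_calls: list[str] = field(default_factory=list)   # URLs of AJAX requests made

event = PriceMismatchEvent(
    timestamp=datetime.now(timezone.utc),
    product_sku="SKU-123",
    product_page_price=24.99,
    basket_price=29.99,
    server_id="web-07",
    page_html="<html>...</html>",
    ajax_calls=["/api/basket/pre-order-check?sku=SKU-123"],
)
```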

So a pretty good meeting, and there was even time to plan a little further ahead than usual, looking at website load test plans for a new platform being readied for launch in 8 months’ time.

It’s not every day that highlights such an obvious benefit of the dynamic user journey approach – it’s a real contrast to the less-than-best-practice approach of simple site-uptime URL monitoring, which is all some teams have to rely on when trying to find out how the website technology (and its bugs) is impacting real users and real sales.

A wicked thought crossed my mind when chatting to my colleague Gomez afterwards: maybe if we’d kept quiet about the problem and run some scripts that would only buy products at the cheaper prices, we could have had a nice little earner, reselling on eBay in Del Boy Trotter style!
