User Journeys: How not to do web monitoring

Date: 3rd March 2010
Author: Deri Jones

It’s funny sometimes how much effort is spent on trying to make a web brand handle the traffic peaks associated with major marketing campaigns – and yet how little real progress can be seen.

Poor old NatWest / RBS hit the papers last week, when their online banking was down for over half a day. It can happen to the biggest and best in London (well, the biggest anyway).

I heard about this apocryphal Grimms fairy tale of an eCommerce story this week.
For this big-brand website, confidence that it could handle the expected future traffic hammering was vital.

So the Business Team rightly called for a Web Load test. All agreed it was a good thing to do. And the Tech team were tasked to arrange it.

A month later the testing was said to be done and satisfactory. It took a few asks and some nagging before the actual test report managed to hit anyone’s desk. What it said was that the site could handle up to 30,000 concurrent users.

At first, this gave the eCommerce director of the site confidence that it would be OK. It was a big number, he thought.

But when he asked how many Orders per minute that would mean, nobody could tell him: the testing hadn’t been at that level, he was told; it was more about CPU and page hits.

He asked for more: what would the tech team do to handle more throughput? Their response seemed fairly generic: they would have to go away and look at new hardware. Any software tweaks possible? Not really, it seemed.
So no concrete plans to improve capacity came about.

And two months later, the big marketing day came, and all the marketing money that was planned got spent, and the visitors came and the site was busy.

But sadly many visitors found the site slow and error-prone that week.
It never crashed as such, but it was less than smooth. With the problems on the site, confidence ebbed, and at one point the Marketing team even pondered a mailshot that kind of said sorry. But they canned that idea anyway.
Certainly the Conversion ratio was considerably lower than average. Even the Chairman said his wife had found the site a problem at the weekend.

The eCommerce director asked his team how bad the site had been that week. It must never happen again: “So what actually did happen?” he asked.
Someone said that there was some performance web monitoring in place, some kind of website performance analyzer data stored on the Intranet, so that would show the numbers. Nobody had the details to hand. Nobody was keen to talk to the guy who was supposed to have control over the outsourced web monitoring.
“If we don’t know exactly what happened on our site, confidence in the next campaign is going to be hard to find” they all agreed.

Finally, the details and numbers of the web monitoring they had in place for that week were found, and the graphs were pored over.

Oh dear, why is this Journey’s graph yellow all week? Ah, it seemed that that Journey wasn’t really following a journey the way a site customer would: it was just a series of URLs, fixed in advance and hit in sequence. And the product ID set in those URLs ran out of stock on the 2nd day of the big week, so the Journey couldn’t finish, and it was marked yellow for Warning.
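To make that concrete, here is a rough sketch in Python of the difference between a Journey with a product ID baked into its URLs and one that picks whatever is in stock, the way a shopper would. The URLs, product ID and page markup are invented for illustration; this is not the actual site or the vendor’s tooling.

    # Rough sketch only: hypothetical URLs, product IDs and page markup.
    import requests
    from bs4 import BeautifulSoup

    BASE = "https://www.company.com"

    def fixed_url_journey():
        """Static style: the product ID is baked into the URL list.
        The moment SKU 12345 sells out, this step fails and the Journey
        goes yellow, whatever the site's real performance is."""
        r = requests.get(f"{BASE}/product/12345/addToBasket", timeout=10)
        r.raise_for_status()
        return r.elapsed.total_seconds()

    def dynamic_journey():
        """Dynamic style: read a live category page and add whichever
        product is actually in stock today, as a customer would."""
        page = requests.get(f"{BASE}/category/bestsellers", timeout=10)
        soup = BeautifulSoup(page.text, "html.parser")
        link = soup.select_one(".product.in-stock a.add-to-basket")  # assumed markup
        if link is None:
            raise RuntimeError("no in-stock product found on the category page")
        r = requests.get(BASE + link["href"], timeout=10)
        r.raise_for_status()
        return r.elapsed.total_seconds()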

And this is strange – this other Journey didn’t show any extra errors or slowdowns at all that week? How come? The other two graphs do show noticeable slowdowns.

Another bit of digging around with the monitoring suppliers. It seemed that although this Journey was called ‘Add to Basket via Search’, it too was just a list of fixed URLs. And that list had been put together 6 months ago: i.e. before the site upgrade earlier in the year. The old URLs that customers six months ago would have used did still work. But they were not the URLs that customers today follow: the Search button on www.company.com used to be handled at a URL of www.our2ndBrand.com/search.do; but on the site now, the Search button actually takes users to www.company.com/newSearch.do.

Oh dear, the Add to Basket via Search Journey was not even hitting the major brand site any more! It was not a dynamic Journey, looking at the pages that the user sees, at all.

That’s why it was showing all clear during the week when the real site was struggling.
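Again, a rough illustrative sketch, with made-up URLs and form markup: a stale fixed-URL check keeps coming back green because the old redirect still answers, while a dynamic Journey reads the live homepage to find where the Search button really goes today.

    # Rough sketch only: URLs, form ids and parameters are invented.
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    BASE = "https://www.company.com"

    def static_check(urls):
        """Pre-cooked list style: hit fixed URLs and record status codes.
        A stale list can return 200 all week while real customers are
        using completely different pages."""
        return {u: requests.get(u, timeout=10).status_code for u in urls}

    def dynamic_search_journey(term):
        """Follow-what-users-do style: load the live homepage, find where
        the search form actually posts today, and submit it."""
        home = requests.get(BASE, timeout=10)
        soup = BeautifulSoup(home.text, "html.parser")
        form = soup.find("form", id="search")            # assumed form id
        action = urljoin(BASE, form["action"])           # e.g. /newSearch.do today
        results = requests.get(action, params={"q": term}, timeout=10)
        results.raise_for_status()
        return action, results.elapsed.total_seconds()

    # The old URL still answers, so the static check stays green...
    print(static_check(["https://www.our2ndBrand.com/search.do?q=kettle"]))
    # ...while the dynamic Journey measures the page customers actually hit.
    print(dynamic_search_journey("kettle"))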

The eCommerce director said, “When it comes to knowing 24/7 what User Experience is like on our site, confidence and words fail me. It’s been the wolf in bed all along, not Grandmother, and we really didn’t realise what our web site performance was like.”

And the Chairman asked the wood-chopper to come out of the forest, and help the eCommerce Director negotiate his redeployment with HR.

The Motto of the tale: next time someone says you have User Journey monitoring in place – ask them exactly what you’re getting: whether it is dynamically following what users do, or just a pre-cooked static list of URLs.
