4 analytics mistakes you can learn from – without repeating them

Posted on Tuesday, 18 April 2017 in category Web Analytics. 5 min read • Written by

Črt Podlogar

Web analytics is not easy. Once you finish the technical implementation of the analytics tool and data collection begins, your work has only just started. Human nature, the complexity of the web, a lack of reliable data, and too little thought about what you want to measure – and what that data would actually tell you – can easily lead to misinterpretation.

People have well-formed ideas about the world and the things in it, but those ideas are not always correct. It is no different with the web, with data, and with user behaviour. Combine assumptions and convictions (which are often wrong) with poor-quality data or poorly thought-out analytical methods, and you quickly end up with entirely fictitious results, findings and trends – which in turn lead to wrong decisions.

Anyone working in advertising analytics and optimisation can list plenty of mistakes, surprises and oddities from their career. Below are some of mine.

Mistake #1 – suspiciously good numbers of submitted online registrations

What was the mistake: this illusion sneaked into the analytics without any faulty implementation or misinterpretation of data. A project recorded an extraordinary number of submitted online registrations. The problem was that the online registration wasn’t binding – a separate offline confirmation step followed, and the service wasn’t available to everyone. Users were submitting registrations just to “test” whether the service was available to them. Naturally, not all of them intended to complete the registration; and on top of that, not all of them were even able to.

How did we discover the mistake: honestly? The results were too good to be true. Facebook as a channel usually acts as an awareness tool – it’s good for promotion and brand exposure, but in most cases (with some exceptions, of course) it doesn’t generate last-click transactions. In this case it was bringing in atypically high numbers every week, so we took a closer look.

How was the mistake corrected: we gave users what they actually wanted and needed: a separate tool to check whether the service was available to them. As expected, the number of registrations immediately dropped – but the ones that remained counted. Advertising optimisation could continue smoothly, with decisions based on real, not fictitious, data.

Mistake #2 – wrongly assessing advertising as successful

I have made many analytical and advertising mistakes in my career; this is a major one, and one of the most instructive.

What was the mistake: our definition of the advertising goal was completely wrong. Our task was to generate transactions from new users, so we excluded all existing users from the advertising and measured the transactions it drove. Logical.

We forgot that in almost 100% of cases, a registration and an order are submitted several days before the transaction – that is the nature of the product. Payment follows only a few days after the customer submits the order. Because our advertising was optimised for transactions, we were in effect advertising to people who had already bought the product but hadn’t paid for it yet. All the revenue we recorded would have happened with or without the advertising.

How did we discover the mistake: my life as an advertiser has taught me that it sometimes makes sense to export specific transactions from Google Analytics and compare them with data from the back-end system (CRM, payment or booking system, etc.). I was shocked to find that nearly all the transactions attributed to advertising were made by customers already registered in the system, not by new customers.

How was the mistake corrected: we made sure the client’s order system sent data to Google Analytics after each placed order, before the payment was made. At the same time, we refocused all advertising on generating placed orders and stopped optimising for transactions altogether.
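As a rough illustration of such a server-side fix (in the Universal Analytics era, hits could be sent via the Measurement Protocol), the back-end could fire a transaction hit at order time. The property ID, client ID and order values below are placeholders, not the real project’s data:

```python
# Sketch: build a (Universal Analytics) Measurement Protocol payload so the
# back-end reports the *placed order*, not just the later payment.
# tid/cid/order values are hypothetical placeholders.
from urllib.parse import urlencode

def build_order_hit(transaction_id, revenue, tid="UA-XXXXX-Y", cid="555"):
    """Build a Measurement Protocol payload for a transaction hit."""
    params = {
        "v": "1",            # protocol version
        "tid": tid,          # GA property ID (placeholder)
        "cid": cid,          # anonymous client ID (placeholder)
        "t": "transaction",  # hit type
        "ti": transaction_id,  # transaction ID
        "tr": f"{revenue:.2f}",  # transaction revenue
    }
    return urlencode(params)
    # In production, this payload would be POSTed to
    # https://www.google-analytics.com/collect

payload = build_order_hit("ORD-1001", 49.9)
print(payload)
```

In production you would also deduplicate: the later payment must not be sent as a second transaction with the same ID.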

It makes sense to regularly compare data from Google Analytics with data from your own systems, to catch deviations and mistakes.
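Such a cross-check can be as simple as comparing transaction IDs from the Google Analytics export against the back-end’s list of existing-customer orders. All IDs below are invented for illustration:

```python
# Sketch: which GA-tracked transactions actually came from new customers?
# Transaction IDs are made up for illustration.

ga_transactions = {"T1", "T2", "T3", "T4", "T5"}       # from the GA export
existing_customer_orders = {"T1", "T2", "T4", "T5"}    # from the CRM

from_existing = ga_transactions & existing_customer_orders  # overlap
truly_new = ga_transactions - existing_customer_orders      # genuinely new

share_existing = len(from_existing) / len(ga_transactions)
print(f"Existing customers: {share_existing:.0%}, truly new: {sorted(truly_new)}")
```

In our case, the equivalent of `share_existing` was close to 100% – which is how the mistake surfaced.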

Mistake #3 – the ranking of products in a category impacts the product conversion rate

What was the mistake: we used the analytics to see how a product’s ranking in a category affected its purchase conversion rate. We tested in the two categories with the most visitors, as the quickest way to get a sufficient sample for a decision. The data indicated that products ranked between the first and eighth position sold best. Based on this result, we of course re-ranked the products in all categories, placing those we most wanted to sell in the first eight positions. But as you can probably guess, sales of those products didn’t increase.

The trick was that the shop’s algorithm automatically ranked products so that those with the highest discounts appeared first – but this wasn’t the case in the test categories at the time we ran the test. And so the final product re-ranking was a complete failure.

How did we discover the mistake: the catastrophic result made us repeat the test, which showed that the highest conversion rate had moved lower down the category. After some effort and research, we found that products with the lowest margin had the highest conversion rate – and a low margin means a high discount.

How was the mistake corrected: everything was set back to the initial state, we patted ourselves on the back, and we didn’t open the analytics or Excel again that day. We also came to terms with the fact that discounts are what sell, and that not everything lies in the user experience – sometimes people simply want the product with the lowest price.
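The confound behind this mistake can be sketched with made-up numbers: in the shop’s default sort, heavily discounted products also occupied the top positions, so “top 8 converts best” and “high discount converts best” were the same observation:

```python
# Illustration of the confound in mistake #3 (all numbers invented):
# position and discount move together, so position *appears* to drive
# conversion even though discount is the real cause.

# (position, discount_pct, sessions, purchases)
products = [
    (1, 40, 1000, 55),
    (3, 35, 900, 43),
    (7, 30, 800, 30),
    (12, 5, 700, 8),
    (18, 0, 600, 5),
]

def group_stats(rows):
    """Pooled conversion rate and average discount for a group of products."""
    sessions = sum(s for _, _, s, _ in rows)
    purchases = sum(p for _, _, _, p in rows)
    avg_discount = sum(d for _, d, _, _ in rows) / len(rows)
    return purchases / sessions, avg_discount

top8 = [r for r in products if r[0] <= 8]
rest = [r for r in products if r[0] > 8]

cr_top, disc_top = group_stats(top8)
cr_rest, disc_rest = group_stats(rest)
print(f"top 8: conversion {cr_top:.1%}, avg discount {disc_top:.0f}%")
print(f"rest:  conversion {cr_rest:.1%}, avg discount {disc_rest:.0f}%")
```

Splitting the comparison by discount level as well as position would have exposed the real driver before the re-ranking went live.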

Mistake #4 – the “major discovery” of CRO/UX optimisation

What was the mistake: based on an A/B test of the purchase process, we declared a winning variant that promised a large increase in conversion rate. The test ran for 14 days, we were satisfied with the results too quickly, and we did not check the statistical significance.

How did we discover the mistake: very easily. When we rolled out the winning variant across the entire website, the conversion rate dropped overnight to its lowest point in 8 months. We re-checked the A/B test results and found that the nature of the products purchased in the winning variant had caused a very large deviation from the average, distorting the test results.

How was the mistake corrected: it wasn’t. Unfortunately, we gave up on this test. With too few visitors, we couldn’t run the test in a short period; and because of the product’s nature, seasonality and other external influences, we couldn’t extend the testing period either. In practice, this meant we couldn’t get enough users into the test to reach statistical significance, so we abandoned it. Sometimes it is still better to throw in the towel.
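The significance check we skipped can be done with a standard two-proportion z-test on the conversion counts. The numbers below are invented; the point is that a variant can look clearly better while the p-value still says “not enough evidence”:

```python
# Minimal sketch of a two-proportion z-test on A/B conversion counts.
# All sample sizes and conversion counts are hypothetical.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 14 days of data: variant B *looks* ~37% better, but is it significant?
z, p = two_proportion_z(conv_a=40, n_a=2000, conv_b=55, n_b=2000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05 here, so no winner can be called
```

With these example numbers the p-value stays above 0.05, i.e. the apparent lift could easily be noise – exactly the trap we walked into.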

What do all four mistakes have in common? Although very different, they all come down to two basic causes: when collecting or interpreting data, I jumped to conclusions or assumptions too quickly based on past experience, or I didn’t take enough time to think about what I was actually testing – and especially about which other factors in user behaviour could affect my test.

Analysing results and online behaviour doesn’t tolerate quick decisions, assumptions and half-baked tests you didn’t spend enough time on. If your testing is quick, your analysis of the test will most probably be quick too – and both drastically increase the probability of mistakes in the analytical process.

Still here?
Questions? Write them in the comments below – let’s open a debate.
