How Analysis goes wrong is a new weekly series focused on evaluating common forms of business analysis. All evaluation of the analysis is done with one goal in mind: Does the analysis present a solid case why spending resources in the manner recommended will generate more revenue than any other action the company could take with the same resources? The goal here is not to knock down analytics; it is to help highlight those that are unknowingly damaging the credibility of the rational use of data. What you don’t do is often more important than what you do choose to do. All names and figures have been altered where appropriate to mask the “guilt”.
I once again heard the same common mistake in determining where to test earlier today, and it reminded me of the topic for this next Awful Analysis. It is super easy to find data that you believe is “interesting” or “compelling” but that in no way actually supports the argument you are attributing to it. An example is as follows:
Analysis: We found that people that interact with internal search spend 1/3 as much as people who don’t. This tells us that there is a massive opportunity to optimize internal search and increase total revenue.
This is probably the most common type of search for “test ideas”, as it sounds like a rational argument and it is using real numbers. The problem is that the data presented is a non sequitur as far as what to test. These types of arguments are interesting stories, and I am not suggesting you never leverage them. My problem is when people start to believe the story and do not realize just how irrelevant the information presented is.
As a reminder, the only way to know the efficiency or value of a test is with three pieces of information: population, influence, and cost. With that in mind, I want to dive into this type of analysis:
1) You have no way of knowing from the analysis (or any correlative information) if people who spend less do so BECAUSE of the use of internal search, or if people who are going to spend less are the ones who aren’t quite sure what they are looking for and instead choose to buy cheaper things. It could also be that you get a lower RPV because people doing research ARE far more likely to use search to compare items.
2) Even if you are right that this group is more valuable, you have no clue whether the search results page is the place to influence them. Is it the entry channel? Or the landing page? Or maybe the product page?
3) You have presented no evidence of your ability to influence that group, even if you ignore #1 and #2. Even if you have the perfect group and the perfect place, you still have no insight into what to actually change.
4) There is nothing presented that says that this same group cannot be improved far more dramatically by looking at and interacting with them based on other population dimensions. Last I checked, new users, search users, purchasers, and IE users also use internal search.
5) There is no look at the cost to change this page (and population) versus known results, or even just the technical ramifications. Search results pages are often among the one or two hardest pages to test, simply from a technical-resources and page-interaction standpoint.
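Point #1 above can be shown with a toy simulation (every parameter here is invented for illustration): if research-minded shoppers are both more likely to use internal search and inclined to spend less, the search segment will show lower average spend even when search itself has zero causal effect on spending.

```python
import random

random.seed(42)

# Toy model: internal search has NO causal effect on spend here.
# A hidden trait ("researching") drives BOTH search usage and spend,
# so the segments differ anyway. All probabilities and dollar figures
# are made up.
visitors = []
for _ in range(100_000):
    researching = random.random() < 0.4                      # hidden intent
    used_search = random.random() < (0.7 if researching else 0.2)
    spend = random.gauss(30 if researching else 90, 10)      # intent drives spend
    visitors.append((used_search, spend))

search_spend = [s for used, s in visitors if used]
no_search_spend = [s for used, s in visitors if not used]

avg = lambda xs: sum(xs) / len(xs)
print(f"used search: {avg(search_spend):.2f}")
print(f"no search:   {avg(no_search_spend):.2f}")
# The search segment averages noticeably less, even though "fixing"
# the search page would change nothing about what these shoppers
# intend to spend.
```

Running this produces a sizeable gap between the segments, generated entirely by self-selection rather than by anything the search page does.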
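The population, influence, and cost framing above can be sketched in a few lines. This is a minimal, hypothetical comparison (all figures are invented, including the function name and every number) showing why a “compelling” segment can still lose to a duller test once all three pieces are on the table:

```python
# Hypothetical sketch: compare candidate tests by expected value,
# not by how interesting the segment story sounds. All numbers invented.

def expected_test_value(population, revenue_per_visitor, expected_lift, cost):
    """Expected incremental revenue of a test, minus its cost.

    population          -- visitors the test can actually reach
    revenue_per_visitor -- baseline RPV for that population
    expected_lift       -- hedged guess at relative lift (0.02 = 2%)
    cost                -- design, development, and opportunity cost
    """
    return population * revenue_per_visitor * expected_lift - cost

# The internal-search story: a small, low-RPV, technically expensive page.
search_test = expected_test_value(population=20_000, revenue_per_visitor=3.0,
                                  expected_lift=0.05, cost=8_000)

# A duller sitewide test: huge reach, cheap to build.
sitewide_test = expected_test_value(population=500_000, revenue_per_visitor=4.5,
                                    expected_lift=0.01, cost=5_000)

print(search_test)    # -5000.0: a net loss despite the compelling story
print(sitewide_test)  # 17500.0: the boring test wins on value
```

The point is not the specific numbers; it is that until all three inputs are estimated, the “1/3 as much” story tells you nothing about where to spend resources.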
More than anything, the threat of this type of analysis is that it sounds perfectly rational. Who wouldn’t want to “fix” a group that is spending 1/3 as much as another group? Why aren’t all users spending their entire paycheck on my site and my site only? You have to make sure, as the analyst, that you are presenting rational, fact-based data if you expect anyone else to leverage data in a rational manner. You might be right or you might be wrong, but if you do not stop yourself from falling for these stories and do not hold yourself to a higher standard, then how can you expect anyone else to? If you are going to find data to tell a story, then what is the point of the data other than to present your opinions versus someone else’s?