How Analysis Goes Wrong: The Week in Awful Analysis – Week #8
How Analysis Goes Wrong is a new weekly series focused on evaluating common forms of business analysis. All evaluation is done with one goal in mind: does the analysis present a solid case that spending resources in the manner recommended will generate more revenue than any other action the company could take with the same resources? The goal here is not to knock down analytics; it is to help highlight those who are unknowingly damaging the credibility of the rational use of data. What you don’t do is often more important than what you choose to do. All names and figures have been altered where appropriate to mask the “guilt”.
In any real-world use of data, there are two parts to every recommendation an analyst makes. The first is that action is needed, and the second is what action to take. Fundamentally, the problems arise when we confuse which of these we actually have valid data for, and even worse, when we convince others based on those flawed assumptions. While the core goal of analysis is to encourage action, if we present a course of action that is based not on factual data but on our own biases and opinions, with data merely attached as justification, we are duplicating the very flaws that analysts rail against.
A perfect example of this is a very common and easy ROI analysis across channels. The problems with attribution are too numerous to get into here, but needless to say this is yet another example of people confusing rate and value, or more specifically: attribution does not mean generation. Because of this, it is easy to mistake this type of analysis, which can make a case for action, for some sort of determination of what action to take.
Analysis: By looking at our different marketing channels and the revenue that we attribute to them, we find that Paid Search has an ROI of 145%, Display 76%, Email 250%, and Organic Search 112%. Based on this, we recommend that you move as much of your budget as possible away from Display and towards Email and Paid Search.
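To make the arithmetic behind this kind of claim concrete, here is a minimal sketch of how attributed ROI is typically computed: attributed revenue divided by spend. The spend and revenue figures below are invented, chosen only so the ratios match the percentages quoted above.

```python
# Sketch of the attributed-ROI calculation in the example analysis.
# All spend/revenue figures are hypothetical, picked to reproduce
# the quoted percentages (145%, 76%, 250%, 112%).

channels = {
    # channel: (spend, attributed_revenue)
    "Paid Search":    (100_000, 145_000),
    "Display":        (50_000,   38_000),
    "Email":          (20_000,   50_000),
    "Organic Search": (25_000,   28_000),
}

for name, (spend, revenue) in channels.items():
    roi = revenue / spend
    print(f"{name}: attributed ROI = {roi:.0%}")

# Note: 'attributed_revenue' comes from whatever attribution model was
# chosen -- it is NOT incremental revenue generated by the channel,
# which is the core confusion this post is about.
```

The key thing the sketch makes visible is that the entire analysis rests on the `attributed_revenue` column, and nothing in the calculation tests whether that revenue would have arrived anyway.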
The case being made here is that you need to optimize your spend, or that you can in fact make more money by improving what you do. I find this ironic, in that I would hope every group already knows it needs to constantly improve, and those that don’t are unlikely to accomplish anything even if they do act. If that is not the case, then one must question the point of the argument at all: it is either an argument made to avoid blame for past incompetence, or made solely to present evidence of growth even when there is no functional improvement. In either case, the story is secondary to the suggested actions derived from the data.
The real problems here lie entirely with the suggested steps to improve. Let’s dive into its components:
1) Just because I can attribute 250% of revenue to email, it doesn’t mean that I actually GENERATE $2.50 for every dollar I pump into email. The problem here is that you are simply saying that people who interacted with email ended up giving us X amount of revenue. You have no way of knowing whether it was the email that led to that revenue, or whether the people who make purchases and plan to make more are the same ones signing up for more communication.
2) You have no clue if these channels are causing revenue at all. It is possible that by not showing someone a paid ad for a specific item, they would instead purchase a different item and generate more revenue. Even if you do not believe that is likely, you cannot know how much of the revenue is generated solely by the channel.
3) Cross-pollination from people hitting multiple channels is baked into these numbers, so you had to pick an arbitrary method for assigning value. No matter which method you choose, you are adding bias to the results and more confusion to the outcome.
4) Channel response is not linear, so even if you moved spend from one channel to another, you would not get proportional outcomes. You might not make a single dollar more.
5) You don’t know what is cannibalizing the other sections. It’s possible that paid is taking away from organic, or display from organic, etc…
6) Because you don’t know what is generating revenue, it is just as possible that display is generating more revenue than any of the other channels. While I hate anything that appears not to even break even, if the analysis is to cut the lowest performer, we have no measure of what the lowest performer actually is.
7) The analysis doesn’t even look at the correlation between spend fluctuations and revenue, which is far from perfect, but at least starts to get at what incremental value you gain from adding or reducing spend.
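As a minimal sketch of the idea in point 7: if you have weekly spend and revenue for a single channel, an ordinary least-squares slope of revenue on spend gives a rough, still-correlational read on incremental revenue per extra dollar. The weekly figures below are invented for illustration; a real version would need far more care (lags, seasonality, holdout tests).

```python
# Hypothetical weekly spend and revenue for one channel.
weekly_spend   = [10_000, 12_000, 9_000, 15_000, 11_000, 14_000, 8_000, 13_000]
weekly_revenue = [24_000, 27_500, 22_000, 33_000, 25_500, 31_000, 20_500, 29_500]

n = len(weekly_spend)
mean_x = sum(weekly_spend) / n
mean_y = sum(weekly_revenue) / n

# Ordinary least-squares slope: cov(x, y) / var(x).
cov_xy = sum((x - mean_x) * (y - mean_y)
             for x, y in zip(weekly_spend, weekly_revenue))
var_x = sum((x - mean_x) ** 2 for x in weekly_spend)
slope = cov_xy / var_x

# This is still correlation, not a controlled experiment -- but unlike
# a static attribution snapshot, it at least responds to fluctuations.
print(f"Estimated incremental revenue per extra $1 of spend: ${slope:.2f}")
```

Even this crude slope tells you something a static attributed-ROI table cannot: whether revenue actually moves when spend moves.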
Marketers and managers love this type of analysis because it makes an easy story and seems to have a clear action. The reality is that it can be the most damaging, not only to the rational use of data but also to the company, because the story being presented has no basis in reality. You could be right and nail exactly the best way to change, or you could be dead wrong, or anywhere in between. The sad reality is that if you keep the conversation solely at this level, there is no way for you to ever know what the real outcome is, or to justify anything you do or say afterwards.