Everyone Loves a Model

As the online world gets deeper and deeper into mathematical disciplines, new people are constantly discovering the amazing mathematical tools that are available. Oftentimes marketers talk about or leverage these tools without really understanding the math behind them, getting caught up instead in the more common names: media mix modeling, confidence, or revenue attribution modeling. The problem arises when the tools and their power become the focus, rather than the disciplines and functions that make them viable as tools. Just having access to some way of looking at data does not inherently make it valuable, yet too many analysts end the conversation at that point. Every tool is only as good as the way you use it. So how do you enable people to get real value from these tools, instead of the empty promises that come from not understanding their true nature?

Before anyone starts focusing on mathematical models, the first and last thing that must be understood is by far my favorite quote about math, usually attributed to the statistician George Box: “All models are wrong. Some are useful.” All models are built on assumptions, and those assumptions determine the validity of any output of the system. Not only do the assumptions have to be understood at the start, but as the environment you are modeling evolves, they must continue to hold true, which can be extremely problematic given the constantly changing nature of the digital world. Because of this, constant and vigilant awareness of both the initial assumptions and their longer-term continued fit is vital for getting a positive outcome over time.

In the world of testing, the most common models are p-value based: the t-test, the z-test, and the chi-squared test are all basically different versions of the same concept. The most important thing people miss is that these models require several key things before they can ever be useful. The first, and it is monumental, is that they require representative data to be meaningful. It doesn’t matter if every other part of the model is correct if the data has no reflection on the real basis of your business. Reaching confidence quickly means nothing if that confidence does not reflect your larger business reality. This is why people who do not understand this problem are shocked when they act too quickly and then find that the real impact is different from what they measured.
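
To make the representativeness problem concrete, here is a minimal sketch in Python (assuming NumPy and SciPy are available, with made-up conversion rates and sample sizes): a t-test run only on a biased slice of traffic will happily report confidence about that slice and nothing more.

```python
# A sketch of the representativeness trap: the test below only ever sees
# weekday traffic, so whatever p-value it reports describes weekdays, not
# the whole business. All rates and sample sizes here are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated per-visitor conversions (1 = converted), weekday visitors only.
weekday_a = rng.binomial(1, 0.10, size=2000)  # variant A
weekday_b = rng.binomial(1, 0.12, size=2000)  # variant B

t_stat, p_value = stats.ttest_ind(weekday_a, weekday_b)
print(f"p-value on weekday-only data: {p_value:.3f}")

# Even if p_value < 0.05, the "confidence" only covers the slice of traffic
# that was sampled. If weekend visitors behave differently, the measured
# lift may not hold for the population the business actually serves.
```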

The other large assumption is that the data, or more precisely the sampling distribution of its mean, will approach a normal or Gaussian distribution (a bell curve). Over a long enough period the data may approach that distribution, but the reality is that the biases, variance, and constraints of the everyday world make this assumption questionable at best. Because of the nature of online data collection, be it biased visitor entry, limited catalogs, or constrained numeric outcomes, these assumptions may never really hold. This does not mean that this model, or any model, is completely worthless, but it does mean that you cannot blindly follow these tools, even as a deciding factor between hypotheses.
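
A quick illustration, again a Python sketch with purely hypothetical parameters: a skewed metric like revenue per visitor looks nothing like a bell curve, and even the sample means a test actually operates on can remain visibly skewed at small sample sizes.

```python
# Raw online metrics are often nothing like a bell curve. Revenue per
# visitor is simulated here as lognormal; the parameters are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
revenue = rng.lognormal(mean=3.0, sigma=1.0, size=100_000)
print(f"mean: {revenue.mean():.1f}  median: {np.median(revenue):.1f}")  # mean >> median

# The central limit theorem says the *sample means* a test operates on
# drift toward normal as sample size grows, but with skewed data that
# convergence can be slow enough to matter.
sample_means = rng.lognormal(3.0, 1.0, size=(10_000, 30)).mean(axis=1)
print(f"skew of raw revenue:       {stats.skew(revenue):.2f}")
print(f"skew of n=30 sample means: {stats.skew(sample_means):.2f}")
```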

But models are not restricted to the testing world; in the analytics community, everything from attribution models to media mix modeling systems is becoming all the rage. The sophistication of these models ranges from one-time basic models to large-scale, complex machine learning systems, but all of them have limitations that require you to keep a close eye on the context of their use. Even in the most advanced and relevant uses, the assumptions and the model you used need to be updated and changed over time. The nature of online data collection means there are so many variables impacting the bias and distribution of your users that any model treated as a one-time fix will almost immediately lose value to this drift. Predictive models can have an amazing impact on your business, but they can also lead you astray if you do not keep a watchful eye on their relevance as the world they represent changes. The only true way to ensure value over any period of time is to update your models and incorporate learned behavior into them and their usage.
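
As one hedged sketch of what that watchfulness might look like in practice (pure Python, with an assumed error threshold and invented forecast numbers), you can routinely compare a model’s predictions against observed outcomes and flag it for retraining once its error drifts:

```python
# A toy monitoring loop: compare recent forecasts to observed outcomes and
# flag the model for retraining when average error drifts past a threshold.
# The 15% tolerance and all numbers below are assumptions for illustration.
def drift_check(predicted, observed, tolerance=0.15):
    """Return (needs_retrain, mean_relative_error) over recent data."""
    errors = [abs(p - o) / o for p, o in zip(predicted, observed) if o]
    mean_error = sum(errors) / len(errors)
    return mean_error > tolerance, mean_error

# Hypothetical weekly conversion-rate forecasts vs. what actually happened.
forecast = [0.110, 0.112, 0.108, 0.115]
actual   = [0.109, 0.101, 0.088, 0.081]  # behavior drifting away from the model

needs_retrain, err = drift_check(forecast, actual)
print(f"mean relative error: {err:.1%}  retrain: {needs_retrain}")
```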

The other limitation to keep in mind is that you start any such model with only correlative information, often coming from one or more analytics solutions. This gives you a great start, but like all other uses of this information, it lacks vital information that affects its value. The most important part of using a model like this is understanding that you must constantly update it, especially as you start collecting causal impact data, the kind generated by controlled experiments, that tells you your actual ability to influence behavior. A one-time model may sound great, and may give you a short-term boost, but in the long term it becomes almost meaningless unless you keep the model relevant and the data focused on the efficiency of outcomes. The world is not static, nor should your use of approximations of that world be static.
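
For illustration only, here is one possible way, sketched in pure Python with hypothetical numbers, to fold that causal data back into a correlative model: scale a channel’s correlational credit by the incrementality a holdout experiment actually measured.

```python
# Illustrative recalibration: a correlational attribution model credits a
# channel with 25% of conversions, but a holdout experiment shows it only
# causes a small incremental lift. Scale the credit accordingly. Every
# number here is hypothetical, and this is one possible approach, not a
# standard prescription.
correlational_credit = 0.25  # share of conversions the channel "touches"

# Holdout test (causal data): conversion rate with vs. without the channel.
rate_exposed, rate_holdout = 0.050, 0.046
incrementality = (rate_exposed - rate_holdout) / rate_exposed  # ~0.08 caused

calibrated_credit = correlational_credit * incrementality
print(f"correlational credit: {correlational_credit:.2f}  "
      f"calibrated credit: {calibrated_credit:.3f}")
```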

This means that as you start to leverage any model, you must make sure you have members of your team who understand the nature of data and how best to leverage it. You may not need a full-time statistician, but you should be spending resources on improving the skills of the people you already have, so they understand both the nature of the tools and their relevance to your business. A full-time statistician may actually be a detriment to your group if it leads you to focus solely on classroom statistics instead of the real and often complex world of your particular environment. Everything you do should be focused on the pragmatic maximization of value to your organization.

I cannot suggest enough that you think about and explore ways to incorporate models into your practices, and that you start leveraging their power to put an end to opinion-based decision making. That being said, if you are to get value from these tools, you must understand both sides of the coin and make sure you keep your use of the models as relevant and powerful as their original intent. Never stop growing your understanding of your site, your users, and the efficiency of change, but keep that focus not only on your organization but also on each tool you leverage to achieve your goals.

One comment

  1. Matt Gershoff

    Thanks for the post. You make several good points.
    A couple things.
    1) I think the discussion about normality is about the distribution of the sampling mean, not that all data is generated from a process with a Gaussian distribution.
    2) Why not call out the idea of non-stationarity and drift explicitly? That way folks can think in terms of the nature of the online learning problem (drift), rather than in terms of methods (update the model).
    3) ‘Collecting causal impact data’ – this is experimentation data. Call it A/B testing, DOE, or whatever, but it is data that you, the organization, generate by making some sort of change/perturbation to your current system to see how the system/process changes. This is active data rather than passive data and, other than maybe some appeal to Judea Pearl, is really the only way to get causal relationships.
    Thanks
    Matt
