Why we do what we do: Believing what you want to believe – Observer-Expectancy Effect
The human mind is a funny thing. We can be aware of all our own faults, and others', and yet when it comes to stopping ourselves from falling down the many holes we create for ourselves, we find it much easier to see the same mistake in others than in ourselves. The next bias I want to tackle is the Observer-Expectancy Effect, or “when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it.” Like its sibling, congruence bias, the Observer-Expectancy Effect impacts what we test, but even more it impacts what conclusions we arrive at. It is the entire phenomenon of hearing what you want to hear, or using data only to justify what you already wanted to do.
This sinister worm pops its head up in all sorts of places: experiment design, using data to justify decisions, sales pitches, even our own view of our impact on the world. How much of regular marketing is just telling you what you want to hear? Yet we forget that we are just as susceptible to those messages when we are trying to prove ourselves right.
What is important is not so much what the problem is, but how best to “fix” it. How do you ensure that you are looking at data and getting the functionally best result, and not just letting your own psyche lead you down its predetermined path? The trick is to think in terms of the null assumption. The challenge is to always assume that you are wrong, and to look at the inverse of all your experience; challenge yourself to think in terms of “what if that wasn’t even there?” or “what if we did the exact opposite?” Make a point of trying to prove the inverse, that you are wrong, and you will suddenly have a much deeper understanding of the real impact of the outcomes you are championing. When you try to prove you are right, you will find confirmation, just as when you try to prove you are wrong, you will come to that conclusion too. You have to be willing to be “wrong” in order to get a better outcome. Remember that when you are wrong, you get the benefit of the improved results, and you have learned something.
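To make that concrete, here is a minimal sketch of what running your belief and its inverse against the same control might look like. Everything here is an illustrative assumption: the arm names, the counts, and the choice of a simple two-proportion z-test as the comparison; the point is only that the inverse gets measured with the same rigor as the idea you are championing.

```python
import math

def two_prop_z_test(conv_a, n_a, conv_b, n_b):
    """Return (relative lift of b over a, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return (p_b - p_a) / p_a, p_value

# Hypothetical counts: (conversions, visitors) for the control, the change
# we believe in, and its exact opposite.
arms = {
    "control": (400, 10000),
    "our_belief": (430, 10000),
    "the_inverse": (455, 10000),
}

ctrl_conv, ctrl_n = arms["control"]
for name, (conv, n) in arms.items():
    if name == "control":
        continue
    lift, p = two_prop_z_test(ctrl_conv, ctrl_n, conv, n)
    print(f"{name}: lift={lift:+.1%}, p={p:.3f}")
```

In this made-up data the inverse happens to beat the idea we set out to prove, which is exactly the kind of result you never see if the inverse is never in the test.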
So what does this look like in the real world? Every time you decide to go down a path, you will intrinsically want to prove to yourself and others that what you are doing is valuable. The most common example is the quest for personalization, where we get so caught up in proving we can target groups that we forget to measure the real impact of that decision. We forget that the same person can be looked at a thousand different ways, so when we pick one, we fail to measure it against the alternatives. The number of groups that have championed targeting some minute segment, only to find when you dig deeper into the numbers that targeting by browser or time of day would have had magnitudes greater impact, is legion.
The simplest way to test this is to make sure that all of your evaluations, whether correlative, causal, or qualitative, include the null assumption. What happens if I serve the same changed content to everyone? What happens if I serve targeted content to Firefox users instead? Despite the constant banter and my belief that a personalized experience is a good thing, what do I really see from my execution? What if we target the groups that don’t show different behavior in our analysis? Keep deconstructing ideas and keep trying to find ways to break all the rules, and you will find them. Even better, those are the moments where you truly learn and where you get value that you would not have gotten from going straight to the action.
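One way to operationalize this is to score every alternative, including the null “serve the same thing to everyone” arm, on the same end metric. A minimal sketch, with made-up log rows and arm names chosen purely for illustration:

```python
from collections import defaultdict

# Hypothetical rows: (arm, converted) -- one row per visitor. In practice
# this would come from your experiment logs, with far more traffic per arm.
log = [
    ("everyone_same_content", 1), ("everyone_same_content", 0),
    ("persona_targeted", 0), ("persona_targeted", 1),
    ("firefox_targeted", 1), ("firefox_targeted", 1),
    ("untargeted_control", 0), ("untargeted_control", 0),
]

totals = defaultdict(lambda: [0, 0])  # arm -> [conversions, visitors]
for arm, converted in log:
    totals[arm][0] += converted
    totals[arm][1] += 1

# Rank every arm, null included, on the same outcome.
for arm, (conv, n) in sorted(totals.items(), key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{arm}: {conv / n:.1%} conversion ({n} visitors)")
```

The discipline is in the arm list, not the arithmetic: if the null and the “silly” alternatives are never arms, they can never win.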
This is not just a problem with analytics; it plays out in any sort of analysis, especially A/B testing. So many groups make the mistake of just testing one hypothesis against another, and in doing so they fail to see the bigger picture. Hypothesis testing is designed to establish confidence in the validity of a single idea, not to compare many ideas or to reach a conclusion at a meaningful speed. It is the end point of a long, disciplined process, not the starting point where so many want to use it.
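One way to see why a tool built to validate a single idea strains when used to compare many is the multiple-comparisons problem. The sketch below uses hypothetical p-values and a Bonferroni-style correction, one common remedy that the original text does not name, simply to show how the bar for “significant” moves as the number of compared ideas grows:

```python
alpha = 0.05
# Hypothetical p-values from testing four ideas against the same control.
p_values = {"idea_a": 0.03, "idea_b": 0.04, "idea_c": 0.20, "idea_d": 0.01}

adjusted_alpha = alpha / len(p_values)  # Bonferroni correction
for idea, p in p_values.items():
    alone = p < alpha
    together = p < adjusted_alpha
    print(f"{idea}: p={p:.2f} | significant alone: {alone} | "
          f"significant among {len(p_values)} ideas: {together}")
```

Ideas that look like winners when tested in isolation stop clearing the bar once you account for how many things you compared, which is the bigger picture single-hypothesis testing was never designed to show.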
The final common way this plays out is when we mistake the rate of an action for the value of the action. We get so caught up in wanting to believe in some linear relation between items, that having a great promotion and getting more people to click on it equals more value, that we fail to measure the end goal. We confuse the concept we are trying to propagate with the end goal, assuming that if we successfully push people toward a desired action, we have accomplished it. Having run on average 30 tests a week with different groups over the last 7 years, I can tell you from my own experience that the times this assumption has held up in the real world I can count on one hand.
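A minimal sketch of the trap, with entirely hypothetical numbers: the promotion that wins on click-through rate can still lose on the end goal, here stood in for by revenue per visitor.

```python
# arm: (visitors, clicks, revenue) -- made-up figures for illustration.
arms = {
    "loud_promo": (10000, 1200, 8400.0),
    "quiet_promo": (10000, 700, 11900.0),
}

for arm, (visitors, clicks, revenue) in arms.items():
    ctr = clicks / visitors        # the rate of the action
    rpv = revenue / visitors       # the value we actually care about
    print(f"{arm}: CTR={ctr:.1%}, revenue/visitor=${rpv:.2f}")

# loud_promo wins on CTR (12.0% vs 7.0%) but loses on the end goal
# ($0.84 vs $1.19 per visitor).
```

If CTR is the only column in the report, the wrong arm gets championed every time.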
So much analysis loses all value because we are pre-wired to accept the first thing we find, to seek out data that confirms what we want to believe, and to send that data to others to prove our point while ignoring the larger world. We are so wired to want to think we are making a difference that we constantly fail to discover whether it is true. Be better than what you are wired to believe and force yourself to think in terms of the null assumption. Purposely look at the opposite of what you are trying to prove or what you believe. The worst case is that you spend a few more moments and truly confirm what you believe. The best case is that you change your world view and get a better result, one you did not arrive at simply because you expected to arrive there.