One of the great ironies of our industry is all the time wasted talking about “Big Data” or “Governance” or a thousand other catchphrases. All of these are attempts to show a “maturation” or growth of the online data world. There is enough backlash to point out that none of this is really new; it is simply the online world following the offline Business Intelligence path, a world that has never produced a fiftieth of the value it pretends to. The irony lies not in what is or is not being debated, or in whether online and offline are the same, but in the fact that we all talk around the real issue and do nothing to address it.
The reality is that most BI work is wasted effort, designed to facilitate the predetermined wish of some executive, and the people rewarded are never the ones who actually provide the best analysis, but the ones who produce the analysis that best moves others’ agendas forward. The online world is simply following suit, to the point that we no longer count how many reports you can create as justification for existence, but how well you can create a fancy graphic or data point to support one group’s agenda or another’s. This evolution follows a normal path, from creation and storage, to access, and now to delivery of data, without once dealing with the real issues at hand: human beings are awful at understanding and leveraging data, and most people (especially those in marketing) are awful at their jobs.
If you think about it, it’s not that shocking that marketers are horrendous at their jobs; they make a living telling stories and trying to convince others of things that have no basis in reality. To exist in this world, you are left with two options: act like a sociopath, or unconsciously acquiesce to some sick permanent version of the prisoner’s dilemma, one in which, as long as no one points out how full of it the person speaking is, the favor will be returned. Both diseases leave the same outcome: a group of people who exist to propagate the work of that same group, and who seek outside justification, be it awards (from the same group), data searched only one way to support them, or case studies of people who did the same thing and likewise lied their way to “success”.
The scare and threat of data is that, when used in a rational manner, it can shed light on the real value of the day-to-day actions we hang our hat on. We can find out just how horrendously off our preconceptions are. One of the great ways to succeed at testing is simply to bet against people; people are so rarely right that just picking the other direction creates a winning streak that would make you a billionaire if you could translate it to Vegas. You will quickly find that most experts are nothing more than storytellers, and that the largest gains companies make are often the least publicized, while those that are shared are often subconscious attempts to get others to fall prey to the same mistake they themselves wasted months on. With almost no effort you can prove that most actions taken provide no value whatsoever to the company, or are so inefficient that they are far worse than nothing. The data and evidence are easy to get, but we avoid them in order to cope within this world.
Why can’t people act rationally more often? Why is data accepted and abused? Why do we seek confirmation, not information? Why do we not worship those who get results instead of those who tell stories? The answer is simple: we most fear that which may hurt our own world view. If everyone were willing to search for the right answer, we would all be better off, but as soon as one weak person accepts the word of one sociopath, we are all set down this path, or left to suffer silently the fight against the tide.
This is not a new problem; Kant, Engels, and many others have been writing about it for hundreds of years; we just find new names for the same human weakness. We seek out people to do attribution, and then believe it tells us anything about generation. We seek out those who confirm our hypotheses, not those who disprove them, despite the fact that the disproven hypothesis inherently has the better outcome. We want people to speak at conferences and point out why everyone else is screwed up, while assuring us that we can change by doing the exact same things they were railing against, now under a new name. We want to find a site that tells us “which test won”, not one that helps us build better tests or achieve any value whatsoever. We are constantly searching for the next affirmation to justify who we are, not to improve who we are.
“Reality” is not a kind mistress for those who are even slightly interested in it. Empirical realism is looked at and talked about, but practiced by so few that it is almost as meaningless a buzzword as “personalization”. While it helps companies, it rarely helps those who wish to exist in a corporate environment (see the prisoner’s dilemma). We are forced to make a Sophie’s choice: do what makes others happy, or do that which helps our company. We all try to find ways to convince ourselves and others that we are not faced with this choice, yet we only succeed when we stop caring or thinking about anything other than our own gain. To facilitate this, we find every way possible to make the mental pain go away and to find others who will tell us it will all be ok.
So if this is the sandbox in which we play, is it any wonder that our “heroes” are those who best project ways to make others believe that what is being done matters? We worship at the altar of Avinash, or Peterson, or the Eisenbergs, or anyone else we can find, as justification for what we were already doing. We have no way of knowing if what they say is correct, and personal experience shows that following most of that advice leads to immensely fallible results. Far be it from an inquisitive mind to question whether the current action is the right one, or whether there is a better way to think about and tackle problems. We instead allow others to dictate to us, so that we can avoid cognitive dissonance and rest easy at night… ok, on second thought, most marketers are both suffering from living in a prisoner’s dilemma and are also sociopaths. The data shows that these are not mutually exclusive but complementary. Glad we got that squared away…
If you want to really make a difference, if you are tired of this same old world and of the charlatans who propagate it, are you prepared to fight the tide? Are you able to evaluate your own work, to go past the comfort and find out how wrong you are in just about everything you do? Are you then able to get past that mental scarring and do the same for others? Will you back down the first time someone pushes back, or will you make it your quest to do the right thing when it is neither profitable nor easy to do so?
The history of business shows that this problem rarely, if ever, truly goes away, and the better answer rarely wins. While history is written by the winners to justify their existence, randomness and the trampling of others are bred into every page of this twisted form of storytelling. And yet, until we deal with this real problem, until we are more interested in doing the right thing than the easy one, what will really change?
We will continue to waste time and effort on data in order to justify wasted time and effort in most other efforts. We will continue to seek new words for old problems, and we will continue to make heroes of those who most hold us back. Until we stop propagating the lie and look at ourselves first, how can we ever really deal with the real problem of data? Not the collection, not the sharing, not the presentation, but the people who are wired to use that data in the least efficient and most self-serving way possible. If you want to solve big data, if you want to change the industry, stop wasting time on tag management or Hadoop and solve the people, since that is where all the problems lie. Don’t solve how you share your point; solve how people think, and whether they are rationally using data to find an answer or only to justify one.
As the world becomes more and more complicated, the battle between those on each side of the functional knowledge gap becomes more vital. The need to constantly update your world view, the speed of change, and the need to move past no-longer-relevant concepts leave many people struggling to keep up and far more willing to listen to any offer to lighten their burden. When people do not know what they do not know, anything sounds like an oasis in the desert. To capitalize on this, a massive number of groups promise “solutions” to this fundamental problem, offering technology as the sole means to make up for this fundamental inability to adapt to a constantly changing world. The problem comes not from the challenge itself, but from mistaking the suggested solution for the sole requirement of the desired end result. Technology is part of a solution, but without understanding that your people must change, no technology will ever provide its promised value and very little will ever be achieved. When we fail to understand that change starts with how you think, not with what you purchase, we lose the ability to gain value from any “solution” we can ever acquire.
The same cycle plays out time and time again. A senior executive defines a “problem”, such as a lack of clean data, the ease of deploying technology, or the need to create a personalized experience. People proceed down a path of trying to be the one to find a “solution” while at the same time finding ways to pass blame for the current state onto another party, internal or external, since any new solution must replace the prior “solution” that did not in fact solve all the world’s problems. They reach out, research, and find a provider offering the solution that makes the largest promise about “ease” or “functionality”, or the one they have a prior relationship with. From there, it is a process of discovery, promises, and then acquisition. The tool then gets shared with all the other groups, and the individuals now tasked with getting it installed must also make sure their boss does not upset others in the company by instituting a change to the status quo. Each group provides “needs” in one direction that become part of a massive deployment project road map. Groups continue to get buy-in and then scrape together resources to deploy, each time acquiescing a little to each group they work with. Eventually the solution goes live, activities and tasks are enacted, and everyone moves forward. The same problems arise a year or two down the line, agendas get forgotten, and large presentations are held to try to find a positive outcome for all that was invested. Very little has changed, very little has really improved for the organization; a new piece of technology has simply been bought to replace the old technology that went out the door.
I am in no way saying that technology is a bad thing; I work for the top marketing suite in the world, and wouldn’t if I did not feel that the tools themselves were best in class. Technology is simply a magnifying lens, increasing the impact of what your organization does great, but also of where it fails. The reality, though, is that few companies get anywhere close to the value they should from the tools, and often that lack of value is accompanied by magnitudes of increased effort. If groups would start with a real, honest change in how they understand the world around them based on each tool, they would find that they are wasting almost all their effort in the vain attempt to justify their prior actions. Each tool is an opportunity to change and improve the efficiency of your organization, yet in almost all cases this vital task is talked about or ignored, and never enacted in a meaningful way. If you do not start your time with a tool with a discussion around what disciplines define success and failure for that specific tool, then no tool will ever do more than be window dressing on bad organizational dynamics.
One of the first things I try to teach new analysts and consultants is that there is no such thing as a technical solution. All problems are strategic; they may have a technical answer, but they are truly strategic in nature. It is far easier to find a massively technical workaround to do the one thing that senior VP X is asking for than it is to take the time to discover whether that effort is needed or whether it will provide any actual ROI. The unfortunate truth is that for the vast majority of the “problems” being identified, a successful or unsuccessful answer to the stated problem would not change the fact that no value is going to be received. Slick interfaces don’t make up for poor strategy; integrations between platforms do not make up for not understanding your data. The truth is that in almost all cases the real problems are the ones we are turning a blind eye to; they are the elephants sitting in the room that we refuse to talk about, so instead we make excuses and sacrifice outcomes in the name of taking credit for change.
This is the nature of confusing the “solution” for the desired outcome. Solutions are a means to an end, not the end itself. Never confuse the need to add functionality with the goal of that functionality. You are not just adding testing to someone’s day-to-day job; you are asking your entire organization to discover what the value of its own actions is. You do not find a tag solution for the fun of it; you do it to save resources so that you can then spend them on valuable actions. You do not start a personalization project out of the goodness of your heart; you do it because you believe it will increase revenue. As long as you keep the conversation about the end result, you can have a functional conversation about the efficiencies of the various paths to arrive at that point. Do you really need 90+ targets active, or would 15 give you a higher lift at much lower cost?
The cycle that technology gets brought into is the problem, as are the egos of those who own the purchase. Like most real-world situations, it is far easier to make promises than to fix real problems or to deal with other groups and how they think about and tackle their own problems. Analytics, testing, and marketing are not things that are just done, even if your job is oftentimes a series of repeated activities. These actions are done to improve performance, which means the change has to happen in which actions you spend resources on, not just in changing technology providers. If more time is not spent reforming the environment around the technology, then all time will end up wasted. Never get caught up in the cycle and the “can” questions without keeping a vigilant eye on the “should” questions of all actions.
No matter if an idea is good or bad, it is always going to be easier to just do what your boss asks, and even easier to find a way to convince yourself that it is somehow valuable. We convince ourselves, as others convince us, that we are doing the right thing. We do not want to take the time to think about our opportunities to do things in a different way. Sadly, most actions commonly done in our world are not valuable or efficient, and in all cases they can and should be improved. You must first get over your own fear of doing the right thing before you can ever try to get those above you to do the same. The battles worth fighting when you bring in a piece of technology are not about how many developers you can get to deploy a solution, or how you can get an agency to run it for you, but about how you find ways to fundamentally change current practices so you learn and grow with the technology.
There is no shortage of people willing to promise that you don’t need to really look inwards to get value, and in some cases they are able to provide momentary glimpses of it. Great tools offer you the chance to succeed, but they do not guarantee that outcome. No tool will ever make up for the incorrect application of its features, just as no organization will truly change unless change is more important than task. In the end, every success I have ever seen or had with an organization comes from fundamentally challenging and changing existing practices and from creating simpler ways to get more value. Change is hard, and most cannot achieve it in a meaningful way, but all value comes from change, not from creating complex ways to accomplish a task. Complicated will never mean valuable; complicated will always simply mean complicated. Never forget that a solution is a promise, a means to an end, and that the real ability to achieve that end, or more, comes from action, not from a tag or a solution being deployed.
Stories are powerful devices that help get a point across to others. They help us close the distance between abstract thought and the way our brains operate through narrative. They help us add order to events. We can convey very complex ideas and help others understand them with our stories. Even more powerfully, this is how we are wired to understand events and information. But what happens if the story we tell is not the right one? How would you even know? Stories are often far more powerful at bypassing rational decisions than at facilitating them. The human mind is wired to support any conclusion it comes to, even in the face of mounting evidence against our supposition. Nassim Taleb describes this error in logic with what he calls the narrative fallacy: the need for people to create stories, even when we have no evidence that the story is true or even the best explanation of events.
Taleb’s description is as follows:
The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.
Here is an all too common example of how this plays out in the real world. You run a test and discover that a variant produces a 6% lift in RPV. You also discover that the same variant produces a 5% drop in internal searches. So you tell others that people were obviously finding what they wanted more easily, so they didn’t need search, and so they spent more. The data presented does not tell you that; it only tells you that the recipe had a lift in RPV and a drop in search. It doesn’t tell you that search and RPV are related, or why someone spent more; it only presented a single data point for comparative analysis between the default and that recipe. Any story you come up with adds nothing to why you should make the decision (it raised RPV), and it can set a dangerous precedent for believing that dropping search always raises RPV (it might, but a single data point in no way provides any insight into that relationship).
Any set of data can be used to make a story. There doesn’t have to be a connection between the real world and the story we tell, since we are the ones filling in the gaps between data points. Randomness and direct cause are easily confused when we start to narrate an action. We love these stories because they make things accessible and easy for someone to hear: first event A happened, then event B, then event C. Our minds instantly race to say that event B happened BECAUSE event A happened, and that because event A led to event B, naturally event C happened. This might be true, it might not be, but the story we tell ourselves grants us an excuse not to understand what really went on. We eliminate the discovery of the reality of this relationship by granting ourselves the story that fills in those gaps, despite its lack of connection to the real world. This can completely ignore hundreds of other causes, and it also rules out the involvement of chance.
The world is a very complex place, and there is almost never an answer as simple as a short series of events to explain any action, let alone one important enough to base a business decision on. So why do we let ourselves fall into this trap, and why do we fall back on stories as a tool for making those decisions? We are not actually adding any real value to the information with these stories; we are simply packaging it in a way that helps get across an agenda.
We don’t actually need stories to make decisions; we only need discipline. Oftentimes we find ourselves stuck trying to convey concepts beyond others’ ability to absorb in a short period and amid many competing draws on their attention, but this is no excuse for believing the fiction we narrate. Many people base their entire jobs on their ability to tell these stories, not on their ability to deliver meaningful information or change. The reality is that to make a decision, you simply need the ability to compare numbers and choose the best one. I don’t need to know why variant C was better than B; I simply need to know that it was 5% better. Patterns and anomalies are powerful tools and the analyst’s best friends, but we can never confuse them with explanations for those events. Oftentimes things happen for very complex and difficult reasons, and while it is nice to feel that we understand them, that feeling does not change the pattern of events.
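The decision rule described here really is that small: compare the agreed-upon metric across variants and take the best one, with no story attached. A minimal sketch (the variant names and RPV figures below are hypothetical, purely for illustration):

```python
# Decision-by-comparison: pick the variant with the best value of the
# single agreed-upon metric. No narrative about WHY it won is needed.
# Variant names and RPV figures are hypothetical.

def pick_winner(results):
    """Return the variant with the highest metric value."""
    return max(results, key=results.get)

rpv = {"control": 4.00, "variant_b": 4.12, "variant_c": 4.33}
winner = pick_winner(rpv)
lift = (rpv[winner] - rpv["control"]) / rpv["control"] * 100
print(f"{winner} wins with a {lift:.1f}% lift over control")
```

The point of keeping the rule this mechanical is that the decision is made before anyone starts narrating; the numbers, not the storyteller, choose.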
One of the main opportunities for groups to grow is to move past this dangerous habit of creating stories, and to instead focus on creating disciplined, previously agreed-upon rules of action so that decisions can be made away from the narrative. This move allows you to stop wasting energy on that discussion and instead use it to think of better, more creative opportunities to explore and measure the value of. Any system is only as good as the input into it, so start focusing on improving the input, and stop worrying about creating stories for every input into that system.
Because of the difficult nature of this change for some groups, many are turning to more advanced techniques in the hope of avoiding this bias. The fundamental goal of machine learning here is to remove human interpretation of results, and to instead let an algorithm find the most efficient option. All of these systems fail when we lose focus and slip back into storytelling, when we let the ego of others dictate an action based on how well they believe they understand the reality of the situation. They also fail when we let our own biases override the system, instead of letting the system learn and choose the best option. When we free ourselves from storytelling, we gain the freedom to focus on the other end of the system. We don’t need to worry about acting on the data, or about others understanding it; we can instead focus our and others’ energy on trying new things and feeding the system more quality input.
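The kind of system described here can be as simple as a bandit loop. This epsilon-greedy sketch (all conversion rates are simulated and hypothetical, and epsilon-greedy is just one of many such algorithms) shows the shape of the idea: the algorithm allocates traffic based on measured performance, and no human interpretation enters the loop.

```python
import random

# A minimal epsilon-greedy bandit sketch: mostly exploit the
# best-measured option, occasionally explore the others.
# The "true" conversion rates are simulated and hidden from the system.

random.seed(7)
true_rates = {"A": 0.05, "B": 0.08, "C": 0.11}   # unknown to the algorithm
counts = {arm: 0 for arm in true_rates}
wins = {arm: 0 for arm in true_rates}
EPSILON = 0.1                                     # exploration rate

def estimate(arm):
    # Observed conversion rate so far (0 before any data).
    return wins[arm] / counts[arm] if counts[arm] else 0.0

for _ in range(20000):
    if random.random() < EPSILON:
        arm = random.choice(list(true_rates))     # explore
    else:
        arm = max(true_rates, key=estimate)       # exploit current best
    counts[arm] += 1
    if random.random() < true_rates[arm]:         # simulated visitor outcome
        wins[arm] += 1

print({arm: counts[arm] for arm in true_rates})
```

The output is a traffic allocation, not a narrative; the moment someone overrides the allocation because of a story about why an arm "should" win, the system fails in exactly the way described above.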
Love your stories, and if you need them to get a point across, do not instantly remove them from your arsenal. Just don’t believe that they convey anything resembling the cause and effect of the world, and do not let them be the deciding factor in how you view and act on the world. They are color, and they make others feel good, but they add no value to the decisions being made. Be clear with others on how you are going to act before you ever get to the storytelling, and you will discover that stories are simply color. Every journey is a story; just make sure yours is less fiction and more about making correct decisions.
The many layers of what can run a testing program off track are both complicated and simple. All the major errors come from a need to fit an existing structure, to do what your boss wants, and most importantly to do what will make others happy. All the hard work that really defines success consists of the things no one wants to do, be it agreeing on a metric, being the leader your organization needs (even when it may not want one), or making sure people understand efficiency. All of these sins are really about defining how you are going to act when it comes time to take an action.
The next sin, only testing what you want, is the first one that is really about the action itself. It is about not just sitting together and “brainstorming” or listening to a pitch about something that sounds great, but incorporating the need to grow and learn into your actions, so that the path your group takes is organic, not forced. Groups get so caught up in only testing what the boss wants, or what their design people think will win, that they miss almost all the really important successes that can come from a test. Every group starts out wanting to test one feature or another, and everyone hears about a best practice or a cool thing that another group did and wants to do the same. We fail to feed the system with a broad range of inputs because we can’t see past our own opinion, and because of that we dramatically lower the outcomes of our efforts.
Groups fail when they are too caught up in what wins, or in proving someone right. People are so caught up in validating an idea that they fail to see what other options would do, or even more, what happens if they are wrong. What matters is not the discovery of a single validated idea, but the comparative analysis of multiple paths. This means the worst thing we can do is limit our efforts to what we want or to what is popular. The sin in testing is the desire to validate instead of learn, and the urge to limit focus to our own opinion. The truth is that the least important part of any test is what wins, since the value of the “win” is only as meaningful as the context of that win. If we discover a 5% lift, that may be great, but if in the same test we could have had a 10%, 20%, or 50% lift, then the 5% suddenly becomes an awful result. We get the most value when we are wrong, and when we discover this and allow ourselves to move down that path. Being aware of your ego, and not limiting your efforts to your or your boss’s opinion, is what defines the magnitude of value you actually achieve.
The sad truth is that we are often extremely unaware of the real value of our opinions. There is almost an inverse correlation between what people think will win and what actually wins. One of the best ways to test this is to make sure each test has a large number of very different variants, and then poll people before the test on what they think will win. You will find almost no connection between votes and outcome, which says a lot about our ability to measure things rationally after we have formed an opinion. This means that any time we only test what people think will win, or what they want to see win, we have fundamentally crippled our ability to deliver meaningful value. Remember that if you only test two things and the thing you wanted won, all you have done is add cost with the test. Challenge yourself and others to think in terms of possibilities, not opinions, and to scope things in terms of opening up the most options, not just the set options.
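One way to run the poll-versus-outcome check suggested here is to record each person's pre-test pick and compare the majority vote to the measured winner of each test. Everything below (test data, votes, lifts) is made up purely for illustration:

```python
# Hypothetical poll-vs-result check: how often did the majority pre-test
# vote pick the variant that actually won? All data is illustrative.

tests = [
    {"votes": ["B", "B", "A", "B"], "lifts": {"A": 3.1, "B": -0.4, "C": 5.2}},
    {"votes": ["A", "A", "A", "C"], "lifts": {"A": -1.0, "B": 2.2, "C": 0.3}},
    {"votes": ["C", "B", "C", "C"], "lifts": {"A": 0.8, "B": 4.0, "C": -2.1}},
]

hits = 0
for t in tests:
    winner = max(t["lifts"], key=t["lifts"].get)          # measured winner
    majority = max(set(t["votes"]), key=t["votes"].count)  # crowd favorite
    if majority == winner:
        hits += 1

print(f"majority vote matched the winner in {hits}/{len(tests)} tests")
```

Running this kind of tally on a real program's history makes the gap between opinion and outcome concrete, which is far more persuasive to a skeptical team than any argument about bias.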
One of the hardest tasks for groups is the need to assume that they know nothing about the value of an action. You are invested in proving to others that you know best, or in the value of any action that is already happening, but the sad truth is that most actions are done out of pattern and history, not because of measured value to the organization. Even worse, we build out giant project plans that we then become unwilling to change or disrupt, so focused on completion that we only pretend to care about the actual value of the project. The mindset of “if I am right, it will prove itself out; if I am wrong, it will show me a better way to do things” is easy to say but almost impossible to take hold of immediately. There is nothing worse than doing a massive project only to discover neutral or negative performance, yet this is by far the most common outcome for groups when they test large redesigns. It is vital that programs, as they build out, not only test what they want to win, but build into all plans dynamic points where things can go in unexpected directions, so that they are not inflexible to the reality of quantitative results.
Ask yourself these questions before you take any action: “How do I prove myself wrong?”, “What if I am focusing on all the wrong things?”, “What are the other feasible alternatives for this page, section, or module?” It sounds counterintuitive, but it will help you understand just how much larger the testable world is than the world you would inherently start with. You are not limited to what you want; you are limited only by your imagination and the efficient use of your resources. Force every action to fundamentally deal with these questions, not the question of “how much better is this idea?”.
It is easy to limit the possible value of your program by thinking you are right. All people are wired to do so, and they build their empires off projecting this knowledge to others. Building your tests to get past this sin, to find the right answer and to know how it measures against the larger world, is vital to the level of value you can get from your testing program.