If we look at just online A/B testing, I think there have been a number of major changes over the past few years. Looking back, testing started out as a “cool” feature that a few companies offered, available only through internal IT set-ups or from a select few vendors with very limited testing options. During this time you had major debates on things like partial versus full factorial testing, data waves, efficiency, and iterative testing. Tools did not have much segmentation built in, and for the most part they required a fair amount of knowledge to work with different groups. You also had the first wave of people claiming to be “experts” starting to saturate the market.
If you look now, we have many more tools and much richer information to segment and target with. There is an abundance of tools available, and the cost to get them live has dropped dramatically. Testing has gone from the few to the many, and as a result far more people are interested in the impact that testing can have…
The problem is that with the growth of tools and share of mind, there has not been an equal growth in understanding of data or testing discipline. We have created access to tools, made big promises, and put them in the hands of people who have no clue what they are doing. Instead of a few tools trying to create the best product available, the market is saturated, and tools are instead focusing on the lowest common denominator with things like GUIs, integrations, and very bad advice that makes companies feel like whatever they want to do is actually going to drive revenue.
The power of these tools is light years ahead of where it was just 5-6 years ago, but the actual value derived by most organizations has dropped precipitously as focus shifts from discipline to things like targeting, handing control to your analytics team, or content optimization. Even worse, the era of the “expert” has exploded, with everyone and their brother talking about “best practices” that are nothing more than empty excuses for you to run their “test ideas”. They seek out validation for their ideas from empty knowledge bases like whichtestwon. Personalization has become a common refrain, but there is so little understanding of what is needed to actually derive value from discipline that of the last 7 groups whose program outcomes I looked into, all 7 were losing money, not even coming out neutral. In reality most testing is now nothing more than a function or commodity for organizations, who believe that the mere fact of having a tool or running a test somehow correlates to the value derived.
As the market saturates with tools, the knowledge gap has become the driving factor in determining success. With so many programs unknowingly failing at just about everything they do, the difference between the “haves” and the “have nots” has become critical. My favorite axiom, “you can fail with any tool; it is only when you are trying to succeed that the tools matter,” has never been more true.
That is not to say that all is lost, because there have been two developments that look great for the future. The first is that some of the tools in the marketplace now allow for much more value than ever before. The ability to segment and look for causal inference, and the move away from confidence as a blind measure of outcome, have been great advancements that allow organizations to make much better decisions. While a majority of the market is lowering the common denominator in order to make groups feel better about their results, there are equally a few groups attempting to raise the bar and derive more value, not more myths. The second is the growth of N-armed bandit types of yield optimization options hitting the market. The farther we move away from opinion dictating outcomes, and the closer we get to rational uses of data, the more value can be achieved, and the more people get used to the fact that their opinions and test ideas are pretty much useless.
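For readers unfamiliar with how an N-armed bandit shifts traffic toward winners, here is a minimal sketch using Thompson sampling, one common bandit strategy. This is an illustrative toy, not the implementation of any particular tool: the function name, the made-up conversion rates, and the pull count are all assumptions for demonstration.

```python
import random

def thompson_sample(arms, pulls=10000, seed=42):
    """Toy Thompson-sampling bandit over Bernoulli conversion rates.

    `arms` holds the true (hidden) conversion rate of each variant.
    Each arm keeps a Beta(successes+1, failures+1) posterior; on every
    pull we draw one sample per posterior and serve the arm with the
    highest draw, so traffic drifts toward the best performer.
    """
    rng = random.Random(seed)
    successes = [0] * len(arms)
    failures = [0] * len(arms)
    for _ in range(pulls):
        # One random draw from each arm's Beta posterior.
        draws = [rng.betavariate(s + 1, f + 1)
                 for s, f in zip(successes, failures)]
        arm = draws.index(max(draws))
        # Simulate whether this visitor converted on the chosen arm.
        if rng.random() < arms[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures

# Example: three hypothetical variants with hidden conversion rates.
s, f = thompson_sample([0.02, 0.05, 0.03])
pulls_per_arm = [si + fi for si, fi in zip(s, f)]
# Most traffic ends up on the best-converting variant without anyone's
# opinion deciding the allocation.
```

The point of the sketch is the one in the paragraph above: allocation is driven by observed data rather than by whoever argued loudest for their test idea.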
My sincere hope is that in another year or two, we have moved past this personalization insanity and are instead talking about dynamic user experiences. That we have stopped talking about data integrations and are instead talking about data discipline. That tools stop telling people how easy and fast it is to get a test live, and instead focus on the parts of testing where the value comes from. More than anything, I hope that testing matures to the point that people fully understand it is a completely separate discipline, one that requires completely different ways of thinking about problems than analytics, traditional marketing, IT, product management, or really anything else out there. Testing can provide far more value with far fewer resources than just about anything else an organization can do, but it is going to take the maturation of the entire testing world for people to be led by results rather than by bad marketing messages. Time will tell where things go from here.