Category: Rants

The New Long Road Ahead

So much has changed in my world recently and I wanted to give everyone a heads-up. After 5+ years trying to fix some of the largest and most complicated organizational optimization issues, I have stepped away from Adobe and decided to go in a somewhat new direction. I have taken a position as Director of Optimization for a small company in the Carlsbad, CA area called Questpoint, where I will be overseeing optimization of a number of lead gen programs.

What this means is that I now deal with much smaller but much more meaningful measures of success. It also means that I can now talk much more directly about the challenges I face and the solutions as they present themselves to me. I will continue to investigate the theoretical challenges of optimization, but will also talk more directly about the realities of testing on a budget. I will be using a number of tools, including Google Analytics and Google Experiments, and will be breaking down their advantages and disadvantages in comparison to the enterprise-level tools I was familiar with.

Here is to the new path before me and here is to the many barriers and hills one must climb to bring that boulder to the top of the mountain.


The Harsh Realities of the Business World

I was recently lucky enough to take 6 weeks off for a sabbatical, where I was able to really get away from the life of a consultant and just spend time with my family. Upon returning I reached out to a number of people I work with or know in the industry to catch up. Two different people, both of whom I respect deeply and who I think are some of the best and brightest in the industry, regaled me with stories about how fed up they were with the industry and how they were losing faith in the system. This conversation is hardly new, but the stark difference between the “real” world and what happens in the corporate world was striking.

Without fail, everyone comes to realize the limitations of the corporate world, and while every organization is different, the things you see, especially at the largest corporations, are almost universal. One of the hardest things I have to deal with as I try to mentor new people, or people I want to help in the industry, is helping them really come to grips with this reality and see that there is hope, but that they will always be making choices: what is good for the company vs. what is good for them.

With that in mind, I wanted to lay out some universal truths that I think everyone should come to terms with if they want to survive for any period of time in this business world.

Most effort is wasted –
This becomes strikingly clear when you start doing exploratory causal analysis and look at the impact of individual projects or entire departments. The number of times in the last 5 years that a few minutes of effort has shown that entire years’ worth of work had a negative impact on the bottom line cannot be counted on my appendages. There are entire disciplines that people have devoted their lives to that have no impact whatsoever and are nothing more than phrenology.

But 100% of people think they do excellent work – This really hit home today in one of those conversations, as the realization that action is confused with value became clear. Most people assume their actions provide value, and because of the preponderance of data out there, most can find a way to come up with some story to justify their actions.

I have had multiple engagements that started with the person presenting reports, graphs, and presentations showing massive value to the program, only for a few minutes of diving into the numbers to show that not only was the program not improving the business, in multiple cases it was actually causing catastrophic harm. It happens everywhere; at least 70% of case studies are full of 100% fake data. People are so desperate to please their boss or make themselves look good that they find ways, often subconsciously, to show their value. It is not malicious, it is just sociopathic.

My favorite story of this was when I was working with a group that was reporting how great their recommendation tool was and how it was generating 18% more revenue! In reality they were only looking at the revenue increase of the products recommended (3 out of a library in the hundreds). A standard analysis of the entire revenue stream showed piles of data indicating they were losing 6% net revenue for the entire company, totaling millions and millions of dollars.
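The arithmetic behind that trap is worth sketching. The numbers below are made up (the real figures are masked), but they show how measuring only the recommended products can report an 18% lift while the whole catalog loses 6%:

```python
# Hypothetical numbers only, to illustrate the measurement trap: a
# recommendation tool lifts sales of the 3 recommended products while
# cannibalizing the rest of the catalog.
before = {"prod_A": 100.0, "prod_B": 100.0, "prod_C": 100.0, "rest_of_catalog": 9700.0}
after = {"prod_A": 118.0, "prod_B": 118.0, "prod_C": 118.0, "rest_of_catalog": 9046.0}
recommended = ["prod_A", "prod_B", "prod_C"]

# The flawed report: lift measured only on the recommended products.
rec_before = sum(before[p] for p in recommended)
rec_after = sum(after[p] for p in recommended)
reported_lift = (rec_after - rec_before) / rec_before  # 0.18 -> "+18% revenue!"

# The honest analysis: lift on the entire revenue stream.
total_before = sum(before.values())
total_after = sum(after.values())
actual_lift = (total_after - total_before) / total_before  # -0.06 -> losing 6% net
```

Both numbers come from the same data; the only difference is the denominator you choose to report on.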

People are not rewarded if a company makes 3% more money from their actions, and as such they treat the lack of complaints as the ultimate sign of success. We never look at what could have been, only what was and how people reacted to it. Context is something you have to strive and work hard for, every day; otherwise almost all stories and data are meaningless. This often leads to many long-term problems…

Because of this, most people have no clue what they are talking about – If you can come up with a story to justify any action, you don’t dive in to see if you are right; most effort is wasted, and the only thing people look at are the stories you weave, so why would you need to know what you are talking about? It is far easier to create a narrative out of thin air than it is to actually back up anything you are saying. Thanks to fear, Dunning-Kruger, and plain common greed, this is allowed to take place. The more someone is able to convince people of their story (which is a different skill than getting actual results), the more they move forward, the more people believe it, and the more people want to copy it. It is a self-fulfilling cycle, and one where actual knowledge is scorned because it serves as a direct challenge to the empires built by these people. If you really want to improve things, you must always make people go where they don’t want to, because the safe shores are the ones without accountability and that sound like the same things they have always been doing.

People who build the tools and work at agencies often know even less – I have direct experience with a large number of tools, and I am lucky enough to know “thought leaders” at a large number of other tools and agencies, and I can tell you that as a whole, most of the people you hear talking couldn’t provide value to you if their lives depended on it. They have become experts at telling you what you want to hear, not what you need to hear. The top people in the industry are storytellers who weave a tale of telling you to do basically what you have been doing, but justify it with fancy terms or new actions that get to the same place. Tools become designed for this, people get advancement for this, and oftentimes anyone who doesn’t want to take part in this vicious cycle moves on to other endeavors, meaning the worst become the ones there longest, gaining power and only making the cycle worse. Of the top 5 agencies I know in my space, I know multiple top people at all of them, and not one knows anything about providing value, or even cares. They do the same tired actions because that is what they have always done, and they don’t get called out on it, so why should they change? To quote one famous person in our industry, “I throw a grenade and try to get people to come to me when they run from it.”

And you thank them for it – You know why the tools are designed to do things that aren’t valuable and the top agencies are run by people who tell stories and have no clue what they are doing?

Because you do not hold them accountable

I have worked with exactly 3 organizations in 12 years where results mattered; the rest just want to sell a story internally, do something new, and then do more actions that make their boss happy. You buy the story, do the failed actions, sell that story internally, which results in a promotion, which propagates the cycle. The cycle spreads, and just as stated before, knowledge of other ways is simply a risk. This is why you find people in these places who are so good technically, but very few if any in most organizations who have any clue about strategy other than repeating the same tired failing things that everyone else repeats. Organizations want people to do what they say and tell them it is golden, not to make them money. The only person who is going to really hold you accountable for value derived is yourself.

But all hope is not lost – This environment is where we exist, and it has been that way since you started and will be far after you are done working. The environment doesn’t change, so it is up to you to decide how to deal with it. Just because people don’t want to change doesn’t mean they won’t; it just means it isn’t easy. Just because others don’t hold you accountable does not mean that you can’t. Just because doing what others want will help you move forward, it doesn’t mean that you have to sell out at every opportunity. It is a balancing act, both of survival and of how to tackle these complex problems. The thing that makes people survive, or be good, is that they don’t hide from the reality; they embrace it. They might get frustrated, but they come back and push back even harder tomorrow. If you give in, if you become cynical, if you just give up and take the easy path, that is your decision, just as it is to do the right thing even if it is not best for your career. No one can tell you what the right choice is; all they can do is help you see that you are making these choices and help you make the one that is best for you.

And when you do overcome these problems it can be the ultimate high –
Just because the entire system might be designed to keep things from progressing does not mean that progress isn’t made, only that it is rare and incredibly hard fought. I also had the pleasure of being on a call today with a client who came from an org with no background in testing, who just threw up tests because they thought they should, with no resources and no knowledge, and who in the last 9 months has transformed to the point that they have a dedicated team, great discipline, a good educational base, and a series of exploratory tests running. That moment, which I wish happened all the time but doesn’t, makes it all worth it, at least for me.

In the end, it isn’t about what title you have or who thinks you did well; it is about what you are trying to accomplish and whether you held yourself accountable to it. People can do good work; almost all of the problems I outlined above happen subconsciously, not consciously. People aren’t out to screw each other; they just do it and then rationalize it away. Opening someone’s eyes, or making it so they don’t have to take the easy path just because, is all you can do.

Choose what you want, and then do it. Don’t let the system dictate the outcome, it is up to you to overcome, adapt, or become a cog in that machine.

Rant – Testing is about the Driver, not the Car

I recently answered a question on the value of testing on Quora and was asked to re-post my response here by a few people I know in the industry.

Question: I’ve been all about A/B testing, but then I just read this post from Erik Severinghaus. Is A/B testing as valuable as we think?

Answer: Like many things in life, the answer is not that simple. Think of it like driving a car: there are good drivers, slow drivers, oblivious drivers, angry drivers, and skill ranges from beginner to professional. The issue is in the driver, not in the concept of a car.

Testing is much the same way. The reality is that in many organizations (including many that champion testing to death) there is very little value in how they leverage testing. In many cases testing is actually costing those companies money, because they are not disciplined in how they approach things; they focus on idea validation and do not understand how to act on data, doing things like blindly following statistical confidence. If you look at how he describes testing in that blog post, this is where those people are at. If MVT is simply a way to throw a bunch of items against a wall and choose a winner, then you know that you are firmly in this realm. In those cases, I would argue that testing is worse than a mouse pad; it is more analogous to a cup holder in a car. It is there, people use it, they get enjoyment out of it, but it has nothing to do with where the car ends up or how fast it gets there.

There are other organizations that look at data differently and use testing in a different manner: one that focuses and leverages resources, and is not used for validation or “choosing 2 headlines”. In those situations, there is very little that can be said to fully capture just how valuable testing is. Testing changes the direction of entire organizations; it proves people wrong, it focuses resources, it allows for the exploration of alternative feasible options, and it lets you really know the value of actions, not just argue them. It is a tool whose use is to find out which of many different routes is most valuable, and then help you drive down those roads, providing more and more value at each step. Those scenarios are more analogous to testing being the GPS, describing routes and shorter distances, as well as helping maximize time and fuel.

In both cases, there are ways to automate the process to lower decision time and to increase the efficiency of the test itself. That doesn’t address the real problem, however: if the entire vision of testing is wrong, then it doesn’t matter what system you use to make decisions or how you leverage MVT. It really doesn’t matter what size drink goes into the cup holder, or how many different drinks can be placed there over time. If you are going down the other route, then how fast your GPS updates, what information it uses, and what factors you use to decide routes can have a massive impact on where you end up.

How Analysis Goes Wrong: The Week in Awful Analysis – Week #9

How Analysis Goes Wrong is a weekly series focused on evaluating common forms of business analysis. All evaluation of the analysis is done with one goal in mind: does the analysis present a solid case that spending resources in the manner recommended will generate more revenue than any other action the company could take with the same resources? The goal here is not to knock down analytics; it is to help highlight practices that are unknowingly damaging the credibility of the rational use of data. What you don’t do is often more important than what you choose to do. All names and figures have been altered where appropriate to mask the “guilt”.

I have a special place in my heart for all the awful analysis currently being thrown around regarding personalization. So many different groups are holding up personalization as the showcase advantage of big data. No matter where you go, ad servers, data providers, vendors, agencies, and even internal product teams are all trying to talk about or move towards personalization.

This is not to say that personalization is a bad thing; I believe that dynamic experiences can produce magnitudes higher value than static experiences, and I have helped many groups achieve just that. What most surprises me, however, is the awful math being used to show the “impact” of personalization by groups who have achieved absolutely nothing. I have lost count of the number of times I have walked in and found a person or group talking about how personalization has improved their performance by some figure so fantastic that it seems the business should be doing nothing but thanking them for their genius. The sad reality is that most of the analysis is so biased and so bad that in many cases the same companies are actually losing millions by doing this “personalization” practice.

Analysis – By putting in place personalization, we were able to improve the performance of our ads by 38%.

We have to tackle the larger picture to evaluate statements such as the one above. Before we dive too deep into how many things are wrong with this analysis, we need to start with a fundamental understanding of one concept: there is a difference between the changing of content or the user experience and the targeting portion of that experience. In other words, changing things will result in an outcome, good or bad, and targeting specific parts of that change to groups will also lead to an outcome. The only way that “personalization” can be valuable is if that second part of the equation is the one leading to a higher outcome.

1) Just to get the obvious out of the way, the analysis doesn’t tell you what the improvement was in. Was it clicks? Visits? Engagement? Conversion? Or RPV (revenue per visitor)? If it is anything but RPV, then reporting an increase has no bearing on the revenue derived for the organization. Who cares if you increased engagement by 38% if total revenue is down 4%?

2) The only way that “personalization” can be generating a 38% increase is if the following is true:

The dynamic changing of content raised total RPV by 38% over any specific static piece of content, or content served in ANY other fashion.

In other words, if I would have gotten 40% increase by showing offer B to everyone, then personalization is actually costing us 2%.

3) Since most personalization is tied to content, and the inherent nature of content changes is a very high initial difference that normalizes over time, what is the range of outcomes? What is the error rate? The inherent nature of any bandit approach that uses causal data to update content means that you either have to act as quickly as possible, resulting in a higher chance of error, or act slowly and risk not responding fast enough to the market. In either case, performance will never be consistent.

Rather than continue to dive through each and every biased and irrational part of this analysis, I want to instead present two ways that you can test out these assumptions to see the actual value of personalization:

Set-up: Let’s say that you believe that 5 different pieces of content are needed for a “personalized” experience. In other words, you have a schema that will change content by 5 different rules.

The same steps work for anything from 2 rules to 200.

Option #1 (the best option):

Serve all 5 pieces of content to 100% of users randomly and evenly. Look at the segments for the 5 rules AND all other possible segments that make sense.

You will get one of two outcomes:

1) Each piece of content is the highest performing one for that specific segment and those are the highest value changes


2) ANY OTHER OUTCOME which by definition in this case results in more revenue.
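A sketch of that readout, with fabricated traffic and made-up segment and content names, might look like the following. The point is the shape of the comparison, not the numbers:

```python
import random
from collections import defaultdict

# Fabricated example of Option #1: every piece of content is served
# randomly and evenly to 100% of users, then performance is read out
# by segment. Segment and content names are hypothetical.
random.seed(0)
contents = ["A", "B", "C", "D", "E"]
segments = ["seg1", "seg2", "seg3", "seg4", "seg5"]  # the 5 targeting rules

revenue = defaultdict(float)   # total revenue per (segment, content) cell
visitors = defaultdict(int)    # visitor count per (segment, content) cell

# Simulated traffic; in practice this loop is your visitor log.
for _ in range(10_000):
    seg = random.choice(segments)
    con = random.choice(contents)            # random, even serving
    buy_prob = 0.12 if con == "B" else 0.10  # made-up visitor behavior
    order = 50.0 if random.random() < buy_prob else 0.0
    visitors[(seg, con)] += 1
    revenue[(seg, con)] += order

# For each segment, which content actually has the highest RPV?
best = {}
for seg in segments:
    rpv = {c: revenue[(seg, c)] / max(visitors[(seg, c)], 1) for c in contents}
    best[seg] = max(rpv, key=rpv.get)

# Personalization is only the right answer if best[seg] matches the
# content your rules would have served to that segment. Any other
# result means a different serving scheme makes more money.
print(best)
```

You would also run the same readout over every other segment definition that makes sense, not just the 5 rule-based ones.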

Option #2

Create dynamic logic in the tool, based on the 5 rules.

Create 6 experiences.

Each experience except the last shows one piece of content, statically, to all users (so the content matching group A actually gets served to all 5 user definitions in experience A). To the last experience, add the dynamic rules.

If the last experience wins, then you at least know that the dynamic content is better than static content. If you are looking at your segments correctly, you will then also be able to calculate the total lift from other ways of looking at the content to the dynamic experience that you tested. If the dynamic experience is still the top performer, congratulations on being correct. If any other way works best, congratulations on finding more revenue.
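Reading out that 6-experience test is straightforward. The RPV figures below are invented, but they show the comparison logic:

```python
# Fabricated RPV results for the 6-experience test described above:
# experiences 1-5 each served one piece of content statically to
# everyone; experience 6 applied the dynamic 5-rule logic.
rpv = {
    "static_A": 2.10,
    "static_B": 2.45,
    "static_C": 1.95,
    "static_D": 2.00,
    "static_E": 1.80,
    "dynamic_rules": 2.30,
}

winner = max(rpv, key=rpv.get)

if winner == "dynamic_rules":
    print("Dynamic content beats every static option.")
else:
    # Here static_B wins: serving B to everyone earns more RPV than
    # the personalization scheme, so the rules are costing you money.
    lost_rpv = rpv[winner] - rpv["dynamic_rules"]
    print(f"{winner} wins; the dynamic rules leave ${lost_rpv:.2f} RPV per visitor on the table")
```

In this made-up data the dynamic experience beats four of the five static options but not the best one, which is exactly the case where reporting a "lift from personalization" hides a net loss.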

In both of these tests, if something else won, then doing what you were going to do, or what you would otherwise report on, IS COSTING THE COMPANY MONEY.

There is massive value possible in tackling personalization the right way. If you do rational analysis that looks for total value, you will find you can achieve results that blow even that 38% number out of the water. Report and look at the data the way most groups do, though, and you ensure that you will get little or no value, and that you are most likely going to cost your company millions of dollars.