Category: Organization
The Road to Greatness: The Do’s and Don’ts of Starting an Optimization Program
As more and more programs emerge with the growth of the online optimization field, a preponderance of “best practices” has sprung up around testing, personalization, and all other active forms of leveraging data. It can seem like you have to know a massive amount just to understand what those “experts” are saying.
With that in mind, I wanted to present some very simple do’s and don’ts for programs just getting going. Starting correctly and setting the stage for success is vital to being efficient and getting the value out of your program that you can and should. The problem is that in almost all cases people’s first instincts lead them astray. What you don’t do is usually more important than what you choose to do. The key is to make sure that you focus your limited time on the actions that will provide the greatest growth and value to your program. The same advice works for groups that have been testing for years, as many of those programs are just built-up versions of the same bad behaviors.
DO – Hold discussions about a single success metric
The very first and sometimes the most painful hurdle that a program faces is getting groups to agree on how to act. This is in many ways completely counter to the existing culture, as many groups have competing goals and are focused only on their little piece of the larger pie. If you do nothing else, getting people to agree on the one measure you can all make decisions on is vital.
A side benefit of this conversation is that it starts the process of allowing people to dissociate the actions they think will lead to success from the actual measure of success. Far too many people think that if their idea is that more people looking at product X will generate additional revenue, then the measure of success is more people looking at product X. You may have an idea for what you want to do, but you are doing it to accomplish a goal, so measure the goal, not the action. The measure of success would be additional revenue, and once that is the only goal, you can start comparing all feasible ways to achieve it.
DON’T – Get too caught up on test ideas
One of the least important parts of a testing program is the generation of test ideas. While this is the fun part for people trying to prove their point, the keys to success are not in having a bunch of ideas, but in putting together the infrastructure and instilling the discipline of successful testing. Test ideas will come naturally out of everyday conversations and especially out of prior tests and accumulated knowledge. There is never a lack of things you can do, but focusing too much on that part lets people get caught up in many different biases, which will cause their rational evaluation of the results to collide with their ego.
DO – Apply tech resources to a larger infrastructure
All tools require some sort of deployment, and while some are easier than others, the biggest mistake you can make is to think that every test will require a massive amount of resources. If you build a proper infrastructure across your site, then most tests will not require any involvement from development resources whatsoever.
The key to a good infrastructure is to have tagging in the key locations on your top pages so that you can test just about anything. You will also need to make sure that you have tracking in place for your success metric, and for any additional information (like segment information) that you may want to provide.
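To make this concrete, here is a minimal sketch of what such an infrastructure hook might look like. Every name here (the region registry, the `/metrics` endpoint) is hypothetical and vendor-neutral; the point is simply that pages expose stable regions and a shared success metric once, so individual tests need no new development work.

```typescript
// A minimal sketch of a site-wide testing hook (all names hypothetical).
// Each key page registers named regions once, so later tests can swap
// content without involving development resources.

type RegionRenderer = (el: HTMLElement) => void;

const regionRegistry = new Map<string, HTMLElement>();

// Called once per page template during the initial deployment.
export function registerRegion(name: string, el: HTMLElement): void {
  regionRegistry.set(name, el);
}

// Called by the testing tool at runtime; no dev involvement needed.
export function applyVariant(name: string, render: RegionRenderer): void {
  const el = regionRegistry.get(name);
  if (el) render(el);
}

// Success-metric tracking lives beside the regions, so every test
// reports against the same site-wide metric (e.g., revenue).
export function trackSuccess(metric: string, value: number): void {
  // Replace with your analytics call; the payload shape is illustrative.
  navigator.sendBeacon("/metrics", JSON.stringify({ metric, value }));
}
```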
Testing should not be thought of as a project but as an ongoing organization and site feature. It is something that should be set up in a way that never stops and is never about the simple validation of a single idea. Going through the initial “pain” of a larger deployment, and making sure that your IT group understands that this is not a permanent engineering-owned project, will dramatically improve your ability to move quickly later on. Once this is done, the key is to prioritize tests by resource usage, favoring those that will deliver the greatest return for the lowest cost.
DON’T – Think testing is just an extension of your analytics group
How you think about optimization is almost the exact opposite of analytics. Instead of patterns and anomalies in larger data sets, you have a single point and the push to make consistent, meaningful changes. Testing is not just the action arm of some analysis you did to validate your point; it is the active acquisition of, and interaction with, data.
To succeed, you need to think about segmentation differently. You need to think about what a success metric really is and how it is different in testing. You need to be able to speak in terms of comparative analysis, not validation. Basically, you have to be able to turn just about everything you do with analytics on its side. Later on, you can start leveraging the two together, but as you start, separating them completely is going to grant you far more return with far less work than trying to tack testing onto your daily analytics activities.
DO – Think about your rights management
Make sure you know who is going to have what rights, and put checks in place to keep too many people from changing your site.
DON’T – Blindly follow statistical measures
You don’t need to know everything about all statistics, but you do need to understand some basic concepts to really understand results. The first is that for any statistical tool to be useful, you need not just statistical confidence, but you also need the data to be representative of the change you are going to make. If you get 99% confidence in 3 hours on a Friday afternoon, that data is only representative of that period of Friday afternoon.
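As a quick illustration of the gap between confidence and representativeness, here is a sketch using a standard two-proportion z-test; the Friday-afternoon sample numbers are invented for the example.

```typescript
// A sketch of why confidence alone is not enough. The math is a standard
// two-proportion z-test; the traffic numbers below are invented.

function zScore(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPool = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));
  return (pB - pA) / se;
}

// Three busy hours on a Friday afternoon can easily clear z = 2.58
// (the two-sided 99% confidence threshold):
const z = zScore(120, 2000, 180, 2000);
console.log(z.toFixed(2)); // ~3.60, well past "99% confidence"

// The statistic only says these two samples differ from each other.
// It says nothing about whether Friday-afternoon visitors represent
// the rest of your week's traffic, which is the change you will make.
```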
DO – Start thinking about how you are going to store and share results
When you are testing right, you are going to constantly learn new things, and those lessons will eventually be far more valuable than any individual result. You need to start thinking about where you are going to share this information, the format, and the availability. You also need to make sure that this is not a static item but a living knowledge base.
DON’T – Let any test go out with just two recipes
One of the hardest lessons to learn is that testing is not about validation of a single point, but about comparing feasible alternatives and being prepared to go in directions that you never imagined. While not everyone will be ready for this day one, the simplest way to prepare people is to force discipline on them. Making people have multiple very different but feasible alternatives will start giving you far more information and will start to show you areas where what they thought mattered didn’t.
There are a thousand other things that go into running a program, but just starting out, if you tackle these simple things and avoid some common traps, you will get far better results, make a larger impact on your business, and use far fewer resources. Think about what you really want from your program, then stop focusing on individual tasks and instead start putting the key pieces in place for long-term success.
Change the Conversation: Technology as the only “solution”?
As the world becomes more and more complicated, the battle between those on both sides of the functional knowledge gap becomes more vital. The need to constantly update your world view, the speed of change, and the need to move past no-longer-relevant concepts leave many people struggling to keep up and far more willing to listen to any offer to lighten their burden. When people do not know what they do not know, anything sounds like an oasis in the desert. To exploit this, a massive number of groups promise “solutions” to this fundamental problem, offering technology as the sole means to make up for this fundamental inability to adapt to a constantly changing world. The problem comes not from the challenge itself, but from mistaking the suggested solution for the sole requirement of the desired end result. Technology is part of a solution, but without understanding that your people must change, no technology will ever provide its promised value and very little will ever be achieved. When we fail to understand that change starts with how you think, not with what you purchase, we lose the ability to gain value from any “solution” we could ever acquire.
The same cycle plays out time and time again. A senior executive defines a “problem,” such as a lack of clean data, ease of deploying technology, or the need to create a personalized experience. People proceed down a path of trying both to be the one to find a “solution” and to find ways to pass blame for the current state onto another party, internal or external, since any new solution must replace the prior “solution” that did not, in fact, solve all the world’s problems. They reach out, research, and find the provider that makes the biggest promises about “ease” or “functionality,” or the one they have a prior relationship with. From there, it is a process of discovery, promises, and then acquisition. The tool then gets shared with all the other groups, and the individuals who now find themselves tasked with getting it installed must also make sure that their boss does not upset others in the company by instituting a change to the status quo. Each group provides “needs” in a one-way direction that become part of a massive deployment road map. Groups continue to get buy-in and then scramble for resources to deploy, acquiescing a little to each group they work with. Eventually the solution goes live, activities and tasks are enacted, and everyone moves forward. The same problems arise a year or two down the line, agendas get forgotten, and large presentations are held to try to frame a positive outcome for all that was invested. Very little has changed, very little has really improved for the organization; a new piece of technology has simply been invested in to replace the old technology that went out the door.
I am in no way saying that technology is a bad thing; I work for the top marketing suite in the world, and wouldn’t if I did not feel that the tools themselves were best in class. Technology is simply a magnifying lens, increasing the impact of what your organization does great, but also of where it fails. The reality, though, is that few companies get anywhere close to the value they should from the tools, and often that lack of value is accompanied by magnitudes of increased effort. If groups would start with a real, honest change in how they understand the world around them based on each tool, they would find that they are wasting almost all their effort in the vain attempt to justify their prior actions. Each tool is an opportunity to change and improve the efficiency of your organization, yet in almost all cases this vital task is merely talked about or ignored, and never enacted in a meaningful way. If you do not start your time with a tool with a discussion around what disciplines define success and failure for that specific tool, then no tool will ever do more than be window dressing on bad organizational dynamics.
One of the first things I try to teach new analysts and consultants is that there is no such thing as a technical solution. All problems are strategic; they may have a technical solution, but they are truly strategic in nature. It is far easier to find a massively technical workaround to do the one thing that senior VP X is asking for than it is to take the time to discover whether that effort is needed or whether it will provide any actual ROI. The unfortunate truth is that for the vast majority of the “problems” being identified, a successful or unsuccessful answer to the stated problem would not change the fact that they are not going to receive value. Slick interfaces don’t make up for poor strategy; integrations between platforms do not make up for not understanding your data. The truth is that in almost all cases the real problems are the ones we are turning a blind eye to; they are the elephants sitting in the room that we refuse to talk about, so instead we make excuses and sacrifice outcomes in the name of taking credit for change.
This is the nature of confusing the “solution” for the desired outcome. Solutions are a means to an end, not the end itself. Never confuse the need to add functionality with the goal of that functionality. You are not just adding testing to someone’s day-to-day job; you are asking your entire organization to discover what the value of its own actions is. You do not find a tag solution for the fun of it; you do it to save resources so that you can spend them on valuable actions. You do not start a personalization project from the goodness of your heart; you do it because you believe it will increase revenue. As long as you keep the conversation about the end result, you can have a functional conversation about the efficiencies of various paths to arrive at that point. Do you really need 90+ targets active, or would 15 give you a higher lift at much lower cost?
The cycle that technology gets brought into is the problem, as are the egos of those who own the purchase. Like most real-world situations, it is far easier to make promises than to fix real problems or to deal with other groups and how they think about and tackle their own problems. Analytics, testing, and marketing are not things that are just done, even if your job is oftentimes just a series of repeated activities. These actions are done to improve performance, which means the change has to happen in which actions you spend resources on, not just in which technology provider you use. If more time is not spent reforming the environment around the technology, then all time will end up wasted. Never get caught up in the cycle and the “can” questions without keeping a constant, vigilant eye on the “should” questions of all actions.
No matter whether an idea is good or bad, it is always going to be easier to just do what your boss asks, and even easier to find a way to convince yourself that it is somehow valuable. We convince ourselves, as others convince us, that we are doing the right thing. We do not want to take the time to think about our opportunities to do things in a different way. Sadly, most actions commonly done in our world are not valuable or efficient, and in all cases they can and should be improved. You must first get over your own fear of doing the right thing before you can ever try to get those above you to do the same. The battles worth fighting when you bring in a piece of technology are not about how many developers you can get to deploy a solution, or how you can get an agency to run it for you, but about how to fundamentally change current practices to learn and grow with the technology.
There is no shortage of people willing to promise that you don’t need to really look inwards to get value, and in some cases they are able to provide momentary glimpses of it. Great tools offer you the chance to succeed, but do not guarantee that outcome. No tool will ever make up for the incorrect application of its features, just as no organization will truly change unless change is more important than task. In the end, every success I have ever seen or had with an organization comes from fundamentally challenging and changing existing practices and from creating simpler ways to get more value. Change is hard, and most cannot achieve it in a meaningful way, but all value comes from change, not from creating complex ways to accomplish a task. Complicated will never mean valuable; complicated will always simply mean complicated. Never forget that a solution is a promise of a means to an end, and that the real ability to achieve that end, or more, comes from action, not from a tag or a solution being deployed.
One Problem but Many Voices – The Many Ways to Explain Rate & Value
One of my great struggles in the entire data world is getting people to understand the difference between rate and value. This problem has a thousand different faces, yet it can be extremely difficult to find the right way to correct the misconceptions of any particular case. People are constantly trying to abuse data to show that they provided value or that something is directly tied to an outcome, despite the fact that the data itself cannot in any way tell you this.
I was recently faced with trying to explain this to a person new to the data discipline, and found that, once again, my answer was much longer and more complicated than I would hope. It seems like such a simple concept, but the truth is that everyone has their own way to understand and tackle this problem. With that in mind, I reached out to some of the smartest people I know to see how they tackle the issue. The specific problem I asked about was explaining the difference between, and contradictory nature of, revenue attribution and revenue generation.
Not everyone agrees on the issue or how to express it, and that is why it is so difficult for some, especially those who don’t deal with it on a daily basis. It takes many great voices to find the tools that enable anyone to correctly tackle large, complex issues.
Below are a few of the answers that I was able to gather:
Brent Dykes – Author of Web Analytics Action Hero and general analytics guru –
For example, if my site’s conversion rate is 10%, you’d think that would be great. In the back of your mind, you may remember reading somewhere that the average conversion rate for most sites is between 2-3%, so 10% sounds fantastic. However, when we start to add context and perform comparisons, this number may end up sounding less appealing. What if my site’s conversion rate last year was 15% compared to today’s 10%? What if similar country sites in my organization have 20% conversion rates? What if my closest industry peers recently shared in a media article that they have average conversion rates of 30%? Now the 10% conversion rate doesn’t sound as good.
A rate simply provides us with a number, and what we do with the measure is what adds value. When we analyze what’s happening with the conversion rate, we can determine how to create more value or stop value leakages. Through testing we can confirm what we found in our analysis (correlation vs. causation) before making wholesale changes. It’s important to use the right rates or metrics, but the numbers without any context or comparisons are meaningless. Value only comes from understanding the rates and making changes to improve them over time.
Russell Lewis – Optimization Consultant
You have two QBs to play. One has a higher completion rate than the other. This rate indicates that he should have a high predicted score when it comes to game time. When you decide to play him, he falls flat on his face. The rate did not give you the value of his performance; it just showed what he has done in the past in terms of completed passes versus attempted passes. The value of what he actually did is seen when put in comparison with the QB on the bench, who had the 10 additional points needed to win the game for the week. Without the comparison to the other QB and the current matchup, we would have no value.
Anonymous –
When trying to determine revenue-generating sources, I have always relied on a less granular outlook. Rather than saying “this email message generated $X,” step back and say “email campaigns drove $X, while SEO drove $X.” Get much more granular than that and you begin speculating too much about human nature, which is anything but reliable.
To me that is when you get into the psychology of it, and it gets too nit-picky. I think broadly if you are trying to determine whether to put ad dollars in email or SEO it can help…but when you start saying “well, if we put out an email with this call to action, it will generate $X in return” you have a problem.
To me it is a gross misuse of the scientific method…you almost need to look at the control group and see what they are doing before you can determine anything. No one looks at the visitors not associated with a campaign…maybe people on the site just buy stuff on their own.
Jared Lees – Business Consulting Manager
• Revenue allocation – similar to attribution or correlation. Assigning credit to an activity. The amount of credit could depend on the business rules or attribution model you want to use.
• Revenue generation – total revenue acquired from a singular action. There could be other actions that influenced it, but we aren’t counting that here.
Rhett Norton – Senior Retail Consultant & Team Lead
I think the best thing that helps explain these types of situations is explaining causation and correlation.
This made me think of the jobs report on the economy – the unemployment rate is 8%; there is not really anything we can do with that, it is just a number/rate. Lots of people like to look at different sectors and pretend we know what is going on when they can say that growth increased in the technology sector – this is similar to the page participation example above. Again, there isn’t anything you can do with that; it is just a rate. The real question is how we move the needle: how do we create jobs, and what actions make jobs decline?
Derek Tangren – Principal Analytics Consultant –
• Revenue allocation is a method/means to assign success based on certain behavior
• Revenue generation references an action that you are taking in order to invoke a positive change in driving revenue
I would define revenue generation as the action you take and the revenue allocation the means by which you measure the success.
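To make the distinction in these definitions concrete, here is a small sketch of how allocation and generation can diverge over the same order; the channels, journey, and dollar figures are invented for illustration.

```typescript
// A sketch of allocation vs. generation (numbers invented).
// One $100 order that was touched by three channels:

const touches = ["email", "seo", "email", "display"];
const orderValue = 100;

// Generation: the order produced $100, once, regardless of any model.

// Allocation under a last-touch model: "display" gets the full $100.
// Allocation under a linear model: each touch gets an equal share.
const linear = new Map<string, number>();
for (const channel of touches) {
  linear.set(channel, (linear.get(channel) ?? 0) + orderValue / touches.length);
}
console.log(linear); // email: $50, seo: $25, display: $25

// Same $100 of generated revenue, two different "truths" about who
// earned it. Allocation is a business rule, not a measurement.
```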
There were many more answers, as you would expect. Some said it didn’t matter because the point is to give executives evidence to continue their agenda, others simplified the situation to just correlation and causation, and still others didn’t even acknowledge the problem. Most acknowledged that the problem is a major one, but were unable to come up with a simple, direct way to convey the message.
Like so much in the online data world, there is no simple answer. Even more, there are as many different agendas and points of view as there are ways to answer the question. Simple answers will always leave you with more questions than answers. How do you deal with this when running your program? Is this the type of battle that you wage, and if not, why? How do you know when you are having the right conversations?
Change the Conversation: Defining Success
One of the more common refrains I hear as I speak with different organizations or read industry blogs is: how do you deal with a failed test? People speak of this as if it is a common or accepted practice, something you need to help people understand before you move forward. The irony of these statements is that when most groups are speaking, they are measuring the value of the test by whether they got a “winner,” a recipe that beat their control. People almost always come into testing with the wrong idea of what a successful test is. Change what success means, and you will be able to change your entire testing program.
The success or failure of any test is determined before you launch it, not by the measurement of one recipe versus another. A successful test may have no recipe beat the control, and an unsuccessful test may have a clear single winner. Success is not lift, because lift without context is nice but almost meaningless.
Success is putting an idea through the right system, which enables you to find out the right answers and to get performance that you would not have otherwise. If all you do is test one idea versus another that you were already considering, you are not generating lift; you are only stopping negative actions. In addition, if I find something that beats control by 5%, that sounds great, until you add the context that if I had tested 3 other recipes, they would have resulted in 10%, 15%, and 25% changes. Do you reward the 5% gain, or the 20% opportunity loss?
In the long run, a great idea poorly executed will never beat a mediocre idea executed correctly.
You can measure how successful a test will be by asking some very simple questions before you consider running the test:
1) Are you prepared to act on the test? – Do you know what the metric you are using is? Do you have the ability to push results? Is everyone in agreement, before you start, that no matter what wins, you will go with it? Do you know what the rules of action are, when you can call a winner, and when it is too soon? If you answered no to any of those questions, then any test you run is going to be almost meaningless.
2) Are you challenging an assumption? – This means that you need to make sure that you can see not only if you are correct, but if you are wrong. It also means that you need to have more than one alternative in a test. Alternatives need to be different from each other and allow for an outcome outside of common opinion to take hold. Consider any test with a single alternative a failure, as there is no way to get a result with context.
3) Are you focusing on should over can? – This is when we get caught up on whether we can do a test, whether we can target a specific group, or making sure that we can track 40 metrics. It is incredibly easy to get lost in the execution of a campaign, but the reality is that most of the things we think are required aren’t, and if we cannot tie an action back to the goal of a test, then there is no reason to do it. These items should be weighed against your infrastructure and against value. Prioritize campaigns by how efficient they are to run, and never include more than you need to take the actions you need to take. Any conversation focused purely on the action is both inefficient and a red herring taking you away from what matters.
So how then do you make sure that you are getting success from a test? If nothing else, you need to build a framework for what defines a successful test, and then make sure that every action you take fits that framework. Getting people to agree to these rules can seem extremely difficult at first, but having the conversation outside of a specific test, and making it a requirement that the rules be followed, will help ensure that your program is moving down the right path to success.
Here is a really simple sample guideline to make sure all tests you run will be valuable (a sketch of how you might encode these checks follows the list). Each organization should build their own, but they will most likely be very similar:
- At least 4 recipes
- One success metric that is site-wide, the same as in other tests, and directly tied to revenue
- No more than 4 other metrics, all of which must be site-wide and used in multiple tests
- Everyone in agreement on how to act with results
- Everyone prepared to do a follow-up test based on the outcome
- At least 7 segments and no more than 20, with each segment at least 5-7% of your population and all must have a comparable segment
- If interested in targeting, the test must be open to the larger population and use segments to either confirm beliefs or prove yourself wrong. (e.g., if I want to target Facebook users, I should serve the same experiences to all users, and if I am right, then the content I have for Facebook users will be the highest performer for my Facebook segment.)
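As referenced above, here is one way such a guideline might be encoded as a pre-launch check. The `TestPlan` shape and thresholds simply mirror the sample rules and are illustrative, not a standard; adapt both to your own framework.

```typescript
// A sketch of the guideline above encoded as a pre-launch check.

interface TestPlan {
  recipes: string[];
  successMetric: string;   // site-wide, shared across tests, tied to revenue
  otherMetrics: string[];  // site-wide, reused in multiple tests
  actionAgreed: boolean;   // everyone agreed how to act on results
  followUpPlanned: boolean;
  segmentShares: number[]; // each segment's share of the population
}

function validate(plan: TestPlan): string[] {
  const problems: string[] = [];
  if (plan.recipes.length < 4) problems.push("need at least 4 recipes");
  if (plan.otherMetrics.length > 4) problems.push("no more than 4 other metrics");
  if (!plan.actionAgreed) problems.push("agree on how to act before launch");
  if (!plan.followUpPlanned) problems.push("plan a follow-up test");
  if (plan.segmentShares.length < 7 || plan.segmentShares.length > 20)
    problems.push("use between 7 and 20 segments");
  if (plan.segmentShares.some((s) => s < 0.05))
    problems.push("each segment should be at least 5% of the population");
  return problems; // an empty array means the test is cleared to launch
}
```

The value of a check like this is not the code itself but that it forces the agreement conversation to happen once, outside of any specific test.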
One of the most important things an optimization program can do is make sure that all tests follow a similar framework. Success in the long run follows from how you approach the problem, not from the outcome of a specific action. You will notice that at no point here is the focus on the creation of test ideas, which is where most people spend far too much time. Any test idea is only as good as the system by which you evaluate it. Tests should never be about my idea versus yours, but instead about the discovery and exploitation of comparative information, where we figure out which option is best, not whether my idea is better than yours.
What variant won, whose idea was it, and generating test ideas are some of the biggest red herrings in testing programs. You have to be able to move the conversation away from the inputs, and instead focus people on the creation of a valuable system by which you filter all of that noise. Do not let yourself get caught in a trap of being reactive, instead proactively reach out and help groups understand how vital it is that we follow this type of framework.
Change the conversation, change how you measure success, and others will follow. Keep having the same conversation or let others dictate how you are going to act, and you will never be able to prove success.