2012 Year in Review

Since everyone puts out their yearly recaps this time of year, I thought it would be interesting to look back at some of the larger bits of news and changes in my industry over the past 12 months:

1) Tag Management blows up… And then starts dying

At the end of last year and the start of this one, there was massive news about all sorts of new players pushing heavily into the tag management space. One of my personal favorites was Ensighten, but in general, they were all focused on trying to make it easier for companies to get their analytics code out across their sites. Considering how little value most companies actually get from their tools, this was probably a good thing, as it would at the very least stop those companies from wasting quite as many resources.

Unfortunately for all the bit players in this space, the two largest analytics providers, Google and Adobe, decided to release free tag management solutions. Making any tool a commodity in a saturated marketplace (especially one with questionable ROI) tends to be the death of any niche. It may be kicking dirt on the grave way too early to declare this entire niche dead, but it will be interesting to see where the next big push is as more and more companies become ripe for vultures to pick apart their perceived problems and provide “solutions”.

2) Growth of the competition

Not that these companies started operations in 2012, but they certainly started making waves and carving out their own unique niche. The biggest players to join the mainstream were Optimizely, Visual Website Optimizer, and Monetate. With GWO finally dying (god, was that an awful tool), these players have grown from carrion feeders into would-be real players in the mainstream. All of them offer some pretty cool features, from slick interfaces to easy deployment and full-service rapid testing. Test&Target, a tool that I know a thing or two about, has certainly suffered from a massive failure to really innovate, and its newest direction does absolutely nothing to resolve that issue.

Test&Target still completely blows away the competition when it comes to things that actually provide value, like flexibility, a visitor-based metric system, segmentation, and data usage. For the people out there who are going to run a meaningful testing program, there still isn’t real competition (though I wish there were). But for people who have no clue what they are doing, people who listen to Tim Ash or the Eisenbergs and whose immediate first reaction is not “these people have absolutely no clue what they are talking about”, these tools do make it cheaper and easier to waste resources.

That being said, the play the new players are making sure seems to target the lowest common denominator. From completely ignoring statistical relevance, to pushing people to just follow through on their own biases, to touting how easy it is to get tests up instead of how to run meaningful tests, these groups are doing far more harm to the marketplace than good. Unfortunately, the direction of all of these tools seems to be following the innovator’s dilemma, and instead of improving the market, all of them seem hell-bent on racing to the bottom. The only real hope is that they mature long enough for one or two of them to really become functional tools, and that this forces Adobe to push its own tool to be meaningful in the optimization space.

3) Big Data continues to be a buzzword without definition

The second half of the year brought a bunch of pushback against the use of the term big data, most of which was petty arguing about what the word really means. The start-up marketplace has been oversaturated with technologies and tools to present or combine data, pushing past Hadoop to many newer, similar technologies. The irony, of course, is that we are still operating from an A) collect data, B) ?????, C) profit business plan. Big Data seems to be just a word thrown around (like marketing) to make it sound like people have a clue what they are doing.

Having worked with so many different organizations, the one thing that stands out more than anything else is that there is very little knowledge about how to get value from data, but a thousand different ways to find data after the fact to validate someone’s agenda. The more data you collect and the more complicated the systems, the more this seems to be true. And to top it off, you have people who feast on this gap by providing flashy middleware and topware that make fancy dashboards which provide zero value but make some executive feel powerful.

I do not expect this pattern to change anytime soon.

Ok, so a few predictions for 2013:

1) Buzzword bingo will never go away –

I think we finally reached a critical mass where most people laugh at social (I hope), but that doesn’t mean that buzzwords will go away. Personalization will hopefully start getting more pushback by the middle of the year and will be replaced by newer buzzwords. It seems like native advertising is the current “gem”, but I expect that once people figure out it is just a new name for the same tired BS, they will move on to grander, more interesting words. My guess, based on the actions of Adobe and IBM, is that suite and digital marketing collaboration will come back in a big way. But no matter what the word is, the instant it becomes a big deal you will find all sorts of people popping out of the woodwork talking about how they have always been experts in the subject and how they will be happy to provide the one thing you have to do to be successful.

2) By the end of the year, at least one of the companies in the testing space will die/merge

Like in all industries, you see an explosion of wannabe technologies emerge to take on a clear leader; eventually most of that technology dies and becomes no longer relevant, while a few grow enough to actually be legitimate contenders for the title. This space, with all of its flash and zero substance, is ripe for this entire scenario; the only question is whether it happens by the end of 2013 or the middle of 2014.

If I had to guess which tool is most likely to join the ranks of GWO, Vertster, and Optimost, I am going to go with Monetate. Besides all the massive limitations of that tool (and the god-awful statistics), it seems to be caught in the middle between a much better but higher-priced competitor and a lower-priced end of the market that is just as good and easier to use (Optimizely, VWO).

If I had to guess which tool is most likely to mature meaningfully, I am going to go with Optimizely, just because of the flash. If they ever get someone who actually understands testing and is not just BSing their way through with moronic tales about the first Obama campaign, then they can mature enough to really become a major player. There are features of that tool that are ahead of the market (though the actual value of those pieces is questionable at best). That being said, they will most likely continue their carrion approach of “ease of use” and fast testing instead of meaningful or relevant testing.

I sincerely hope they do mature, however, as the industry seriously needs meaningful, thought-leading competition instead of what we currently have.

So there you have it. Happy 2013 to all.

Change the Conversation: Technology as the only “solution”?

As the world becomes more and more complicated, the battle between those on both sides of the functional knowledge gap becomes more vital. The need to constantly update your world view, the speed of change, and the need to move past no-longer-relevant concepts leave many people struggling to keep up and far more willing to listen to any offer to lighten their burden. When people do not know what they do not know, anything sounds like an oasis in the desert. To make up for this, there is a massive number of groups who promise “solutions” to this fundamental problem, offering technology as the sole means to make up for this fundamental inability to adapt to a constantly changing world. The problem comes not from the challenge itself, but from mistaking the suggested solution for the sole requirement of the desired end result. Technology is part of a solution, but without understanding that your people must change, no technology will ever provide its promised value and very little will ever be achieved. When we fail to understand that change starts with how you think, not with what you purchase, we lose the ability to gain value from any “solution” we could ever acquire.

The same cycle plays out time and time again. A senior executive defines a “problem”, such as a lack of clean data, the ease of deployment of technology, or the need to create a personalized experience. People proceed down a path of trying to be the one to find a “solution” while at the same time finding ways to pass blame for the current state onto another party, be it internal or external, since any new solution must replace the prior “solution” that did not, in fact, solve all the world’s problems. They reach out, research, and find a provider to give them a solution, picking the one that makes the largest promises about “ease” or “functionality”, or the one they have a prior relationship with. From there, it is a process of discovery, promises, and then acquisition. The tool then gets shared with all the other groups, and the individuals who now find themselves with the task of getting it installed must also make sure that their boss does not upset others in the company by instituting a change in the status quo. Each group provides “needs” in a one-way direction that become part of a massive deployment project road-map. Groups continue to get buy-in and then scrape together resources to deploy, each time acquiescing a little bit to each group they work with. Eventually the solution goes live, activities and tasks are enacted, and everyone moves forward. The same problems arise a year or two down the line, agendas get forgotten, and large presentations are held to try and find a positive outcome for all that was invested. Very little has changed, very little has really improved for the organization; a new piece of technology has simply been invested in to replace the old technology that went out the door.

I am in no way saying that technology is a bad thing; I work for the top marketing suite in the world, and I wouldn’t if I did not feel that the tools themselves were best in class. Technology is simply a magnifying lens, increasing the impact of what your organization does great, but also of where it fails. The reality, though, is that few companies get anywhere close to the value they should from these tools, and often that lack of value is accompanied by magnitudes of increased effort. If groups would start with a real, honest change in how they understand the world around them based on each tool, they would find that they are wasting almost all their efforts in the vain attempt to justify their prior actions. Each tool is an opportunity to change and improve the efficiency of your organization, yet in almost all cases this vital task is merely talked about or ignored, and never enacted in a meaningful way. If you do not start your time with a tool with a discussion around what disciplines define success and failure for that specific tool, then no tool will ever do more than be window dressing on bad organizational dynamics.

One of the first things I try to teach new analysts and consultants is that there is no such thing as a technical solution. All problems are strategic; they may have a technical solution, but they are truly strategic in nature. It is far easier to find a massively technical workaround to do the one thing that senior VP X is asking for than it is to take the time to discover whether that effort is needed or will provide any actual ROI. The unfortunate truth is that for the vast majority of the “problems” being identified, a successful or unsuccessful answer to the stated problem would not change the fact that the organization is not going to receive value. Slick interfaces don’t make up for poor strategy, and integrations between platforms do not make up for not understanding your data. The truth is that in almost all cases the real problems are the ones we are turning a blind eye to; they are the elephants sitting in the room that we refuse to talk about, so instead we make excuses and sacrifice outcomes in the name of taking credit for change.

This is the nature of confusing the “solution” for the desired outcome. Solutions are a means to an end, not the end itself. Never confuse the need to add functionality with the goal of that functionality. You are not just adding testing to someone’s day-to-day job; you are asking your entire organization to discover what the value of its own actions is. You do not just find a tag solution for the fun of it; you do it to save resources so that you can then spend them on valuable actions. You do not just start a personalization project out of the goodness of your heart; you do it because you believe it will increase revenue. Once you keep the conversation about the end result, you can have a functional conversation about the efficiencies of various paths to arrive at that point. Do you really need 90+ targets active, or would 15 give you a higher lift and much lower costs?

The cycle that technology gets brought into is the problem, as are the egos of those who own the purchase. Like in most real-world situations, it is far easier to make promises than to fix real problems or to deal with other groups and how they think about and tackle their own problems. Analytics, testing, and marketing are not things that are just done, even if your job is oftentimes just a series of repeated activities. These actions are done to improve performance, which means that the change has to happen in which actions you spend your resources on, not just in changing technology providers. If more time is not spent on reforming the environment around the technology, then all of that time will end up wasted. Never get caught up in the cycle and the “can” questions without keeping a constant, vigilant eye on the “should” questions behind all actions.

No matter whether an idea is good or bad, it is always going to be easier to just do what your boss asks, and even easier to find a way to convince yourself that it is somehow valuable. We convince ourselves, as others convince us, that we are doing the right thing. We do not want to take the time to think about our opportunities to do things in a different way. Sadly, most actions commonly done in our world are not valuable or efficient, and in all cases they can and should be improved. You must first get over your own fear of doing the right thing before you can ever try to get those above you to do the same. The battles worth fighting when you bring in a piece of technology are not about how many developers you can get to deploy a solution, or how you can get an agency to run it for you, but about how you find ways to fundamentally change current practices to learn and grow with the technology.

There is no shortage of people who are willing to promise that you don’t need to really look inward to get value, and in some cases they are able to provide momentary glimpses of it. Great tools offer you the chance to succeed, but they do not guarantee that outcome. No tool will ever be able to make up for the incorrect application of its features, just as no organization will truly change unless change is more important than task. In the end, every success I have ever seen or had with an organization has come from fundamentally challenging and changing existing practices and from creating simpler ways to get more value. Change is hard, and most cannot achieve it in a meaningful way, but all value comes from change, not from creating complex ways to accomplish a task. Complicated will never mean valuable; complicated will always simply mean complicated. Never forget that a solution is a promise as a means to an end, and that the real ability to achieve that end, or more, comes from action, not from just a tag or a solution being deployed.

Everyone Loves a Model

As the online world gets deeper and deeper into mathematical disciplines, new people are constantly being made aware of all the amazing mathematical tools that are available. Oftentimes marketers are talking about or leveraging these tools without really understanding the math part, as they get caught up in the more common names: things like media mix modeling, confidence, or revenue attribution modeling. The problem arises when those tools and their power are focused on, but not the disciplines and functions that make them viable as tools. Just having access to some way of looking at data does not inherently make it valuable, yet too many analysts end the conversation at that point. Every tool is only as good as the way you use it. So how, then, do you enable people to get value from these tools instead of just the empty promises left from not understanding their real nature?

Before anyone starts focusing on mathematical models, the first thing and the last thing that must be understood is by far my favorite quote about math, George Box’s “All models are wrong. Some are useful.” All models are built on assumptions, and those assumptions determine the validity of any output of the system. Not only do the assumptions have to be understood at the start, but as the environment you are modeling evolves, they must also continue to hold, which can be extremely problematic given the constantly changing nature of the digital world. Because of this, a constant and vigilant awareness of not only the initial assumptions, but also the longer-term continued fit of those assumptions, is vital for getting a positive outcome over the long term.

In the world of testing, the most common models used are p-value based models. T-tests, z-tests, and chi-squared tests are all basically different versions of the same concept. The most important thing people miss is that these models require a number of key things before they can ever be useful. The first of these monumental requirements is representative data. It doesn’t matter if every other part of the model is correct if the data has no reflection of the real basis of your business. Getting confidence quickly means nothing if that confidence does not reflect your larger business. This is why you will find people who do not understand this problem shocked when they act too quickly and then find that the real impact is different than what they measured.
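To make that concrete, here is a minimal sketch in Python of the two-proportion z-test that sits behind many tools’ “confidence” numbers (the visitor and conversion counts below are made up for illustration):

    import math

    def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
        # Conversion rates for each experience
        rate_a = conversions_a / visitors_a
        rate_b = conversions_b / visitors_b
        # Pooled rate under the null hypothesis that A and B are the same
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (rate_b - rate_a) / se
        # Two-sided p-value from the standard normal tail area
        p_value = math.erfc(abs(z) / math.sqrt(2))
        return z, p_value

    # Made-up counts: 400 of 10,000 visitors convert vs 460 of 10,000
    z, p = two_proportion_z_test(400, 10000, 460, 10000)
    print("z = %.2f, p = %.3f" % (z, p))

Nothing in that math knows whether those 20,000 visitors were representative of your actual business. A “significant” result on biased traffic is just a precise measurement of the wrong population.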

The other large assumption is that the data distribution will approach a normal, or Gaussian, distribution (a bell curve). Data may, over a long enough period, approach that distribution, but the reality is that the biases, variance, and constraints of the everyday world make that distribution questionable at best. Because of the nature of online data collection, be it biased visitor entry, limited catalogs, or constrained numeric outcomes, these assumptions may never really hold. This does not mean that this, or any, model is completely worthless, but it does mean that you cannot blindly follow these tools, even as a deciding factor between hypotheses.
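A quick simulation makes the point (the parameters here are entirely made up): per-visitor revenue online is mostly zeros with a skewed, capped tail, nothing like a bell curve, and only the means of reasonably large samples begin to approach one.

    import random
    import statistics

    random.seed(42)

    def revenue_per_visitor():
        # Hypothetical: ~97% of visitors buy nothing, and order values
        # are skewed and capped by a limited catalog.
        if random.random() < 0.97:
            return 0.0
        return min(random.expovariate(1 / 80), 500.0)

    visitors = [revenue_per_visitor() for _ in range(100000)]
    print("mean=%.2f median=%.2f stdev=%.2f" % (
        statistics.mean(visitors),
        statistics.median(visitors),
        statistics.stdev(visitors)))

    # Means of repeated large samples drift toward normal (the central
    # limit theorem), but only with enough data and a stable world.
    sample_means = [statistics.mean(random.sample(visitors, 500))
                    for _ in range(1000)]
    print("stdev of sample means=%.2f" % statistics.stdev(sample_means))

The raw distribution violates the normality assumption badly; the sample means behave better, but only if the samples are large enough and the underlying behavior stays put, which online it rarely does.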

But models are not restricted to the testing world. In the analytics community, everything from attribution models to media mix modeling systems is becoming all the rage. The sophistication of these models ranges from basic one-time models to large-scale, complex machine learning systems, but all of them have limitations that require you to keep a close eye on the context of their use. Even in the most advanced and relevant uses, it is important to note that the assumptions and the model you used need to be updated and changed over time. The nature of online data collection means there are so many variables impacting the bias and distribution of your users that any model applied as a one-time fix will almost immediately lose value. Predictive models can have an amazing impact on your business, but they can also lead you astray if you do not keep a watchful vigilance on their relevance as the world they represent changes. The only true way to ensure value over any period of time is to update your models and incorporate learned behavior into them and their usage.
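To show how much the model itself is an assumption, here is a minimal sketch comparing last-touch and linear attribution over the same hypothetical journeys (illustrative only, not any vendor’s implementation):

    from collections import defaultdict

    # Hypothetical converting journeys: (ordered channel touches, order revenue)
    journeys = [
        (["email", "search", "display"], 100.0),
        (["search", "search"], 60.0),
        (["display", "email"], 40.0),
    ]

    def attribute(journeys, model):
        credit = defaultdict(float)
        for touches, revenue in journeys:
            if model == "last_touch":
                credit[touches[-1]] += revenue        # last touch takes it all
            elif model == "linear":
                for touch in touches:                 # credit split evenly
                    credit[touch] += revenue / len(touches)
        return dict(credit)

    print(attribute(journeys, "last_touch"))
    print(attribute(journeys, "linear"))

Same data and same total revenue, yet completely different “winners”. Neither answer is wrong within its own assumptions, which is exactly why those assumptions have to be revisited as behavior shifts.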

The other limitation to keep in mind is that you start any model with only correlative information, often coming from one or more analytics solutions. This gives you a great start, but like all other uses of this information, it lacks vital information that affects its value. The most important part of using any model such as these is to understand that you must constantly update it, especially as you start collecting causal impact data that tells you about your ability to influence behavior. A one-time model may sound great, and may give you a short-term boost, but in the long term it becomes almost meaningless unless you keep the model relevant and the data focused on the efficiency of outcomes. The world is not static, nor should your use of approximations of that world be static.

This means that as you start to leverage any model, you must make sure that you have members on your team who understand the nature of the data and how best to leverage it. You may not need a full-time statistician, but you should be spending resources on improving the skills of your current team so they understand both the nature of the tools and their relevance to your business. A full-time statistician may actually be a detriment to your group, as you need to make sure that you are not solely focused on classroom statistics, but instead on the real and often complex world of your particular environment. Everything you do should be focused on the pragmatic maximization of value to your organization.

I cannot suggest enough that you think about and explore ways to leverage models in your practices, and that you start to leverage their power to stop opinion-based decision making. That being said, if you are to get value from these tools, you must understand both sides of the coin and make sure that you keep your use of the models as relevant and powerful as their original intent. Never stop growing your understanding of your site, your users, and the efficiency of change, but also keep that focus not only on your organization, but on each tool you leverage to achieve your goals.

One Problem but Many Voices – The Many Ways to Explain Rate & Value

One of my great struggles in the entire data world is getting people to understand the difference between rate and value. This problem has a thousand different faces, yet it can be extremely difficult to find the right way to correct the misconceptions of any particular case. People are constantly trying to abuse data to show that they provided value or that something is directly tied to an outcome, despite the fact that the data itself cannot in any way tell you this.

I was recently faced with trying to explain this to a person new to the data discipline and found that, once again, my answer was much longer and more complicated than I would hope. It seems like such a simple concept, but the truth is that everyone has their own way of understanding and tackling this problem. With that in mind, I reached out to some of the smartest people I know to see how they tackle the issue. The specific problem I asked about was explaining the difference between, and the contradictory nature of, revenue attribution and revenue generation.

Not everyone agrees on the issue or how to express it, and that is why it is so difficult for some, especially those who don’t deal with it on a daily basis. It takes many great voices to find the tools that enable anyone to really tackle large, complex issues correctly.

Below are a few of the answers that I was able to gather:

Brent Dykes – Author of Web Analytics Action Hero and general analytics guru –

A rate is simply a calculated metric. We use rates to measure all kinds of things such as the bounce rate of a landing page or conversion rate of a checkout process. In order to get value from rates we need both context and comparisons. On its own, a rate doesn’t tell us anything useful.

For example, if my site’s conversion rate is 10%, you’d think that would be great. In the back of your mind, you may remember reading somewhere that the average conversion rate for most sites is between 2-3%, so 10% sounds fantastic. However, when we start to add context and perform comparisons, this number may end up sounding less appealing. What if my site’s conversion rate last year was 15% compared to today’s 10%? What if similar country sites in my organization have 20% conversion rates? What if my closest industry peers recently shared in a media article that they have average conversion rates of 30%? Now the 10% conversion rate doesn’t sound as good.

A rate simply provides us with a number, and what we do with the measure is what adds value. When we analyze what’s happening with the conversion rate, we can determine how to create more value or stop value leakages. Through testing we can confirm what we found in our analysis (correlation vs. causation) before making wholesale changes. It’s important to use the right rates or metrics, but the numbers without any context or comparisons are meaningless. Value only comes from understanding the rates and making changes to improve them over time.

Russell Lewis – Optimization Consultant

Here is one that spawns from my latest fantasy football win.

You have two QBs to play. One has a higher completion rate than the other. This rate indicates that he should have a high predicted score come game time. When you decide to play him, he falls flat on his face. The rate did not give you the value of his performance; it just showed what he has done in the past in terms of completed passes versus attempted passes. The value of what he actually did is seen when put in comparison to the QB on the bench who had the 10 additional points needed to win the game for the week. Without the comparison to the other QB and the current matchup, we would have no value.

Anonymous –

To me, revenue allocation has always been a method for ranking performance in much the same way page views or visits are. It gives you something to sort by, and that’s about it. Not to mention that depending on the type of allocation you are using you may be inflating your total revenue anyway, so it inherently is not a reliable method of determining revenue generating sources.

When trying to determine revenue-generating sources, I have always relied on a less granular outlook. Rather than saying “this email message generated $X,” I step back and say “email campaigns drove $X, while SEO drove $X”. If you get much more granular than that, you begin speculating too much about human nature, which is anything but reliable.

To me that is when you get into the psychology of it, and it gets too nit-picky. I think broadly if you are trying to determine whether to put ad dollars in email or SEO it can help…but when you start saying “well, if we put out an email with this call to action, it will generate $X in return” you have a problem.

To me it is a gross misuse of the scientific method…you almost need to look at the control group and see what they are doing before you can determine anything. No one looks at the visitors not associated with a campaign…maybe people on the site just buy stuff on their own.

Jared Lees – Business Consulting Manager

Here is my short answer:

• Revenue allocation – similar to attribution or correlation. Assigning credit to an activity. The amount of credit depends on the business rules or attribution model you want to use.

• Revenue generation – total revenue acquired from a singular action. There could be other actions that influenced it, but we aren’t counting that here.

Rhett Norton – Senior Retail Consultant & Team Lead

I think the person was saying something like this: “I looked at a specific channel and it said it generated $4.50 worth of revenue.” And then you would say, “Who cares what the rate is? We need to find what impacts true value and actually changes revenue, since that number is just a rate.”

I think the best thing that helps explain these types of situations is explaining causation and correlation.

This made me think of the jobs report on the economy. The unemployment rate is 8%, and there is not really anything we can do with that; it is just a number/rate. Lots of people like to look at different sectors and pretend we know what is going on when they can say that growth increased in the technology sector; this is similar to the page participation example above. Again, there isn’t anything you can do with that, it is just a rate. The real question is how we move the needle: how do we create jobs, and what actions make jobs decline?

Derek Tangren – Principal Analytics Consultant –

I’d describe the two as follows:

• Revenue allocation is a method/means to assign success based on certain behavior
• Revenue generation references an action that you are taking in order to invoke a positive change in driving revenue

I would define revenue generation as the action you take, and revenue allocation as the means by which you measure its success.

There were many more answers, as you would expect. Some said it didn’t matter because the point is to give executives evidence to continue their agenda, others simplified the situation to just correlation versus causation, and even more didn’t acknowledge the problem at all. Most acknowledged that the problem is a major one, but were unable to come up with a simple, direct way to convey the message.
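For what it is worth, the simplest illustration I have found for the contradiction is this sketch (made-up numbers): a participation-style allocation hands full credit to every touch, so the allocated totals can exceed the revenue that actually exists, and no allocation rule, however it sums, is a causal statement about generation.

    # One hypothetical $100 order that was touched by three channels.
    order_revenue = 100.0
    touches = ["email", "search", "display"]

    # Participation-style allocation: every touch gets full credit,
    # so the channel totals sum to 3x the actual order.
    participation = {channel: order_revenue for channel in touches}
    print(sum(participation.values()))   # 300.0 "allocated" from a $100 order

    # Linear allocation at least sums back to the real total...
    linear = {channel: order_revenue / len(touches) for channel in touches}
    print(sum(linear.values()))          # 100.0

    # ...but neither is revenue *generation*. Generation is a causal
    # question (how much revenue disappears if a channel goes dark?),
    # and only a controlled holdout can answer it.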

Like so much in the online data world, there is no simple answer. Even more, there are as many different agendas and points of view as there are ways to answer the question. Simple answers will always leave you with more questions than answers. How do you deal with this when running your program? Is this the type of battle that you wage, and if not, why? How do you know when you are having the right conversations?