Category: Organization
Optimizing the Organization: Where Does Testing Fit into your Org Structure?
Where then does testing fit into your organization? Is it just something people do? Is it a central component to be shared, or is it something that each group does on its own? All groups face this struggle when they discover that you can’t just make testing an additional duty for someone and get good value. Once you have decided that you want to build a real optimization team within your organization, you are challenged with those very questions. What I want to propose are a couple of frameworks that I have seen work to great success, and what fundamentally makes them successful. It may not be possible to move mountains quickly and get to these structures immediately, but it is important to understand why they work and to think of ways to move in those directions.
One of the core challenges is that testing really isn’t a full-on marketing discipline, nor an analytics discipline, an IT discipline, or really anything else that fits into most groups’ normal organizational structure. To most groups starting out, testing is thought of as a feature, done in order to prove that their current efforts are better than their prior efforts. Testing is instead a unique discipline that takes parts of all of those, but it is also at its best when it is showing the inefficiencies in your internal processes and mindset. For any group to succeed, you need to have people, alignment, and the correct mindset; otherwise it all goes to waste. When testing is not allowed to be a new discipline, it suffers exponentially from the inefficiencies of the discipline it is placed under.
The first thing to understand is that for larger organizations, testing works best in a hub-and-spoke model, meaning that you have a central team that then works with members of the various business units to improve their actions. While this might be the best model, it only works if you have established clear rules and the correct mindset in those other groups as well as your own.
Education will always be the primary role of the testing team.
Each of the frameworks below shows the central team, which would then work with an individual or team in each business unit that follows a similar format. Central alignment allows you to separate the resources and to ensure that you are not just adding testing on top of existing duties. This format gives you the benefit of creating a central knowledge base while leveraging local knowledge, resources, and structure. The accumulation and sharing of knowledge, and the design of your efforts to accomplish this task, is the primary goal of your larger structure. For this to work, testing cannot be dictated only by the business unit. It must instead be a collaborative effort where both sides work together to create a continuous culture of optimization, one that is focused on being “wrong” over being “right”. There can never be a time where someone simply coming up with an “idea” is allowed to be the end-all of what is tested. No idea by itself is sacred, and no one, from the CEO down to the janitor, should be allowed to just throw something up because they think “it will work”.
Framework #1:

In this first model, you see that we have a manager of testing who works through, or acts as a project manager for, external resources (IT and creative). This person may have analysts working under them, but fundamentally they work with other groups in a dotted-line fashion, creating a cross-functional team that tackles testing. While you may not have dedicated full-time people on the team, in this structure the same people work together regularly to advance the organization’s optimization efforts. You will also notice that while they may work under analytics, they are not analytics; a separate team handles those responsibilities. The disciplines are dramatically different, and there is a lot of value in bringing different data types together, but if you have your standard analytics team also doing testing, you will never achieve anything close to the value that you can and should receive.
The limitation of this type of structure is a heavier need for sponsorship to allow the freedom and align the teams on central goals. You will have personal or team goals for the various business units that may run opposite to the central or optimization team goals. It is extremely easy, when resources are not “owned”, for them to go to projects based on political or popular reasons and not the value they may bring to the organization. It is easy to talk about working together and having access to resources, but that tends to last only as long as there is not a fire that someone feels needs to be put out. Other limitations are the constant pull to do other types of work, especially report pulling, as well as the need to make clear what people are measured on despite being on very different teams. Despite those limitations, a lot of good can come if you have clear and strong leadership, accountability, and the right people in place. This is one of the most common structures for mature groups, and it is one that will allow some level of success and expansion throughout the organization.
Framework #2:

In this second example, you see that you have a full optimization team, one without dotted lines and one that is independently part of the entire data team. Here you have technical and creative resources who are only tangentially part of the larger marketing and IT teams. These resources are not directly part of those teams, but the same group continuously works together to grow and expand the organization’s testing efforts. The benefits of this structure are the ability to really develop the skills of the members, the central role within the larger organization, and the ability to have a separate charter and to really focus on improving the site as a whole, not just the smaller components within it.
This is the preferred structure that I have seen for larger organizations. It allows for consistent resources, the ability to do the right thing for the site, not just the business unit, and a role independent of analytics and marketing. It doesn’t create confusion for the team by having them do things that help only one group or person, and it allows for a clear line between testing and any other group. The goal is to work together to improve, not for teams to fight over who gets credit. The limitations are the need to constantly work with and educate the various business units, and the complications of owning the impact of the team’s actions.
In order for either framework to function properly, the executive sponsor must take an active role in keeping people aligned and accountable to site goals, not just personal goals. You will never be able to function if you can’t first deal with petty infighting and a lack of accountability towards a common central goal. If you do not have that, then don’t wait for it to happen; instead seek out sponsorship to make it happen. You don’t need or even want higher-level executives dealing with the day-to-day operations, but you need an umbrella by which to separate the disciplines, align the resources, and make sure that the system itself is being used to its highest function. As much as we may want to pretend we live in a world where everyone works together towards a common goal, it is rare that this is the case. The executive sponsor’s primary responsibility is to raise the level of discourse above petty individual goals and to hold people accountable for actions that make everyone better.
These are but two of the more common examples I have seen for successful programs. In all cases, the real success is far more about developing skills, challenging ideas, getting buy-in and accountability, and more than anything treating optimization as an independent function that never starts and never ends. The team is meant to be the best friend of all business units, not their worst enemy. It is meant to show them the efficiency of their efforts and to make sure that what they think matters really does. If you are going to leverage testing just to push the same ideas, to prove value for other tools, or to make someone look good, then you have no chance of building a world-class organization. This means that you have to fight the battles to get past the initial resistance and educate people as to why their ideas won’t work on their own, and why you must challenge all ideas in order to understand their value, not just directly but relative to other courses of action.
There are no magic bullets to make yours a great optimization organization. No matter what structure and path you choose, it takes time and a lot of hard work to get people to understand how to make it work. The hardest task is always going to be getting past misconceptions and petty internal battles over control and who gets to claim success. Once you have gotten past those points, aligning in a way that ensures success is the next step on the path to a great program. Programs fail when they stop fighting necessary fights and just go with the flow. If you want to make your organization the best it can be, however, you can never stop improving and you can never stop trying to get people to align in the best ways possible.
Optimizing the Organization: The Mental Evolution of Programs
Building a true optimization program in your company can be a daunting experience. No matter how much you might want to make things work perfectly, the newness of the concepts, the politics around who is “right”, and a hundred other factors conspire against you. Most people speak about wanting good results, but are often unwilling or unable to change their own behaviors, let alone others’, in order to get those results. Even worse, there are very few people who have actually built a world-class program, and they are drowned out in a sea of “experts” who each claim to have the one thing you need to do to succeed. With all that noise, where you are mentally about your program speaks volumes about the value you are getting and what the next steps are to really become world class.
No program is perfect day one, and almost all of them have to go through some very difficult growing pains before they are even functional. It is true that every program follows a similar pattern of evolution, but all programs risk eventually stopping their evolution due to a lack of will or understanding. The challenge starts with the mental evolution of the program, since the functional parts are mere reflections of where people view testing. It is important that you understand where you are, and where you need to get to in order to succeed.
The challenge is that all programs reach a stopping point, either through mental exhaustion, political pushback, personal ego, or a hundred other reasons. The key to becoming a top program is to get past that point and continue down the path, even when it seems daunting or does not seem to help you advance politically.
The mental evolution of programs:

Random Testing –
All groups start here, thinking of testing as a one-off action you take to figure out which piece of creative to show, or which landing page is best. No program can achieve the efficiency it needs while stuck at this phase, yet most conversations around testing, and a great many programs, never get past this point. One of the key reasons for this is the comfort and the easy-to-grasp nature of this stage. This is what most people think of when they think testing, and that is a shame, since so many will never see the power it can truly bring to their organization.
The key signs of this stage are: “better” testing, each test being an individual project, needing approval for each test concept, and having no rules of action. Fundamentally you are focused on finding out who or what is “right”. Testing is a one-off project that you do when you need to make a decision. Put succinctly, if you are talking about what you want to test, instead of letting results tell you what to test, then you know you have not moved past this phase.
One of the other major signs of this stage is a lack of alignment on site goals. If you have not gotten alignment on a single success metric for all tests, then I can guarantee you are at this point. If you are stuck thinking about specific metrics for a test, or think that you will decide how to act and what is important when you get the results, then you are firmly stuck in this phase. If you try to think or act on the data from tests the same way you do the data from analytics, you can never move past this point.
It is possible to get value at this stage, and the sad reality is that most groups never leave it, but ultimately you will never have a real optimization program, and you will be getting pennies on the dollar in return, if this is where you keep the conversation. Groups that are stuck in this way of thinking often believe that more tests mean more value, which is only true if you are leaving the outcome of each test to random chance. Groups that want to be efficient and to get real value from their program need to apply those resources not towards running hundreds of tests, but towards shifting the mindset of the program to ensure higher returns and more long-term value from each and every action.
Long Term Site Integration –
The next evolution is to start tying testing into larger projects. Working on a redesign? You start testing smaller portions along the way. Focused on personalization? Then you start testing different pieces of content, or you start testing for different segments. Testing has shifted from being a random action of choosing between two choices to one that can shift and change the entire direction or path of a project. To reach this stage, you must be willing to shift some part of the project away from what you want to do, and instead choose to do the things that the data tells you to do.
The benefits of this stage are the start of organizational building blocks that are fundamental to a successful program. You will have had to agree on what success is, you are starting to look at testing as an efficiency tool, and you will have built out some processes to make testing more efficient. Most likely you have some more dedicated resources and have testing as an ongoing effort, not just a one-off novelty. You are no longer testing only what you want, and are letting results from the tests determine the path of a project or initiative.
The limitation is that you are just doing more of the same. You have not really built out a full program and you are still focused on “better” ideas. You are just creating more structure for the randomness of the previous phase, and while that is starting to shape a direction, you have not bought into testing as a means to discover the question instead of just a way to find an answer. All groups have to get through this stage, and the ones stuck here are going to get more value than from the random testing of the earlier phase, but it is important that you focus on moving to the next stage, which is the largest divergence on the list.
Disciplined Base Testing –
This is the real litmus test of programs and the largest point of gain and divergence. Very few groups make this leap, but the ones that do see testing as a very different and more valuable component of their entire organization. The keys to this phase are a movement away from “better” testing and a change to open-ended searches for the most efficient ways to apply resources. You are no longer looking to test what you want, but instead using testing to understand the value of different alternatives and letting the results dictate the path of your tests and your initiatives. Tests no longer presuppose an outcome; instead, resources are constantly focused towards the most efficient answer from prior actions. Test ideas and appeasing CXOs become secondary to the discovery process and to opening up testing to dictate its own path.
A sign that you have reached this stage is that you no longer look at just a test result, but instead focus on the value of outcomes relative to each other. You measure outcomes by the value of relative actions and not just the fact that you went from one point to another. If you are not proving yourself and others wrong constantly, and if you are not humbled by how little you really know, then you have not reached this point.
The benefits are a constantly growing understanding of your site and users, and a move to ensure that all efforts are focused on the most influential sections and in the most efficient manner. You are no longer worrying about a “roadmap” or about what won, but about the process of figuring it out and constantly acting. You are starting to build trust that the system is only as good as its input, and to worry not about who is “right” but about providing quality, differentiated alternatives. You are learning with every action, and you are constantly stopping current paths your organization is on, as the causal data discovers the value of new paths that you never thought of or would not normally have pursued.
The limitation of this stage is that you are going to upset a lot of people. You are going to be constantly proving that what people have held dear, and believed to be the core of their value to the organization, is actually negative. You are going to show that myths passed down for years from schools, experts, and the very things that CXOs hold dear are wrong. If you have not built a culture that focuses on being wrong, where the goal is actual success and not propagating someone’s agenda, you will be dealing with constant headaches and internal strife. It takes a special type of person to stand up to pressure and to do what is right, even when it is not in their best interest. If you are not willing to pursue results over “glory”, then you will most likely never reach this phase in your program’s growth.
If you have dealt with those issues, even if with just one or two groups, you will see dramatic improvements to the efficiency and return on your efforts.
Constant Iterative Testing –
There are very few organizations that have reached this point. At this phase, all parts of the site are open and constantly evolving, using disciplined testing methods in order to grow and get more efficient. There is no longer a view of a project at all, but a constant use of multiple resources to create a shifting and evolving user experience. All jobs in the organization have optimization as part of their duties, and no longer are you having debates about what you should test and what you think is better. Everyone is aligned on a common goal of growth and of proving each other wrong, not right. It is not about the test proving anything right, but about the quality of the input that feeds the system, which is starting to dictate just about every part of your user experience and your internal resources and initiatives.
Getting a program to build out the mindset that allows for this phase usually means that you have dealt with all of the negatives that might arise, and have made people aware of the benefit of being “wrong”. There is nothing that will show the inefficiencies of your organization faster than trying to constantly do things that might “hurt” someone. If you are not willing or able to deal with the previous phases, you will most likely not come close to this level in your program.
Optimization Organization –
I refer to this as the mythical unicorn, as there are no organizations in the world that have reached this “nirvana”. At this point, all concepts feed the system and the system dictates the outcome. This is about letting go of worrying about who is right, and instead understanding that while you still have to feed the system, the final decision has to be left to a disciplined (non-biased and predetermined) use of data, especially causal data.
No group has reached this point because all organizations are run by people, all of whom have their own agendas and all of whom need to prove themselves “superior”. I don’t expect there will ever be a point where people are capable of keeping their egos and politics in check, but working towards this point is the only way to really make a true difference and not just use data to push an agenda or to promote yourself.
The evolution of programs is a tricky thing, and not quite as black and white as this path might make it seem. What is important is that you understand that you have to shift how you think, and be willing to change all the pieces that follow, in order to be successful. If you are still thinking about testing as a means of choosing between two items, or if you haven’t built a culture where being wrong is more important than being right, then all the resources in the world won’t make you successful. Success is not a random thing, and it is not dictated by how many resources you have, but by your willingness to prove yourself and others wrong. Building out the right mindset determines the long-term value you receive, yet very few take the time to really understand this or to educate others. There is plenty of material out there happy to make you feel good about whatever stage you find yourself at, but if you really want to get value and build a successful program, nothing will top constant hard work and the willingness to challenge the norm and to do things against “best practices”.
Change the Conversation: What does “Efficiency” really mean?
One of the great mysteries of the analytics space is the use of words that have almost no real meaning. Words like optimization, analytics, marketing, social, value, personalization, predictive, and segment have different meanings to different people. They become useful jargon to direct a conversation, but when it comes down to giving them a real meaning, so many groups struggle because it is a very personal definition. When we do find a meaning for those words, it is usually an old tired one that has lost all relevance in the modern world. To me, the most commonly abused term is efficiency.
What does efficiency mean? Is it just an outcome? Is it something that you can actually measure? If it is as simple as ROI, why then do we fail to really measure against it? I want to present a simple way to think about efficiency and your actions towards improving it, and then give you real-world ways to use that framing to measure your actions and to improve the “efficiency” of your organization.
Here is the way I suggest measuring efficiency:

This gives you a value, which you can then measure against others. The difference between values shows you what is efficient and what is not. It is strongly related to ROI, but it separates out the components and allows you to look at any action, not just revenue. We can choose to interact with one or all three parts, and we can measure our ability to do so on the same scale.
It is important to understand the three components, to make sure that everyone is on the same page.
Scale – The size of the population that is impacted.
Impact – This is the measure of recordable lift or gain: your ability to influence. It must be measured against a site-wide goal, not just a dependent goal such as the next page or clicks.
Cost – This is how much in time, energy, money or other resources it takes to acquire and maintain the impact listed above.
To do this however, you must always keep all three things in mind, not just one.
Scale reminds us that a high increase for a small group is often less valuable than a small increase for a large group. We can try to increase the scale of something, but without knowing the impact or the cost to achieve it, we have nothing.
Impact reminds you that you can’t only look at lift. If you hear that you got a 12% lift, you are still missing two really important pieces of information. Whether that 12% applies to 100 people or 100,000 people dramatically changes the outcome.
The cost to achieve those two pieces tells us whether we actually did something valuable. If it takes you 2 hours and $20 to achieve an outcome, that is very different from something that takes 6 months, 500 man-hours, $1.2 million in new products, and a long-term maintenance cost.
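As a minimal sketch of how these three components combine into one comparable value: the function name and every figure below are hypothetical, and I am reading the ratio as (scale × impact) / cost, per the numerator and denominator described in this section.

```python
# Hypothetical illustration of the efficiency ratio: (scale * impact) / cost.
# All names and numbers are made up for the sake of the example.

def efficiency(scale, impact, cost):
    """Value of an action: people reached, times lift achieved, per unit of cost."""
    return (scale * impact) / cost

# The same 12% lift at very different scales and costs:
quick_fix = efficiency(scale=100_000, impact=0.12, cost=20)       # 2 hours and $20
big_project = efficiency(scale=100, impact=0.12, cost=1_200_000)  # months of work

print(quick_fix > big_project)  # True: the cheap, wide-reaching change wins
```

The point is not the specific numbers but the comparison: a single value per action lets you rank actions against each other instead of debating lift in isolation.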
In order to force a conversation around maximizing return, you must first change the conversation so that you are no longer discussing only one of these metrics at a time. Do not accept a conversation that only tells you lift, or that only tells you a population, without knowing the ability to impact that group and the cost to achieve that change. Do not just blindly accept that you are likely to change a metric; understand that you have to know the cost and scale of doing so. Do not just hear that a group has a different behavior; understand that you need to know the scale of impact and the cost of changing that group in order to understand the efficiency of that action.
So this may seem like a very simple definition for a complex issue, but it gives you the ability to truly view the world differently. To quote Jim Horning, “Nothing is as simple as we hope it will be.” We like to pretend we think these things all the time and that they are obvious in every conversation, yet time and time again we drop the entire context in the name of pushing an agenda. There are hundreds of conversations every day that talk about metrics that have nothing to do with improving performance (e.g. bounce rate) or that only talk about a single portion of performance (lift). Stop those conversations, and remind people that reducing costs or increasing scale is just as effective as improving your impact. Do not assume that everyone is putting everything in the right context, because they aren’t.
So what is efficiency? It is simply the act of making sure that you are improving this ratio, and remembering that you cannot look at only one aspect to answer a question. We cannot fail to measure actions against each other; these are not just isolated events. It is acting in a way that keeps both the numerator and the denominator present in your discussions and actions, and doing everything in your power to reduce low-value actions and increase high-value actions. Once you have an action, measure it against other actions, and continue to balance the discovery of the value of actions against your exploitation of the higher-value ones.
Being efficient is simply taking resources away from low value actions and towards high value actions. The very concept implies that you will stop doing certain actions and that you will do new ones you aren’t currently doing. It is the entire discipline of knowing that what you are doing today is wrong, and that there is always a better way to do things.
It is not the concept but the constant discipline of following it, and holding yourself and others accountable, that will truly define your outcome. Nothing here is revolutionary, other than eliminating all of the other factors and excuses people love to throw out in their arguments. It gives you a way to measure different outcomes against each other, and because of that, you can truly see what the value of various actions is relative to one another.
If you are disciplined in your tracking, honest in your impact, and willing to evaluate actions by how they help your site, and not just you, you will arrive at amazing conclusions that will shift your organization. The only way to improve is to change, so do not fear change, embrace it. Do things you aren’t sure about, challenge common thinking, do the exact opposite to see what the value of what you are doing really is. Measuring things in this simple a form is not sexy or “advanced”, and it can seem juvenile, but it is only by doing the small things well that you will ever succeed at all those large things people promise will revolutionize the world.
Bridging the Gap: Dealing with Variance between Data Systems
One of the problems that never seems to be eliminated from the world of data is a lack of education and understanding about comparing data between systems. When faced with the issue, too many companies take the variance between their different data solutions as a major sign of a problem with their reporting, but in reality variance between systems is expected. One of the hardest lessons groups can learn is to focus on the value and the usage of information over the exact measure of the data. This plays out now more than ever, as more and more groups find themselves with a multitude of tools, all offering reporting and other features about their sites and their users. As more and more users deal with the reality of multiple reporting solutions, they are discovering that all the tools report different numbers, be it visits, visitors, conversion rates, or just about anything else. There can be a startling realization that there is no single measure of what you are or what you are doing, and for some groups this can strip them of their faith in their data. This variance problem is nothing new, but if not understood correctly, it can lead to massive internal confusion and distrust of the data.
I had to learn this lesson the hard way. I worked for a large group of websites that used 6 different systems for basic analytics reporting alone. I led a team to dive into the different systems, understand why they reported different things, and figure out which one was “right.” After losing months of time and almost losing complete faith in our data, we learned some hard-won lessons. We learned that the use of the data is paramount, that there is no one view or right answer, that variance is almost completely predictable once you learn the systems, and that we would have been far better served spending that time on how to use the data instead of on why the systems were different.
I want to help your organization avoid the mistakes that we made. The truth is that no matter how deep you go, you will never find all the reasons for the differences. The largest lesson learned was that an organization can become so caught up in the quest for perfect data that it forgets about the actual value of that data. To make sure you don’t get caught in this trap, I want to help establish when and if you have a problem, the most common reasons for variance between systems, and some suggestions about how to think about and use the new data challenge that multiple reporting systems present.
Do you have a problem?
First, we must set some guidelines around when you have a variance problem and when you do not. When you have systems designed for different purposes, they will leverage that data in very different ways. No systems will match, and in a lot of cases, being too close represents artificial constraints on the data that is actually hindering its usability. At the same time, if you are too far apart, then that is a sign that there might be a reporting issue with one or both of the solutions.
Here are two simple questions to evaluate if you do have a variance “problem”:
1) What is the variance percentage?
Normal variance between similar data systems is almost always between 15-20%.
For non-similar data systems the range is much larger, and is usually between 35-50%.
If the gap is too small or too large, then you may have a problem. A 2% variance is actually a worse sign than a 28% variance between similar data systems.
Many groups run into the issue of trying too hard to constrain variance. The result is that they put artificial constraints on their data, causing the representative nature of the data to be severely hampered. Just because you believe that variance should be lower does not mean that it really should be or that lower is always a good thing.
This analysis should be done on non-targeted groups of the same population (e.g., all users to a unique page). The variance for dependent tracking (segments) will always be higher.
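To make the guidelines above concrete, here is a minimal sketch that classifies the variance between two systems' counts against the ranges described. The function names, thresholds as parameters, and the example counts are all illustrative, not real data; the 15-20% and 35-50% ranges come from the text.

```python
# Sketch: flag whether the variance between two reporting systems falls
# inside the "normal" ranges described above. Counts are hypothetical.

def variance_pct(count_a: float, count_b: float) -> float:
    """Percentage difference between two systems' counts,
    relative to the larger of the two."""
    return abs(count_a - count_b) / max(count_a, count_b) * 100

def check_variance(count_a: float, count_b: float, similar: bool = True) -> str:
    # Normal range: 15-20% for similar systems, 35-50% for non-similar ones
    low, high = (15, 20) if similar else (35, 50)
    v = variance_pct(count_a, count_b)
    if v < low:
        return f"{v:.1f}% - suspiciously low; data may be artificially constrained"
    if v > high:
        return f"{v:.1f}% - too high; check for a reporting issue"
    return f"{v:.1f}% - within the normal range"

# Example: 100,000 visits in one system, 83,000 in another (similar systems)
print(check_variance(100_000, 83_000))  # 17.0% - within the normal range
```

Note that, per the guideline above, a very low variance is flagged just as a very high one is: being too close is its own warning sign.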
2) Is the variance consistent in a small range?
You may see variance be in a series of 13, 17, 20, 14, 16, 21, 12 over a few days, but you should not see 5, 40, 22, 3, 78, 12.
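The consistency check can be sketched the same way, using the two example series from the text. The spread threshold of 15 percentage points is my assumption for illustration; pick whatever range your stakeholders agree counts as "small" for your systems.

```python
# Sketch: is a series of daily variance percentages consistent, i.e.
# does it stay within a small range? The 15-point spread is an
# assumed threshold, not a rule from the text.

def is_consistent(daily_variance: list[float], max_spread: float = 15.0) -> bool:
    """Consistent if the gap between the highest and lowest day is small."""
    return max(daily_variance) - min(daily_variance) <= max_spread

normal = [13, 17, 20, 14, 16, 21, 12]   # first series from the text
erratic = [5, 40, 22, 3, 78, 12]        # second series from the text

print(is_consistent(normal))   # True  (spread of 9 points)
print(is_consistent(erratic))  # False (spread of 75 points)
```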
If you are within the normal variance range and the day-to-day pattern is consistent, then congratulations: you are dealing with perfectly normal behavior, and I could not more strongly suggest that you spend your time and energy on how best to use the different data.
Data is only as valuable as how you use it, and while we love the idea of one perfect measure of the online world, we have to remember that each system is designed for a purpose, and that making one universal system comes with the cost of losing specialized function and value.
Always keep in mind these two questions when it comes to your data:
1) Do I feel confident that my data accurately reflects my users’ digital behavior?
2) Do I feel that things are tracked in a consistent and actionable fashion?
If you can’t answer those questions with a yes, then variance is not your issue. Variance is the measure of the differences between systems. If you are not confident in a single system, then there is no point in comparing it. Equally, if you are comfortable with both systems, then the differences between them should mean very little.
The most important thing I can suggest is that you pick a single data system as a system of record for each action you do. Every system is designed for different purposes, and with that purpose in mind, each one has advantages and disadvantages. You can definitely look at each system for similar items, but when it comes time to act or report, you need to be consistent and have all concerned parties aligned on which system is the one that everyone looks at. Choosing how and why you are going to act before you get to that part of the process is the easiest and fastest way to ensure the reduction of organizational barriers. Getting this agreement is far more important going forward than a deep dive into the causes behind normal variance.
Why do systems always have variance?
For those of you who are still not completely sold or who need to at least have some quick answers for senior management, I want to make sure you are prepared.
Here are the most common reasons for variance between systems:
1) The rules of the system – Visit-based systems track things very differently than visitor-based systems. They are meant for very different purposes. In most cases, a visit-based system is used for incremental daily counting, while a visitor-based system is designed to measure action over time.
2) Cookies – Each system has different rules about tracking and storing cookie information over time. These rules will dramatically impact what is or is not tracked. This is even more true for first-party versus third-party cookie solutions.
3) Rules of inclusion vs. Rules of exclusion – For the most part, all analytics solutions are rules of exclusion, meaning that you really have to do something (IP filter, data scrubbing, etc.) to not be tracked. A lot of other systems, especially testing, are rules of inclusion, meaning you have to meet very specific criteria to be tracked. This will dramatically impact the populations, and also any tracked metrics from those populations.
4) Definitions – What something means can be very specific to a system, be it a conversion, a segment, a referrer, or even a site action. The very definition can differ. An example of this would be a paid keyword segment: if I land on the site and then see a second page, what is the referrer for that page? Is it the visit's referrer or the referring page? Is it something I did on an earlier visit?
5) Mechanical Variance – There are mechanical differences in how systems track things. Are you tracking the click of a button with an onclick handler? Or the landing on the next page? Or the server request? Do you use a log file system or a beacon system? Is that a unique request, or is it added on to the next page tag? Do you rely on cookies, or are all actions independent? What are the different timing mechanisms for each system? Do they collide with each other or with other site functions?
Every system does things differently, and these small differences can build up over time, especially when combined with the other reasons listed above. There are hundreds of reasons beyond those listed, and the reality is that each situation is unique, the culmination of hundreds of small factors. You will never get to the point where you can explain your variance with 100% certainty.
Variance is not a new issue, but it is one that can be the death of programs if not dealt with in a proactive manner. Armed with this information, I would strongly suggest that you hold conversations with your data stakeholders before you run into the questions that inevitably come. Establishing what is normal, how you act, and a few reasons why you are dealing with the issue should help cut all of these problems off at the pass.