Your frequently asked conversion optimization questions, answered!

Reading Time: 28 minutes

Got a question about conversion optimization?

Chances are, you’re not alone!

This Summer, WiderFunnel participated in several virtual events. And each one, from full-day summit to hour-long webinar, ended with a TON of great questions from all of you.

So, here is a compilation of 29 of your top conversion optimization questions. From how to get executive buy-in for experimentation, to the impact of CRO on SEO, to the power (or lack thereof) of personalization, you asked, and we answered.

As you’ll notice, many experts and thought-leaders weighed in on your questions, including:

  • Chris Goward (WiderFunnel)
  • Nick So (WiderFunnel)
  • Hudson Arnold
  • André Morys
  • John Ekman
  • David Darmanin (Hotjar)
  • Jamie Elgie (weBoost)

Now, without further introduction…

Your conversion optimization questions

Optimization Strategy

  1. What do you see as the most common mistake people make that has a negative effect on website conversion?
  2. What are the most important questions to ask in the Explore phase?
  3. Is there such a thing as too much testing and/or optimizing?

Personalization

  1. Do you get better results with personalization or A/B testing or any other methods you have in mind?
  2. Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.
  3. With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?
  4. How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?

SEO versus CRO

  1. How do you avoid harming organic SEO when doing conversion optimization?

Getting Buy-in for Experimentation

  1. When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher ups or smaller wins?
  2. Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

CRO for Low Traffic Sites

  1. Do you have any suggestions for success with lower traffic websites?
  2. What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?
  3. How far can I go with funnel optimization and testing when it comes to a small local business?

Tips from an In-House Optimization Champion

  1. How do you get buy-in from major stakeholders, like your CEO, to go with an evolutionary, optimized redesign approach vs. a radical redesign?
  2. What has surprised you or stood out to you while doing CRO?

Optimization Across Industries

  1. Do you have any tips for optimizing a website for conversion when the purchase cycle is longer, like 1.5 months?
  2. When you have a longer sales process, getting visitors to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?
  3. You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?
  4. Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?
  5. For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?
  6. Do you find that testing strategies differ cross-culturally?

Experiment Design & Setup

  1. How do you recommend balancing the velocity of experimentation with quality, or more isolated design?
  2. I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make sure that the test you wanted to win seems like it’s the winner?
  3. When do you make the call for A/B tests for statistical significance? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.
  4. Is there a way to conclusively tell why a test lost or was inconclusive?
  5. How many visits do you need to get to statistically relevant data from any individual test?
  6. We are new to optimization. Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?
  7. When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’ and stop that test and move on?

Testing and Technology

  1. There are tools meant to increase testing velocity with pre-built widgets and pre-built test variations, even – what are your thoughts on this approach?

Your questions, answered

Q: What do you see as the most common mistake people make that has a negative effect on website conversion?

Chris Goward: I think the most common mistake is a strategic one, where marketers don’t create or ensure they have a great process and team in place before starting experimentation.

I’ve seen many teams get really excited about conversion optimization and bring it into their company. But they are like kids in a candy store: they’re grabbing at a bunch of ideas, trying to get quick wins, and making mistakes along the way, getting inconclusive results, not tracking properly, and looking foolish in the end.

And this burns the organizational momentum you have. The most important resource you have in an organization is the support from your high-level executives. And you need to be very careful with that support because you can quickly destroy it by doing things the wrong way.

It’s important to first make sure you have all of the right building blocks in place: the right process, the right team, the ability to track and the right technology. And make sure you get a few wins, perhaps under the radar, so that you already have some support equity to work with.


Back to Top

Q: What are the most important questions to ask in the Explore phase?

Chris Goward: During Explore, we are looking for your visitors’ barriers to conversion. It’s a general research phase. (It’s called ‘Explore’ for a reason). In it, we are looking for insights about what questions to ask and validate. We are trying to identify…

  • What are the barriers to conversion?
  • What are the motivational triggers for your audience?
  • Why are people buying from you?

And answering those questions comes through the qualitative and quantitative research that’s involved in Explore. But it’s a very open-ended process. It’s an expansive process. So the questions are more about how to identify opportunities for testing.

Whereas Validate is a reductive process. During Validate, we know exactly what questions we are trying to answer, to determine whether the insights gained in Explore actually work.

Further reading:

  • Explore is one of two phases in the Infinity Optimization Process – our framework for conversion optimization. Read about the whole process, here.

 

Back to Top

Q: Is there such a thing as too much testing and/or optimizing?

Chris Goward: A lot of people think that if they’re A/B testing, and improving an experience or a landing page or a website…they can’t improve forever. The question many marketers have is: how do I know how long to do this? Are there going to be diminishing returns? By putting in the same effort, will I get smaller and smaller results?

But we haven’t actually found this to be true. We have yet to find a company that we have over-A/B tested. And the reason is that visitor expectations continue to increase, your competitors don’t stop improving, and you continuously have new questions to ask about your business, business model, value proposition, etc.

So my answer is…yes, you will run out of opportunities to test, as soon as you run out of business questions. When you’ve answered all of the questions you have as a business, then you can safely stop testing.

Of course, you never really run out of questions. No business is perfect and understands everything. The role of experimentation is never done.

Case Study: DMV.org has been running an optimization program for 4+ years. Read about how they continue to double revenue year-over-year in this case study.

 

Back to Top

Q: Do you get better results with personalization or A/B testing or any other methods you have in mind?

Chris Goward: Personalization is a buzzword right now that a lot of marketers are really excited about. And personalization is important. But it’s not a new idea. It’s simply that technology and new tools are now available, and we have so much data that allows us to better personalize experiences.

I don’t believe that personalization and A/B testing are mutually exclusive. I think that personalization is a tactic that you can test and validate within all your experiences. But experimentation is more strategic.

At the highest level of your organization, having an experimentation ethos means that you’ll test anything. You could test personalization, you could test new product lines, or number of products, or types of value proposition messaging, etc. Everything is included under the umbrella of experimentation, if a company is oriented that way.

Personalization is really a tactic. And the goal of personalization is to create a more relevant experience, or a more relevant message. And that’s the only thing it does. And it does it very well.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.

 

Back to Top

Q: Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.

Chris Goward: That’s an interesting question. Unlike experimentation, I believe there is a very real danger of too much personalization. Companies are often very excited about it. They’ll use all of the features of the personalization tools available to create (in your client’s case) 40 personas and a very complicated strategy. And they don’t realize that the maintenance cost of personalization is very high. It’s important to prove that a personalization strategy actually delivers enough business value to justify the increase in cost.

When you think about it, every time you come out with a new product, a new message, or a new campaign, you would have to create personalized experiences against 40 different personas. And that’s 40 times the effort of having a generic message. If you haven’t tested from the outset, to prove that all of those personas are accurate and useful, you could be wasting a lot of time and effort.

We always start a personalization strategy by asking, ‘what are the existing personas?’, and proving out whether those existing personas actually deliver distinct value apart from each other, or whether they should be grouped into a smaller number of personas that are more useful. And then, we test the messaging to see if there are messages that work better for each persona. It’s a step-by-step process that makes sure we are only creating overhead where it’s necessary and will create value.

Further Reading: Are you evaluating personalization at your company? Learn how to create the most effective personalization strategy with our 4-step roadmap.

 

Back to Top

Q: With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?

Chris Goward: Broad segments won’t disappear; they will remain valid. With things like multi-threaded personalization, you’ll be able to layer on some of the 1:1 information that you have, which may be product recommendations or behavioral targeting, on top of a broader segment. If a user falls into a broad segment, they may see that messaging in one area, and 1:1 messaging may appear in another area.

But if you try to eliminate broad segments and only create 1:1 personalization, you’ll create an infinite workload for yourself in trying to sustain all of those different content messaging segments. And it’s practically impossible for a marketing department to create infinite marketing messages.

Hudson Arnold: You are absolutely going to need both. I think there’s a different kind of opportunity, and a different kind of UX solution to those questions. Some media and commerce companies won’t have to struggle through that content production, because their natural output of 1:1 personalization will be showing a specific product or a certain article, which they don’t have to support from a content perspective.

What they will be missing out on is that notion of, what big segments are we missing? Are we not targeting moms? Newly married couples? CTOs vs. sales managers? Whatever the distinction is, that segment-level messaging is going to continue to be critical, for the foreseeable future. And the best personalization approach is going to balance both.

Back to Top

Q: How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?

A PANEL RESPONSE

André Morys: I compare it to the experience people have in a real store. If you go to a retail store, and you want to buy a TV, the salesperson will observe how you’re speaking, how you’re walking, how you’re dressed, and he will tailor his sales pitch to the type of person you are. He will notice if you’ve brought your family, if it’s your first time in a shop, or your 20th. He has all of these data points in his mind.

Personalization is the art of transporting this knowledge of how to talk to people on a 1:1 level to your website. And it’s not always easy, because you may not have all of the data. But you have to find out which data you can use. And if you can do personalization properly, you can get big uplift.

John Ekman: On the other hand, I heard a psychologist once say that people have more in common than what separates them. If you are looking for very powerful persuasion strategies, instead of thinking of the different individual traits and preferences that customers might have, it may be better to think about what they have in common, because you’ll reach more people with your campaigns and landing pages. It will be interesting to see how the battle between general persuasion techniques and individual personalization techniques plays out.

Chris Goward: It’s a good point. I tend to agree that the nirvana of 1:1 personalization may not be the right goal in some cases, because there are unintended consequences of that.

One is that it becomes more difficult to find generalized understanding of your positioning, of your value proposition, of your customers’ perspectives, if everything is personalized. There are no common threads.

The other is that there is significant maintenance cost in having really fine personalization. If you have 1:1 personalization with 1,000 people, and you update your product features, you have to think about how that message gets customized across 1,000 different messages rather than just updating one. So there is a cost to personalization. You have to validate that your approach to personalization pays off, and that it has enough benefit to balance out your cost and downside.

David Darmanin: [At Hotjar], we aren’t personalizing, actually. It’s a powerful thing to do, but there is a time to deploy it. If personalization adds too much complexity and slows you down, then obviously that can be a challenge. Like most complex things, I think personalization is most valuable when you have a high ticket price or very high value, where that touch of personalization has a big impact.

With Hotjar, we’re much more volume and lower price points, so it’s not yet a priority for us. Having said that, we have looked at it. But right now, we’re a startup, at the stage where speed is everything. And having many common threads is as important as possible, so we don’t want to add too much complexity now. But if you’re selling very expensive things, and you’re at a more advanced stage as a company, it would be crazy not to leverage personalization.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

 

Back to Top

Q: How do you avoid harming organic SEO when doing conversion optimization?

Chris Goward: A common question! WiderFunnel was actually one of Google’s first authorized consultants for their testing tool, and Google told us that they support optimization fully. They do not penalize companies for running A/B tests, as long as the tests are set up properly and the company is using a proper tool.

On top of that, what we’ve found is that the principles of conversion optimization parallel the principles of good SEO practice.

If you create a better experience for your users, and more of them convert, it actually sends a positive signal to Google that you have higher quality content.

Google looks at pogo-sticking, where people click a result on the SERP and then quickly return to the SERP. Pogo-sticking signals to Google that this is not quality content. If a visitor lands on your page and converts, they are not going to come back to the SERP, which sends Google a positive signal. And we’ve actually never seen an example where SEO has been harmed by a conversion optimization program.

Video Resource: Watch SEO Wizard Rand Fishkin’s talk from CTA Conf 2017, “Why We Can’t Do SEO without CRO”

 

Back to Top

Q: When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher-ups or smaller wins?

Chris Goward: Partly, it depends on how much equity you have to burn up front. If you are in a situation where you don’t have a lot of confidence from higher-ups about implementing an optimization program, I would recommend starting with more under-the-radar tests. Try to get momentum, get some early wins, and then share your success with the executives to show the potential. This will help you get more buy-in for more prominent areas.

This is actually one of the factors that you want to consider when prioritizing where to test. The “PIE Framework” shows you the three factors to help you prioritize.

A sample PIE prioritization analysis.

Those three factors are Potential, Importance, and Ease. One of the important aspects within Ease is political ease. You want to look for areas that have political ease, meaning there might not be as much sensitivity around them (so maybe not the homepage). Get those wins first, create momentum, and then you can start sharing your results throughout the organization to build buy-in.
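
To make the framework concrete, here is a minimal sketch of how a PIE scoring sheet might be computed. The pages, scores, and equal weighting of the three factors are illustrative assumptions, not WiderFunnel’s internal tooling:

```python
# A minimal PIE-style prioritization sketch. Pages, scores, and the equal
# weighting of the three factors are illustrative assumptions.

# Each candidate test area is scored 1-10 on Potential, Importance, and Ease.
candidates = {
    "Product page": {"potential": 8, "importance": 9, "ease": 6},
    "Checkout":     {"potential": 7, "importance": 10, "ease": 4},
    "Homepage":     {"potential": 6, "importance": 8, "ease": 3},  # low political ease
    "Landing page": {"potential": 9, "importance": 6, "ease": 9},
}

def pie_score(scores):
    """Average the three PIE factors into a single priority score."""
    return (scores["potential"] + scores["importance"] + scores["ease"]) / 3

# Rank test areas from highest to lowest PIE score.
ranked = sorted(candidates.items(), key=lambda kv: pie_score(kv[1]), reverse=True)
for page, scores in ranked:
    print(f"{page:12s} PIE = {pie_score(scores):.1f}")
```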

Further Reading: Marketers from ASICS’ global e-commerce team weigh in on evangelizing optimization at a global organization in this post, “A day in the life of an optimization champion”

 

Back to Top

Q: Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?

Nick So: Besides the obvious senior leadership and key decision-makers you mention, we find that getting buy-in from related departments – branding, marketing, design, copywriting, content management, and so on – can be very helpful.

Having these teams on board can not only help with the overall approval process, but also helps ensure winning tests and strategies are aligned with your overall business and marketing strategy.

You should also consider involving more tangentially-related teams, like customer support. Not only does this make them a part of the process and testing culture, but customer-facing teams can also be a great source of business insights and test ideas!

Back to Top

Q: Do you have any suggestions for success with lower traffic websites?

Nick So: In our testing experience, we find we get the most impactful results when we feel we have a strong understanding of the website’s visitors. In the Infinity Optimization Process, this understanding is gained through a balanced approach of Exploratory research, and Validated insights and results.

The Infinity Optimization Process is iterative and leads to continuous growth and insights.

When a site’s traffic is low, the ability to Validate is decreased, and so we try to make up for it by increasing the time spent and work done in the Explore phase.

We take those yet-to-be-validated insights found in the Explore phase, build a larger, more impactful single variation, and test the cluster of changes. (This variation is generally more drastic than one we would create for a higher-traffic client, where we could validate insights individually through multiple tests.)

Because of the more drastic changes, the variation should have a larger impact on conversion rate (and hopefully gain statistical significance with lower traffic). And because we have researched evidence to support these changes, there is a higher likelihood that they will perform better than a standard re-design.

If a site does not have enough overall primary conversions, but you definitely, absolutely MUST test, then I would look for a secondary metric further ‘upstream’ to optimize for. These should be goals that indicate or lead toward the primary conversion (e.g. clicks to form > form submission, add to cart > transaction). However, with this strategy, stakeholders have to be aware that increases in this secondary goal may not be tied directly to increases in the primary goal at the same rate.

Back to Top

Q: What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?

Chris Goward: The opportunities that are going to make the most impact really depend on the situation and the context. So if it’s a landing page or the homepage or a product page, they’ll have different opportunities.

But with any area, start by trying to understand your customers. If you have a low-traffic site, you’ll need to spend more time on the qualitative research side, really trying to understand what the opportunities are and what barriers your visitors might be facing, and drilling into their perspective. Then you’ll have a more powerful test setup.

You’ll want to test dramatically. Test with fewer variations, make more dramatic changes with the variations, and be comfortable with your tests running longer. And while they are running and you are waiting for results, go talk to your customers. Go and run some more user testing, drill into your surveys, do post-purchase surveys, get on the phone and get the voice of customer. All of these things will enrich your ability to imagine their perspective and come up with more powerful insights.

In general, the things that are going to have the most impact are value proposition changes themselves. Trying to understand, do you have the right product-market fit, do you have the right description of your product, are you leading with the right value proposition point or angle?

Back to Top

 

Q: How far can I go with funnel optimization and testing when it comes to a small local business?

A PANEL RESPONSE

David Darmanin: What do you mean by small local business? If you’re a startup just getting started, my advice would be to stop thinking about optimization and focus on failing fast. Get out there, change things, get some traction, get growth and you can optimize later. Whereas, if you’re a small but established local business, and you have traffic but it’s low, that’s different. In the end, conversion optimization is a traffic game. Small local business with a lot of traffic, maybe. But if traffic is low, focus on the qualitative, speak to your users, spend more time understanding what’s happening.

John Ekman:

If you can’t test to significance, you should turn to qualitative research.

That would give you better results. If you don’t have the traffic to test against the last step in your funnel, you’ll end up testing at the beginning of your funnel. You’ll test for engagement or click through, and you’ll have to assume that people who don’t bounce and click through will convert. And that’s not always true. Instead, go start working with qualitative tools to see what the visitors you have are actually doing on your page and start optimizing from there.

André Morys: Testing with too small a sample size is really dangerous because it can lead to incorrect assumptions if you are not an expert in statistics. Even if you’re getting 10,000 to 20,000 orders per month, that is still a low number for A/B testing. Be aware of how the numbers work together. We’ve had people claiming 70% uplift, when the numbers are 64 versus 27 conversions. And this is really dangerous because that result is bull sh*t.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

 

Back to Top

Q: How do you get buy-in from major stakeholders, like your CEO, to go with an evolutionary, optimized redesign approach vs. a radical redesign?

Jamie Elgie: It helps when you’ve had a screwup. When we started this process, we had not been successful with the radical design approach. But my advice for anyone championing optimization within an organization would be to focus on the overall objective.

For us, it was about getting our marketing spend to be more effective. If you can widen the funnel by making more people convert on your site, and then chase the people who convert (versus people who just land on your site) with your display media efforts, your social media efforts, your email efforts, and with all your paid efforts, you are going to be more effective. And that’s ultimately how we sold it.

It really sells itself though, once the process begins. It did not take long for us to see really impactful results that were helping our bottom line, as well as helping that overall strategy of making our display media spend, and all of our media spend, more targeted.

Video Resource: Watch this webinar recording and discover how Jamie increased his company’s sales by more than 40% with evolutionary site redesign and conversion optimization.

 

Back to Top

Q: What has surprised you or stood out to you while doing CRO?

Jamie Elgie: There have been so many ‘A-ha!’s, and that’s the best part. We are always learning. Things that we are all convinced we should change on our website, or that we should change in our messaging in general, we’ll test them and actually find out.

We have one test running right now, and it’s failing, which is disappointing. But our entire emphasis as a team is changing, because we are learning something. And we are learning it without a huge amount of risk. And that, to me, has been the greatest thing about optimization. It’s not just the impact to your marketing funnel, it’s also teaching us. And it’s making us a better organization because we’re learning more.

One of the biggest benefits for me and my team has been how effective it is just to be able to say, ‘we can test that’.

If you have a salesperson who feels really strongly about something, and you feel really strongly that they’re wrong, the best recourse is to put it out on the table and say, ok, fine, we’ll go test that.

It enables conversations to happen that might not otherwise happen. It eliminates disputes that are not based on objective data, but on subjective opinion. It actually brings organizations together when people start to understand that they don’t need to be subjective about their viewpoints. Instead, you can bring your viewpoint to a test, and then you can learn from it. It’s transformational not just for a marketing organization, but for the entire company, if you can start to implement experimentation across all of your touch points.

Case Study: Read the details of how Jamie’s company, weBoost, saw a 100% lift in year-over-year conversion rate with an optimization program.

 

Back to Top

Q: Do you have any tips for optimizing a website for conversion when the purchase cycle is longer, like 1.5 months?

Chris Goward: That’s a common challenge in B2B or with large ticket purchases for consumers. The best way to approach this is to

  1. Track your leads and opportunities to the variation,
  2. Then, track them through to the sale,
  3. And then look at whether average order value changes between the variations, which implies the quality of the leads.

Because it’s easy to measure lead volume between variations. But if lead quality changes, then that makes a big impact.
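
For illustration, here is a minimal sketch of that variation-to-sale rollup, assuming each lead was tagged with its variation at capture. The records and field names are hypothetical, standing in for a CRM export:

```python
from collections import defaultdict

# Hypothetical CRM export: one record per lead, tagged with the test
# variation the visitor saw when the lead was captured.
leads = [
    {"variation": "control",     "closed": True,  "order_value": 1200.0},
    {"variation": "control",     "closed": False, "order_value": 0.0},
    {"variation": "variation_a", "closed": True,  "order_value": 2400.0},
    {"variation": "variation_a", "closed": True,  "order_value": 1800.0},
    # ... one record per lead
]

# Roll leads up to per-variation volume, close rate, and average order value.
stats = defaultdict(lambda: {"leads": 0, "sales": 0, "revenue": 0.0})
for lead in leads:
    s = stats[lead["variation"]]
    s["leads"] += 1
    if lead["closed"]:
        s["sales"] += 1
        s["revenue"] += lead["order_value"]

for variation, s in stats.items():
    aov = s["revenue"] / s["sales"] if s["sales"] else 0.0
    close_rate = s["sales"] / s["leads"]
    print(f"{variation}: {s['leads']} leads, {close_rate:.0%} close rate, AOV ${aov:,.0f}")
```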

We actually have a case study about this with Magento. We asked, “Which of these calls-to-action is actually generating the most valuable leads?” and ran an experiment to find out, tracking the leads all the way through to sale. This helped Magento optimize for the right calls-to-action going forward. And that’s an important question to ask near the beginning of your optimization program: am I providing the right hook for my visitor?

Case Study: Discover how Magento increased lead volume and lead quality in the full case study.

 

Back to Top

Q: When you have a longer sales process, getting visitors to convert is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options or can ‘any’ conversion be valued?

Nick So: Each test variation should be based on a single, primary hypothesis. And each hypothesis should be based on a single, primary conversion goal. This helps you keep your hypotheses and strategy focused and tactical, rather than taking a shotgun approach to just generally ‘improve the website’.

However, this focused approach doesn’t mean you should disregard all other business goals. Instead, count these as secondary goals and consider them in your post-test results analysis.

If a test increases demo requests by 50%, but cannibalizes ebook downloads by 75%, then, depending on the goal values of the two, a calculation has to be made to see if the overall net benefit of this tradeoff is positive or negative.
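
To make that calculation concrete, here is the arithmetic with invented goal values and volumes (your own numbers would come from tracking each goal’s value to the business):

```python
# Illustrative net-benefit calculation for the tradeoff described above.
# All goal values and volumes are invented for the example.
demo_value, ebook_value = 500.0, 20.0        # assumed value per conversion
baseline_demos, baseline_ebooks = 100, 400   # hypothetical monthly volumes

new_demos = baseline_demos * 1.50    # demo requests up 50%
new_ebooks = baseline_ebooks * 0.25  # ebook downloads down 75%

baseline_value = baseline_demos * demo_value + baseline_ebooks * ebook_value
new_value = new_demos * demo_value + new_ebooks * ebook_value

# +50 demos * $500 = +$25,000 vs. -300 ebooks * $20 = -$6,000
print(f"Net change: ${new_value - baseline_value:+,.0f} per month")  # +$19,000
```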

Different test hypotheses can also have different primary conversion goals. One test can focus on demos, but the next test can be focused on ebook downloads. You just have to track any other revenue-driving goals to ensure you aren’t cannibalizing conversions and having a net negative impact for each test.

Back to Top

Q: You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?

Nick So: When we help a client determine conversion metrics…

…we always suggest following the money.

Find the true impact that customer support might have on your company’s bottom line, and then determine a measurable KPI that can be tracked.

For example, would increasing the usefulness of the online support decrease costs required to maintain phone or email support lines (conversion goal: reduction in support calls/submissions)? Or, would it result in higher customer satisfaction and thus greater customer lifetime value (conversion goal: higher NPS responses via website poll)?

Back to Top

Q: Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?

Chris Goward: That question really gets at the nub of where we have found our biggest opportunity. When I started WiderFunnel in 2007, I thought that we would specialize in an industry, because that’s what everyone was telling us to do. They said, you need to specialize, you need to focus and become an expert in an industry. But I just sort of took opportunities as they came, with all kinds of different industries. And what I found is the exact opposite.

We’ve specialized in the process of optimization and personalization and creating powerful test design, but the insights apply to all industries.

What we’ve found is that people are people. Whether they’re shopping for a server, shopping for socks, or donating to third-world countries, they go through the same mental process in each case.

The tactics are a bit different, sometimes. But often, we’re discovering breakthrough insights because we’re able to apply principles from one industry to another. For example, taking an e-commerce principle and identifying where on a B2B lead generation website we can apply that principle because someone is going through the same step in the process.

Most marketers spend most of their time thinking about their near-field competitors rather than about different industries, because it’s overwhelming to look at all of the other opportunities. But we can often look at an experience in a completely different way, through the lens of a different industry. That is very powerful.

Back to Top

Q: For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?

Nick So: At WiderFunnel, we often work with organizations that have various departments with various business goals and agendas. We find the best way to manage this is to clearly quantify the monetary value of the #1 conversion goal of each stakeholder and/or business unit, and identify areas of the site that have the biggest potential impact for each conversion goal.

In most cases, the most impactful test area for one conversion goal will be different for another conversion goal (e.g. brand awareness on the homepage versus checkout for e-commerce conversions).

When there is a need to consider two different hypotheses with differing conversion goals on a single test area (like the homepage), teams can weigh the quantifiable impact plus the internal company benefits, and negotiate prioritization and scheduling between themselves.

I would not recommend personalization for this purpose, as that would be a stop-gap compromise that would limit the creativity and strategy of hypotheses, as well as create a disjointed experience for visitors, which would generally have a negative impact overall.

If you HAVE to run opposing strategies simultaneously on an area of the site, you could run multiple variations for different teams and measure different goals. Or, run mutually exclusive tests (keeping in mind these tactics would reduce test velocity, and would require more coordination between teams).

Back to Top

 

Q: Do you find testing strategies differ cross-culturally? Do conversion rates vary drastically across different countries / languages when using these strategies?

Chris Goward: We have run tests for many clients outside of the USA, such as in Israel, Sweden, Australia, the UK, Canada, Japan, Korea, Spain, and Italy, and for the Olympics store, which is itself a global e-commerce experience in one site!

There are certainly cultural considerations and interesting differences in tactics. Some countries don’t have widespread credit card use, for example, and retailers there are accustomed to using alternative payment methods. Website design preferences in many Asian countries would seem very busy and overly colorful to a Western European visitor. At WiderFunnel, we specialize in English-speaking and Western-European conversion optimization and work with partner optimization companies around the world to serve our global and international clients.

Back to Top

Q: How do you recommend balancing the velocity of experimentation with quality, or more isolated design?

Chris Goward: This is where the art of the optimization strategist comes into play. And it’s where we spend the majority of our effort – in creating experiment plans. We look at all of the different options we could be testing, and ruthlessly narrow them down to the things that are going to maximize the potential growth and the potential insights.

And there are frameworks we use to do that. It’s all about prioritization. There are hundreds of ideas that we could be testing, so we need to prioritize with as much data as we can. The PIE Framework allows you to prioritize ideas and test areas based on Potential, Importance, and Ease: the potential for improvement, the importance to the business, and the ease of implementation. Sometimes these scores are a little subjective, but the more data you have to back them up, the better your focus and effort will be in delivering results.


Back to Top

Q: I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make sure that the test you wanted to win seems like it’s the winner?

Chris Goward: Good question! We actually look for one primary metric that tells us what the business value of a winning test is. But we also track secondary metrics. The goal is to learn from the other metrics, but not use them for decision-making. In most cases, we’re looking for a revenue-driving primary metric. Revenue-per-visitor, for example, is a common metric we’ll use. But the other metrics, whether conversion rate or average order value or downloads, will tell us more about user behavior, and lead to further insights.
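
As a toy illustration of why revenue-per-visitor can pick a different winner than conversion rate alone (all figures invented):

```python
# Invented numbers showing revenue-per-visitor (RPV) vs. conversion rate.
variations = {
    "control":     {"visitors": 10_000, "orders": 300, "revenue": 24_000.0},
    "variation_a": {"visitors": 10_000, "orders": 270, "revenue": 27_000.0},
}

for name, v in variations.items():
    cr = v["orders"] / v["visitors"]    # conversion rate
    aov = v["revenue"] / v["orders"]    # average order value
    rpv = v["revenue"] / v["visitors"]  # revenue per visitor
    print(f"{name}: CR {cr:.1%}, AOV ${aov:.0f}, RPV ${rpv:.2f}")

# Control wins on conversion rate (3.0% vs. 2.7%), but variation_a wins on
# RPV ($2.70 vs. $2.40) because its average order value is higher.
```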

There are two steps in our optimization process that pair with each other in the Validate phase. One is design of experiments, and the other is results analysis. If the results analysis is done correctly, all of the metrics that you’re looking at in terms of variation performance will tell you more about the variations. And if the design of experiments has been done properly, then you’ll gather insights from all of the different data.

But you should be looking at one metric to tell you whether or not a test won.

Further Reading: Learn more about proper design of experiments in this blog post.

 

Back to Top

 

Q: When do you make the call for A/B tests for statistical significance? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes, we even have to run a test multiple times.

Chris Goward: It sounds like you may be ending your tests or trying to analyze results too early. You certainly don’t want to be running into day-of-the-week seasonality. You should be running your tests over at least a week, and ideally two weekends, to iron out that seasonality effect, because your test will be in a different context on different days of the week, depending on your industry.

So, run your tests a little bit longer and aim for statistical significance. And you want to use tools that calculate statistical significance reliably, and help answer the real questions that you’re trying to ask with optimization. You should aim for that high level of statistical significance, and iron out that seasonality. And sometimes you’ll want to look at monthly seasonality as well, and retest questionable things within high and low urgency periods. That, of course, will be more relevant depending on your industry and whether or not seasonality is a strong factor.

Further Reading: You can’t make business decisions based on misleading A/B test results. Learn how to avoid the top 3 mistakes that make your A/B test results invalid in this post.

 

Back to Top

Q: Is there a way to conclusively tell why a test lost or was inconclusive? To know what the hidden gold is?

Chris Goward: Developing powerful hypotheses is dependent on having workable theories. Seeking to determine the “Why” behind the results is one of the most interesting parts of the work.

The only way to tell conclusively is to infer a potential reason, then test again with new ways to validate that inference. Eventually, you can form conversion optimization theories and then test based on those theories. While you can never really know definitively the “why” behind the “what”, when you have theories and frameworks that work to predict results, they become just as useful.

As an example, I was reviewing a recent test for one of our clients and it didn’t make sense based on our LIFT Model. One of the variations was showing under-performance against another variation, but I believed strongly that it should have over-performed. I struggled for some time to align this performance with our existing theories and eventually discovered the conversion rate listed was a typo! The real result aligned perfectly with our existing framework, which allowed me to sleep at night again!

 

Back to Top

Q: How many visits do you need to get to statistically relevant data from any individual test?

Chris Goward: The number of visits is just one of the variables that determines statistical significance. The conversion rate of the Control and conversion rate delta between the variations are also part of the calculation. Statistical significance is achieved when there is enough traffic (i.e. sample size), enough conversions, and the conversion rate delta is great enough.

Here’s a handy Excel test duration calculator. Fortunately, today’s testing tools calculate statistical significance automatically, which simplifies the conversion champion’s decision-making (and saves hours of manual calculation!)
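
For the curious, here is a minimal sketch of the standard two-proportion z-test that sits behind many of those calculators. It is a simplification for illustration, not any specific tool’s implementation:

```python
import math

def ab_significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test: returns the z-score and two-sided p-value."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# All three inputs matter: traffic, conversions, and the size of the delta.
z, p = ab_significance(5000, 150,   # control: 3.0% conversion rate
                       5000, 190)   # variation: 3.8% conversion rate
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 -> significant at the 95% level
```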

When planning tests, it’s helpful to estimate the test duration, but it isn’t an exact science. As a rule of thumb, you should plan for smaller isolation tests to run longer, as the impact on conversion rate may be smaller. The test may require more conversions to achieve confidence.
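
As a rough illustration of such an estimate, here is a common sample-size approximation for comparing two conversion rates. The baseline rate, detectable lift, traffic, and power assumptions are all invented:

```python
import math

def estimate_duration_days(baseline_cr, relative_lift, daily_visitors,
                           variations=2, z_alpha=1.96, z_power=0.84):
    """Approximate days to detect a relative lift at ~95% confidence / 80% power."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_lift)
    # Standard sample-size approximation for comparing two proportions.
    n_per_arm = ((z_alpha + z_power) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
                 ) / (p2 - p1) ** 2
    return math.ceil(n_per_arm * variations / daily_visitors)

# e.g. a 2% baseline, hoping to detect a 20% relative lift, 1,000 visitors/day:
print(estimate_duration_days(0.02, 0.20, daily_visitors=1000))  # ~43 days
```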

Larger, more drastic cluster changes would typically run for a shorter period of time, as they have the potential for a greater impact. However, we have seen that isolations CAN have a big impact. If the evidence is strong enough, test duration shouldn’t stop you from trying smaller, more isolated changes, as they can lead to some of the biggest insights.

Often, people who are new to testing become frustrated with tests that never seem to finish. If you’ve run a test with more than 30,000 to 50,000 visitors and one variation is still not statistically significant over another, then your test may never yield a clear winner, and you should revise your test plan or reduce the number of variations being tested.

Further Reading: Do you have to wait for each test to reach statistical significance? Learn more in this blog post: “The more tests, the better!” and other A/B testing myths, debunked

 

Back to Top

Q: We are new to optimization (we’ve had a few quick wins with A/B testing and are working toward a geo-targeting project). Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?

Chris Goward: If you’re getting insights from your Exploratory research, those insights should tie into the Validate tests that you’re running. You should be validating the insights that you’re getting from your Explore phase. If you started with valid insights, the results that you get really should be generating growth, and they should be generating insights.

Part of it is your Design of Experiments (DOE). DOE is how you structure your hypotheses and how you structure your variations to generate both growth and insights, and those are the two goals of your tests.

If you’re not generating growth, or you’re not generating insights, then your DOE may be weak, and you need to go back to your strategy and ask, why am I testing this variation? Is it just a random idea? Or, am I really isolating it against another variation that’s going to teach me something as well as generate lift? If you’re not getting the orange dot right, then you probably need to look at researching more about Design of Experiments.

 

Q: When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’ and stop that test and move on?

Chris Goward: That’s a question that requires a large portion of “it depends.” It depends on whether:

  • You have other tests ready to run with the same traffic sources
  • The test results are showing high volatility or have stabilized
  • The test insights will be important for the organization

There’s an opportunity cost to every test. You could always be testing something else, so you need to constantly ask whether this is the best test to run now, versus the cost and potential benefit of the next test in your conversion strategy.

 

Back to Top

 

Q: There are tools meant to increase testing velocity with pre-built widgets and pre-built test variations, even – what are your thoughts on this approach?

A PANEL RESPONSE

John Ekman: Pre-built templates provide a way to get quick wins and uplift. But you won’t understand why it created an uplift. You won’t understand what’s going on in the brain of your users. For someone who believes that experimentation is a way to look in the minds of whoever is in front of the screen, I think these methods are quite dangerous.

Chris Goward: I’ll take a slightly different stance. As much as I talk about understanding the mind of the customer, asking why, and testing based on hypotheses, there is a tradeoff: a tradeoff between understanding the why and just getting growth. If you want to understand the why infinitely, you’ll do multivariate testing and isolate every potential variable. But in practice, that can’t happen. Very few companies have enough traffic to multivariate test everything.

But if you don’t have tons of traffic and you want to get faster results, maybe you don’t want to know the why about anything, and you just want to get lift.

There might be a time to do both. Maybe your website performance is really bad, or you just want to try a left-field variation, just to see if it works…if you get a 20% lift in your revenue, that’s not a failure. That’s not a bad thing to do. But then, you can go back and isolate all of the things to ask yourself: Well, I wonder why that won, and start from there.

The approach we usually take at WiderFunnel is to reserve 10% of the variations for ‘left-field’ variations. As in, we don’t know why this will work, but we’re just going to test something crazy and see if it sticks.

David Darmanin: I agree, and disagree. We’re living in an era when technology has become so cheap that I think it’s dangerous for any company to try to automate certain things, because they’re going to just become one of many.

Creating a unique customer experience is going to become more and more important.

If you are using tools like a platform, where you are picking and choosing what to use so that it serves your strategy and the way you want to try to build a business, that makes sense to me. But I think it’s very dangerous to leave that to be completely automated.

Some software companies out there are trying to build a completely automated conversion rate optimization platform that does everything. But that’s insane. If many sites are all aligned in the same way, if it’s pure AI, they’re all going to end up looking the same. And who’s going to win? The other company that pops up out of nowhere, and does everything differently. That isn’t fully ‘optimized’ and is more human.

There is a danger in optimization itself being too optimized. If we eliminate the human aspect, we’re kind of screwed.

Video Resource: This panel response comes from the Growth & Conversion Virtual Summit held this Spring. You can still access all of the session recordings for free, here.

 

Back to Top

What conversion optimization questions do you have?

Add your questions in the comments section below!
