This post was written by Lindsay Hunt, Product Manager for ClickBank
Many of you have noticed quite a few changes to ClickBank’s order form over the last several months. We’ve been split testing different versions of the order form and want to share some answers to a few frequently asked questions about our methodology and what you can expect going forward.
What’s wrong with the standard order form? Why does ClickBank need to split test?
Our primary goal in split testing is to maximize revenue (both ours and yours). To do that, we’re not just concerned with the conversion rate. There are plenty of changes we could make to the order form that might improve conversions, but increase refunds. Maximizing revenue means that we optimize conversion while also making sure we don’t negatively affect refund rates.
There’s always room for improvement and our order form is no exception. Frequent tests ensure that we’re implementing the newest and best features, keeping up with changes in technology and updating our design in a way that provides a positive experience for customers.
What if your testing is lowering my conversion rate?
Our tests are well planned and designed. Typically we run them on a very large data set for no more than a few days. Our experience and analysis of individual vendor conversion rates has shown that there can be great variation in day-to-day conversion rates, even when no test is running. Track your own conversion rate over time and look at the variation you see from day to day, from affiliate to affiliate, or across products.
When looking at your conversion rate, make sure to note the size of your impression and sales counts. When we hear complaints about lower conversion, we often see very small sample sizes. Please keep in mind that low traffic volume cannot give you a true picture of your overall conversion rate.
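To see why low traffic volume is misleading, it helps to look at the confidence interval around a measured conversion rate. The sketch below (not ClickBank's internal tooling; the numbers are illustrative) uses a standard normal-approximation interval to show how much wider the uncertainty is at 100 impressions than at 10,000:

```python
from statistics import NormalDist

def conversion_interval(sales, impressions, confidence=0.95):
    """Normal-approximation confidence interval for a conversion rate.

    Returns (rate, margin): the measured rate and the half-width of the
    confidence interval around it.
    """
    rate = sales / impressions
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # e.g. ~1.96 for 95%
    margin = z * (rate * (1 - rate) / impressions) ** 0.5
    return rate, margin

# Same 5% conversion rate, very different certainty:
rate_small, margin_small = conversion_interval(5, 100)       # ~ +/-4.3 points
rate_large, margin_large = conversion_interval(500, 10_000)  # ~ +/-0.43 points
```

At 100 impressions, a "5% conversion rate" could plausibly be anywhere from under 1% to over 9%, which is why a day or two of light traffic tells you very little about whether a test is hurting you.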
What elements do we test on our order form?
Most of the changes we test fall into one of the following categories:
Design – The most familiar split tests are typically those that involve design changes. These changes are fairly straightforward and involve testing changes in white space, layout, font, images, colors, buttons and copywriting.
Usability – Usability changes make interacting with the order form easier and faster for customers. We might test the order of the fields on the form or see if there is any impact by adding and removing fields, checkboxes or dropdowns. Handling error messages and making sure that customers can complete their order even when they encounter an error also falls under usability.
Security and Trust – Customers arrive at our order form from a variety of different vendor pitch pages. When they reach the order form, we want to ensure that they have a high level of trust so they’re comfortable entering their payment information. We test elements like the placement and display of security badges (McAfee, Norton), the placement and display of the ClickBank guarantee and how we message pricing and recurring payments.
Internationalization – ClickBank is an international company that has customers and vendors all around the world. There isn’t necessarily a one-size-fits-all order form for every country. Often, consumer expectations of the ecommerce experience can vary drastically so our testing is designed to optimize for local cultures. We have country-specific experiments that test different wording, displays, languages and payment types.
What is our process for split testing?
We don’t split test just for the sake of split testing. It might be interesting to compare a red button to a yellow one, but we would only run that test if we had reason to believe it would improve the performance of the form.
Our testing process closely mirrors the scientific process. We use the following steps for each test:
- Define the problem – We use internal data analysis, research, convention studies, best practices and client feedback to identify underperforming segments, features causing errors, technology performance issues and usability problems.
- Identify hypotheses – We use the information from step 1 to create a hypothesis. (e.g., If we change the position of the security icons, customers will have more confidence in the form and conversion will increase.)
- Design the test – We test our hypothesis by developing one or more variations of the order form that will allow us to determine whether our hypothesis is true. During this step, we also decide what segments (language, country, product type) to include in the test.
- Measure results – We collect data on the variations of the form and look for statistically significant differences in conversion rate and refund rate.
- Implement changes – If we find a new version that outperforms the original form, we implement it. Sometimes a variation performs the same as the original but improves the user experience; we implement those changes, too.
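The "measure results" step above boils down to asking whether the difference between two conversion rates is larger than chance alone would explain. A common way to check this for two variations is a two-proportion z-test; here is a minimal sketch (the sales and impression counts are made up for illustration, and this is not necessarily the exact test ClickBank uses):

```python
from statistics import NormalDist

def two_proportion_test(sales_a, n_a, sales_b, n_b):
    """Two-sided two-proportion z-test comparing conversion rates.

    Returns (z, p_value). A small p_value (e.g. below 0.05) suggests the
    difference between the two rates is statistically significant.
    """
    p_a, p_b = sales_a / n_a, sales_b / n_b
    pooled = (sales_a + sales_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Control: 2,500 sales on 50,000 impressions (5.0% conversion)
# Variant: 2,700 sales on 50,000 impressions (5.4% conversion)
z, p = two_proportion_test(2500, 50_000, 2700, 50_000)
```

In this made-up example the 0.4-point lift is significant at the 5% level, so the variant would be a candidate for implementation, provided the refund rate didn’t move the wrong way.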
How long will the tests run?
We may have several tests running at any given time. It’s our goal to test as frequently as we can to continually improve the form. The length of a specific test depends on what segment we’re testing.
We calculate the number of impressions we need for significance based on the sensitivity of the test (e.g., we want to see a 10% improvement in conversion rate) and the baseline conversion before the test.
We don’t stop tests as soon as significance is reached because the sample size may not be large enough to accurately represent our full population of vendors and customers. On the other hand, we don’t let tests run too long: with a very large sample, even tiny differences become statistically significant without being big enough to matter, so we wouldn’t be able to draw useful conclusions from the test.
Without going too deep into the statistics, we use what’s called a power analysis to determine how many impressions we need before running the test, and we stop the test once we reach that number. (For more details on the statistics, see Determining Sample Size, How Not to Run an AB Test, AB Test Significance Calculator.)
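For readers curious what a power analysis looks like in practice, here is a sketch of the standard sample-size formula for comparing two proportions. The 5% baseline, 10% relative lift, and 80% power are illustrative assumptions, not ClickBank's actual parameters:

```python
from math import ceil
from statistics import NormalDist

def impressions_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Impressions needed per variant to detect a relative lift in
    conversion rate, using the two-proportion sample-size formula."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = nd.inv_cdf(power)           # power requirement
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Detecting a 10% relative lift from a 5% baseline at 80% power:
n = impressions_per_variant(0.05, 0.10)  # roughly 31,000 per variant
```

The formula makes the trade-off concrete: detecting a smaller lift, or starting from a lower baseline conversion rate, requires substantially more impressions, which is why the length of a test depends on the segment being tested.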
What are our plans for the order form going forward?
We plan to continue to test, implement our changes and optimize the experience going forward. Don’t be surprised to see small and sometimes large changes as we determine what works best for our customers and for our vendors.
Any other questions?
Now that you understand our process and the philosophy behind split testing, do you have any further questions? If so, please let us know in the comments on this post. We’ll answer the most common questions in a follow-up blog post.