I did a lot of A/B testing in the past, especially with paid advertising and clicks.
But it's difficult to do that with lower traffic and a major conversion event (e.g. a purchase).
Instead I started doing what I call A-then-B testing.
In A-then-B testing, you measure your results right now (your A, the control).
Then you make a change (your B).
Let it run for a while.
Then measure the results of the change (B).
Then compare A to B. Were things better or worse? By how much?
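If it helps to see the arithmetic, here's a quick sketch in Python with made-up numbers (swap in your own figures from your reports):

# A-then-B comparison with hypothetical numbers
a_orders, a_sessions = 48, 3200    # period A (control), before the change
b_orders, b_sessions = 61, 3350    # period B, after the change

a_rate = a_orders / a_sessions     # conversion rate during A
b_rate = b_orders / b_sessions     # conversion rate during B
change = (b_rate - a_rate) / a_rate

print(f"A: {a_rate:.2%}  B: {b_rate:.2%}  change: {change:+.1%}")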
If other things happened during that time, say an event or outage or anything else that could mess with the data, you might have to restart the test and try again.
This process won't give you statistically significant results.
It's not a pure test.
It has many flaws.
It might leave you with no clear winner.
But it does let you try out changes when there's no other way. Especially ones you aren't 100% sure about.
A big part of this process is being fully willing to go back to the original if the change doesn't produce results, or doesn't produce the results you want. If you're going to keep B even when it underperforms, that's not really conversion optimization. As a mentor taught me a long time ago: "You're just changing shit".
Which is fine, just be honest with yourself and the results.
Sometimes I'll even try an A-then-B-then-A test.
That's where I go back to the original setup (A, the control) after testing a change. That gives me a chance to collect more data and see if it changes my confidence in the results. If B still looks good, I'll go ahead and (re)apply that change and keep it.
It takes longer, but it sometimes pays off by catching more of the test's flaws.
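Again with hypothetical numbers, the comparison just gains a second control period. If both A periods look alike and B stands apart, that's a little more confidence that the change, not something else, moved the number:

# A-then-B-then-A comparison with hypothetical numbers
periods = {
    "A1 (control)": (48, 3200),   # (orders, sessions) before the change
    "B  (change)":  (61, 3350),   # while the change was live
    "A2 (control)": (50, 3280),   # after going back to the original
}

for name, (orders, sessions) in periods.items():
    print(f"{name}: {orders / sessions:.2%} conversion rate")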
Whatever testing method you do, please make sure to measure the results. Use Shopify's reports. Use Google Analytics. Use my app. Use a sheet of paper.
Just record them somehow.
Eric Davis
When are your best customers defecting?
Are your best customers defecting? Use Repeat Customer Insights to find out where in their lifecycle you're losing them and what you can do to win them back.