This week I was reading a conversation from a self-described "conversion expert". They were claiming that some optimizations made to a Shopify store were invalid because the store didn't run an A/B test and wait for statistical significance.
What a line of BS.
Statistical significance is jargon for "the result is probably not due to random chance."
It doesn't tell you how good or bad a result is, only that it's valid. You can have a statistically significant test that results in a major drop in, say, conversion rates.
(A quick rule of thumb I use for statistical significance on websites: enough people have gone through the test for the results to be valid.)
The vast majority of conversion optimization changes are aimed at improving conversion rates. Since conversion rates are small (single-digit or low double-digit percentages), reaching statistical significance requires a high level of traffic in a short period of time. That's probably out of reach for most stores.
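To see why the traffic requirement gets so big, here's a rough sketch using the standard two-proportion z-test sample size formula (the baseline rate and lift below are made-up numbers, not from any real store):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(base_rate, lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a relative
    lift in conversion rate with a two-sided two-proportion z-test."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # value for desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# A 2% baseline conversion rate and a hoped-for 10% relative lift:
print(sample_size_per_variant(0.02, 0.10))  # roughly 80,000 visitors per variant
```

At those numbers you'd need around 160,000 visitors total across both variants before the test could be trusted, which is why most stores can't run proper A/B tests.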
Does that mean you can't do conversion optimization without massive traffic levels? No.
It just means you won't be sure whether your specific optimization caused the result or something else did.
It does mean you'll have to make optimizations more carefully than if you had an A/B test running. You'll need to research, make a change (the test), and give the change time to produce a result.
That means reading up on optimizations, getting advice from multiple sources (often conflicting), trying out the ones that sound the best, and measuring how things change.
The better research and advice you get, the better your chances of hitting on a winning combination.
The clearer your measurements, the easier it'll be to know there's a winner.
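As a sketch of what that before/after measurement looks like in practice (the order and visitor counts below are hypothetical):

```python
def conversion_rate(orders, visitors):
    """Plain conversion rate: orders divided by visitors."""
    return orders / visitors

# Hypothetical numbers for the period before and after a change:
before = conversion_rate(180, 9000)    # 2.0%
after = conversion_rate(230, 10000)    # 2.3%
change = (after - before) / before

print(f"before {before:.1%}, after {after:.1%}, relative change {change:+.0%}")
# prints: before 2.0%, after 2.3%, relative change +15%
```

Without a formal test you can't prove the change caused the lift, but a clean before/after comparison over equal periods is far better than guessing.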
Just don't feel like you only have two extremes: massive traffic with A/B tests vs. changing random things and seeing what sticks.
It's also helpful to start a test on a clear date: either a set day of the month (like the 1st) or the start of a specific day of the week. That'll let you more easily compare the before and after results.
I like the first of the month, especially when you're segmenting and measuring cohorts like in Repeat Customer Insights. That makes it really clear which period the results belong to.