As I started doing this for clients, I was often left scratching my head in disbelief. Here I am running a test, A then B, same traffic flow, etc., and I find that, say, B is better. I then implement the change. Sounds good, right? Uh, no...
I was finding that, some time after implementing a tested change, there was no sign of the improved conversion the test had predicted, and in some cases there was a LOSS, down to the level the A variant had indicated.
As many of you know, when you run into an issue with CRO, it's not like you can type "CRO" into a search engine and hope to find an answer... unless the answer includes listening to some European music group! hahaha
So I am retesting and retesting, and the results are jumping all over the place. At this point I'm at my wits' end, so I decide to do the unthinkable and run an A/A test. And I kid you not, right after I start the test (as in the first A), I am reading some article, click on this and that, and end up on a Neil Patel article describing A/A testing. (I was going to share the link, but I can't find the article hahaha) All literally within 15 minutes of starting the A/A test in desperation.
Basically, the principle here is to validate the page before you test the page, ensuring the results from your test will in fact BE valid.
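To make that concrete: in an A/A test both "variants" are the identical page, so a healthy testing setup should report no significant difference between them. Here's a minimal sketch of how you might sanity-check an A/A result with a two-proportion z-test; the visitor and conversion counts are made up for illustration, and the function name is my own, not from any testing tool.

```python
# Sketch of validating an A/A test: both arms are the SAME page, so
# we expect the difference in conversion rate to be statistically noise.
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for the difference
    between two conversion rates, using a pooled normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative A/A numbers: 50/1000 vs 48/1000 conversions.
z, p = two_proportion_z_test(50, 1000, 48, 1000)
# If p comes back under ~0.05 on an A/A test, something in the
# setup (software, traffic split, timing) deserves suspicion.
```

If your A/A test keeps flagging "significant" differences between two copies of the same page, that's your benchmark telling you the results of any real A/B test on that page can't be trusted yet.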
Like many here, I'm sure, I am testing my own pages, and like me you probably look at any page you have and kinda know what the traffic across that page is. More specifically, you know that Sunday is its lowest day and, say, Tuesday is its best day. And sure, you can see this stuff in analytics, but it's really not something you would look at or consider... or at least I didn't. 1000 visitors across a page is 1000 visitors across a page, right?
Since I started A/A testing, I am finding that 1000 is not ALWAYS 1000, and that there are other variables at play. I am finding the day of the week specifically can affect the outcome, and in some cases the time of day seems to be a variable too. Well, let me rephrase that... I'm not finding... I am guessing at this point that those are possible indicators. (More testing required! hahaha)
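A tiny back-of-the-envelope sketch of why "1000 is not always 1000": if two identical variants get a different mix of weekday traffic, their blended conversion rates will differ even though the page is the same. All the numbers here (the Sunday/Tuesday rates, the traffic mixes) are made up purely to illustrate the idea.

```python
# Hypothetical day-specific conversion rates (made-up numbers).
day_rate = {"Sun": 0.03, "Tue": 0.06}

def expected_rate(mix_tue):
    """Blended conversion rate given the fraction of a variant's
    traffic that arrives on Tuesday (the rest arrives on Sunday)."""
    return mix_tue * day_rate["Tue"] + (1 - mix_tue) * day_rate["Sun"]

rate_a = expected_rate(0.3)  # variant A: 30% of its visitors on Tuesday
rate_b = expected_rate(0.7)  # variant B: 70% on Tuesday -- identical page!
# rate_b comes out meaningfully higher than rate_a from traffic mix alone,
# with zero actual difference between the pages.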
I just found the article: What Spending $252,000 On Conversion Rate Optimization Taught Me. Point #2 specifically is where Neil gets into that. I forgot that he attributes it to bad software, but I think the principle of setting a benchmark regardless still applies.
Hope that Helps someone!
PS thanks Neil!!!!