Do you do A/A/B tests?

8 replies
I've seen a lot of discussion about doing A/A/B tests in order to differentiate the "noise" from actual increase or decrease in conversions. (See: A/A testing - Julia Evans.)
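
Just to make the setup concrete, here's a rough sketch (hypothetical names, nothing tool-specific) of how visitors get split in an A/A/B test: two buckets serve the identical control page, so any "difference" between them is pure noise, while B gets the actual variation.

Code:
# Hedged sketch: hash each visitor into one of three buckets. A1 and A2 serve
# the identical control page; B serves the variation. Names are made up.
import random

def assign_bucket(visitor_id, seed="campaign-1"):
    rng = random.Random(f"{seed}:{visitor_id}")  # deterministic per visitor
    r = rng.random()
    if r < 1/3:
        return "A1"   # control, copy 1
    elif r < 2/3:
        return "A2"   # control, copy 2 (identical page)
    return "B"        # the actual variation

for vid in ["u1001", "u1002", "u1003"]:
    print(vid, assign_bucket(vid))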

Have you guys had much experience with this?
#a/a/b #tests
  • beyond11
    Optimizely and usabilityhub are great A/B testing tools to improve your website's user experience and enhance conversion rates.
    Signature

    Professional webshop management for Hungarian businesses. | High-quality website development in the Budapest area. | Check out the SEO references site.

  • BrentSkillHD
    Always be A/B testing. I usually send 90% of the traffic to my current winner and 10% to whatever I'm testing (roughly like the sketch below).
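    In case it helps, a weighted split like that might look something like this (hedged sketch, not tied to any particular tool; names and weights are illustrative):

    Code:
# Hedged sketch: send ~90% of visitors to the current winner and ~10% to the
# challenger. Variant names and weights are illustrative only.
import random

VARIANTS = [("winner", 0.9), ("challenger", 0.1)]

def pick_variant():
    r = random.random()
    cumulative = 0.0
    for name, weight in VARIANTS:
        cumulative += weight
        if r < cumulative:
            return name
    return VARIANTS[-1][0]

counts = {"winner": 0, "challenger": 0}
for _ in range(10_000):
    counts[pick_variant()] += 1
print(counts)  # roughly 9000 / 1000

    The trade-off is that the 10% arm takes much longer to collect enough conversions to reach significance.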
    • savidge4
      I have run into a few cases where the numbers just didn't seem right. I went with the results anyway, and they were big losers. The first time it hit me, I think I re-ran the test something like 4 times and got basically 4 different results. That's when I ran an A/A test and saw the numbers were skewed.

      If time allows, I now run A/A and then A/B almost across the board. It is amazing, now that I am doing this, how often there is a bit of a skew in the numbers.
      Signature
      Success is an ACT not an idea
      • ninjaking
        Originally Posted by savidge4

        I have run into a few cases where the numbers just didn't seem right. I went with the results anyway, and they were big losers. The first time it hit me, I think I re-ran the test something like 4 times and got basically 4 different results. That's when I ran an A/A test and saw the numbers were skewed.

        If time allows, I now run A/A and then A/B almost across the board. It is amazing, now that I am doing this, how often there is a bit of a skew in the numbers.
        This suggests that you didn't achieve statistical significance with your tests.
        • savidge4
          Originally Posted by ninjaking

          This suggests that you didn't achieve statistical significance with your tests.
          Oh no, they reached statistical significance. In the A/A test I ran, there was statistical significance: not that the two sides were equal, but that one was a "better" option than the other. And this was a test with 1,000 impressions per side over just under 48 hours.

          I have finally narrowed the likely culprit down to two major traffic sources to the site. Charting the two sources, I could see inconsistencies in their flow, and they are clearly pulling separate demographics.
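          A quick way to spot that kind of skew (rough sketch; the CSV and column names here are hypothetical) is to break conversion rate out per traffic source per variant before looking at the overall numbers:

          Code:
# Hedged sketch: break results out by traffic source before comparing
# variants. The file and column names ("source", "variant", "converted")
# are hypothetical.
import pandas as pd

df = pd.read_csv("test_results.csv")  # one row per visit

rates = (
    df.groupby(["source", "variant"])["converted"]
      .agg(["mean", "count"])
      .rename(columns={"mean": "conv_rate", "count": "visits"})
)
print(rates)
# If one source converts very differently and its share of traffic shifts
# mid-test, the combined A/B numbers can skew even with no real winner.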
          Signature
          Success is an ACT not an idea
  • ethanalvin
    I think it all boils down to your sample size. If it is substantial, there isn't strictly a need to run an A/A test (though it's still good practice).

    Because digital marketing is dynamic and effectively a multivariate environment, an A/A/B test is still a good way to get a read on the noise.
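    For a rough idea of what "substantial" means, the standard two-proportion formula can be sketched out like this (the baseline rate and lift below are just example numbers):

    Code:
# Hedged sketch: approximate visitors needed per variant for a two-sided
# test at 95% confidence and 80% power. Baseline and lift are examples only.
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)          # two-sided significance
    z_beta = norm.ppf(power)                   # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# e.g. a 3% baseline conversion rate, hoping to detect a lift to 4%
print(round(sample_size_per_variant(0.03, 0.04)))  # roughly 5,300 per variant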
    Signature
    Marketing with Alvin
    Learn Actionable Digital Marketing Tips: Social, Wordpress, Email & Content Marketing.
  • a123
    The idea here is that the duplicated A or B treatments somehow provide a measure of the accuracy of an A/B split test: if the difference between A and A, or between B and B, is statistically significant, then we consider the test flawed and discard its results.

    This approach runs into issues, though. It greatly increases the chance of a false positive because of the multiple comparisons problem. The more A/A/A/A/A or B/B/B/B/B variations you test, the more likely it is that one of them registers a statistically significant difference at your preferred level. So not only does it fail to give us any information about the accuracy of our split testing procedure, it is also quite likely to fool you into discarding perfectly fine results and wasting even more money on tests that generate no new insights.
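    A quick simulation makes the multiple-comparisons point concrete (rough sketch; the traffic numbers are arbitrary). With several identical "A" arms, the chance that at least one pairwise comparison looks significant at p < 0.05 climbs far above 5%:

    Code:
# Hedged sketch: simulate repeated A/A/.../A tests where every arm has the
# SAME true conversion rate, and count how often at least one pairwise
# comparison comes out "significant" at p < 0.05. Numbers are arbitrary.
from itertools import combinations
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n_arms, visits, true_rate, runs = 5, 2000, 0.03, 2000

false_alarms = 0
for _ in range(runs):
    conversions = rng.binomial(visits, true_rate, size=n_arms)
    flagged = False
    for i, j in combinations(range(n_arms), 2):
        table = [[conversions[i], visits - conversions[i]],
                 [conversions[j], visits - conversions[j]]]
        _, p, _, _ = chi2_contingency(table)
        if p < 0.05:
            flagged = True
            break
    false_alarms += flagged

print(f"At least one 'significant' pair in {false_alarms / runs:.0%} of runs")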
  • Mrsparrow
    I ran an A/A test last year and the results were quite similar for each version and no winner could be declared:
    - Control: 9625 visits and 32 conversions
    - Variation: 9195 visits and 34 conversions

    Here's a screenshot.
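
    For anyone who wants to sanity-check those numbers, a chi-square test on that table comes nowhere near significance (quick sketch, assuming scipy is available):

    Code:
# Hedged sketch: test the A/A numbers above for a significant difference.
from scipy.stats import chi2_contingency

# [conversions, non-conversions] per version, from the figures in this post
table = [[32, 9625 - 32],    # control: 9625 visits, 32 conversions
         [34, 9195 - 34]]    # variation: 9195 visits, 34 conversions

chi2, p, _, _ = chi2_contingency(table)
print(f"p-value: {p:.2f}")   # well above 0.05, so no winner, as expected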
    Signature
    Your traffic not converting?
    Omniconvert - the complete CRO tool
