Split Tests - What's the Maximum Number A Person Should Run on a Single Page?

3 replies
What's the maximum number of split tests a person should run on a single page at the same time before it becomes too many?

Thanks,

Ike
  • Quite frankly, I wouldn't run more than one or two at any given time unless your traffic is huge. And make sure to split test the important stuff (like price) before you drill down into the minor details (like the color of the buy button).
  • In my opinion, you should only run one at a time.

    If you make two changes at the same time and your conversions increase, there's no way of knowing whether it was element (a) or element (b) that made the difference.

    The only exception I can see is a very high-traffic page, but even then I'd suggest you're better off running each test for a shorter amount of time rather than running more tests at once.

    Thom
  • Andy Fletcher:
    It's possible to test as many variables as you like using split or multivariate testing; however, the more things you test at the same time, the longer it'll take to reach "significant" levels of traffic.

    "Significant" is the most important word in that last sentence. Significance isn't a guesswork term; it's a statistical term referring to whether your test has received enough traffic that you can trust the results you're seeing.

    For a completely naive test you could show alternating visitors different sales pages and declare whichever page makes the first sale the "better" one, but you'd have no way of knowing how much random chance played into that sale.

    Let's compare that test to flipping a coin. We could say that whichever result comes up (heads or tails) is the "better" result, when in fact the two outcomes are equally likely.

    Or we could roll a six-sided die, split the results into two groups, 1-2 and 3-6, and pick the "better" group. In this case the most likely outcome (3-6) will come up about 67% of the time, but the test will still point to the wrong group about 33% of the time.
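    To make that concrete, here's a rough simulation sketch in Python (mine, not part of the original post; the setup is purely illustrative) of that single-roll die test:

    import random

    # Naive "one result decides the winner" test with a die:
    # outcome A wins on a roll of 1-2, outcome B wins on a roll of 3-6.
    trials = 100_000
    b_wins = sum(1 for _ in range(trials) if random.randint(1, 6) >= 3)

    print(f"B (the more likely outcome) wins {b_wins / trials:.1%} of single-roll tests")
    print(f"A still 'wins' {(trials - b_wins) / trials:.1%} of them, despite being less likely")

    Run it and you'll see roughly a 67/33 split, which is exactly the error rate a "first sale wins" test leaves you exposed to.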

    So when are test results "significant"?

    If you want to really understand the maths, there's plenty of info out there on sites such as Wikipedia, but if you just want a rule of thumb to work to:

    Start with A/B split testing. You have a control A (your current best known sales page) and an experiment B (the sales page you think will be an improvement).

    Decide how many test sales you are going to make (49 is a low but good starting number).

    Then you have two results:

    X = the higher number of sales
    Y = the lower number of sales

    e.g. if A sold 21 units and B sold 28 units, X would be 28 and Y would be 21.

    Then you test whether X - Y is at least as large as the square root of X + Y.

    If it is, your results are "significant"; if it isn't, your results could still be correct, but you don't know for sure.

    So in our example B sold 28 units (X = 28) and A sold 21 units (Y = 21):

    X + Y = 49
    square root of 49 = 7
    X - Y = 7

    X - Y is exactly as large as the square root of X + Y, so the results are significant: we can say that sales page B is better, and it becomes the control page for the next experiment.
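    If it helps, here's a minimal Python sketch of that rule of thumb (the function name is just illustrative, not something from the post), reproducing the 28 vs 21 example:

    import math

    def rule_of_thumb_significant(sales_a: int, sales_b: int) -> bool:
        """Rough check: the gap between winner and loser should be at least
        the square root of the combined sales before you trust the result."""
        x, y = max(sales_a, sales_b), min(sales_a, sales_b)
        return (x - y) >= math.sqrt(x + y)

    # Worked example from above: A sold 21 units, B sold 28.
    # X - Y = 7 and sqrt(X + Y) = sqrt(49) = 7, so the threshold is just met.
    print(rule_of_thumb_significant(21, 28))  # True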

    Lather, rinse, repeat.
