A Tiny (Yet Interesting) Case Study Of Mine...

13 replies
So, 48 hours ago I sent an email to my subscribers for the first time. It was for a new PLR-related venture, and the email was identical across all three of my segments and went out at the same time (around 1 p.m. EST). It was several paragraphs long, simply explaining a restructuring I'm doing, that we'll be back live in several weeks, and that wonderful things are coming.

My three segmented lists are: purely from on-page opt-ins (A), from a free WSO (B), and from a $1 WSO (C).

The open rates, 48 hours later, are as follows:

(A) = 28.3%
(B) = 20.3%
(C) = 36.0%

Takeaway: paid ($1 WSO) opt-ins opened at 36.0% versus 20.3% for free WSO opt-ins - roughly 75% more likely to read your emails, at least on this send.
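
If you want to sanity-check that comparison, here's a minimal sketch in Python of the arithmetic, using only the percentages above. The segment sizes aren't factored in, so this is just the relative lift, not any kind of significance test:

```python
# Open rates reported 48 hours after the send, by segment.
open_rates = {
    "A (on-page opt-ins)": 0.283,
    "B (free WSO)": 0.203,
    "C ($1 WSO)": 0.360,
}

paid = open_rates["C ($1 WSO)"]
free = open_rates["B (free WSO)"]

relative_lift = (paid - free) / free      # ~0.77, i.e. roughly 75-80% more likely to open
point_difference = (paid - free) * 100    # ~15.7 percentage points

print(f"Relative lift of paid over free: {relative_lift:.0%}")
print(f"Difference: {point_difference:.1f} percentage points")
```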
  • Well, as you say, that's just one result. You'd have to test more extensively to be sure.

    But yes, that's one of the things that make you sit up and pay attention. And it's true that in most niches, freebie seekers don't convert well.

    fLufF
  • RHert
    It's been proven that when someone has paid for something, they're much more likely to continue with it and buy more products later. This is why people sell CDs with the information on them and only charge shipping and handling: the CD costs very little, but once someone has already gotten their credit card out, they're much more likely to keep using it.
  • Kelly Verge
    You used the word "tiny" in the title. What was your sample size? How big is each segment?

    Results from sample sizes under 1,000 are very volatile. The larger the sample size - even well past 1,000 - the more accurate the "case study."
  • Ross Cohen
    Indeed, they're volatile given the low numbers - each segment was in the 75-200 range. But even with their small size, they can still prove a valid point, right?
    • Shaun OReilly
      Originally Posted by Ross Cohen:
      Indeed, they're volatile given the low numbers - each segment was in the 75-200 range. But even with their small size, they can still prove a valid point, right?
      You need at least 30 completed actions on each of the three options for your figures to have even slight relevance. So that's at least 30 clicks from each of your list segments.

      However, the longer you run the test and the bigger the sample size, the more reliable your results will be.

      That said, buyers do tend to open more e-mails, and they're also much more likely to buy from you again - on average.
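
      To make the sample-size point concrete, here's a rough Python sketch. It does not use Ross's actual data - it just assumes hypothetical segment sizes at the ends of the 75-200 range he mentioned, each showing a 36% observed open rate - to show how wide the uncertainty still is:

```python
import math

def wald_interval(opens, n, z=1.96):
    """Approximate 95% confidence interval for an open rate
    (normal approximation - rough at small n, but fine for illustration)."""
    p = opens / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Hypothetical segment sizes at the ends of the 75-200 range,
# each assumed to show a 36% observed open rate.
for n in (75, 200):
    low, high = wald_interval(round(0.36 * n), n)
    print(f"n={n}: observed 36%, 95% CI roughly {low:.1%} to {high:.1%}")
```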

      Dedicated to mutual success,

      Shaun
  • Kelly Verge
    "Prove?"

    Even with LARGE sample sizes, statistics really only suggest trends. The larger the sample, the more accurate the prediction.

    You're probably right in your supposition, but the numbers are really too small to say how much better one list will respond every time.
  • Ross Cohen
    Yes, yes - statistically, according to researchers, you need this many tests, with this big a sample size, run this many times - I know...

    But since this was my first email to these lists, with a rather non-spammy, "real" subject line, I feel the following emails may get relatively close results, even though these numbers certainly aren't 100% exact. Perhaps a 10-15% swing in either direction, but the results of this initial email certainly aren't random or invalid by any means.
    • Shaun OReilly
      Ross, I think it's great that you're doing something that most Internet Marketers simply don't do - and that's TEST.

      However, you need to interpret the data accurately for the results to have any meaning.

      I've run tests where the early 'winner' gets beaten once the test runs for a longer period and statistical relevance kicks in.

      Being a former Mechanical Engineer, I have a scientific approach to Internet Marketing.

      Dedicated to mutual success,

      Shaun
      • Kelly Verge
        Originally Posted by Shaun OReilly:
        Being a former Mechanical Engineer, I have a scientific approach to Internet Marketing.
        Quality Assurance is a significant part of my background (management representative through an ISO 9000 implementation and certification process), and I also tend to think from that perspective (process/measurement/improvement).

        Simple analogy:

        Say you flip a coin 50 times and it comes up heads 30 times. You could assume that the odds are 60%. As you said, +/- 10-15% would get you in the ballpark. However, in 50 flips it's also possible, albeit somewhat less likely, that you'd get heads 15 times. That result would skew your perspective.

        The more times you flip the coin, the more accurate your analysis. The odds of the coin coming up heads will even out over a large enough sample.
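
        Just to illustrate, here's a tiny simulation sketch of that coin-flip point. It doesn't touch the actual list data; it only shows how much an observed rate can wander at small sample sizes:

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

# Flip a fair coin (true heads rate 50%) in batches of 50 and of 5,000,
# ten times each, and see how far the observed rate drifts from the truth.
for flips in (50, 5000):
    rates = [sum(random.random() < 0.5 for _ in range(flips)) / flips
             for _ in range(10)]
    print(f"{flips} flips: observed heads rate ranged from "
          f"{min(rates):.0%} to {max(rates):.0%} across 10 runs")
```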

        I can't count the number of times people have posted one of two scenarios here: 1) "I've had 100 visitors and haven't made a sale. What am I doing wrong?" or 2) "I made 3 sales in the first 50 visitors - I have a HOME RUN!"

        That's the problem with feelings. They can interfere with the data.

        Ross, keep building your lists and keep testing. You're doing things right. I just wanted to clarify the statistics.
    • Bill Farnham
      Hi Ross,

      First off, thanks for sharing that.

      When it gets down to the data being relevant, you'd also want to see a variety of headlines mimic the current results for it to have statistical importance.

      Because you could take those same lists, throw a different headline at them, and get different results.

      And remember, there are only TWO types of people in this world...those that open emails, and those that don't. :p

      ~Bill
  • Bruce NewMedia
    I also think you may find that your open rates on all three subgroups decline considerably from those percentages as you keep mailing.
    Bruce
  • IMHunter
    Yeah! Paid customers actually buy using their real information. Free opt-ins, or subscribers from free WSOs, might have used throwaway email addresses just to download the free product.
  • Ross Cohen
    I disagree, Bruce. I believe it all depends on how I treat them. If they regularly receive junk emails from me, then yes, they'll start leaving. But what if every time I email them, it's about stuff they really, really care about?
