Your opinion in this PPC strategy

4 replies
  • PPC/SEM
Hey fellow warriors,

I'm designing a setup for PPC campaigns aimed at continuous results optimization, and I'm interested in hearing the opinions of other experienced digital marketers.

One of these ideas involves setting up "exploration" campaigns on Facebook that test 2 or 3 variations of the same ad for A/B testing purposes.

For example:

1. I set up a campaign targeting a single audience with 3 variations of one ad (different images, text).
2. I run the campaign for 7 days to find which variation gets the best results.
3. I set up a second campaign with the winning ad from the first test, but now I test different audiences.
4. I keep testing variables until I'm satisfied with a combination of elements.
5. Then I start my main PPC campaign with the results from the exploration campaign.
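The exploration loop above can be sketched in miniature (a hypothetical illustration in Python; the variation names and click/impression counts are made up, not real campaign data):

```python
# Minimal sketch of the "exploration" step: given click/impression
# counts per ad variation, pick the best performer by CTR.
# All names and numbers here are hypothetical placeholders.

def ctr(clicks, impressions):
    """Click-through rate as a fraction."""
    return clicks / impressions if impressions else 0.0

def pick_winner(variations):
    """Return the name of the variation with the highest CTR."""
    return max(variations, key=lambda name: ctr(*variations[name]))

# (clicks, impressions) per variation after the exploration run
exploration = {
    "image_a": (24, 1900),
    "image_b": (31, 2100),
    "image_c": (18, 2000),
}

winner = pick_winner(exploration)  # the ad to carry into the next test
```

The winner of one round becomes the control for the next round (audiences, then other variables), exactly as in the numbered steps.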

I believe this path can yield better results than the standard approach of building what you think is the best possible ad and then settling for its results long term.

My doubt here is: what would be a good duration for each test? I'm thinking 7 days because Facebook says that's what the platform needs to run its auctions precisely, but maybe a test can give reliable results in less time.

How many days would you test for in each A/B test?

Cheers,
Gerard.
#ab test #facebook ads #opinion #ppc #strategy
  • Hello Gerard.

    >> 1. I set a campaign to the same audience with 1 ad with 3 different variables (images, text).

    That's not one ad, that's three. You should never test just one ad at a time: always two, and never more, although I've been known to break that rule occasionally.

    You test only two ads so that both get shown to roughly the same kind of audience. You don't want to dilute the stats, which would make your test meaningless.

    >> 2. I run the campaign for 7 days to find which ad had the best results.

    It's not the time that is important. It's the clicks. For each ad.

    If you run for 7 days (that's a week, like the song says) and get just five clicks per ad, that's not enough. My standard is a minimum of 20 clicks per ad, and more is better, regardless of how long it takes to get them, although some other stats may make me break that rule.

    >> 3. I set up a second campaign with the best result from the first test, but now I test for different audiences.

    I don't know what you are selling, but normally a product has just one audience. If not, it will have one main audience, which is the one you should be concentrating on.

    >> 4. I keep testing for variables until I feel satisfied with a combination of elements.

    Yes, keep testing different things. But never stop. I've had one client for four years, and in some high-volume ad groups I've tested dozens of ads and still try to come up with something better. You don't see Coke and McDonald's running the same ads they did years ago, do you?

    >> 5. Then I start my main PPC campaign with the results from the exploration campaign.

    What you describe is your main campaign: the one campaign in which you keep testing different variables.


    >> I believe this path can yield better results

    Yes, but you still have to do it right. I've seen many people do exactly what you suggest. The problem is they did not test properly: they used virtually the same elements for all their ads, such as keeping the same text across 20 different ads and expecting different results.

    As I said, you should never stop, even if you have the best possible ad. There's always something better.

    There are new features, too, that you have to take advantage of. AdWords, for instance, introduced callout ad extensions late last year. Skeptical at first, I saw that they improved click-through rates. But my ads very often already used that idea, so I had to rework them to take callouts into account. The callouts were based on what I knew worked, but I could not have a callout repeat what the ad itself says. This sort of thing happens whenever there's a new feature: you have to think about the impact and how to take advantage of it.
    • mktmaverick
      Hello LucidWebMarketing,

      Thank you for your reply; that was a thorough analysis and I appreciate your thoughts.

      From #2, I think I get you: you're using a certain number of clicks as your comparison point, and time is what varies, so you measure how long it takes to get those 20 clicks, for example. Is that right?

      I've been collecting a few KPIs (they vary depending on the campaign objective), for example CTR, CPC, reach, and engagement, over a 7-day period. I use those to evaluate performance.

      On #3: when I say audience, I'm thinking of the segmentation configuration used to reach that main target market. I agree: every business usually has between 1 and 3 target markets to focus on, or buyer personas, as the new inbound marketing crowd calls them.

      I agree on the last points. Yeah, those extensions are interesting. I'm actually happy with the fairly new audience tools for pixels in Facebook; those have helped me get some interesting results with custom segmentation.

      Thank you again for your words. I'm going to revise my KPIs with that time variable I was ignoring in mind.
  • You should base decisions on the number of clicks, not time. That's at least 20 for each ad being tested, and more is better. Time is irrelevant here.

    You may be thinking: why 20 or more? Because it's more statistically significant. Think of it this way. If Ad1 were at 5/40 (12.5% CTR) and Ad2 at 6/50 (12%), that difference is not statistically significant: if Ad2 had just one more click, its CTR would jump to 14%, so the comparison tells you nothing. But if the numbers were ten times greater, 50/400 and 60/500, adding that one extra click would only move Ad2 to 12.2%, so you could be more confident that Ad1 really is the better one, if only marginally, over the long run. I'm of course looking only at CTR here and not considering conversions.

    Another way to look at it: with only 20 clicks, one extra click shifts the rate by a full five percentage points, so the margin of error is large and the significance low.
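    The arithmetic above is easy to verify (a small Python sketch using the same hypothetical click/impression numbers from this reply):

```python
# How much one extra click moves CTR at small vs. large sample sizes,
# using the hypothetical Ad2 numbers from the reply above.

def ctr(clicks, impressions):
    """Click-through rate as a fraction."""
    return clicks / impressions

# Small sample: one extra click swings Ad2 from 12% to 14%.
small_before = ctr(6, 50)    # 0.12
small_after = ctr(7, 50)     # 0.14

# Ten times the data: the same extra click barely moves the rate.
large_before = ctr(60, 500)  # 0.12
large_after = ctr(61, 500)   # 0.122
```

    At the larger sample, a single click shifts the rate by 0.2 percentage points instead of 2, which is why the comparison against Ad1's 12.5% becomes trustworthy.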
    • mktmaverick
      LucidWebMarketing:

      Well, I like your technique. It settles the timing question, and a test like that could probably be run in a day, which might make this optimization strategy more agile. Thank you!
