
Conversion rates and A/B testing

So what is A/B testing, anyway?

A/B testing springs from old-fashioned direct marketing. Mailers would have a mailpiece that worked well, but wanted to see if they couldn’t improve on it a little bit. So they changed just one variable, maybe the snipe (“Your free gift inside!”) or the offer, and tested the new piece against the control. Because they changed just one variable, they could scientifically measure not only whether the change made the mailpiece perform better than the control, but also what about the new mailpiece made the difference.

Let me take one last minute to distinguish between A/B, multivariate and split-path testing, since I’ve mentioned them all. A/B is when you test one new creative against the current creative. Maybe you just change the headline on a test page. When doing A/B testing, never forget what your 8th grade science teacher taught: if you change more than one variable, you don’t know which one was responsible. Multivariate is when you change a bunch of elements on the test page, and then use statistical analysis to figure out which elements mattered. (I will do another post on this if anyone is clamoring for more…) Finally, split path is just the technology that sends some visitors to one test page when they click on a link, and everyone else to a different test page when they click on the same link. You can do 3-way or 4-way or nth-way tests, but you generally need more traffic so that your sample size is large enough to mean something.
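To make the sample-size point concrete, here is a minimal sketch (mine, not from any testing product) of the standard two-proportion z-test for deciding whether a test page really beat the control. The visitor and conversion counts are made-up numbers for illustration.

```python
import math

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert at a
    significantly different rate than control A?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 2.0% vs 2.5% conversion, 10,000 visitors per page
z = ab_z_test(200, 10000, 250, 10000)
print(round(z, 2))  # prints 2.38; |z| > 1.96 is significant at the 95% level
```

Run the same numbers with only 1,000 visitors per page and z drops well below 1.96, which is exactly why small-traffic sites struggle to get meaningful test results.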

(MarketingSherpa did a very interesting piece on testing multiple variables with a small audience size. I will dig it out when I get back from the UIE Usability conference.)


Robbin Steif

About Robbin Steif

Our owner and CEO, Robbin Steif, started LunaMetrics ten years ago. She is a graduate of Harvard College and the Harvard Business School, and has served on the Board of Directors for the Digital Analytics Association. Robbin is a recent winner of a BusinessWomen First award, as well as a Diamond Award for business leadership.


2 Responses to “Conversion rates and A/B testing”

Dave @ SiteSpect A/B Testing says:

Nice capsule summary of the testing methods that are out there. And of course, I’m always happy to be a muse for a good blog piece :)

A few other tidbits to throw into the ring:

- basic A/B testing is great as a stepping stone towards more advanced testing. “Walk then run” is a nice way to learn. But, it’s also good for advanced testers who understand very specifically what question they want to answer. So don’t think of A/B as “less effective” – sometimes it’s the best tool for the job.

- multivariate testing (aka MVT, multivariable or multifactor testing) is particularly strong at identifying which site elements (“factors” in DOE speak) influence a response vs. those that don’t. For example, does the color of the Add To Cart button contribute more/less to behavior than the size of the button, or the button’s placement in overall site layout? Although MVT can tell you things that a single A/B test cannot, results can be more difficult to interpret and there are generally more caveats. This is particularly true with fractional testing (e.g. Taguchi, among others) where one tries to infer an optimal combination of variations, even if that combination was not actually tested.
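Dave’s point about fractional testing comes down to combinatorics: the number of cells in a full-factorial test is the product of the variations per element. A quick sketch (the factor names are hypothetical, and this is not SiteSpect’s implementation):

```python
from itertools import product

# Hypothetical site elements ("factors" in DOE speak) and their variations
factors = {
    "button_color": ["green", "orange", "blue"],
    "button_size": ["small", "large"],
    "placement": ["top", "sidebar", "inline"],
}

# A full-factorial multivariate test fills every combination with traffic...
full_factorial = list(product(*factors.values()))
print(len(full_factorial))  # prints 18 (3 * 2 * 3 cells)

# ...which is why fractional designs (Taguchi arrays and the like) run only
# a subset of cells and infer the best combination, at the cost of the
# extra caveats mentioned above.
```

Each added element multiplies the cell count, so the traffic needed for a full-factorial design grows very quickly.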

Keep up the blogging Robbin… and happy testing!

Dave @ SiteSpect

Matthew Roche says:

Another simple truth about testing is that more is better.

Some tests will wow you, some will intrigue you, some will infuriate you!

But the best approach we have seen at Offermatica is to not “pre-test”. This means don’t sit around a table and decide which ideas are “not worth testing” and which are.

The best users of Offermatica know that good test design yields much better knowledge, but the only truly bad test is one you don’t run.

More test ideas and musings at siteisdead.com