This week I’ll be covering the learnings I got from the A/B testing mastery section by Ton Wesseling. I found the part about the history of A/B testing really interesting. I had no idea that the Dutch used experiments to understand which people got which diseases and so on. He also mentioned how A/B testing was used in traditional advertising and print media with coupon codes and direct mail. Web testing didn’t really start until 2006, with the introduction of Google Website Optimizer, but it didn’t take hold until 2010, when Optimizely and VWO came to market. Optimizely came to market with some employees who had worked on President Obama’s 2008 election campaign. In 2016 the industry became more mature, with personalization and more sophisticated A/B testing that went beyond the “Mickey Mouse” tests, as Peep likes to call them: tests that include apps and more than drag-and-drop testing functionality. A/B testing continues to evolve rapidly, and he speaks about some of the pitfalls he fell into over his years as a practitioner in the industry.

A/B testing is commonly seen as a big silver bullet for companies. Companies in recent years have optimized for efficiency. Most have moved away from a waterfall methodology, where everything builds up to one big release and you hope that what ships is good and revenues go up. We’ve learned through agile that this is not necessarily the best way to approach things. All of us have seen projects fail because of one big launch (see Fyre Festival). The agile approach really is the solution to this: take small steps, add another small step, and that compounds into serious growth. With small projects you’re still having fun, the energy is high, and there’s less risk of a failure being a huge one because of the smaller scope. A definition of efficiency that he gave was really great:

“Efficiency is getting more things done with the same resources.”

The big promise of A/B testing and experimentation is to put effectiveness on top. You want to know for sure that what you’re about to do has an impact, and that you’re making a move based on more than a few sprints, tests, or opinions, or worse, a waterfall method of decision making. You want hard evidence that what you’re doing is the right move. If you want to add effectiveness to your company and make sure you’re making the right decisions, better decisions, and trustworthy decisions, you need to adopt A/B testing into your company.

A/B tests in companies can be used for several reasons. The first is deployments. When you deploy something on your website, whether it’s for legal reasons, a new feature, an update, or whatever it is, you want to deploy it as an experiment. You could start by shifting only 5% of the traffic to the new version, then move on to 50%, all the way up to 95%, or split it like a proper A/B experiment. The point is to learn whether your deployment has a negative impact on the KPI you’re measuring. If there’s no negative impact, it’s a win. If it’s a flatline, that’s also a win. Go ahead and deploy it.
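To make that ramp concrete, here is a minimal sketch (my own illustration, not something from the course) of how a staged rollout could bucket visitors deterministically, so the same visitor keeps seeing the same version as you move from 5% to 95%. The function name and the stage percentages are just assumptions for the example.

```python
import hashlib

def in_rollout(visitor_id: str, rollout_percent: int) -> bool:
    """Hash the visitor ID into a stable bucket 0-99, so the same visitor
    always lands in the same bucket as the rollout percentage grows."""
    bucket = int(hashlib.sha256(visitor_id.encode("utf-8")).hexdigest(), 16) % 100
    return bucket < rollout_percent

# Ramp the deployment in stages. At each stage you'd watch the KPI you're
# measuring and only move on if there's no negative impact.
for stage in (5, 50, 95):
    exposed = sum(in_rollout(f"visitor-{i}", stage) for i in range(10_000))
    print(f"{stage}% rollout -> {exposed:,} of 10,000 simulated visitors see the new version")
```

Because the bucketing is deterministic, everyone exposed at 5% stays exposed at 50% and 95%, which keeps the experience consistent for returning visitors.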

The second reason you want to use A/B testing is for research. You can use it to build what he calls a “conversion signal map.” Say you have a specific product page with a picture, a few lines of copy, a button, and so on. You can run experiments that simply leave out elements. You’re not looking for winners here; you’re looking to see whether leaving out an element has an impact or no impact. Whether you see positive signals, negative signals, or flatlines doesn’t matter. You want to learn which elements are making an impact on the website. If you leave one out and nothing happens to conversion, with no significant difference between control and variation, then that element doesn’t really matter. If you leave it out and there’s a really big negative impact, then it’s a really important element, and you can pick that element to optimize on. If leaving an element out means the conversion rate goes up, then you’ve really optimized the website or the app, though that rarely happens. It’s mainly about the flatlines and the ones that go down.
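As a rough sketch of what reading those signals could look like (again my own illustration, not the course’s method), you could compare conversion with and without an element using a two-proportion z-test and label the result a flatline, a negative signal, or a positive signal. The threshold and the example numbers below are made up for illustration.

```python
from math import sqrt

def classify_signal(conv_control, n_control, conv_removed, n_removed, z_threshold=1.96):
    """Compare conversion with the element (control) vs. with it left out (variant)
    and classify the difference as flatline, negative, or positive."""
    p_c = conv_control / n_control
    p_r = conv_removed / n_removed
    p_pool = (conv_control + conv_removed) / (n_control + n_removed)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_removed))
    z = (p_r - p_c) / se
    if abs(z) < z_threshold:
        return "flatline: this element probably doesn't matter much"
    return ("negative signal: this element matters" if z < 0
            else "positive signal: this element may be hurting conversion")

# Made-up numbers: removing the product video barely moves conversion -> flatline.
print(classify_signal(420, 10_000, 415, 10_000))
# Made-up numbers: removing the trust badges tanks conversion -> negative signal.
print(classify_signal(420, 10_000, 340, 10_000))
```

The flatlines tell you which elements you can stop arguing about; the negative signals point at the elements worth testing variations of.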

The other thing you can use research tests for is what he calls “fly-ins.” Say you want to test whether a certain kind of motivation, such as social proof, works on your website (“24 people have bought this {insert item} in the last hour”). Make a fly-in that is attention grabbing so people will notice it. They will see the message, but fly-ins are incredibly annoying and could even lower conversions. You can test whether that specific type of motivation works for people on your website (a rough sketch of such a split is below). This isn’t meant to optimize the website; it’s to see whether the motivation has a positive impact, a negative impact, or no impact at all. This is research using A/B testing to understand impact, which then informs your optimization tests, the middle step in web and app optimization. Most testing is done like a deployment, which takes up too many resources and isn’t lean. Deployments done by marketing or IT tend to be leaner, but the more departments involved in test optimization, the more opinions, the more HiPPOs (highest paid person’s opinion), and the more approvals needed, all of which hinder the entire process. I’ve learned this firsthand. Make sure you have the necessary ownership and autonomy to perform A/B tests and get learnings and optimizations, or else you will get bogged down in bureaucracy faster than you can say “Uncle.”
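Purely as an illustration (not something from the course), a fly-in test like the one above could be a simple 50/50 split: half of the visitors get the social proof message and the other half get nothing, and you compare conversion between the two groups afterwards. The helper names and message wording here are assumptions.

```python
import hashlib

def fly_in_variant(visitor_id: str, experiment: str = "social-proof-fly-in") -> str:
    """50/50 split: hash visitor ID plus experiment name so assignment is stable
    per visitor but independent of any other experiments running."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode("utf-8")).hexdigest()
    return "fly_in" if int(digest, 16) % 2 == 0 else "control"

def fly_in_message(visitor_id: str, item: str, recent_buyers: int):
    # Only the fly-in group sees the message; the control group sees nothing.
    if fly_in_variant(visitor_id) == "fly_in":
        return f"{recent_buyers} people have bought this {item} in the last hour"
    return None

print(fly_in_message("visitor-123", "item", 24))
```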
