EXPERIMENTATION

“There are three principal means of acquiring knowledge… observation of nature, reflection, and experimentation. Observation collects facts; reflection combines them; experimentation verifies the result of that combination.”

– Denis Diderot, 18th Century French Philosopher

 

Personally, I find that dreaming up experiments, executing them, and seeing the results can be one of the most rewarding aspects of strategic leadership. One of my favorite experiments was a simple promotion to drive online reviews for a consumer goods company. The company had a stock of little portable speakers it was discontinuing that cost $7 but retailed for $40. The experiment was simple: for three days, we would give anyone who wrote an online review of the company’s products a free portable speaker. I wanted to see how much a $40 speaker would motivate people to spend 5 minutes writing a review. We posted the promotion on Facebook and emailed the customer list, and within three days the company doubled the number of reviews it had taken three years to amass!

As a strategic leader, you always have ideas popping up in your head. Instead of either doing nothing or fully implementing an idea, you can often take the middle road and run an experiment to see what happens when the idea comes to life. Experimentation is a low-cost, fast way to continually tweak your organization’s strategies.

 

What is experimentation?


The scientific method is alive and well in business. Experiments are structured tests to verify a hypothesis or idea and create insight into potential cause and effect. Experimentation is used extensively in marketing, services, and retail to understand things such as:

• The impact of advertising on sales
• How different messaging, promotions, and creative in advertisements or emails drive different engagement and response rates
• The interaction between pricing and demand
• The customer response to different service models or call scripts
• What effect edits to a website have on user behavior
• How changes in a store impact consumer behavior
• And just about anything else you can imagine

 

Testing a Hypothesis with Experiment vs. Control Group


An experiment involves testing the impact of a hypothesis or idea on an experimental group. To objectively understand the cause and effect of the hypothesis, the results from the experimental group are compared to those of a control group, which is similar in composition to the experimental group but doesn’t receive the experiment’s stimulus.

 

[Image: business experiment example]

 

The idea behind experimental versus control groups is that the two groups experience the same set of variables and conditions throughout the timeframe of the experiment, with only one difference: the variable(s) tied to the hypothesis. This way, once you measure the difference in output between the experimental group and the control group, you can attribute that difference to the experimental hypothesis.
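
To make that attribution concrete, here is a minimal sketch in Python; the group sizes and conversion counts are hypothetical numbers for illustration:

# Hypothetical results from a promotion experiment.
# Both groups saw identical conditions except for the promotion.
control = {"customers": 10000, "buyers": 400}       # no promotion
experimental = {"customers": 10000, "buyers": 520}  # received promotion

control_rate = control["buyers"] / control["customers"]                 # 4.0%
experimental_rate = experimental["buyers"] / experimental["customers"]  # 5.2%

# Because the only difference between the groups is the promotion,
# the difference in conversion rates is attributed to it.
lift = experimental_rate - control_rate
print(f"Control: {control_rate:.1%}, Experimental: {experimental_rate:.1%}")
print(f"Lift attributed to the experiment: {lift:.1%} "
      f"({lift / control_rate:.0%} relative)")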

At a retailer I worked with, there was an entire team devoted to experimentation, focused on dreaming up and testing improvements to the loyalty program, online advertising, email, promotions, ecommerce, pricing, service, store hours, store layout, and remodels. One of my favorite experiments was a Google AdWords saturation test to understand online advertising’s effect on online and in-store sales. The digital marketing team picked six categories (e.g., soccer, baseball, running shoes) and eight markets, and spent a considerable amount of money to place millions of Sports Authority ads every time someone searched a term within the six categories in the eight markets. After the test, we calculated online and in-store sales in those six categories in the eight markets and compared them to the online and in-store sales of the control markets. The results were fascinating. There was a significant lift in ROI on team-specific sports, but not on more generic categories such as running shoes. The interpretation was that customers responded better to advertising in categories where the retailer had a higher market share and less competition, such as team sports like soccer and baseball. On the other hand, while running shoes were a big business for the retailer, its actual market share was low, hence the low ROI on the advertising experiment.

 

Why is experimentation important?


Developing an effective marketing strategy throughout the customer journey is a blend of art and science. Continuous experimentation with incremental and innovative ideas is the science behind optimizing the customer journey. Experimentation is a low-cost, low-risk, and empirical way for organizations to test new ideas. Strategic leaders who embrace the discipline of experimentation continually grow, evolve, and build upon the fundamental truths of how their customers respond to ideas and changes. Some companies have built their empires on the back of experimentation. Capital One, a leader in credit cards, created a $40 billion business by executing hundreds of thousands of controlled experiments to optimize credit card designs, offers, and messaging.

 

How do you conduct experiments?


At a high level, the process of experimentation involves building a hypothesis, designing an experiment around the hypothesis, executing the experiment, and analyzing the results. In conducting experiments, here are some important elements to consider:

 

Building Hypotheses

What are some of your best ideas to improve your organization’s customer journey? If you are unsure of their potential effect on behavior and want or need to understand their effect empirically, then you should turn your ideas into hypotheses: theories that can be proven or disproven through experimentation. Transform these hypotheses into one or two variables you can test through an experiment. In building out variables to test, make sure they are meaningful and different enough from the status quo that you expect a difference in customer behavior. It happens a lot: someone tests two different email subject lines, but the difference between them is so minute that the results are meaningless. As an illustrative example:

Poor Hypothesis – Adding the word “All” to email subject lines will improve the email open rate

Control Ad: Save at Least 25% off Baseball Bats
Test Ad: Save at Least 25% off All Baseball Bats

Good Hypothesis – Changing promotion messaging from “at least 25%” to “25-60%” will improve the email open rate

Control Ad: Save at Least 25% off Baseball Bats
Test Ad: Save 25-60% off Baseball Bats
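
One way to keep a hypothesis honest is to write it down as a testable specification before running anything. Here is a minimal Python sketch using the good hypothesis above; the structure and field names are just one possible convention, not a standard:

# A hypothetical way to record a hypothesis as a testable specification,
# so the variable being changed and the success metric are explicit
# before the experiment starts.
hypothesis = {
    "statement": "Changing promotion messaging from 'at least 25%' to "
                 "'25-60%' will improve the email open rate",
    "variable": "promotion messaging in the subject line",
    "control": "Save at Least 25% off Baseball Bats",
    "treatment": "Save 25-60% off Baseball Bats",
    "metric": "email open rate",
}
print(f"Testing: {hypothesis['statement']}")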

 

Designing an Experiment

Most experiments change only one variable in a system to isolate its impact. You can test changing multiple variables, but the conditions and analysis get much more complex. In designing an experiment, you need to decide which variable(s) you are going to change and the corresponding result you expect (e.g., increased sales, higher conversion rate). You also need to determine how long the test will last. Most experiments necessitate an experimental group and a control group to be able to compare the impact of changing the variable. In creating a control group, you should probably use 10% of your population to get a statistically significant read on the results of an experiment. The control group also needs to be selected at random to ensure there isn’t bias or commonality in the control group that could affect the results. There is excellent, although very expensive, test & learn software from APT that picks control groups, runs all of the statistics, and produces insights related to an experiment. Another option is a factorial design, which can test more than one variable at a time; factorial designs are much more sophisticated experimental designs that typically necessitate the guidance of an expert.
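
As a sketch of the selection mechanics described above (the customer list, its size, and the 10% holdout are assumptions for illustration), randomly assigning a control group in Python can be as simple as:

import random

# Hypothetical population of customer IDs.
customers = [f"customer_{i}" for i in range(100000)]

# Shuffle so the control group is random, not biased by list order
# (e.g., signup date or geography).
random.seed(42)  # fixed seed so the split is reproducible
random.shuffle(customers)

# Hold out ~10% of the population as the control group.
cutoff = int(len(customers) * 0.10)
control_group = customers[:cutoff]
experimental_group = customers[cutoff:]

print(f"Control: {len(control_group):,}, Experimental: {len(experimental_group):,}")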

 

Executing an Experiment

A critical aspect of executing an experiment is ensuring there isn’t much variance or movement in the variables not being tested. For a valid experiment, the conditions between the control group and the experimental group need to be essentially equal, except for the change in the tested variable(s).

The communication of an experiment can also be a bit tricky. When communicating an experiment, you want to do it in a way that doesn’t change behavior between the control and experimental groups. It is also important to ensure the experiment runs long enough to be meaningful. If you are trying to measure the effect of advertising on sales, you have to take into account the typical length of a customer’s sales cycle. If you are trying to understand an immediate response, like the click-through response to an ad experiment, then shorter experiments are fine.
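
As a rough sanity check on experiment length (all inputs here are hypothetical), you can estimate the minimum run time from your traffic, required sample size, and sales cycle before launching:

# Hypothetical inputs for sizing the experiment window.
daily_visitors = 2000      # traffic entering the experiment per day
required_sample = 25000    # visitors needed per group, from a sample size calculation
sales_cycle_days = 14      # typical time from first visit to purchase

# Days needed just to accumulate enough traffic for both groups.
days_for_sample = (required_sample * 2) / daily_visitors

# The test should also run at least one full sales cycle so that
# delayed purchases are captured in the results.
min_days = max(days_for_sample, sales_cycle_days)
print(f"Run the experiment for at least {min_days:.0f} days")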

 

Understanding Results

Once an experiment is complete, hopefully you’re able to compare the results of the experimental group to the results of the control group. We won’t go too deep into statistics, but the larger the difference in results, the more confident you can be that the effect was driven by the change in the test variable. A small difference in results could simply be noise, a few people doing one thing versus another. The accuracy and validity of the results depend on a few drivers, including the magnitude of the difference in results, the size of the control group as a percentage of the entire population, and the level of confidence you are looking for (e.g., +/- 5%). You don’t need a course in statistics to interpret results. This sample size calculator can help determine sample sizes for the control group: http://www.surveysystem.com/sscalc.htm. The chart below plots the conversion rates of the control group versus the experimental group during a website test.

 

[Chart: conversion rates of control group vs. experimental group during a website test]
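
If you are curious about the arithmetic behind a calculator like the one linked above, here is a minimal Python sketch of the standard sample size formula; the 95% confidence level and +/- 5% margin of error are example inputs, not recommendations:

import math

# Standard sample size formula: n = z^2 * p * (1 - p) / e^2
z = 1.96   # z-score for a 95% confidence level
p = 0.5    # assumed response proportion; 0.5 is the most conservative choice
e = 0.05   # desired margin of error (+/- 5%, as in the example above)

n = (z ** 2) * p * (1 - p) / (e ** 2)
print(f"Minimum sample size: {math.ceil(n)}")  # -> 385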

 

A/B Testing

A/B testing is one of the most widely used controlled experimentation methodologies, especially in marketing. A/B testing is typically used to understand the incremental effects of an ad, webpage, email subject line, direct mail piece, or other marketing vehicle. You take two versions that are identical except for one variable (e.g., a different message, offer, or image) and compare their performance (e.g., click-through rate, conversion rate) against each other. A/B testing is a strategy used to continuously improve websites, digital advertising, email campaigns, and direct response ads.
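
As a sketch of how A/B results are commonly compared (the send and open counts are hypothetical), a two-proportion z-test is one standard way to check whether the difference between the two versions is more than noise:

import math

# Hypothetical A/B email test: opens out of sends for each version.
a_sends, a_opens = 20000, 3000   # version A: 15.0% open rate
b_sends, b_opens = 20000, 3240   # version B: 16.2% open rate

p_a = a_opens / a_sends
p_b = b_opens / b_sends

# Pooled proportion and standard error for a two-proportion z-test.
p_pool = (a_opens + b_opens) / (a_sends + b_sends)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_sends + 1 / b_sends))
z = (p_b - p_a) / se

print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}")
# |z| > 1.96 corresponds to significance at the 95% confidence level.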

 

Pre/Post Experiments

Sometimes you don’t have the luxury of creating control groups and conducting a controlled experiment. In these cases, you can still run a simple pre/post experiment; it just might not be as refined and accurate as a controlled experiment. A pre/post experiment simply makes a change to a system and calculates the lift in performance from before the change to after it. Pre/post experiments introduce a lot of potential bias and noise into results, since conditions may be very different before and during the experiment. In practice, most organizations rely on pre/post experiments for most of their experimentation.
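
The pre/post math itself is simple arithmetic. Here is a minimal sketch with hypothetical weekly sales figures, including the key caveat in the comments:

# Hypothetical average weekly sales before and after a store layout change.
pre_weekly_sales = [52000, 49500, 51200, 50300]   # 4 weeks before
post_weekly_sales = [55800, 54100, 56300, 55000]  # 4 weeks after

pre_avg = sum(pre_weekly_sales) / len(pre_weekly_sales)
post_avg = sum(post_weekly_sales) / len(post_weekly_sales)
lift = (post_avg - pre_avg) / pre_avg

# Caveat: without a control group, this lift also contains seasonality,
# promotions, and anything else that changed between the two periods.
print(f"Pre: ${pre_avg:,.0f}, Post: ${post_avg:,.0f}, Lift: {lift:.1%}")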

 

 

NEXT SECTION: BENCHMARKING

 





 Learn more about Joe Newsum, the author of all this free content and a McKinsey Alum. I provide a suite of coaching and training services to realize the potential in you, your team, and your business. Learn more about me and my coaching philosophy.