As part of our series on how organizations can lead their store optimization projects to success, we will now explore how to run experiments and validate their results.

Innovate or perish

A retailer whose business is prospering might ask: why do I need to experiment at all? The current configuration might work for now, but that does not mean it will work forever. As history teaches us, the companies that fail to innovate are the ones that eventually go out of business. There is also the problem of not knowing how to introduce new features or clothing lines inside stores. Without analytics, the retailer cannot know the effect of an introduced change, or whether to keep it or search for a better solution. For some strange reason this "decision roulette" seems normal in the retail world, yet it is unheard of in newer industries like e-commerce or industrial automation.

A/B testing

A/B testing is a method of testing whether a new change is an improvement. The idea is as old as decision-making itself and can be summed up simply as: "if A does not work, try B", or "plan A and plan B".
However, in modern data-powered decision-making systems, when we refer to A/B testing we usually think of web analytics.

Imagine an e-commerce site owner who has hired a designer to style a new sign-up button. The designer presents a solution, but the owner is not quite satisfied. He faces a dilemma: abandon the designer's solution, or trust the designer, who swears this colour and style convert users best. In truth, the owner does not have to face this dilemma at all: he can run a simple A/B test, directing 50% of users to the old design and 50% to the new one. Conversions for both variants are measured until enough data has accumulated to reach statistical significance. Once that is done, the owner knows which solution works better without having to rely on instinct.

The previous example is simple and understandable, but physical retail presents a few obstacles that make A/B testing harder than in the online world.
First of all, retail stores are not location-independent the way e-commerce websites are. Stores operate in different parts of the world, ranging from high-paced city centres to relaxed rural villages, so we should not expect shoppers in these locations to behave identically.
Furthermore, it is easy to serve an identical webshop to every user, but it takes real effort to do the same in physical space. Creating stores with identical layouts is hard because of the limitations imposed by the rented floorspace and the differences in store type, target customers and the overall atmosphere of the surrounding area.

Having seen the problems that can arise, let us look at some best practices for designing and executing experiments in retail.

1. Analyze the observed data

Even if you do not have an advanced analytics solution installed in your stores, you can still validate experiments aimed at increasing some form of sales. Most retailers nowadays run POS systems in which sales data is easily accessible; all it takes is some number crunching to check whether the experiment produced an effect.
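That number crunching can be as simple as comparing average daily sales before and during the change. A minimal sketch, using entirely hypothetical daily totals exported from a POS system:

```python
# Hypothetical daily sales totals from a POS export (e.g. EUR per day)
before = [1180, 1240, 1150, 1300, 1275, 1220, 1190]   # week before the change
during = [1310, 1280, 1350, 1400, 1290, 1330, 1360]   # week with the change in place

avg_before = sum(before) / len(before)
avg_during = sum(during) / len(during)
uplift_pct = 100 * (avg_during - avg_before) / avg_before

print(f"average daily sales: {avg_before:.0f} -> {avg_during:.0f} ({uplift_pct:+.1f}%)")
```

A single week on each side is rarely enough on its own; the framework below addresses how long to measure.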

2. Allocate the duration for the experiment

To produce statistically significant results, an experiment needs to run for a sufficient period of time. Unfortunately, there is no formula for the perfect number of days. At StoreDNA, we use a framework which, in our experience, works for 90% of use-cases.
We call it the 3x3x3 framework, and it consists of 3 periods:

  • 3 weeks before the experiment in which we define the baselines of the usual behavioural and sales patterns

  • 3 weeks designated for running the experiment

  • 3 weeks after the experiment to be used as an additional reference point
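The three periods can be compared directly once the data is in. A sketch of that comparison, with hypothetical weekly conversion rates standing in for real store data:

```python
# Hypothetical weekly conversion rates (%) for one store, per 3x3x3 period
weeks_before = [21.0, 20.4, 21.6]   # baseline: usual behavioural and sales patterns
weeks_during = [23.1, 22.8, 23.4]   # experiment period
weeks_after  = [21.2, 20.9, 21.5]   # additional reference point

baseline   = sum(weeks_before) / len(weeks_before)
experiment = sum(weeks_during) / len(weeks_during)
reference  = sum(weeks_after) / len(weeks_after)

print(f"baseline {baseline:.1f}% -> experiment {experiment:.1f}% -> after {reference:.1f}%")
# If conversion falls back toward the baseline once the experiment ends,
# the lift was more likely caused by the change itself than by an external trend.
```

The after-period is what makes this more than a simple before/after comparison: it helps separate the effect of the change from a trend that would have happened anyway.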

3. Validate the experiment by comparing to other stores

Suppose we ran an experiment for 3 weeks and conversion shot up 2 percentage points compared with the period before. Should we call the experiment a success? Not yet!
Before proclaiming it successful, we need to compare the same period across as many similar stores as possible in which we did not run the experiment.
Only if the rise in these control stores does not match the one in the store where we ran the experiment can we say with some certainty that the change was caused by the experiment.
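In effect, the control stores let us subtract out fleet-wide movement from the test store's lift, in the spirit of a difference-in-differences comparison. A minimal sketch with hypothetical store names and lift figures:

```python
# Hypothetical conversion lifts (percentage points, experiment period vs prior 3 weeks)
test_store_lift = 2.0                                        # store where we ran the experiment
control_store_lifts = {"Store B": 0.3, "Store C": -0.1, "Store D": 0.5}  # similar stores, no change

# Average movement across the control stores over the same period
avg_control_lift = sum(control_store_lifts.values()) / len(control_store_lifts)

# What remains after subtracting the fleet-wide movement
net_effect = test_store_lift - avg_control_lift
print(f"controls moved {avg_control_lift:+.2f} pp on average; "
      f"net effect attributable to the experiment: {net_effect:+.2f} pp")
```

If the control stores had risen by roughly the same 2 percentage points, the net effect would be near zero and the "success" would belong to the season, not the experiment.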

4. Take into account the overall performance

Often the experiments which are run inside retail spaces do not affect the whole store.
It is very common for retailers to introduce a new fixture or product display and want to know how it affected the category's performance. As in the previous case, it would be a mistake to look only at the time periods before, during and after the experiment.
Here, we can compare the performance of the specific category during the experimentation period with the performance of the whole store. The overall store performance serves as a benchmark for a change that affects a single category.
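The benchmark logic is simple arithmetic: only growth beyond the store-wide trend is attributed to the change. A sketch with hypothetical growth figures:

```python
# Hypothetical growth during the experiment vs the 3 weeks before (%)
category_growth = 12.0   # category with the new fixture or display
store_growth = 4.0       # whole store, used as the benchmark

# Growth beyond the store-wide trend is what we attribute to the change itself
excess_growth = category_growth - store_growth
print(f"category outgrew the store benchmark by {excess_growth:.1f} percentage points")
```

Had the whole store also grown 12%, the display would deserve none of the credit.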

5. Utilize your understanding of hidden processes

The best data scientist with no business knowledge will never be as successful as one who understands retail and the company's internal workings.
Seasonality, the introduction of new models or clothing lines, changes in marketing campaigns and shifts in company strategy all affect the results of experiments.
The numbers cannot be trusted blindly without taking into account all the industry and company particularities that are invisible to a generic data analytics system, or to an outside consultant who does not understand the inner workings of a big company.

6. Do not be afraid to run unsuccessful experiments

An experiment that produces negative sales results is still a valuable experiment, because we have discovered what needs to be avoided.
Imagine if we had not validated this change and had scaled it across the fleet; the loss would have been much greater.
Companies that want to progress need patience, both in time and in temporary profitability loss.

Successful companies understand this and are not afraid to invest heavily in testing new things.
In 2017, leading companies such as Intel, Amazon, Apple and Huawei spent more than 10% of revenue on R&D projects.

If you are interested in running experiments in your own stores but unsure how to proceed, drop us a line at grow@storedna.co.