Experimenting with placements allows you to try out different placement configurations (variants) to achieve the best returns. Experiments run on a subset of your traffic so that you can understand the effect of a change before applying it to your entire traffic. After running an experiment and analyzing the results, you can either keep the experiment running and change the percentage of traffic the variant receives, or end the experiment: deprecate the original placement configuration in favor of a more profitable variant, or retain the original placement if the experiment showed that the change did not significantly improve your metrics.
Note
To run Experiments, use DT FairBid SDK 3.1.0 or later.
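For context, the placement configurations you experiment with are served through the FairBid SDK integrated in your app. Below is a minimal Kotlin sketch of starting the SDK from an Android activity, assuming the com.fyber.FairBid entry point of the FairBid Android SDK; the app ID is a placeholder. Consult the FairBid integration guide for the exact setup for your SDK version.

```kotlin
import android.app.Activity
import android.os.Bundle
import com.fyber.FairBid

class MainActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Start the FairBid SDK. "YOUR_APP_ID" is a placeholder for the
        // app ID shown in the DT Console; placements (including those in
        // running experiments) are served through this SDK session.
        FairBid.start("YOUR_APP_ID", this)
    }
}
```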
Setting Up an Experiment
An Experiment includes the following components:
- Control group
- The original active placement configuration. Use the control group as a benchmark to measure the effectiveness of the variant group.
- Variant group
- The proposed placement with modified configurations that you are testing.
- Allocation percentage
- The percentage of traffic you want to allocate to the variant.
Before you begin configuring an experiment, identify the following:
- The placement configuration that you want to test.
- The key metric you want to optimize and the metric value at which you plan to deprecate the control group in favor of the variant group.
- How long to run the experiment. For example, if your placement usually gets 20K impressions and 10K unique impressions per day, DT recommends running the experiment for about a week to gather sufficient data for analysis. For a rough way to estimate this duration, see the sketch after this list.
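As a planning aid, the following Kotlin sketch estimates how many days an experiment needs to run before every test group has collected a target number of impressions. The target of 10,000 impressions per group is an illustrative assumption, not a DT recommendation.

```kotlin
import kotlin.math.ceil

// Rough estimate of experiment duration: days until every test group
// has collected at least `targetImpressionsPerGroup` impressions.
// The default target is an assumption for illustration only.
fun estimateDurationDays(
    dailyImpressions: Int,     // total daily impressions for the placement
    variantAllocationPct: Int, // traffic share routed to the variant
    targetImpressionsPerGroup: Int = 10_000
): Int {
    require(variantAllocationPct in 1..99) { "Allocation must be between 1% and 99%" }
    val variantDaily = dailyImpressions * variantAllocationPct / 100.0
    val controlDaily = dailyImpressions * (100 - variantAllocationPct) / 100.0
    // The smaller group gathers data more slowly, so it determines the duration.
    val slowestDaily = minOf(variantDaily, controlDaily)
    return ceil(targetImpressionsPerGroup / slowestDaily).toInt()
}

fun main() {
    // 20K impressions/day with a 10% variant allocation: the variant sees
    // ~2K impressions/day, so it needs about 5 days to reach 10K.
    println(estimateDurationDays(dailyImpressions = 20_000, variantAllocationPct = 10))
}
```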
To set up an experiment:
- From the App details screen, click the placement that you want to test.
The Placement details screen displays.
- Click Set up experiment.
The Placement details screen enters SETTING UP EXPERIMENT mode, where the following experiment items are ready for you:
- Your current placement configuration is now the Control Group (Test A) and is no longer editable.
- A copy of the control group (Test B) is now your variant group, ready for you to modify for the experiment.
- While in SETTING UP EXPERIMENT mode, complete the following tasks to configure your test:
- If you want to begin the experiment later, click Save.
The experimental placement configurations are saved.
Note
Saved experiments do not run until you click the Start experiment button.
- To begin the experiment right away, click Start experiment.
The Placement details screen exits SETTING UP EXPERIMENT mode. Variant configurations are no longer editable, and the End experiment button appears. Additionally, on the App details screen, the placement for which you are running the experiment now displays as Experiment running.
Naming a Variant
To name a variant:
- On the Placement details screen, ensure that you are in SETTING UP EXPERIMENT mode.
- For the variant you want to name, hover over the default name, and click the Edit icon.
- Name the variant and click Update Experiment.
Tip
Consider appending the start date of the experiment to the variant name to help you later determine when to end the experiment.
Allocating Traffic to a Variant
Experiments route a subset of your traffic to the variant configuration. The default variant allocation is 10%. You can modify this allocation both before and during the experiment.
To adjust traffic allocation for a variant:
- On the Placement details tab, for the variant whose traffic allocation you want to adjust, hover over the current traffic allocation setting, and click the Edit icon.
- Specify an allocation percentage using one of the following methods:
- Drag the slider to the desired allocation percentage. The allocation for the Control Group automatically adjusts to ensure that total traffic allocation equals 100% (illustrated in the sketch after these steps).
- Enter the desired allocation percentage for the variant and control group.
- Click Update Experiment.
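To make the split concrete, the following sketch shows how a traffic allocation divides daily impressions across test groups; the impression count and allocations are hypothetical.

```kotlin
// Hypothetical illustration of how a traffic allocation splits impressions.
// Allocations across all test groups always sum to 100%: raising a variant's
// share shrinks the control group's share to absorb the change.
fun main() {
    val dailyImpressions = 20_000
    val allocations = mapOf("Test A (control)" to 75, "Test B" to 15, "Test C" to 10)
    require(allocations.values.sum() == 100) { "Allocations must total 100%" }

    allocations.forEach { (group, pct) ->
        val impressions = dailyImpressions * pct / 100
        println("$group: $pct% -> ~$impressions impressions/day")
    }
}
```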
Modifying a Variant for Testing
Experimenting with placements allows you to modify placement properties and associated networks to analyze the effects on your monetization goal.
As a best practice, modify only one aspect at a time so that you can directly attribute outcomes to a particular change.
To analyze the effects of modifying placement properties such as prices, targeting, and display frequencies, modify the variant placement as you would an actual placement. For more information about the placement properties you can modify, see Setting Up Placements.
To analyze the effects of modifying the mediated network instances associated with a placement, add the instance to the variant placement as you would add an instance to an actual placement. For more information about adding a mediated network instance to a placement, see Setting Up Instances.
Make all variant modifications on the appropriate variant tab while in SETTING UP EXPERIMENT mode. You cannot modify a variant once an experiment is running. When you have finished entering the variant modifications, click Save variant changes.
Adding a Variant
You can run one experiment per placement, and for each experiment, you can test up to two variants. You can add a second variant only when the Placement details screen is in SETTING UP EXPERIMENT mode. Base the new variant on either the control group or an existing variant. If needed, name the new variant as described in Naming a Variant.
To add a variant:
- On the Placement details screen, ensure that the placement is in SETTING UP EXPERIMENT mode.
- Click Duplicate for the test group (Control group or existing variant group) on which you want to base the new variant.
A new variant (Test C) appears in the experiment.
- Modify Test C, as described in Modifying a Variant for Testing.
Analyzing Results
Experimenting with placements allows you to optimize the following metrics; the sketch after these definitions shows how they are computed:
- ARPDEU
- Average Revenue (publisher payout) Per Daily Engaged User. Engaged users are users who see at least one ad from the experimental placement.
- Publisher Payout
- The amount the publisher earns from the placement. Because impressions are split according to the traffic allocation, this metric depends on each test group's share of traffic.
- Fill Rate
- The number of times an ad request is filled by an ad network divided by the total number of ad requests received.
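The following sketch shows how these metrics are derived from raw counts. The figures and field names are hypothetical and do not reflect the DT reporting schema.

```kotlin
// Hypothetical daily figures for one test group; the field names are
// illustrative, not the DT reporting schema.
data class GroupStats(
    val adRequests: Long,        // requests sent to mediated networks
    val filledRequests: Long,    // requests answered with an ad
    val publisherPayoutUsd: Double,
    val dailyEngagedUsers: Long  // users who saw at least one ad
)

// Fill rate: filled requests divided by total requests received.
fun fillRate(s: GroupStats): Double = s.filledRequests.toDouble() / s.adRequests

// ARPDEU: publisher payout divided by daily engaged users.
fun arpdeu(s: GroupStats): Double = s.publisherPayoutUsd / s.dailyEngagedUsers

fun main() {
    val variant = GroupStats(
        adRequests = 2_500, filledRequests = 2_000,
        publisherPayoutUsd = 24.0, dailyEngagedUsers = 1_200
    )
    println("Fill rate: ${"%.1f%%".format(fillRate(variant) * 100)}") // 80.0%
    println("ARPDEU: USD ${"%.4f".format(arpdeu(variant))}")          // USD 0.0200
}
```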
A deeper analysis of the experiment data can be carried out in the Dynamic Reports module of the DT Console:
- In the DT Console, go to Dynamic Reports → App Performance.
- Apply the following filters in order:
- Publisher
- App
- Placement
- Split the data using Variant Name as a dimension.
- Add further dimensions relevant to your testing, such as Publisher Name, App Name, and Placement Name.
- Select the metrics that you want to analyze.
- Based on the gathered data, determine which test group to retain, and end the experiment. One simple way to compare test groups is sketched below.
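For example, once per-group ARPDEU values are available from the report, a relative-lift comparison can inform the decision. The numbers and the 5% threshold in the sketch below are illustrative assumptions, not DT guidelines.

```kotlin
// Hypothetical decision aid: compare each variant's ARPDEU against the
// control group and flag variants whose relative lift clears a threshold.
fun main() {
    val controlArpdeu = 0.0200
    val variants = mapOf("Test B" to 0.0216, "Test C" to 0.0198)
    val minLift = 0.05 // require at least +5% lift before switching

    variants.forEach { (name, value) ->
        val lift = (value - controlArpdeu) / controlArpdeu
        val verdict = if (lift >= minLift) "candidate to retain" else "keep control"
        println("$name: ARPDEU $value, lift ${"%.1f%%".format(lift * 100)} -> $verdict")
    }
}
```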
Ending an Experiment
Once you have gathered enough data, end the experiment.
To end an experiment:
- From the App details screen, click the placement running the experiment you want to stop.
- Click End experiment.
A dialog box displays for you to select which experiment group to retain.
- To keep your current configuration, select Control Group, and click End experiment.
- To switch to the new configuration, select the variant, and click End experiment.