Experiments

Test Paywall pricing and UI elements without updating the app

Run A/B Experiments to maximize app revenue: find the most profitable combination of products on a paywall, and discover the price for each in-app purchase that maximizes revenue.

You can also test any paywall configuration via a custom JSON Config.

Overview

  • Support for more than two variations, i.e. A/B/C tests.
  • A custom JSON config for each variation, for example, to modify your UI.
  • Getting the variation name and experiment name for the current user from the SDK (see the sketch after this list).
  • Editing of the base variation (aka variation A).
  • Custom traffic allocation for each variation.
  • Running experiments for custom audiences, for example, targeting a specific country, app version, or other filters.
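
For example, here's a minimal sketch (Swift) of reading the experiment name, variation name, and custom JSON config for the current user. The property names experimentName, variationName, and json on ApphudPaywall, as well as the callback signature, are assumptions for illustration; check the SDK reference for your version.

// Minimal sketch (Swift); property names and callback signature are
// assumptions, verify against the Apphud SDK reference for your version.
import ApphudSDK

Apphud.paywallsDidLoadCallback { paywalls in
    for paywall in paywalls {
        // Experiment/variation names are expected to be non-nil only
        // when the user is included in a running experiment.
        if let experiment = paywall.experimentName,
           let variation = paywall.variationName {
            print("\(paywall.identifier): \(experiment) / \(variation)")
        }
        // Custom JSON config attached to the variation, e.g. UI flags.
        if let config = paywall.json {
            print("JSON config: \(config)")
        }
    }
}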

Using Experiments in Observer Mode

You can use A/B experiments on paywalls even in Observer Mode. If you don't use the Apphud SDK to purchase subscriptions, you need to specify the paywall identifier that was used to purchase a product. If you pass the paywall identifier correctly, experiment analytics will work as expected.

πŸ“˜

Note

On iOS, you need to specify the paywall identifier before the purchase; on Android, you can do it after the purchase.

// iOS (Swift): set the paywall identifier before the purchase.
Apphud.willPurchaseProductFromPaywall("test_paywall")
YourClass.purchase(product) { result in
    // ...
}

// Android (Kotlin): pass the paywall identifier after the purchase.
YourClass.purchase(product) { success ->
    Apphud.syncPurchases("test_paywall")
}

Create Experiment

While creating an experiment, you can configure a number of parameters. A detailed explanation is given below.

Main Settings


Here you can enter the test name and an optional description.

Platform – An experiment can be run for only one platform at a time. If your app has only an iOS or Android platform (not both), this selection is hidden.

Target paywall – choose the paywall to be used as the baseline (control) variation. It must be one of your existing paywalls.

πŸ“˜

Note

You can run several experiments on the same paywall at once by choosing a different audience for each test. If audiences overlap, such users will be marked to one experiment only, the earliest by creation date.

Variations


Variations are the paywalls to test.
All variations are modifiable: you can create them as fully custom paywalls with custom JSON configs, building them from your existing products. Alternatively, you can create a variation from an existing paywall. It's up to you.

You can add up to five variations.

πŸ“˜

Note

If you use the "Use existing paywall" option, the Variation B paywall can't be modified. Products and the JSON Config will be migrated from the parent paywall.

Traffic allocation determines how traffic (users) is distributed between the variants. By default, it's 50/50. These values can be changed while the experiment is running.

Audience


Audiences are entities that group users by different criteria (such as new/existing users, country, app version, etc.). Read more.

You can choose one of the existing (default) audiences for the experiment or create a custom audience with the desired parameters.

After you have filled in all the experiment settings, click "Save and review" to do a final check of the settings.

Run Experiment

Once you have made sure that all the data is correct, click "Run experiment" to start the test. Otherwise, edit the experiment.

Experiment States


You may not want to run the test immediately. In that case, the experiment will be saved as a draft. The draft can be edited and launched later.

Duplicate Experiment


Click "Duplicate" in the context menu of an experiment in the list to create its copy. All experiment settings (such as the target paywall, variations, and audience) will be copied to the new test.

Analyze Experiment

Once the experiment has started, it accumulates user data and all related metrics.


Views
The total number of paywall views for the variant (every repeated view is counted).

Marked Users
The number of unique users marked (assigned) to a particular paywall variant. These users may not have viewed the paywall.

Affected Users
The number of unique marked users who have seen the paywall.

Trials
Count of started trials.

CR Trials
Conversion from paywall view to trial start.

CR Trial-Purchase
Conversion from a trial to an in-app purchase.

Purchases
The count of initial purchases (non-renewals); trials are included.

CR Purchases
Conversion from paywall view to purchasing.

Last Purchase
Last in-app purchase date for the variant.

Sales
The total amount billed to customers for purchasing in-app purchases from the paywall variant.
Sales = Gross Revenue - Refunds.

Proceeds
The estimated amount you receive from sales of subscriptions. It excludes refunds and Apple's commission.

Refunds
The number of purchase refunds.

ARPU
Average Revenue Per User. Calculated on a cohort basis: the cohort consists of users who installed the app and were marked to the paywall variant.

ARPPU
Average Revenue Per Paying User. Calculated on a cohort basis: the cohort consists of users who installed the app and were marked to the paywall variant.

πŸ“˜

Note

Since ARPU/ARPPU change over time due to refunds and renewals, experiment results for these metrics may change as well.
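
To make the relationships between these metrics concrete, here's a small illustrative sketch (Swift) with hypothetical numbers. The formulas follow the definitions above; this is not Apphud API code.

// Hypothetical per-variant counters, as shown on the experiment page.
struct VariantStats {
    let views: Int          // total paywall views (repeats counted)
    let markedUsers: Int    // unique users assigned to the variant
    let trials: Int         // started trials
    let purchases: Int      // initial purchases (non-renewals)
    let proceeds: Double    // estimated proceeds
    let payingUsers: Int    // unique users with at least one purchase
}

let variant = VariantStats(views: 4_000, markedUsers: 1_000, trials: 200,
                           purchases: 80, proceeds: 500.0, payingUsers: 50)

let crTrials    = Double(variant.trials)    / Double(variant.views)  // 0.05 β†’ 5%
let crPurchases = Double(variant.purchases) / Double(variant.views)  // 0.02 β†’ 2%
let arpu  = variant.proceeds / Double(variant.markedUsers)           // 0.5 per marked user
let arppu = variant.proceeds / Double(variant.payingUsers)           // 10.0 per paying user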

Target Metrics

Conversion to purchase
Understand which variation is better in terms of conversion from a paywall view to purchasing.

Conversion to trial
Understand which variation is better in terms of conversion from a paywall view to a trial start.

❗️

Important Note

By default, we calculate conversions by views. We strongly recommend calling the logPaywallShown() method in the SDK. Otherwise, conversions are calculated by marked users, which is less accurate.
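
Here's a minimal sketch (Swift) of reporting an impression right before the paywall UI is presented. The exact signature of logPaywallShown (here taking an ApphudPaywall) is an assumption based on this page; verify it against the SDK for your platform and version.

// Sketch: report the impression before presenting the paywall UI.
// The signature Apphud.logPaywallShown(_:) is assumed, not confirmed.
func present(_ paywall: ApphudPaywall) {
    Apphud.logPaywallShown(paywall)
    // ... then show your paywall screen built from paywall.products
}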

ARPU

Understand which variation is better in terms of ARPU.

πŸ“˜

Note

ARPU in experiments is calculated as Paywall Proceeds / Paywall Marked Users

ARPPU

Understand which variation is better in terms of ARPPU.

πŸ“˜

Note

ARPPU in experiments is calculated as Paywall Proceeds / Paywall Paying Users

Effect

The Effect shows the relative change of the selected metric on Variation B compared to Variation A: Effect = (Metric B βˆ’ Metric A) / Metric A.

Example: if Variation A's purchase conversion is 5% and Variation B's is 10%, then Effect = +100%, i.e. Variation B outperforms A by 100% (double the conversion).
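
As a quick check of the arithmetic, assuming Effect is the plain relative change with Variation A as the baseline:

// Effect as the relative change of B vs. A (per the definition above).
let conversionA = 0.05, conversionB = 0.10
let effect = (conversionB - conversionA) / conversionA  // 1.0 β†’ +100%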

P-value

The P-value is a statistical value, a number between 0 and 1, used in hypothesis testing. It helps determine whether the result obtained in an experiment is due to chance.

For our experiments, we define a significance threshold of 5% (or P-value = 0.05).

A result is statistically significant (and allows rejecting the null hypothesis) when its P-value is less than or equal to the significance level (P-value ≀ 0.05).

When the result of the test is significant in terms of P-value, we'll inform you.

Complete and Evaluate

When you see significant results on the desired target metric, you can complete the experiment.


If you don't see significance on the metric for a long time, it's a good signal to rethink the experiment conditions and run another test with bigger differences in prices and paywall parameters (to increase the potential effect of these changes).