Run A/B experiments to maximize app revenue: find the most profitable combination of products on a paywall and discover the price for each in-app purchase that maximizes revenue.
You can also test any paywall configuration via a custom JSON Config (see the sketch below).
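For example, a custom JSON Config for a variation might look like the following sketch. All keys here are hypothetical and shown only to illustrate the idea; use whatever fields your paywall actually reads.

```json
{
  "title": "Unlock Premium",
  "show_trial_badge": true,
  "accent_color": "#FF6B00",
  "products_order": ["yearly_premium", "monthly_premium"]
}
```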
When creating an experiment, you can configure a number of parameters. Read the detailed explanation below.
Here you can set the test name and an optional description.
Platform – an experiment can run on only one platform at a time. If your app is iOS-only or Android-only (not both), this selector is hidden.
Target paywall – choose the paywall to be used as the baseline (control) variation. It must be one of your existing paywalls.
Variations – the paywalls to test.
Variation A is a copy of the target paywall, inheriting all products and the config of its parent. It can't be modified.
Variation B is the second (modified) option to test. It can be built as a fully custom paywall from existing products, or created from an existing paywall. It's up to you.
Traffic allocation – determines how traffic (users) is distributed between the variations. The default is 50/50, and it can be changed while the experiment is running.
Audiences – a feature that lets you group users by different criteria (such as new/existing users, country, app version, etc.). Read more here.
You can choose one of the existing (default) audiences for the experiment or create a custom audience with the desired parameters.
After you have filled in all the experiment settings, click the "Run experiment" button to start the test. A preview screen will appear so you can do a final check of the settings.
If everything is okay, click the "Run Experiment" button again. Otherwise, go back and edit the experiment.
You may not want to run the test immediately. In that case, the experiment is saved as a draft, which can be edited and launched later.
Click "Duplicate" in the context menu on the experiment in list to create its copy. All experiment settings (such as target paywall, variations and audience) will move to the new test.
After the experiment starts, it accumulates user data and all related metrics (see the calculation sketch after the list below).
Views – total number of paywall views for the variant (every repeated view by a user is counted).
Marked Users – number of unique users assigned (marked) to a particular paywall variant. A marked user may not have viewed the paywall yet.
Affected Users – number of unique marked users who have seen the paywall.
Trials – number of started trials.
CR Trials – conversion from paywall view to trial start.
CR Trial-Purchase – conversion from started trial to in-app purchase.
Purchases – number of initial purchases (non-renewals); trials are included.
CR Purchases – conversion from paywall view to purchase.
Last Purchase – date of the last in-app purchase for the variant.
Sales – total amount billed to customers for in-app purchases from the paywall variant. Sales = Gross Revenue − Refunds.
Proceeds – estimated amount you receive from sales of subscriptions. It excludes refunds and Apple's commission.
Refunds – number of purchase refunds.
ARPU – Average Revenue Per User. Calculated on a cohort basis; the cohort is the users who installed the app and were marked to the paywall variant.
ARPPU – Average Revenue Per Paying User. Calculated on a cohort basis; the cohort is the users who installed the app and were marked to the paywall variant.
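To make these definitions concrete, here is a minimal sketch of how such metrics can be computed from raw counts. The numbers and exact formulas (e.g. whether ARPU uses Sales or Proceeds) are assumptions for illustration, not the dashboard's exact computation.

```python
# Hypothetical raw counts for one paywall variant.
views = 12_000           # total paywall views (repeats counted)
marked_users = 5_000     # unique users assigned to the variant
trials = 400             # started trials
purchases = 250          # initial purchases (trials included)
paying_users = 240       # unique users who made a purchase
gross_revenue = 2_500.0  # total billed, USD
refunds = 100.0          # refunded amount, USD

cr_trials = trials / views               # CR Trials: view -> trial start
cr_purchases = purchases / views         # CR Purchases: view -> purchase
sales = gross_revenue - refunds          # Sales = Gross Revenue - Refunds
arpu = sales / marked_users              # revenue per marked (cohort) user
arppu = sales / paying_users             # revenue per paying user

print(f"CR Trials: {cr_trials:.2%}, CR Purchases: {cr_purchases:.2%}")
print(f"Sales: ${sales:,.2f}, ARPU: ${arpu:.3f}, ARPPU: ${arppu:.2f}")
```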
Conversion to purchase – understand which variation is better in terms of conversion from paywall view to purchase.
Conversion to trial – understand which variation is better in terms of conversion from paywall view to trial start.
ARPU – understand which variation is better in terms of ARPU.
ARPPU – understand which variation is better in terms of ARPPU.
Effect – shows the relative change of the selected metric on Variation B compared to Variation A: Effect = (B − A) / A.
Example: if Variation A's purchase conversion is 5% and Variation B's is 10%, then Effect = (10% − 5%) / 5% = +100%, i.e. Variation B outperforms A by 100%.
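As a quick sanity check, the same calculation in a couple of lines (numbers taken from the example above):

```python
cr_a, cr_b = 0.05, 0.10          # purchase conversion of Variations A and B
effect = (cr_b - cr_a) / cr_a    # relative change of B compared to A
print(f"Effect: {effect:+.0%}")  # -> Effect: +100%
```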
P-value – a statistical value between 0 and 1 that is used to test a hypothesis. It helps determine whether the result obtained in an experiment could have occurred by chance.
For our experiments, we use a significance threshold of 5% (P-value = 0.05).
A result is considered statistically significant (allowing us to reject the null hypothesis) when its P-value is less than or equal to the significance level (P-value ≤ 0.05).
Once the test result is significant in terms of P-value, we'll inform you.
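For intuition, below is a minimal sketch of how a P-value for a difference in conversion rates can be computed with a standard two-proportion z-test. This is a textbook method shown for illustration; the platform's own statistics engine may use a different test, and all numbers here are hypothetical.

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided P-value of a two-proportion z-test.

    conv_a, conv_b: conversions (e.g. purchases) in Variations A and B.
    n_a, n_b: affected users in each variation.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se                                    # z-statistic
    # Convert |z| to a two-sided P-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical data: 250/5000 purchases on A vs. 320/5000 on B.
p = two_proportion_p_value(250, 5_000, 320, 5_000)
print(f"P-value: {p:.4f} ->", "significant" if p <= 0.05 else "not significant")
```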
When you see a significant result on the desired target metric, you can complete the experiment.
If you don't see significance on the metric for a prolonged period, that's a good signal to rethink the experiment conditions and run another test with bigger differences in prices and paywall parameters (to increase the potential effect of those changes).