Experiments

Test Paywall pricing and UI elements without updating the app

Utilize A/B Experiments to enhance your app's revenue generation. Determine the most effective combination of products on your paywall and the optimal pricing for each in-app purchase to maximize revenue. Customize and test various paywall configurations using a JSON configuration.

Overview

  • Experiment Modes: Conduct Paywall experiments in two distinct modes: 'Standalone' or 'Within Placements'.
  • Variation Support: Support for up to 5 variations, enabling comprehensive A/B/C/D/E testing.
  • Custom JSON Configurations: Each variation can be customized with a unique JSON configuration, allowing for modifications in user interface and other elements.
  • SDK Integration: Retrieve the name of the experiment for each user directly from the SDK (see the sketch after this list).
  • Base Variation Editing: Edit the base variation (Variation A) to establish a standard for comparison.
  • Traffic Allocation: Allocate traffic to each variation as per requirement, allowing for controlled exposure and accurate data collection.
  • Targeted Experiments: Conduct experiments tailored to custom audiences, such as specific countries, app versions, or other user segments.
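
As an illustration of the SDK Integration point above, here is a minimal sketch that reads the experiment and variation names from a fetched paywall. It assumes the iOS SDK's paywallsDidLoadCallback method and experimentName / variationName properties on the paywall object; verify the exact names against your SDK version. The "main" identifier is hypothetical.

// A minimal sketch — the callback and property names are assumptions,
// check them against your SDK version. "main" is a hypothetical identifier.
Apphud.paywallsDidLoadCallback { paywalls in
    guard let paywall = paywalls.first(where: { $0.identifier == "main" }) else { return }

    if let experiment = paywall.experimentName {
        // This user is enrolled in an experiment on this paywall.
        print("Experiment: \(experiment), variation: \(paywall.variationName ?? "n/a")")
    } else {
        // This user sees the default, non-experimented paywall.
        print("No experiment is running for this user")
    }
}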

Experiments within Placements (New Feature)

Enhance your approach with the new 'Placements' feature. This allows for the strategic management of paywall appearances within different sections of the app, such as onboarding, settings, etc., targeting specific audiences. Execute A/B experiments on selected paywalls to optimize conversion rates for particular placements and user groups. See more details below.

Managing Experimented Paywalls in Your App

Integrating experimented paywalls in your app is straightforward but requires attention to detail regarding how you access paywalls from the SDK. Here's how to proceed:

Choose the Appropriate Experiment Mode: Your choice of experiment mode should align with your app's paywall implementation. If your app uses standalone paywalls, conduct A/B experiments on these standalone versions. Conversely, if your app utilizes paywall placements, your experiments should be within these placements.

Access Paywalls Correctly: It's crucial to retrieve paywalls correctly based on your chosen feature. If your app uses the placements feature, you must obtain the paywall object directly from the corresponding placement object. Avoid using the standalone paywalls array from the SDK in this context. Fetching a paywall through Apphud.paywalls() in a placement-based setup will disrupt your analytics, as purchases won't be accurately attributed to the respective placement.
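
For example, a placement-based setup might look like the sketch below. It assumes the SDK exposes a placementsDidLoadCallback method and a paywall property on the placement object (names may differ between SDK versions); the essential point is that the paywall object comes from the placement, not from Apphud.paywalls(). The "onboarding" identifier and showPaywallScreen helper are hypothetical.

// A minimal sketch of placement-based paywall access. Method and property
// names are assumptions — verify them against your SDK version.
Apphud.placementsDidLoadCallback { placements in
    guard let placement = placements.first(where: { $0.identifier == "onboarding" }),
          let paywall = placement.paywall else { return }

    // Render the screen from this paywall object and its products.
    // Do NOT look the paywall up via Apphud.paywalls() here — purchases
    // would not be attributed to the placement.
    showPaywallScreen(with: paywall) // hypothetical UI helper
}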

Understand SDK Behavior with Experimented Paywalls: Be aware that the SDK's treatment of experimented paywalls varies based on the type of experiment. If you run an A/B experiment on a paywall within a placement, only that specific placement will return the modified version of the target paywall. Other areas, including the standalone paywalls array and other placements, will continue to present the original paywall. Similarly, experimenting on a standalone paywall will not affect the representation of that paywall within any placements.

Concurrent Paywall Experiments

It's possible to run multiple experiments on your paywall simultaneously, provided each test targets a distinct audience segment. This approach allows for diverse insights across various user demographics.

In scenarios where the audiences for these experiments overlap, the system will assign each overlapping user exclusively to a single experiment. This assignment is based on the creation date of the experiments, with priority given to the earliest created one. This ensures clarity in data analysis and avoids any potential confusion arising from a user being part of multiple concurrent tests.

Create Experiment

While creating the experiment, you can configure a number of parameters. Read the detailed explanation below.

Experiment Setup

Experiment Name and Description: You can enter the name of your test here, and optionally add a description to provide more context.

Platform: Each experiment can be conducted on only one platform at a time. If your app is available exclusively on either iOS or Android (not both), the platform selection will be automatically tailored to your app's availability.

Target Paywall: Select one of your existing paywalls to use as the baseline (control) variation in your experiment. This paywall will act as the standard against which other variations are compared.

Test Paywall Standalone: This traditional approach to A/B testing involves altering the paywall object with one of the variations. Standalone paywalls can be accessed using the Apphud.paywalls() method in the SDK or a similar function.
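
A minimal sketch of standalone access, assuming paywallsDidLoadCallback is available in your SDK version and using a hypothetical "main" paywall identifier:

// In a standalone experiment, the returned paywall object already carries
// the variation assigned to the current user.
Apphud.paywallsDidLoadCallback { paywalls in
    guard let paywall = paywalls.first(where: { $0.identifier == "main" }) else { return }
    // Render the paywall from its products and configuration.
    for product in paywall.products {
        print("Offer product: \(product.productId)")
    }
}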

Test Paywall within Placements: This newer and recommended method allows for more targeted paywall A/B testing. By testing within specific placements, you can assess different paywalls for specific audience segments. This approach ensures that the testing is confined to a particular area within the app, leaving other sections unaffected.
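
When the user buys, purchasing the ApphudProduct taken from the placement's paywall keeps attribution intact. A hedged sketch, assuming an Apphud.purchase(_:callback:) API that accepts an ApphudProduct (check your SDK version for the exact signature):

// A sketch of purchasing a product obtained from the placement's paywall,
// so the purchase is attributed to the placement and its experiment.
func buyFirstProduct(from paywall: ApphudPaywall) {
    guard let product = paywall.products.first else { return }
    Apphud.purchase(product) { result in
        if let error = result.error {
            print("Purchase failed: \(error.localizedDescription)")
        } else {
            print("Purchase successful")
        }
    }
}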

Variations

You have the option to define up to 5 distinct variations for the target paywall. For each variation, you are required to specify a unique combination of three elements: a custom JSON configuration, a distinct variation name, and a specific set of products to which it applies. Additionally, there is an option to create a new variation by duplicating an existing paywall.
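
On the client, each variation's JSON configuration can be read from the paywall object to adjust the UI. A minimal sketch, assuming the paywall exposes its configuration as a json dictionary (the property name may differ in your SDK version); the keys and the applyPaywallStyle helper are hypothetical:

// A sketch of driving UI from the variation's JSON configuration.
// "titleText" and "buttonColor" are illustrative keys only.
if let config = paywall.json {
    let title = config["titleText"] as? String ?? "Unlock Premium"
    let buttonColor = config["buttonColor"] as? String ?? "#FF5A5F"
    applyPaywallStyle(title: title, buttonColorHex: buttonColor) // hypothetical helper
}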

Traffic Allocation: This feature manages the distribution of traffic (users) among different variants. By default, the distribution is set to a 50/50 split. However, these allocation percentages can be adjusted at any point during the experiment's runtime to suit your testing needs.

Audience

Audiences are designed to segment users based on various criteria, such as whether they are new or existing users, their country, app version, and more. Read more.

For your experiment, you can either select from pre-existing default audiences or create a custom audience tailored to your specific requirements. This flexibility allows for more targeted and effective experimentation.

Once all experiment settings are configured to your satisfaction, click "Save and review." This action takes you to a summary page where you can perform a final review and verification of all settings before finalizing your experiment setup.

Run Experiment

Once you have ensured that all data is correct, click "Run experiment" to start the test. Otherwise, edit the experiment.

Experiment States

If you prefer not to run the test immediately, the experiment will be saved as a draft. The draft can be edited and launched later.

Duplicate Experiment

Click "Duplicate" in the context menu on the experiment in the list to create its copy. All experiment settings (such as target paywall, variations, and audience) will move to the new test.

Analyze Experiment

After the experiment is started, it accumulates user data and all related metrics.

Views
The number of Paywall views. The developer must send the Paywall Shown event from the SDK; otherwise N/A will be displayed. Every repeated view is counted.

Marked Users
The number of users who were assigned to the relevant variation of the experiment.

Affected Users
The number of unique users who viewed the given paywall. The developer must send the Paywall Shown event from the SDK; otherwise N/A will be displayed.

Trials
The number of Trial Started events. Applies only to purchases from the target paywall.

CR Trials
Conversion from a paywall view to a free trial. The developer must send the Paywall Shown event from the SDK; otherwise conversion will be calculated based on the Marked Users metric. Applies only to purchases from the target paywall.

CR Trial-Purchase
Conversion from a free trial to a paid subscription. Applies only to purchases from the target paywall.

Purchases
Number of paid events excluding subsequent renewals. These include: Subscription Started, Trial Converted, and Non-Renewing Purchase events. Applies only to purchases from the target paywall.

CR Purchases
Conversion from a paywall view to a paid purchase, excluding free trials and subsequent renewals. The developer must send the Paywall Shown event from the SDK; otherwise conversion will be calculated based on the Marked Users metric.

Last Purchase
Date of the last paid event excluding subsequent renewals. These include: Subscription Started, Trial Converted, and Non-Renewing Purchase events. Applies only to purchases from the target paywall.

Sales
Total amount billed to customers, including renewals. Applies only to purchases from the target paywall.
Sales = Gross Revenue - Refunds.

Proceeds
Estimated revenue developer receives after deducting taxes and store commission. Applies only to purchases from the target paywall.

Refunds
Amount of money refunded to users. Applies only to purchases from the target paywall.

ARPAS
Average proceeds revenue per paying user or free trial subscriber. Applies only to purchases from the target paywall. Includes revenue from non-subscription purchases.

ARPU
Average proceeds revenue per user including renewals. Applies only to purchases from the target paywall.

ARPPU
Average proceeds revenue per paying user including renewals. Applies only to purchases from the target paywall.

📘

Note

Since ARPU/ARPPU change over time due to refunds and renewals, experiment results for these metrics may change.

Target Metrics

Conversion to purchase
Understand which variation is better in terms of conversion from a paywall view to purchasing.

Conversion to trial
Understand which variation is better in terms of conversion from a paywall view to a trial start.

ARPU
Understand which variation is better in terms of ARPU value. Calculated as Proceeds per Marked User.

ARPPU
Understand which variation is better in terms of ARPPU value. Calculated as Proceeds per paying Marked User.

❗️

Important Note

It is strongly recommended to send the Paywall Shown event from the SDK; otherwise metrics will be calculated based on the Marked Users metric, which may lead to inaccurate values.
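
A minimal sketch of reporting the view from iOS, assuming the SDK's paywallShown method takes the paywall object (check the exact signature for your platform):

// Report the view when the paywall screen actually becomes visible, so that
// Views, Affected Users and CR metrics are based on real impressions
// rather than on the Marked Users metric.
Apphud.paywallShown(paywall)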

Effect

The "Effect" metric quantifies the relative change between a selected metric in Variation B compared to Variation A. This measurement is expressed as a percentage, indicating how much one variation outperforms or underperforms compared to the other.

Example: If the purchase conversion rate in Variation A is 5% and in Variation B it is 10%, the Effect is +100%. This means Variation B's performance is double that of Variation A, or a 100% improvement.
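
Expressed as a formula, Effect = (metric in B - metric in A) / metric in A * 100%. A tiny sketch of the same arithmetic:

// Relative change of variation B over variation A, in percent.
func effect(variationA: Double, variationB: Double) -> Double {
    (variationB - variationA) / variationA * 100
}

let e = effect(variationA: 0.05, variationB: 0.10) // +100%: B doubles A's conversion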

P-value

The P-value is a statistical metric ranging from 0 to 1, utilized to test hypotheses in experiments. It helps determine whether the observed results are due to chance or are statistically significant.

In our experiments, we set a significance threshold of 5% (P-value = 0.05). A result is considered statistically significant, and thus allows for the rejection of the null hypothesis, if its P-value is less than or equal to this threshold (P-value ≤ 0.05).

We will notify you when the test results are confirmed to be significant based on the P-value criterion.

Complete and Evaluate

When you see significant results on the desired target metric, you can complete the experiment.

If the metric fails to show significance over an extended period, it may be a signal to re-evaluate the experimental conditions. Consider conducting a new test with more pronounced variations in pricing and paywall parameters, as this could amplify the potential impact of these changes and lead to more significant results.

Using Experiments in Observer Mode

You can use A/B experiments on paywalls and placements even in Observer mode. If you don't use the Apphud SDK to purchase subscriptions, you need to specify the paywall identifier and, optionally, the placement identifier that were used to purchase a product. If you pass the paywall identifier correctly, experiment analytics will work as expected.

iOS:
Specify the paywall identifier and, optionally, the placement identifier before making a purchase using the willPurchaseProductFrom method:

// Tell Apphud which paywall (and placement) the upcoming purchase belongs to.
Apphud.willPurchaseProductFrom(paywallIdentifier: "paywallID", placementIdentifier: "placementID")

// Then make the purchase with your own billing code.
YourClass.purchase(product) { result in
    ...
}

Android:

Specify the paywall identifier and, optionally, the placement identifier after making a purchase using the trackPurchase method:

// Call trackPurchase after a successful purchase.
// Always pass offerIdToken when purchasing a subscription.
// Pass paywallIdentifier and placementIdentifier to correctly attribute purchases in A/B experiments.
Apphud.trackPurchase(
    purchase: Purchase,
    productDetails: ProductDetails,
    offerIdToken: String?,
    paywallIdentifier: String? = null,
    placementIdentifier: String? = null
)