Experiments Overview

Test paywall pricing and UI elements without updating your app

Use A/B experiments to increase your app's revenue: determine the most effective combination of products on your paywall and the optimal price for each in-app purchase. Customize and test various paywall configurations using a JSON configuration.

Overview

  • Experiment Modes: Conduct Paywall experiments in two distinct modes: 'Standalone' or 'Within Placements'.
  • Variation Support: Support for up to 5 variations, enabling comprehensive A/B/C/D/E testing.
  • Custom JSON Configurations: Each variation can be customized with a unique JSON configuration, allowing for modifications in user interface and other elements.
  • SDK Integration: Retrieve the name of the experiment for each user directly from the SDK (see the sketch after this list).
  • Base Variation Editing: Edit the base variation (Variation A) to establish a standard for comparison.
  • Traffic Allocation: Allocate traffic to each variation as per requirement, allowing for controlled exposure and accurate data collection.
  • Targeted Experiments: Conduct experiments tailored to custom audiences, such as specific countries, app versions, or other user segments.
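
For illustration, here is a minimal Swift sketch of reading experiment metadata from fetched paywalls. It assumes ApphudPaywall exposes experimentName and variationName properties; verify the exact names against your SDK version.

import ApphudSDK

// A minimal sketch: log which experiment and variation each fetched paywall belongs to.
// experimentName / variationName are assumed property names; check the SDK reference.
func logExperimentInfo(for paywalls: [ApphudPaywall]) {
    for paywall in paywalls {
        if let experiment = paywall.experimentName {
            print("Paywall \(paywall.identifier): experiment \(experiment), variation \(paywall.variationName ?? "base")")
        }
    }
}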

Experiments within Placements (New Feature)

Enhance your approach with the new 'Placements' feature. This allows for the strategic management of paywall appearances within different sections of the app, such as onboarding, settings, etc., targeting specific audiences. Execute A/B experiments on selected paywalls to optimize conversion rates for particular placements and user groups. See more details below.

Experiments Priority

You can run multiple experiments on your paywalls simultaneously.

For both standalone and placement experiments, the latest experiment takes priority.

For instance, if a developer sets up an Onboarding Placement with paywalls and multiple experiments are created for overlapping audiences, the system will assign each overlapping user exclusively to the most recently launched experiment.

This approach ensures clarity in data analysis and avoids potential confusion from a user being part of multiple concurrent tests.

Managing Experimented Paywalls in Your App

Integrating experimented paywalls in your app is straightforward but requires attention to detail regarding how you access paywalls from the SDK. Here's how to proceed:

Choose the Appropriate Experiment Mode: Your choice of experiment mode should align with your app's paywall implementation. If your app uses standalone paywalls, conduct A/B experiments on these standalone versions. Conversely, if your app utilizes paywall placements, your experiments should be within these placements.

Access Paywalls Correctly: It's crucial to retrieve paywalls correctly based on your chosen feature. If your app uses the placements feature, you must obtain the paywall object directly from the corresponding placement object. Avoid using the standalone paywalls array from the SDK in this context. Fetching a paywall through Apphud.paywalls() in a placement-based setup will disrupt your analytics, as purchases won't be accurately attributed to the respective placement.
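
As a rough Swift sketch of the placement-based flow (the paywall property on ApphudPlacement and the "onboarding" identifier are illustrative assumptions; check the SDK reference for exact names):

import ApphudSDK

// A sketch, not the definitive API: take the paywall from its placement so that
// purchases are attributed to that placement in analytics.
func paywallForOnboarding(from placements: [ApphudPlacement]) -> ApphudPaywall? {
    // Correct: look up the placement (here the hypothetical "onboarding" identifier)
    // and use its paywall.
    // Incorrect in a placement-based setup: reading the same paywall from the
    // standalone Apphud.paywalls() array, which breaks placement attribution.
    return placements
        .first(where: { $0.identifier == "onboarding" })?
        .paywall
}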

Understand SDK Behavior with Experimented Paywalls: Be aware that the SDK's treatment of experimented paywalls varies based on the type of experiment. If you run an A/B experiment on a paywall within a placement, only that specific placement will return the modified version of the target paywall. Other areas, including the standalone paywalls array and other placements, will continue to present the original paywall. Similarly, experimenting on a standalone paywall will not affect the representation of that paywall within any placements.

Create Experiment

When creating an experiment, you can configure a number of parameters. Read the detailed explanation below.

Experiment Setup

Experiment Name and Description: You can enter the name of your test here, and optionally add a description to provide more context.

Platform: Each experiment can be conducted on only one platform at a time. If your app is available exclusively on either iOS or Android (not both), the platform selection will be automatically tailored to your app's availability.

Target Paywall: Select one of your existing paywalls to use as the baseline (control) variation in your experiment. This paywall will act as the standard against which other variations are compared.

Test Paywall Standalone: This traditional approach to A/B testing involves altering the paywall object with one of the variations. Standalone paywalls can be accessed using the Apphud.paywalls() method in the SDK or a similar function.

Test Paywall within Placements: This newer and recommended method allows for more targeted paywall A/B testing. By testing within specific placements, you can assess different paywalls for specific audience segments. This approach ensures that the testing is confined to a particular area within the app, leaving other sections unaffected.

Variations

You can define up to 5 distinct variations for the target paywall. For each variation, you specify a unique combination of three elements: a custom JSON configuration, a distinct variation name, and a specific set of products to which it applies. Additionally, you can create a new variation by duplicating an existing paywall.
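
As an illustration of how a variation's JSON configuration might drive UI differences, here is a hedged Swift sketch. It assumes ApphudPaywall exposes its configuration as a json dictionary; the keys used ("titleText", "highlightYearly") are hypothetical examples.

import ApphudSDK

// A sketch: read a variation's custom JSON configuration to adjust the paywall UI.
// The json property is assumed from the SDK; the keys below are hypothetical.
func configureUI(for paywall: ApphudPaywall) {
    guard let config = paywall.json else { return }

    let title = config["titleText"] as? String ?? "Unlock Premium"
    let highlightYearly = config["highlightYearly"] as? Bool ?? false

    print("Title: \(title), highlight yearly plan: \(highlightYearly)")
}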

Traffic Allocation: This feature manages the distribution of traffic (users) among different variants. By default, the distribution is set to a 50/50 split. However, these allocation percentages can be adjusted at any point during the experiment's runtime to suit your testing needs.

Audience

Audiences are designed to segment users based on various criteria, such as whether they are new or existing users, their country, app version, and more. Read more.

For your experiment, you can either select from pre-existing default audiences or create a custom audience tailored to your specific requirements. This flexibility allows for more targeted and effective experimentation.

Once all experiment settings are configured to your satisfaction, click "Save and review." This action takes you to a summary page where you can perform a final review and verification of all settings before finalizing your experiment setup.

Test Mode

You can test your variation setups on a device before launching an experiment. This feature allows you to preview the paywalls that will be shown to users for each A/B test variation.

📘 Only Sandbox Users supported

Test Mode is available only for sandbox users, i.e. Xcode, TestFlight, and Android Studio installs.

Key Points

  • Audience Step Skipped. During test mode, the audience selection step is bypassed. This enables you to test your variations regardless of the selected target audience.
  • Sandbox Only. Remember that Test Mode applies only for Sandbox Users at this time.
  • Keep Experiments Priority in Mind. If another live experiment, or another experiment with Test Mode enabled, was created later than the current one, it may override your current experiment. A newer experiment has greater priority than an older one.
  • Keep the SDK Paywalls Cache in Mind. The SDK caches paywalls, and the cache behavior differs between sandbox and production modes. For sandbox installations the cache lasts about 60 seconds; if you still don't see your test variation, try reinstalling the app.
  • Change Traffic Allocation to Test Another Variation. Change the Traffic Allocation of your desired variation to 100%.
    After testing, ensure you revert the traffic allocation value to its original setting.

This approach helps you ensure that each variation functions correctly and appears on your device as intended to your users.

Run Experiment

Once you are sure that all data is correct, click "Run experiment" to start the test. Otherwise, edit the experiment.

Experiment States

You may not want to run the test immediately. In that case, the experiment is saved as a draft, which can be edited and launched later.

Duplicate Experiment

Click "Duplicate" in the context menu on the experiment in the list to create its copy. All experiment settings (such as target paywall, variations, and audience) will move to the new test.

Using Experiments in Observer Mode

You can use A/B experiments on paywalls and placements even in Observer mode. If you don't use the Apphud SDK to purchase subscriptions, you need to specify the paywall identifier (and optionally the placement identifier) that was used to purchase a product. If you pass the paywall identifier correctly, experiment analytics will work as expected.

iOS:
You need to specify the paywall identifier (and optionally the placement identifier) before making a purchase, using the willPurchaseProductFrom method:

Apphud.willPurchaseProductFrom(paywallIdentifier: "paywallID", placementIdentifier: "placementID")

// Purchase using your own billing code
YourClass.purchase(product) { result in
    // handle the purchase result
}

Android:

You need to specify the paywall identifier (and optionally the placement identifier) after making a purchase, using the trackPurchase method:

// Call the trackPurchase method after a successful purchase.
// Always pass offerIdToken when purchasing a subscription.
// Pass paywallIdentifier and placementIdentifier to correctly attribute purchases in A/B experiments.
Apphud.trackPurchase(
    purchase: Purchase,
    productDetails: ProductDetails,
    offerIdToken: String?,
    paywallIdentifier: String? = null,
    placementIdentifier: String? = null)

Complete and Evaluate

When you see significant results on the desired target metric, you can complete the experiment.

If the metric fails to show significance over an extended period, it may be a signal to re-evaluate the experimental conditions. Consider conducting a new test with more pronounced variations in pricing and paywall parameters, as this could amplify the potential impact of these changes and lead to more significant results.

Learn more about how to analyze experiment data in the Analyzing Experiments guide.