In the competitive advertising landscape, every change you make to your campaigns can make a huge difference for your business. Sometimes, it’s not immediately clear whether a new bidding strategy, setting, or feature is the best move for you. How can you make sure that you’re making the right decisions for your Microsoft Advertising campaigns without wasting precious time and resources? With experiments rolling out globally, you can now test those campaign changes with full confidence.
What’s an experiment?
An experiment is a duplicate version of your campaign that provides you with a controlled environment to monitor a change without fully launching it across your whole campaign. This way, you can run a true A/B test within a campaign to determine whether a particular update will work well for you and your business. Some examples include:
- Ad copy - If you want to test different messages and calls-to-action in your ads, try them out in an experiment and compare their performance against the original.
- Landing page URLs - See whether different landing pages result in better performance for your campaigns.
- Bidding strategies and modifiers - Allocate a percentage of your campaign budget towards a smart bidding tactic like Maximize Clicks, or test out different bid adjustments.
We worked with various pilot customers to ensure we could bring you the best possible experience for building experiments. Here’s what two of our advertisers from Performics have been saying:
"Using the experiments functionality within [Microsoft Advertising] proved simple, seamless, and effective in gaining an understanding of the tested feature (in this case, Max Clicks)."
- Steve Szuter, Senior Media Manager, Performics

"Experiments allowed us to set up, execute, and implement results from an automated bid strategy test with ease. We look forward to leveraging this new capability across upcoming tests."
- Brian Hogue, Media Director, Performics
How do you get started?
Before you set up any kind of experiment, have a clear hypothesis and goal in mind. Once you’re set on what you want to test, navigate to the Experiments tab on the Campaigns page and select the campaign you want to experiment on.
The Create an experiment window on the Experiments tab.
Once you select a campaign, you can enter any Experiment name you'd like, along with a Start date and an End date (if desired). Then set your Experiment split, which is the percentage of the original campaign’s daily budget and ad traffic that you want to allocate to this experiment. To gather enough volume quickly and make comparisons easier, we recommend setting your experiment split at 50%.
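To see how a split divides a campaign, here’s a minimal sketch of the arithmetic behind it. The function name and the $200 daily budget are our own illustration, not part of the Microsoft Advertising product or API:

```python
# Illustration only: how an experiment split divides a campaign's daily
# budget (and, proportionally, its ad traffic). Names and figures are
# hypothetical examples.

def split_budget(daily_budget: float, experiment_split_pct: float) -> tuple[float, float]:
    """Return (experiment_share, original_share) of the daily budget."""
    experiment_share = daily_budget * experiment_split_pct / 100
    return experiment_share, daily_budget - experiment_share

# A $200/day campaign with the recommended 50% split gives each
# campaign $100/day, which keeps the two sides directly comparable.
exp_share, orig_share = split_budget(200.00, 50)
print(f"Experiment: ${exp_share:.2f}/day, Original: ${orig_share:.2f}/day")
```

An even 50/50 split means neither side is starved for volume, which is why the comparison reaches a useful sample size faster than, say, a 90/10 split.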
After you create the experiment (i.e., the duplicate campaign), check the Experiment status to confirm it was created successfully with no errors. You now have two campaigns: the original and the experiment.
How should you approach your experimentation?
For the best and most accurate results, we recommend running your experiment in A/A mode for two weeks, meaning the original campaign and the experiment campaign remain identical for the first two weeks after the experiment is created. This allows time for the experiment campaign to ramp up and helps validate that it performs the same as the original, so that you can then run a true A/B test.
After those two weeks are up and you’ve validated that performance doesn’t differ in any statistically significant way, you can start the actual A/B test. Once you make the desired change to your duplicate campaign, run the experiment for at least two weeks (and four or more for complex bidding strategies like Target CPA and Maximize Conversions) to compare performance. For more information, see our help documentation: Discover the possibilities with experiments.
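If you want a concrete way to check the A/A validation described above, one common approach (our suggestion, not a built-in product feature) is a two-proportion z-test on a metric like click-through rate. The figures below are made up for illustration:

```python
# A minimal sketch of checking whether the A/A period shows a statistically
# significant difference in CTR between the original and experiment
# campaigns, using a two-proportion z-test. All numbers are hypothetical.
import math

def two_proportion_z(clicks_a: int, imps_a: int, clicks_b: int, imps_b: int) -> float:
    """Return the z-statistic for the difference in CTR between two campaigns."""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / se

# Example: original got 520 clicks on 10,000 impressions; the experiment
# got 498 clicks on 10,000 impressions during the A/A period.
z = two_proportion_z(520, 10_000, 498, 10_000)

# |z| < 1.96 means no significant difference at the 95% confidence level,
# suggesting the experiment has ramped up and the A/B test can begin.
print(f"z = {z:.2f}, significant difference: {abs(z) >= 1.96}")
```

If the z-statistic does exceed the threshold during A/A, that’s a signal to wait longer or investigate before making your change, since any A/B result would be confounded by the pre-existing gap.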
Let us know your thoughts
For any questions or feedback regarding campaign experiments, we encourage you to reach out to your Microsoft Advertising account manager or contact Support. You can also ping us on Twitter, or suggest a feature on the Microsoft Advertising Feature Suggestion Forum.