Week 3
Measure the success of Marketing campaigns
You will investigate the metrics and outcomes that define a successful marketing campaign. You’ll examine different metrics that help you determine the ROI or ROAS of a marketing project so you can make adjustments to improve returns. You’ll also learn how to plan for and conduct an A/B test to optimize a marketing campaign. Finally, you’ll examine what a successful marketing campaign looks like and what makes it successful.
Time commitment
- Videos: 22 min
- Readings: 2 h 40 min
- Quiz: 1 graded quiz
Learning Objectives
- Describe how to determine the ROI or ROAS of a marketing project.
- Prepare, conduct, and analyze the results from an A/B test to optimize a marketing campaign.
- Evaluate metrics against performance goals to make adjustments to a marketing budget or strategy.
- Describe what defines a successful marketing campaign.
Content
- Moving targets: Adjusting media and performance goals
- Performing and analyzing results from A/B tests
- What does a successful campaign achieve?
- Review: Measure the success of Marketing campaigns
1. Moving targets: Adjusting media and performance goals
Welcome to week 3
- Video Duration: 1 minute
Welcome back! Let’s talk about success. People love success because it makes them feel good. If you work on a successful marketing campaign, you’ll feel good about it, but success is more than a good feeling. A common way to measure a campaign’s success is by how well potential customers moved forward in the customer journeys. Ultimate success is measured by the number of completed sales. Metrics can help determine the success of marketing initiatives and campaigns. Return on investment is one metric, and you’ll probably hear it more often as an acronym formed from each of the first letters, ROI. For ad campaigns specifically, there’s return on ad spend, which is often referred to as another acronym, ROAS. A business always has to make a profit to stay in business. That’s why these metrics on returns are so critical. You’ll learn how to use these metrics to measure marketing success. To help increase returns, tests are sometimes performed to determine which of two variants generates better performing results. These tests are called A/B tests. They’re also called split tests or bucket tests. They help marketers make decisions that impact the success of a campaign. You’ll learn more about these tests in upcoming lessons. And finally, you’ll examine case studies of marketing campaigns with varying levels of success. You can learn a lot from a case study that describes how best practices were applied. You can also learn a lot from case studies describing what to avoid or what went wrong. Success feels good, so I wish you success in this part of the course and for the rest of the program.
ROI and ROAS calculations
- Video Duration: 4 minutes
How do you measure marketing success? You were just introduced to the terms return on investment, or ROI, and return on ad spend, or ROAS. Marketers use these calculations to help them determine the success of marketing campaigns. Now it’s your opportunity to learn how these are calculated. Let’s start with ROI. ROI is a measure of the profit generated from a marketing campaign. More specifically, it’s the ratio of net income, or money made, to investment, or money spent. ROI can be calculated one of two ways. In the first method, to calculate ROI, subtract the marketing costs from the total sales growth during the period the campaign was run and then divide the result by the marketing cost. Sales growth is a positive increase in sales when compared to prior sales. In other words, it’s the positive change in sales, or sales lift. For example, if sales growth was $200,000 and the marketing cost was $55,000, the marketing ROI is 200,000 minus 55,000, or 145,000, divided by 55,000. The ROI is 2.6. If you consider an ROI of 1 as the break-even point, or when sales growth equals marketing cost, an ROI greater than 1 is on the upside. The higher the ROI, the greater the upside. A second method to measure ROI uses something called customer lifetime value, or LTV. Customer lifetime value is the average revenue generated per customer over a certain period of time. Software analytics tools can help with LTV measurement. Google Analytics 4 reports LTV for users acquired through different channels over an acquisition date range you specify, up to 120 days. This allows you to monitor customer behavior over several months to observe the effect of a marketing campaign. You can find LTV in the acquisition overview report. An upward LTV trend indicates an increasing ROI. Now let’s shift from ROI to return on ad spend, or ROAS. ROAS is a number calculated as the revenue generated divided by the amount spent on advertising. Performance goals are created from and tie into higher-level marketing and business goals. ROAS targets for digital channels are usually set relative to an overall campaign-level ROAS across a mix of media. A marketing goal of 5:1 ROAS usually produces related ROAS performance goals for each channel, such as 3:1 ROAS for search and 4:1 ROAS for display. The per-channel ROAS results contribute to the overall ROAS across all media. If you’re monitoring the analytics for a campaign, you might wonder when your intervention is needed. If the ROAS isn’t meeting your target, what can you do? This isn’t a complete list, but here are a few things you can consider doing. You can lengthen the campaign duration. Measuring conversions is a fluctuating, or noisy, process. ROAS can temporarily be lower than a target due to fluctuations in the marketplace. These fluctuations may be outside of your control, so be patient. A good practice is to evaluate ROAS after a minimum of 50 conversions have been reported. You can set ROAS targets by product groups instead of using an overall ROAS. Suppose a clothing store sells evening gowns, dresses, tuxedos, and suits and wants a 4:1 overall ROAS but isn’t able to achieve that goal initially. Instead of sticking with the ROAS as an overall target, they can split out the targets by product groups like formal and more casual wear. ROAS by product groups may provide a more accurate measurement for the campaign’s success. Review how the ROAS target was set.
If the ROAS wasn’t based on the ROAS from previous campaigns or was based on margins that were far too optimistic, you could reset the ROAS target to a more realistic number based on the data you now have. Lastly, if applicable, you can adjust the automated bidding strategy to see if that helps you meet the target ROAS. This is done for automated campaigns only. As you have now learned, ROAS is an effective measurement for performance. Measuring ROAS helps you evaluate the performance and success of the campaign. It may take more practice, but in time, measuring both marketing ROI and ROAS will become a natural part of your job. They’re both key metrics in marketing analytics.
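To make the two formulas concrete, here is a minimal sketch in Python. The sales growth and marketing cost mirror the video’s example; the revenue and ad spend figures used for ROAS are hypothetical stand-ins for a 5:1 goal.

```python
# A minimal sketch of the ROI and ROAS formulas described in this video.
def marketing_roi(sales_growth: float, marketing_cost: float) -> float:
    """ROI = (sales growth - marketing cost) / marketing cost."""
    return (sales_growth - marketing_cost) / marketing_cost

def roas(revenue: float, ad_spend: float) -> float:
    """ROAS = revenue generated / amount spent on advertising."""
    return revenue / ad_spend

# Figures from the video's ROI example: $200,000 sales growth, $55,000 marketing cost.
print(round(marketing_roi(200_000, 55_000), 1))   # 2.6, above the break-even ROI of 1

# Hypothetical revenue and ad spend that would hit a 5:1 ROAS goal.
print(roas(500_000, 100_000))                     # 5.0
```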
Determine the ROI of a marketing project
- Reading Duration: 20 minutes
In a previous video, you learned a simple calculation for ROAS (return on ad spend): ROAS = Revenue generated/Ad spend. You also learned that **Lifetime Value (LTV)**, sometimes called Customer Lifetime Value, is the average revenue generated per customer over a certain period of time.
Both ROAS and LTV enable you to get an estimate of your return on investment, or ROI, for a campaign. ROAS gives you a way to measure the short-term performance of your campaign. It answers the basic question: Did the campaign bring in more revenue than what was spent on the campaign? LTV answers a more strategic question: Did the campaign increase or promote the “stickiness” of customers so they made additional purchases? When you consider both ROAS and LTV, you are placing a value on both the numeric and strategic aspects of ROI for your marketing effort.
The video covered ROAS calculations based on revenue and what you can do to improve ROI if ROAS doesn’t meet your expectations. This reading provides more information about using LTV as a measure of ROI.
ROI using LTV
The definition of LTV does not provide a specific time period for which to calculate the average revenue generated per customer. The amount of time can be years, quarters, or months. Months is most commonly used in retail. When the time period is from the past to the present, LTV is sometimes referred to as total LTV. When the time period includes future dates, the LTV is referred to as predicted lifetime value (pLTV). Because pLTV relies on transactions and customer behaviors to predict a future LTV, pLTV becomes more accurate with each additional purchase and customer interaction that occurs.
Using pLTV is a common method to estimate the impact of digital marketing efforts before every sale comes through. For example, when newly registered customers make their first purchase, you can use the historical performance of similar customer types to predict the amount of revenue they will bring in over time.
Pro tip: When determining the ROI of completed campaigns, use total LTV rather than pLTV. You can use pLTV to predict the ROI for campaigns that are still in progress.
Two ways you can use LTV to measure the success or ROI of a completed campaign are:
- LTV by channel
- LTV to CAC ratio (LTV/CAC)
LTV by channel
Just as you can measure ROAS for each channel in a campaign, you can measure LTV in the same manner. Attribution of conversions by channel must be enabled in advance.
For each channel in your campaign, calculate the following:
LTV = Average Order Value (AOV) x Purchase frequency
Comparing the LTV for each channel provides insights on ROI by channel.
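As a quick illustration of the per-channel calculation, here is a minimal sketch in Python; the channel names, AOV values, and purchase frequencies are hypothetical.

```python
# Hypothetical per-channel inputs; LTV = average order value (AOV) x purchase frequency.
channels = {
    "search":  {"aov": 128.69, "purchase_frequency": 1.5},
    "display": {"aov": 27.61,  "purchase_frequency": 2.0},
    "social":  {"aov": 87.41,  "purchase_frequency": 1.5},
}

for name, data in channels.items():
    ltv = data["aov"] * data["purchase_frequency"]
    print(f"{name}: LTV = ${ltv:.2f}")
```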
LTV to CAC ratio
Customer acquisition cost (CAC) is the average cost of acquiring a paying customer. LTV and CAC are used to calculate an LTV to CAC ratio. This ratio is helpful to determine if the value gained from adding new customers during a campaign is enough to cover the cost to acquire them. The higher the ratio, the better the ROI.
You can calculate the LTV to CAC ratio for a campaign or channel using the following:
LTV to CAC ratio = LTV/CAC
A result of 2 or higher is normally considered good. An ideal result is around 3. A result below 2 could occur if you’re intentionally spending more to gain market share. However, if that isn’t the case, you might need to cut budget spend to reduce the CAC and increase the LTV to CAC ratio. If the result is above 3, your ROI is solid and you have a steady and robust revenue stream. With a result above 3, you would presumably have enough budget to expand your business. For example, if your goal is to diversify the kinds of products associated with your brand, you could support that goal with advertising campaigns that are adequately funded.
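Here is a minimal sketch in Python of the ratio and the rule-of-thumb interpretation described above; the LTV and CAC inputs are hypothetical.

```python
def ltv_to_cac(ltv: float, cac: float) -> float:
    """LTV to CAC ratio = LTV / CAC."""
    return ltv / cac

ratio = ltv_to_cac(ltv=131.12, cac=28.0)   # hypothetical channel figures
if ratio < 2:
    note = "below 2: review spend unless you're intentionally buying market share"
elif ratio < 3:
    note = "2 to 3: normally considered good"
else:
    note = "3 or above: solid ROI and a robust revenue stream"
print(f"LTV/CAC = {ratio:.2f} ({note})")
```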
Marketing mix models
Marketing mix models, sometimes called media mix models, are statistical models advertisers use to predict the effectiveness and ROI of their advertising spend. These models rely on at least two years of historical data from previous campaigns. Since they were first developed in the 1960s, they have become more reliable in predicting campaign ROI because of the recent benefits of artificial intelligence (AI) and machine learning. The actual models are beyond the scope of this reading, but you should be familiar with these terms.
Key takeaways
ROAS and LTV are a good place to start when trying to determine the short-term ROI of a campaign. Marketing mix models can help predict the ROI for campaigns, but building them requires additional technical skills, like computer programming and knowledge of statistics.
What would you change in a campaign?
- Discussion Prompt Duration: 10 minutes
What would you change in a campaign? You just learned that return on ad spend (ROAS) targets for digital channels are usually set relative to an overall campaign-level ROAS across a mix of media.
Consider the following scenario. You are monitoring the analytics for your campaign when you recognize that the ROAS is not meeting your target.
What actions can you take to make adjustments? In your response, consider:
- Campaign duration
- Media mixes
- Budgets
- Performance goals
- Bidding strategy
Please respond in 5–10 sentences. Then, visit the discussion forum to learn about your peers’ approach, and reply to at least two posts.
Answer
When you discover that the return on ad spend (ROAS) in your campaign is not meeting your target, there are several actions you can take to make adjustments and optimize your campaign’s performance:
- Campaign Duration: You can assess the campaign’s duration. If it’s a short-term campaign, extending it may provide more time to achieve the desired results. Conversely, if it’s a long-term campaign, you might consider making interim adjustments to improve ROAS.
- Media Mixes: Examine the mix of media channels you are using. Allocate more budget and resources to the most effective channels while reducing or reallocating resources from underperforming ones. Analyze which channels are generating the best results and focus on them.
- Budgets: Adjust your budget allocation based on the channels and strategies that are driving better ROAS. You may need to increase budgets for high-performing channels or decrease budgets for low-performing ones. Be flexible in budget allocation to align with performance.
- Performance Goals: Reevaluate your performance goals. If your initial ROAS targets were too ambitious, consider revising them to be more realistic. Conversely, if your goals were too conservative, challenge your team to achieve better results.
- Bidding Strategy: Review your bidding strategy. Implement automated bidding strategies provided by advertising platforms like Google Ads or adjust your manual bidding to maximize ROAS. Experiment with different bidding strategies to find the one that works best for your campaign.
- Keyword and Ad Optimization: Continuously analyze and optimize your keywords, ad copy, and landing pages. Ensure that your keywords are relevant and that your ad copy is compelling. A/B testing can help identify what works best.
- Audience Targeting: Refine your audience targeting to focus on high-converting segments. Use audience insights to tailor your messaging and ad creatives to specific customer segments.
- Ad Creative: Evaluate your ad creatives. Ensure they are engaging and relevant to the audience. Experiment with different ad formats, visuals, and ad messaging to see which resonates best.
- Ad Schedule: Adjust the timing of your ad placements to align with when your target audience is most active. This can help maximize the impact of your ads.
- Tracking and Analytics: Ensure that your tracking and analytics setup is accurate and comprehensive. Review the data regularly to identify areas for improvement and optimize your campaign based on data-driven insights.
Remember that optimization is an ongoing process. Monitor the impact of your adjustments, gather data, and refine your strategy accordingly. It’s crucial to be agile and willing to adapt to changing market conditions and customer behaviors. Regularly testing and learning from your campaign data will help you achieve and exceed your ROAS targets.
Activity: Make campaign budget decisions
- Practice Quiz. 1 question. Grade: 100%
- Access Quiz:
- On Step 1: Access the template: ROI calculations for campaign debriefing
- On Step 2: Access supporting materials: Multi-channel campaign data
Activity Exemplar: Make campaign budget decisions
- Reading. Duration: 10 minutes
Here is a completed exemplar along with an explanation of how the exemplar fulfills the expectations for the activity.
Completed Exemplar
To review the exemplar for this course item, click the Link to exemplar: ROI calculations for campaign debriefing
Assessment of Exemplar
Compare the exemplar to your completed document. Review your work using each of the sections in the exemplar. What did you do well? Where can you improve? Use your answers to these questions to guide you as you continue to progress through the course.
If one or more of your calculations didn’t match the results in the exemplar, you can view the detailed calculations here to understand what you may have missed.
ROAS
Formula: ROAS = Revenue/Ad spend
- Campaign ROAS = $502,358/$250,000 = 2.01 or 201%
- ROAS for search ads = $320,943/$187,500 = 1.71 or 171%
- ROAS for display ads = $56,288/$32,000 = 1.76 or 176%
- ROAS for social ads = $70,101/$15,000 = 4.67 or 467%
- ROAS for shopping ads = $55,026/$15,500 = 3.55 or 355%
Average order value (AOV)
Formula: AOV = Revenue / Number of orders
- Campaign AOV = $502,358/6,237 = $80.55
- AOV for search ads = $320,943/2,494 = $128.69
- AOV for display ads = $56,288/2,039 = $27.61
- AOV for social ads = $70,101/802 = $87.41
- AOV for shopping ads = $55,026/902 = $61.00
Lifetime value (LTV)
Formula: LTV = Average order value (AOV) x Purchase frequency
- Campaign LTV = $80.55 x 1.6 = $128.88
- LTV for search ads = $128.69 x 1.5 = $193.04
- LTV for display ads = $27.61 x 2 = $55.22
- LTV for social ads = $87.41 x 1.5 = $131.12
- LTV for shopping ads = $61.00 x 1.5 = $91.50
LTV to CAC ratios
Formula: LTV to CAC ratio = LTV/CAC
- Campaign LTV to CAC ratio: $128.88/$65 = 1.98
- LTV to CAC ratio for search ads: $193.04/$112 = 1.72
- LTV to CAC ratio for display ads: $55.22/$31 = 1.78
- LTV to CAC ratio for social ads: $131.12/$28 = 4.68
- LTV to CAC ratio for shopping ads: $91.50/$25 = 3.66
Percentages of new customers making purchases
Formula: Percentage of new customers making purchases = (Number of unique new account purchasers / Number of new accounts) x 100
- Campaign: (3,819/20,790) x 100 = 18.37%
- Search ads: (1,663/8,420) x 100 = 19.75%
- Display ads: (1,020/5,816) x 100 = 17.53%
- Social ads: (535/3,959) x 100 = 13.51%
- Shopping ads: (601/2,595) x 100 = 23.16%
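If you want to check your own numbers programmatically, here is a minimal Python sketch that applies the same formulas to the search ads figures above. Printed values can differ from the exemplar by a cent depending on when rounding is applied.

```python
# Search ads figures from the exemplar above.
revenue, ad_spend = 320_943, 187_500
orders, purchase_frequency = 2_494, 1.5
cac = 112
new_purchasers, new_accounts = 1_663, 8_420

roas = revenue / ad_spend                             # ~1.71
aov = revenue / orders                                # ~$128.69
ltv = aov * purchase_frequency                        # ~$193.03
ltv_to_cac = ltv / cac                                # ~1.72
pct_new_buyers = new_purchasers / new_accounts * 100  # ~19.75%

print(f"ROAS {roas:.2f} | AOV ${aov:.2f} | LTV ${ltv:.2f} | "
      f"LTV/CAC {ltv_to_cac:.2f} | new buyers {pct_new_buyers:.2f}%")
```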
Test your knowledge: Adjust a media mix and performance goals
- Practice Quiz. 6 questions. Grade: 100%
2. Performing and analyzing results from A/B tests
Introduction to A/B tests
- Video Duration: 4 minutes
In marketing, there are a lot of choices about content and strategy. Choices supported by data are the best kinds of decisions. An A/B test can help you choose the best content or marketing strategy for an online business. An A/B test, also known as a split test or bucket test, is an online experiment with two variants, and a random 50% split of users between the variants, to determine the better performing option. Almost all A/B tests randomly send 50% of users to one variant and 50% of users to the other variant. For example, an A/B test can randomly direct web traffic to two different versions of the same web page. Responses from those pages are monitored, and the version that achieves higher performance based on the chosen metrics wins. An A/B test is typically performed on live web pages. It’s possible to perform an A/B test on a website that isn’t live as long as the number of users tested is large enough to produce a statistically meaningful result. Let’s say you want to test two versions of a direct response ad to join a loyalty program. One version appears alongside a celebrity endorsement and the other appears with a savings offer. Which version will result in more conversions, or people signing up for the program? The A/B test will tell you which version is the better one using a sample population tested during a short period of time. In this particular test, it appears that a greater number of conversions occurred with the savings offer. An A/B test relies on statistical tests to determine which of the two options being tested is more effective. The statistical tests used during an A/B test depend on whether discrete or continuous metrics are used for comparison. Discrete metrics have specific values, can be counted, or are binary, like on/off or true/false settings. Examples of specific values are click-through rate, or CTR, conversion rate, and bounce rate. Examples of counts are new and returning user counts. And finally, a binary response could be whether a user clicked or didn’t click on a direct response. Continuous metrics are measured and change over time. Examples of continuous metrics are revenue per user, average session duration, and average order value. The data is continuous because the measurement changes with each additional session or order. The key takeaway is that statistical tests are critical for A/B testing. The A/B testing software you use will perform the statistical analysis on a large enough sample of users to be informative. When you conduct an A/B test, you’re usually testing if a new version will improve a metric compared to the original version. When you plan for an A/B test, it’s helpful to document past performance, desired improvement, and the performance metric you’ll use for the test. Your organization might even have a template to help record this information for each test. There are many software tools available for A/B tests. You can conduct A/B tests on ads with Google Ads. Performance marketers also rely on A/B testing to identify pages with the best performance. Google Optimize lets you run A/B tests on your website content to understand what works best for your visitors. HubSpot performs A/B testing on email messages and landing pages. Optimizely offers A/B testing for a variety of touchpoints in the customer journey. Intellimize helps personalize webpages with machine learning and what could be considered continuous A/B testing.
These tools aren’t being recommended above others, but are mentioned to point out that a wide variety of solutions are available from different vendors. Whichever tools you use, if you conduct A/B tests, you’ll have greater confidence that you’ll meet performance goals, increase the number of conversions, and benefit from improved customer experiences through redesigned web pages that have been tested. Your choices do make a difference.
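The transcript doesn’t name the specific statistical test a given tool runs, but for a discrete metric such as conversion rate, a common choice is a two-proportion z-test. The following sketch, with hypothetical traffic and conversion counts, shows the idea; it is an illustration, not any particular vendor’s implementation.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # normal approximation

# Hypothetical results: 5,000 users per variant, 250 vs. 300 conversions.
p_value = two_proportion_z_test(250, 5_000, 300, 5_000)
print(f"p-value = {p_value:.3f}; significant at 95% confidence: {p_value < 0.05}")
```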
Perform A/B tests in Google Ads
- Video Duration: 3 minutes
Door number 1 or door number 2? Sometimes surprises are fun. But if you’re running a campaign, you want to minimize surprises when changing an ad. You’ll learn that an A/B test typically tests whether a new version of an ad or web page will improve metrics compared to the original version. This video will demonstrate how to set up an A/B test in Google Ads to compare the performance of two ad variations. With an ad variation, you can test changes to ads, like a change to a URL, headline, or call to action. For example, you can test how changing a headline from “Act now while supplies last” to “Huge savings, limited time offer” impacts sales across ads in multiple campaigns. Even a small change to a call to action from “Buy today” to “Buy now” can be tested. But not all changes require a test. It’s certainly reasonable to make minor text changes to ads without fully testing them first. Using the first example of alternate headlines, here is how you would set up an ad variation test, a type of A/B test in Google Ads, to test them. Assume that the current or existing headline is “Act now while supplies last.” On the Campaigns page in Google Ads, click “All campaigns”, then click on “Experiments” in the navigation panel. Then, create a new ad variation by selecting the plus icon. Select whether this ad variation applies to all campaigns, or select which campaigns this variation affects. Let’s assume this variation applies to all campaigns. Next, select that you’re updating text and that the edit is for “Headlines.” Finally, enter “Huge savings, limited time offer” as the text for the headline in the variation. Let’s name this variation Huge Savings. Use the default start and end dates for a 30-day run and keep the default 50 percent split for the test. To create the variation, click “Create variation.” After the A/B test for this ad variation runs, you’ll see the results in the ad variations table. Here is an example of what the results would look like in the table. You’ll still see metrics like clicks and impressions moving horizontally across the table. Each metric will show a positive or negative percent, which tells you the amount of change there was between the two variations tested. You’ll want to pay attention to the percentages with blue stars next to them. These stars indicate that there was a statistically significant amount of change between the two variations tested. If many metrics are displayed with stars, you might consider making the new variation a permanent ad change. This was an introduction to A/B testing using an ad variation test in Google Ads. You can also use experiments in Google Ads to test campaign bidding strategies. Refer to the Google Ads help or the reading in this course for more information about testing bidding strategies. Finally, as mentioned earlier, remember that you don’t need to test every change or variation. Because A/B tests take time and budget, it’s important to be strategic and selective about the ads that you and your team decide to test. These decisions are unique for every team.
A/B test ad variations and bidding strategies in Google Ads
- Reading Duration: 20 minutes
A video in this course demonstrated how to perform an ad variation test in Google Ads. This is one type of A/B test. Google Ads allows you to test variations for text and responsive search ads. This reading reviews the steps to set up an ad variation test and introduces another type of A/B test in Google Ads, one that tests automated bidding strategies so you can decide which one is best.
Note: All tests are performed using the Drafts and Experiments feature in Google Ads.
Steps to set up and test ad variations
Here is an overview of the steps to set up and test a text or responsive search ad variation in the Drafts and Experiments section of Google Ads:
1. Decide if the variation is for all campaigns, or for certain campaigns only.
2. Select that the variation is for text or responsive search ads. Note: Starting June 30, 2022, you’ll no longer be able to create or edit expanded text ads in Google Ads. You should use responsive search ads instead.
3. Filter ads by the text ad component that will be tested, such as headlines or descriptions.
4. Modify the text for the text ad component to be tested.
5. Name the variation.
6. Choose the start and end dates for the A/B test.
7. Configure the split; almost always 50%.
8. Save the settings. The test will automatically run on the start date.
Text ad variations
The following are descriptions of components of text ads. You can create a variation for any of these components and test it against the original version.
Final URL
Also known as the landing URL, the final URL is the URL for the page that people reach after they click an ad. If your original URL is www.website.com/members, one possible variation could be www.website.com/rewards.
Final mobile URL
The final mobile URL is the URL of the page that users reach after they click an ad from a mobile device. If your original URL is www.website.com/m/members, one possible variation could be www.website.com/m/rewards.
Headline
A headline is the text that appears at the top of an ad. Google Ads allows up to three headlines with a 30-character limit for each. If your original headline is “Act now while supplies last,” one possible variation could be “Huge savings, limited time only.”
Display path
The display path is the URL that appears under a headline in an expanded text ad. Google Ads allows up to two paths with a limit of 15 characters displayed for each path. If your original display path is www.website.com/mens/shoes, one possible variation could be www.website.com/shoes/men.
Description
A description appears in an expanded text ad. Google Ads allows up to two descriptions with a 90-character limit for each description.
Suppose your original description is “Top athletic, outdoor, casual, and dress shoes. Free shipping on purchases of $75 or more.” One variation could be “Free shipping with any $75 purchase. Top shoe brands: athletic, outdoor, casual, and dress.”
Responsive search ad variations
Responsive search ads display best-fitting headlines and descriptions based on user search queries. The more headlines and descriptions that are configured, the more opportunities Google Ads has to serve ads that more closely match your potential customers’ search queries, which can improve search ad performance. Testing variations may enable you to select the headlines and descriptions that attract more customers.
Steps to set up and test alternate bidding strategies
You can use campaign drafts and experiments to compare your current bidding strategy to an alternate strategy. For example, you can run an experiment to compare Target CPA (cost-per-action) automated bidding to manual bid changes. This will help you determine if automated bidding will improve the performance of a campaign.
Here is an overview of the steps to set up and test an alternate bidding strategy in the Drafts and Experiments section of Google Ads; for example, you can test a Target CPA bidding strategy (an experiment) against a maximum CPC (cost-per-click) bidding strategy (a current campaign).
1. Name the experiment.
2. Choose the start and end dates for the A/B test. You should allow a bidding strategy test to run for four to six weeks.
3. Configure the split; almost always 50%.
4. Decide if the test will be a search-based or cookie-based split.
   - In a search-based split, users are evenly split between the existing and experimental campaigns.
   - In a cookie-based split, users who perform repeated searches are directed back to the same campaign. A cookie-based split may give more accurate test results, especially for remarketing campaigns that are directed towards returning users.
5. Save the settings. The test will automatically run on the start date.
Best practices to test alternate bidding strategies
A/B testing on bidding strategies must use an existing, live campaign for comparison. Choose a campaign that has an adequate budget to split between the variations and a large enough audience for a statistical comparison of the results. For more information about statistically significant data for A/B tests, refer to the reading about monitoring A/B test results in this course. Only choose a campaign that you’re willing to experiment with because the experiment may impact the performance of the existing campaign. And finally, focus the test on one change or variable only. Testing multiple changes at the same time prevents you from knowing for sure which variable caused a change in performance.
Resources for more information
You can refer to the following links for more information about testing ad variations and bidding strategies in Google Ads:
- About campaign drafts and experiments: This article provides an overview of draft campaigns and experiments.
- Set up an ad variation: This article provides instructions to set up an ad variation.
- Test your automated bid strategies: This article describes how to test bidding strategies.
How to plan for A/B testing
- Reading Duration: 20 minutes
As you’ve already learned, an A/B test compares two variants of the same content, to determine which yields better results. An A/B testing plan helps you structure this experiment by outlining key information about the test and its success metrics. In this reading, you’ll learn how to prepare an effective plan for an A/B test.
Define the problem
You can run A/B tests on almost any digital content or design element. But no matter what you choose to test—from an ad headline to the color of a button—you should first identify a specific problem or goal for your experiment. It might be that you want to improve low conversion rates or find a way to fill a new customer need. Even if the problem or goal seems large, it’s best to start with small changes. Understanding how minor adjustments affect performance will give you a baseline for testing more ambitious changes.
Elements of an A/B testing plan
The details of an A/B testing plan may differ by company or testing tool, but the fundamentals of an effective plan are often the same. Below are some examples of common elements you might find in an A/B testing plan.
Hypothesis
Once you have a clear idea of what you want to achieve, it’s time to create a hypothesis. In an A/B test, the hypothesis describes the “why,” “what,” and “how” of the experiment. It also makes a prediction about the outcome. A hypothesis should be backed by research or data and focus on a single issue.
At minimum, the hypothesis should describe:
- The problem or insight the test will address
- What you plan to change to address the problem
- The expected impact or result of the change
For example, imagine that a company wants to increase the percentage of marketing email recipients who click through to their website. After examining the data, they determine that subscribers are more likely to click elements that appear near the top of an email. Their A/B test hypothesis could be:
- “Because we discovered that customers are more likely to click elements near the top of an email, we expect that changing the position of a link will boost our click-to-open rate by 15%.”
A strong hypothesis makes it easier to report on the test’s results and to share your insights with stakeholders. But the process of creating a hypothesis is just as important. That’s because it forces you to state the reason for the test in concrete terms. If you find it difficult to create a compelling hypothesis, it might mean you should gather more data before running the test.
Variants
With a hypothesis in place, your team can begin to plan the variants. Variants are the different versions of the content served to users during an A/B test. Variant A represents the original content, while variant B usually differs in some meaningful way. It’s generally a good idea to limit the number of modifications to a single variant, however. Changing too many things at once can make it harder to interpret the test results.
For example, in the email marketing scenario, the link in variant B might be moved to the top of the message. But what if this variant also included new call to action (CTA) text and turned the link into a button? How would you measure the impact of each change individually? By limiting the changes to each variant, you’re more likely to get clear and actionable results.
Note: Even a “failed” test can provide valuable data. If your B variant doesn’t produce the expected improvement, that doesn’t necessarily mean your hypothesis is wrong. You may need to test different variants to get the outcome you want.
Metrics
Before you begin testing, your team should decide how to measure results. While you’ll likely track several metrics for each test, some will be more important than others. In the email marketing example, the primary metric is the click-to-open rate (percentage of recipients who clicked one or more links after opening an email). But the team might also track the conversion rate to find out what percentage of people who clicked that link eventually made a purchase.
Current performance and expected improvement
You’ll also need to agree on a definition of success. How big of an improvement do you want or expect from the test? Is a 5% increase enough? How about 10%? The goal you set can be ambitious, but it should also be realistic according to the available data.
Other testing details
An A/B testing plan can also contain other vital details about the test. Remember that different companies may put different information in their A/B testing plans, but a basic plan might include:
- A brief overview of the test and its purpose
- The channel being tested (e.g., Google Ads, Google Optimize, etc.)
- Type of asset being tested (e.g., display ad, button copy, etc.)
- The duration of the test (start and end date)
- The number of users per variant
- The confidence level (the estimated probability that the test results would remain consistent if the test ran longer)
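If you want to keep these plan elements in a consistent, machine-readable form, one option is a small structured record like the Python sketch below. The field names and example values, drawn loosely from the email scenario above, are illustrative rather than a required format.

```python
from dataclasses import dataclass

@dataclass
class ABTestPlan:
    overview: str           # brief description of the test and its purpose
    hypothesis: str         # problem, planned change, and expected impact
    channel: str            # e.g., Google Ads, Google Optimize, email
    asset_type: str         # e.g., display ad, button copy, link position
    primary_metric: str     # the metric that defines success
    current_value: float    # current performance of the primary metric
    target_value: float     # expected improvement (definition of success)
    start_date: str         # test duration: start date
    end_date: str           # test duration: end date
    users_per_variant: int
    confidence_level: float = 0.95  # probability results would hold if the test ran longer

# Hypothetical plan based on the email example discussed earlier.
plan = ABTestPlan(
    overview="Test whether link position affects email click-to-open rate",
    hypothesis="Moving the link to the top of the email will lift click-to-open rate by 15%",
    channel="Email",
    asset_type="Link position",
    primary_metric="Click-to-open rate",
    current_value=0.10,
    target_value=0.115,
    start_date="2024-06-01",
    end_date="2024-06-30",
    users_per_variant=5_000,
)
print(plan.primary_metric, plan.target_value)
```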
Key takeaways
A/B testing is a valuable tool for improving digital marketing and e-commerce performance. An A/B testing plan helps you organize your efforts and results, which can lead to more efficient improvements. With a data-driven hypothesis, carefully selected variants, and a plan to measure success, you’re more likely to reach your testing goals.
Activity: Plan for A/B testing
- Practice Quiz. 1 question. Grade: 100%
- Access Quiz:
- On Step 1: Access the template: A/B testing plan
Activity Exemplar: Plan for A/B testing
- Reading Duration: 10 minutes
Here is a completed exemplar along with an explanation of how the exemplar fulfills the expectations for the activity.
Completed Exemplar
To review the exemplar for this course item, click Link to exemplar: A/B testing plan
Assessment of Exemplar
Compare the exemplar to your completed A/B testing plan. Review your work using each of the criteria in the exemplar. What did you do well? Where can you improve? Use your answers to these questions to guide you as you continue to progress through the course.
Current conversion rate
Based on the information provided in the scenario, the current conversion rate for the hotel’s Google Ads campaign is 2%.
Expected conversion rate
Based on the information provided in the scenario, the expected conversion rate for the hotel’s Google Ads campaign is 7%.
Hypothesis
- Research insight: The hotel recently did research on their target audience. The insight from this research led them to the discovery that their target audience was made up of customers who were deal seekers.
- Change: Based on this research insight, the hotel thinks it would be a good idea to change the ad’s headline to focus on deals.
- Impact: After making this change, the hotel expects the conversion rate to increase by 5 percentage points.
Headline for Variant B
Since the hotel would like to feature their current promotion, the exemplar uses the headline “Deals Up to 20% Off” to appeal to customers who are deal seekers. The character count is 19 (including spaces), which fits within the 30-character limit.
Other tools for A/B testing
- Reading Duration: 20 minutes
You learned about performing A/B tests in Google Ads, but there are other tools that offer A/B testing capabilities. This reading lists similar tools and links to find more information about them. It also introduces a general process checklist to help you organize A/B testing efforts and defines additional tests that can be performed.
Note: This certificate program does not promote or endorse any of the tools listed. The purpose of this reading is to provide you with a sampling of A/B testing tools that are available.
Other tools for A/B testing
Here is a list of several other tools for A/B testing:
- AB Tasty: Pricing is available by custom quote
- Convert: Free trial is available
- Crazy Egg: Free trial is available (requires billing information)
- Instapage: Free trial is available (requires billing information)
- Optimizely: Pricing is available upon request
- Unbounce: Free trial is available (requires billing information)
- VWO: Free trial is available
Process checklist for A/B testing
Regardless of which tool you choose, the following is a process checklist that you can use to help you organize your A/B testing efforts.
- Choose a variable to test.
- Identify the goal of the test.
- Clearly identify the control and the variant.
- Verify that your test is the only A/B test running for a campaign or webpage.
- Split sample groups randomly (can be managed by A/B testing platform).
- Determine the required sample size (can be managed by A/B testing platform).
- Decide on the confidence level for statistical significance (can be managed by A/B testing platform).
- Select an A/B testing platform.
- Test both variants at the same time.
- Allow the test to run long enough to collect enough data.
- Review test results against your goal for the test.
- Decide on the appropriate actions to take based on the test results.
- Plan for additional A/B tests, if required.
Additional tests
You were previously introduced to A/B tests for a single variation, or variant. For example, you learned that you could run an experiment to test two variations of the same ad or landing page. Many tools that offer basic A/B testing also offer additional tests. It’s important to know the capabilities of the tools you choose so you’re able to run the tests you need. Two other types of tests are redirect and multivariate tests.
Redirect tests
A redirect test enables you to test separate webpages against each other. This is different from a basic A/B test in which you test changes to the same webpage. In a redirect test, variants are identified by a URL or path instead of by a certain element on the page, such as a banner. Redirect tests are useful if you want to test two completely different landing pages, or are involved in a complete redesign of a website.
Multivariate tests
A multivariate test, sometimes referred to as a multivariable test, is used to simultaneously test variants of two or more elements on a page to determine which combination yields the best results. This is different from a basic A/B test in which you only test a single variable or change. A multivariate test identifies the most effective variant for each element but also provides insights on how the variants work together when they are combined. This allows you to identify the best combination of variants.
Key takeaway
A/B testing is an essential part of marketing. It can help you create better customer experiences for e-commerce. Whether you need to test ads, webpages, social media posts, or other content, A/B testing tools with different capabilities and pricing are available. You can compare your testing needs to the features and plans offered to select the right tool for your organization.
Monitor A/B test results in Google Ads
- Reading Duration: 20 minutes
Some people enjoy digging more deeply into numbers, while others lose interest once numbers are brought up. If you work in the field of performance marketing, you’ll learn to be comfortable using numeric data to make decisions about campaign performance. When you monitor A/B test results for ad variations or bidding strategies in Google Ads, you’re presented with a few statistical results. This reading provides an introduction to these so you’ll be prepared to analyze data from your experiments. Based on the numbers, you can then decide which changes you tested are relevant and beneficial to your campaigns.
Statistics in Google Ads
When you monitor the results of experiments, you’ll encounter the following statistical terms:
- Confidence level
- Confidence interval (derived from a margin of error)
- Statistical significance
Note: Read further to understand what these terms mean and what they tell you about an experiment.
Confidence level
How confident are you in the results of an experiment? For example, a 95% confidence level means that if you were to run the same A/B test 100 times, you would get similar results 95 of those 100 times. A confidence level is normally selected before any data is collected. Most researchers use a 95% confidence level. By default, Google Ads uses this confidence level for experiments. A minimum number of users must participate in a test for that level of confidence to be reached. That’s why you typically run an experiment for at least four weeks—to achieve a result that’s at a 95% confidence level.
Confidence interval (and margin of error)
Because you can’t run a test on an entire population of users, the comparative result of an A/B test is an estimate of the results you would get if you were able to test all users. The margin of error is the statistically calculated difference between the test result and the theoretical result you could have gotten if you had run the test with a lot more users. The confidence interval is the range of possible values after accounting for the margin of error. This range is the test result +/- the margin of error. For example, if your test result is a 5% difference between variations and the margin of error is 2%, the confidence interval would be 3% to 7%. The difference between variations tested could be as low as 3% or as high as 7%. When Google Ads lists an expected range of results, it is reporting the confidence interval.
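To see how a margin of error and confidence interval fit together, here is a minimal Python sketch for the difference between two conversion rates at a 95% confidence level. The conversion counts are hypothetical, and in practice Google Ads performs this kind of calculation for you.

```python
from math import sqrt

def diff_confidence_interval(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% confidence interval for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    diff = p_b - p_a
    margin = z * sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)  # margin of error
    return diff - margin, diff + margin

# Hypothetical counts: 400 of 8,000 users converted on A, 500 of 8,000 on B.
low, high = diff_confidence_interval(400, 8_000, 500, 8_000)
print(f"The true difference is likely between {low:.1%} and {high:.1%} (95% confidence)")
```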
Statistical significance
Statistical significance is the determination of whether your test result could be due to random chance or not. The greater the significance, the less likely the result is due to chance. Google Ads performs the statistical calculations at a 95% confidence level in the background and lets you know if a result is statistically significant. A blue star (asterisk) displayed next to a result indicates that there was a statistically significant amount of change between the two variations tested.
Example from an experiment
As shown below, the Huge Savings ad variation experiment (or A/B test) in Google Ads resulted in fewer clicks and impressions. The number of clicks, 250, was an 8% decrease, while the number of impressions, 12,139, was an 11% decrease. The decrease in the number of impressions was marked with a blue asterisk to indicate that the result was statistically significant at a 95% confidence level. Since both clicks and impressions show a downward trend, you wouldn’t choose to apply this variation to your responsive search ads.
How long to run an experiment
As mentioned previously, an experiment normally runs for at least four weeks for a good chance at achieving a statistically significant test result. If you need more guidance, many sample size calculators for A/B testing are available online. One such calculator from AB Tasty (an A/B testing tool mentioned in the previous reading) allows you to enter the average number of daily visitors and the number of variations to calculate a possible duration for a test.
You also need to run an experiment long enough to account for normal swings in e-commerce sales. For example, if it’s normal for your e-commerce business to have the best sales on Sundays and the worst sales on Saturdays, running a test for at least two weeks, and preferably longer, reduces the impact those swings have on your test results. In the same manner, if a sales promotion happens during a test, running the test longer helps prevent that promotion from skewing the test results. If not permitted to run long enough, an ad variation test can appear to perform better than it normally would because a promotion can change user behavior for a few days or weeks.
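The sketch below shows, in rough terms, what such a sample size calculator does: estimate the users needed per variant for a given lift (at 95% confidence and 80% power), then translate that into days of traffic. The baseline rate, target rate, and daily visitor count are hypothetical.

```python
from math import ceil

def users_per_variant(p_base: float, p_target: float,
                      z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate sample size per variant at 95% confidence and 80% power."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_target - p_base) ** 2)

n = users_per_variant(p_base=0.02, p_target=0.025)   # detect a 2% -> 2.5% conversion lift
days = ceil(2 * n / 1_500)                           # two variants, 1,500 daily visitors
print(f"About {n:,} users per variant, or roughly {days} days of traffic")
```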
Resources for more information
Refer to the following links for more information about monitoring your A/B tests for ad variation and bidding strategies in Google Ads:
- Monitor your ad variations: This article describes how to monitor the performance of ad variations and interpret results.
- Monitor your campaign experiments: This article describes how to monitor the performance of a campaign experiment in a scorecard.
To try a sample size calculator for A/B testing that includes a feature to estimate the required duration for a statistically valid test, refer to the following link:
- Sample size calculator: Use this calculator to estimate how many users you need and how long you should run an A/B test.
Jordan - Interpret A/B test results
- Video Duration: 2 minutes
I’m Jordan, and I work at Google Marketing in Experimentation and Insights. And what that means is that I work with product teams across Google to optimize their products using research and data. An A/B test is a way for you to cleanly compare two different versions of your marketing content. So imagine that you’ve had a version A and a version B of something that you’re looking to market, and you want to figure out which of those two perform better. An A/B test allows you to compare and figure out which your customers respond best to. Digital marketers use A/B tests in a number of different ways. This could be in the actual language or imagery that you use in your ads, it could be in the structure or format of an e-mail, or it could be the way that you’re formulating your website. All of these things can be A/B tested to compare differences and see what your customers respond best to. So there are a few steps that you’ll need to take in order to formulate an A/B test. The first is that you’ll want to make sure that you have a strong hypothesis about what your users are doing and how you might optimize for your customers. The second is you’ll actually want to create the different variations to address those user needs. Having a solid hypothesis allows you to draw a key insight about your customers, and this can be harder to develop than you might think. So it’s really more about thinking about who your customer is and the insight you want to gain, rather than the actual tool to implement the test. Once you run and execute your A/B test, then you’ll be able to draw some conclusions based on measurement and analysis of those different performance arms that you’ve launched. A/B tests allow you to gather causal insights into what your customers prefer. A causal insight means that by varying one specific aspect of the A or the B that we’re testing, we can compare and draw conclusions that that particular variation led to an improvement or a decline. And this is really powerful because it allows you to optimize for the marketing content, say it’s an email, an ad, or even parts of your website that will help attract the right kind of users to your product. Without A/B tests, we actually don’t know what might be causing one particular change in our customer’s behavior. It could be the time of day, it could be the content we’re using, or it could be a multitude of other factors. But by using A/B tests, we are able to actually isolate one particular variable over another to draw some of those causal insights. I love A/B tests, and I love A/B tests because they allow us to make conclusions that we can continually iterate on. Meaning that once we figure out one aspect of the way that our users behave, we can then figure out additional motivations that might drive their behavior.
Case study: How Good Boy Studios improves customer acquisition with A/B tests and analytics
- Reading Duration: 20 minutes
You have learned how A/B tests enable you to test ad variations to improve conversion rates. A/B tests are also helpful when designing or improving a website or mobile app. This case study describes how Good Boy Studios, located in Stamford, Connecticut, combines the power of Firebase and Google Analytics to conduct A/B tests to improve customer acquisition in its mobile app.
Note: Firebase is an app development platform that helps developers build apps that people enjoy. Google Analytics is integrated across Firebase platform features and provides unlimited reporting for up to 500 events defined in Firebase. Using Google Analytics reports, developers can clearly understand user behavior in apps developed on the Firebase platform. Armed with this data, they can make informed decisions about how to market the app or optimize it for performance. For more information about Firebase and how Firebase and Google Analytics work together, refer to the Firebase product page and Firebase documentation.
Company background
The American Pet Products Association reports that pet industry sales revenue in the U.S. was $90.5 billion in 2018 and grew to more than $123 billion in 2021.
When Good Boy Studios founder Viva Chu adopted his labradoodle, Coder, he didn’t know that Coder would spark an idea for apps and a marketing platform that caters to the needs of pet owners in a digital age.
People enjoy posting creative content of their pets on social media and even create digital identities to personify their furry friends. Building on this trend, Good Boy Studios released an app called Pet Parade that enables pet owners to earn gift cards by sharing their pet’s photos. Pet owners can also participate in contests sponsored by local businesses, where they can earn real prizes. Submissions and votes are collected on the Pet Parade app, and winners are announced on partners’ websites or at local stores.
The Pet Parade app paved the way for Good Boy Studios to become a leading digital marketing platform in the pet industry. Today, pet food, supplies, and healthcare companies use Good Boy Studios’ Pet Demographics Audience Platform for data insights and targeting of audiences.
The challenge
Good Boy Studios’ latest app, PetStar™, enables pet owners to create music videos of their pets from uploaded photos. People can create and share links to their videos for free, or subscribe to a premium service to download finished video files to their own devices. The premium service also gives subscribers unlimited access to the most popular songs in the company’s music library. Users are prompted to subscribe to the premium service if they select a song from the library that isn’t free.
Like many companies, Good Boy Studios uses a marketing funnel for customer acquisition. The touchpoints in the funnel mirror the steps that users take to create a music video in the PetStar™ app. Each step is defined as an event in Firebase, and each event in Firebase is monitored using a funnel exploration in Google Analytics. The more steps a user completes toward a finished video, the higher the chance of a conversion to the premium service. Here are the Firebase events for the PetStar™ app that are monitored in a funnel exploration in Google Analytics:
- Event 1: Click to create a video
- Event 2: Upload photo of pet
- Event 3: Mark the center of the pet’s head
- Event 4: Set markers for the pet’s eyes
- Event 5: Set marker for the pet’s nose
- Event 6: Set multiple points to define movement of the pet’s mouth
- Event 7: Select a song for the music video (and depending on which song is chosen, potentially subscribe to the premium service)
- Event 8: Record the video
- Event 9: Preview the video
- Event 10: Share a link to the video with others on social media
- Event 11: Download the video (and subscribe to the premium service)
A set of mouth markers created in the app in Event 6 is shown below.
As normally observed in a marketing funnel, not everyone who begins creating a video in the PetStar™ app actually finishes. Using Google Analytics, Good Boy Studios discovered that users most often dropped off at Event 6 when they set the markers for a pet’s mouth. Users also dropped off during Event 8 when recording a video.
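A funnel exploration like the one described above essentially compares user counts at consecutive steps. The following Python sketch uses hypothetical counts, not Good Boy Studios’ actual data, to show how step-to-step drop-off rates are derived.

```python
# Hypothetical user counts per funnel event; not actual PetStar app data.
funnel = [
    ("Event 5: Set marker for the pet's nose", 6_200),
    ("Event 6: Set mouth markers", 5_900),
    ("Event 7: Select a song", 3_400),
    ("Event 8: Record the video", 2_300),
    ("Event 9: Preview the video", 1_600),
]

# Drop-off between each step and the next one in the funnel.
for (step, users), (_, next_users) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_users / users
    print(f"{step}: {drop_off:.0%} of users do not reach the next step")
```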
The approach
Good Boy Studios is using A/B tests to identify the best strategies to increase the number of users who successfully create a new video. The company’s goal is to decrease the number of users who drop off, especially at Event 6 when pet mouth markers are set and at Event 8 when users record their video.
The results
Good Boy Studios is modifying its PetStar™ app based on analysis of data from A/B testing.
Adding user notifications
To help combat users dropping off at Event 6 when markers are set for a pet’s mouth, Good Boy Studios started using the A/B testing feature in Firebase to try different user notifications and tips for marker placement. To determine what increases video completion within 24 hours of users opening the app for the first time, the company is testing interactions like:
- User is prompted to check to ensure pet’s mouth is closed before setting a marker
- User taps a lightbulb for tips
- User pinches to zoom in where a marker is placed
In one A/B test, two user tips were tested:
- Variant A: User Tip - “When placing your mouth, pinch to zoom in for more control over each point.”
- Variant B: User Tip - “Make sure your pet’s mouth is closed to get the best results.”
Below are the A/B test results in Firebase.
The user tip represented in the A/B test as Variant A improved completion of the mouth marker event by 44% over the baseline with no user tip. The user tip represented in the A/B test as Variant B improved the completion of the mouth marker event by 25% over the baseline.
The full battery of A/B test results aren’t completely in yet, but Good Boy Studios is well-positioned to determine which combination of user notifications and tips will generate the greatest increase for event (mouth marker) completion and overall video completion.
Adding a RECORDING SONG indicator
Good Boy Studios also discovered using A/B tests of multiple variants that adding a RECORDING SONG indicator at the top of the app when users are recording a music video was the most effective at reducing the drop off at Event 8. Users receive instant feedback that they’re successfully producing their first video.
In the Google Analytics funnel exploration below, the average recording completion rate was 67.15% after the RECORDING SONG indicator was added. The improvement was most dramatic for iOS users with a completion rate of 70.01%. The baseline funnel exploration (not shown) had an average recording completion rate of 62%.
Conclusion
A/B tests and analytics help businesses market and improve their apps. For Good Boy Studios, A/B tests in Firebase and funnel explorations in Google Analytics offer clear advantages for monitoring and improving the conversion rate. As the company continues to evaluate where new users are getting stuck in the funnel, its developers can improve the user notifications and tips in the app. This process will be ongoing to increase video completion rates, conversion rates, and overall customer acquisition for the premium service.
Test your knowledge: Perform A/B tests
- Practice Quiz. 6 questions. Grade: 100%
3. What does a successful campaign achieve?
Indicators of a successful marketing campaign
- Video Duration: 4 minutes
Many metrics can be monitored during a marketing campaign. How can you conclude whether a campaign was successful or not? The insights you use to evaluate the success of a marketing campaign depend on the marketing goal or goals and what the campaign was trying to address specifically. But the main success factor is going to be whether you met the performance goals that were set. Let’s use two examples to understand success indicators. In the first example, an overall marketing goal was to increase the number of leads. Leads were measured as micro-conversions. A micro-conversion is a completed response that indicates that a potential customer is moving toward a completed purchase transaction. A micro-conversion performance goal of increasing email sign-ups by 20 percent was set. From past experience, people who signed up to be on an email list had a 50 percent chance of completing a purchase. Therefore, a macro-conversion performance goal was set for 60 percent of people signing up to complete purchases, a 10 percentage point improvement. Next, similar targets for chatbot conversations and blog page visits were set. The chatbot was new, so the goal was for at least 20 conversations to occur during the campaign. Blog page visits had spiked by 40 percent recently, so a performance goal was set to increase new visitors to the blog page by 20 percent. Macro-conversion performance goals were also set: 30 percent of chatbot users would complete purchases, and 10 percent of new blog visitors would complete purchases. For email, the results show that the campaign increased email sign-ups by 21 percent, but it seemed to have no effect on completed purchases, which remained flat. For chatbot conversations, both micro and macro-conversion goals were met or exceeded. For the blog page, both micro and macro-conversion goals were missed by quite a lot. Was the campaign a success? Based on the metrics, there were mixed results. Some parts of the campaign were underperforming, but other parts were very successful. Several things did become clear for future work. Chatbot engagement should continue because of the conversion pull-through. Email messages should be reviewed and modified to possibly provide a better lift in conversions. Finally, not too much time or energy should be spent on blog campaigns. In the second example, a marketing goal was to increase online sales by doubling the average order value. In other words, the average order value was $50 prior to the marketing initiative and the goal was to increase it to $100. This also served as the performance goal. The digital marketing campaign included direct responses so customers could click to view additional products before and during their checkout. There was also a promotional ad for customers to spend $100 and receive free shipping. Metrics monitored were the online sales revenue, number of orders, average order value, and the cost per sale to keep return on investment, or ROI, in check. Secondary metrics of interest were orders by geographic region, top-selling products contributing to larger orders, and the ratio of orders from new versus returning customers. Although the average order value increased by 90 percent to $95, just under the performance goal to double it, the overall trend showed a very successful campaign. Online sales revenue went up by 15 percent, the number of orders increased by 170, the cost per sale went up by three percent, and orders by geography didn’t change. The top-selling products that contributed to the increase in order value were small kitchen appliances and electronics, and finally, the ratio of orders from new versus returning customers went up from 1 to 1.4. To summarize, the two examples in this video show that the metrics of interest are always dictated by the goals of the marketing campaign. The metrics used to evaluate success in the two examples were completely different. The benefit of using marketing analytics tools is the ability to view metrics and gather insights. These insights help you to evaluate and define the success of any campaign. Campaign success will vary, but the insights gained always have current or future value.
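A minimal sketch of checking results against performance goals, using figures from the two examples above; the percent_change helper is purely illustrative and not part of any analytics tool.

```python
# Minimal sketch: comparing campaign results to performance goals.
# Figures are taken from the two examples above; percent_change is illustrative only.

def percent_change(before: float, after: float) -> float:
    """Relative change from a starting value to an ending value."""
    return (after - before) / before

# Example 1: micro-conversion goal of a 20% increase in email sign-ups.
email_goal, email_result = 0.20, 0.21
print(f"Email sign-ups: goal +{email_goal:.0%}, result +{email_result:.0%},",
      "met" if email_result >= email_goal else "missed")

# Example 2: goal of doubling the average order value from $50 to $100.
prior_aov, goal_aov, result_aov = 50.0, 100.0, 95.0
print(f"Average order value rose {percent_change(prior_aov, result_aov):.0%}",
      f"(${prior_aov:.0f} -> ${result_aov:.0f}), goal was ${goal_aov:.0f}")
```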
Evaluate the success of a marketing campaign
- Discussion Prompt Duration: 10 minutes
Lately, you have focused on how to use insights to measure the success of marketing campaigns. The main success factor of a marketing campaign is whether marketers were able to meet the performance goals that were set. Now, it’s time to practice measuring the success of a marketing campaign. Doing so will help you become more comfortable thinking like a digital marketer.
For this discussion prompt, consider the following marketing and performance goal made by a company that sells housewares:
Marketing goal: Increase online revenue by doubling the average order value from $40 to $80.
As a strategy to increase online revenue, the housewares company issued a promotional ad for customers to spend $200 in the next 24 hours to receive free shipping.
The metrics monitored include:
-
Online sales revenue
-
Number of orders
-
Cost per sale
-
Orders by geography
-
Top selling products
-
Ratio of orders from new versus returning customers
The results of the campaign show the following:
-
Average order value: an increase from $40 to $55
-
Online sales revenue: an increase of 38.5% when compared to the same day of week from the previous week
-
Number of orders: increased by 70 (1%) when compared to the same day of week from the previous week
-
Cost per sale: an increase of 2%
-
Orders by geography: no change
-
Top selling products: bathroom/kitchen
-
Ratio of orders from new versus returning customers: an increase from 1 to 1.2
Was the campaign successful? If not, what could have been done differently?
Please write a response of 3–4 sentences (60–80 words). Then, go to the discussion forum and, applying what you have learned, comment on at least two other learners’ posts.
Answer
The marketing campaign showed some positive results, such as increases in average order value and online sales revenue. However, it fell short of the goal of doubling the average order value. The cost per sale increased slightly, but keeping that metric flat was not the campaign’s primary goal.
The increase in orders from new customers indicates some success in attracting new business, but the campaign could have done more to push the average order value toward the $80 target. Perhaps adjusting the promotional offer, for example by lowering the $200 free-shipping threshold, or extending the campaign duration would have helped reach the goal.
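To back that judgment up with numbers, here is a minimal sketch that checks the housewares results against the performance goal; all figures come from the results listed in the prompt.

```python
# Minimal sketch: checking the housewares campaign against its performance goal.
# All figures come from the results listed in the discussion prompt.

prior_aov = 40.0     # average order value before the campaign
goal_aov = 80.0      # performance goal: double the average order value
result_aov = 55.0    # average order value achieved

aov_lift = (result_aov - prior_aov) / prior_aov                    # relative increase achieved
goal_progress = (result_aov - prior_aov) / (goal_aov - prior_aov)  # share of the needed increase

print(f"Average order value lift: {aov_lift:.1%}")                  # 37.5%
print(f"Share of the $40 increase needed to hit $80: {goal_progress:.1%}")
```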
Test your knowledge: Evaluate campaign success
- Practice Quiz. 5 questions. Grade: 100%
4. Review: Measure the success of Marketing campaigns
Wrap-up
- Video Duration: 1 minute
In this part of the course, you learned about using ROI and ROAS to quantify returns from campaign spending and justify changes to spending. You examined how success is defined for campaigns. Metrics are always dictated by the goals of a marketing campaign. The performance metrics used to evaluate one campaign can be entirely different than those used to evaluate another. You also learned that A/B tests can optimize variants in a campaign to achieve even greater success. A/B tests can also help you choose the best content for an online store based on user engagement. Lastly, in case studies, you compared marketing campaigns with varying degrees of success and identified marketing practices that help improve campaigns. The key takeaways are: ROI and ROAS help you measure and adjust campaigns for greater success. A/B tests enable you to optimize certain parts of a campaign. And lastly, the metrics and insights from analytics help determine a campaign’s success. The main success factor for a campaign is meeting performance goals. A successful campaign contributes to overall marketing and business goals. When on the job, ensuring that the metrics you monitor for a campaign relate back to marketing and business goals is critical. Knowing how to work with data to find insights for your team will take you on your own path to success.
Glossary terms from module 3
- Reading: Duration: 10 minutes
Module 3 challenge
- Quiz: 10 questions. Grade: 100%
- Link to challenge 3