Headline and image A/B testing in the HUD
A/B tests in the Heads Up Display let you compare headline and image variants to find what drives the most clicks. A story that is underperforming might still interest readers and merely need a new title or image. Run as many variants as you want, review CTR results, and let statistical confidence determine the winner.
Test different headline and image combinations, or keep one element the same across variants to isolate its impact. Ongoing tests are visible at a glance through blue A/B labels in the HUD.
Marfeel Copilot is integrated into the HUD to generate headline suggestions in any language based on what has worked before on your own home page. The more tests you run, the better the suggestions become.
Test headlines
Testing headlines is as simple as clicking on one of the suggestions from Copilot. This automatically creates an A/B test comparing the original headline to the suggested one, which you can edit as much as you want.

To test any other headline:
- Click on New A/B test
- Add new text
- Click Add variant to test additional titles
- Check the HUD for a preview of how the new title would look in context and adjust accordingly
- Click Run test

Test images
You can also set up image A/B testing to find the best visual for each story.

- Open a new A/B test
- Click the plus sign on the image to add a new image
- Upload a new image from your computer
- Slide the bar back and forth to crop and adjust
- Make sure to respect the original aspect ratio as indicated by the highlighted box
- Leave the title as-is to only test the image, or edit the text as well
- Click Save, then Run test
Results
Return to the article detail window to find the test results for the corresponding link.
Testing metrics
- CTR: the Viewable CTR for that variant
- Impressions: the number of users who viewed that variant
- Clicks: the number of clicks on that test variant
- CTR lift: the difference in CTR between the original and winning variant
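How these metrics relate to one another can be sketched in a few lines. This is an illustrative example only, not Marfeel's implementation; in particular, it assumes CTR lift is reported as a relative difference between the winning variant and the original.

```python
# Illustrative sketch (assumed formulas, not Marfeel's implementation)
# of how the HUD's testing metrics relate to one another.

def ctr(clicks: int, impressions: int) -> float:
    """Viewable CTR: clicks divided by impressions for a variant."""
    return clicks / impressions if impressions else 0.0

def ctr_lift(original_ctr: float, winner_ctr: float) -> float:
    """CTR lift, assumed here to be the relative difference between
    the winning variant's CTR and the original's."""
    return (winner_ctr - original_ctr) / original_ctr

original = ctr(clicks=40, impressions=1000)  # 4% CTR
variant = ctr(clicks=60, impressions=1000)   # 6% CTR
print(f"CTR lift: {ctr_lift(original, variant):.0%}")  # CTR lift: 50%
```

In this example the variant's 6% CTR beats the original's 4%, a 50% relative lift.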

Check the HUD toolkit for all ongoing and archived A/B test results.

Conditions to declare a winner
Marfeel’s required statistical confidence to declare a winner starts at 99% and decreases over time to help tests conclude. If no variant reaches a minimum of 80% confidence when the test runs out of time, the result is “no winner.”
A test concludes when either a variant outperforms the original by the specified confidence margin (the variant with the highest CTR is selected) or the original performs better than all other variants by the stated confidence margin.
Each test must also reach a minimum number of impressions per variant: 200 impressions for tests lasting less than one hour, and 100 impressions for tests running one hour or longer. The maximum duration of the tests is adjustable within the experience settings.
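The rules above can be sketched in code. This is a hypothetical illustration: Marfeel does not publish the exact confidence-decay schedule, so the linear decay from 99% toward the 80% floor, and all function names, are assumptions.

```python
# Hypothetical sketch of the winner-declaration rules. The linear decay
# from 99% toward the 80% floor is an assumption; Marfeel's actual
# schedule is not published.

def required_confidence(elapsed_h: float, max_h: float) -> float:
    """Threshold starts at 99% and decreases over time, never below 80%."""
    start, floor = 0.99, 0.80
    progress = min(elapsed_h / max_h, 1.0)
    return max(floor, start - (start - floor) * progress)

def min_impressions(duration_h: float) -> int:
    """200 impressions per variant for tests under one hour,
    100 for tests of one hour or longer."""
    return 200 if duration_h < 1 else 100

def declare(variant_conf: float, impressions: int,
            elapsed_h: float, max_h: float) -> str:
    """Evaluate a single variant against the original."""
    if impressions < min_impressions(elapsed_h):
        return "no winner yet"
    if variant_conf >= required_confidence(elapsed_h, max_h):
        return "winner found"
    if elapsed_h >= max_h and variant_conf < 0.80:
        return "no winner"
    return "keep running"
```

For example, a variant at 99.5% confidence with 500 impressions would win early, while a variant stuck at 70% confidence when the test runs out of time yields "no winner".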
Original won
You got it right the first time. If a new title or image does not significantly increase the viewable CTR, the topic itself may not interest concurrent users. Consider moving the article down the page or removing it completely to make room for more impactful stories.
No winner
Insufficient traffic across test variants or lack of a statistically significant difference between them will generate this result.
Winner found
Well done, you have found an optimization opportunity. See how much the new version outpaced the original in terms of viewable CTR. Replace the original title or image with the winning variant.
Update content with winning results
The best practice for replacing the current headline or image with the new winner is to permanently change it in the CMS.
It is also possible to set up an Experience that automatically updates the content with the winner. This is a convenient way to ensure users always see the winning variant; however, best practice remains to update the CMS directly at your convenience.
Historical results
The Historical AB Tests playbook is the archive of past A/B tests. Use it to compare whole periods, measure adoption, gauge the true impact of A/B testing, and replicate what works.
- Use the system to identify tests that have yielded significant results in the past
- Dive into the specifics of these tests to see what trends or elements you can replicate, ensuring continual page improvement

To analyze your A/B tests effectively, follow these steps:
- Select a time window
- All metrics include benchmarking data comparing them to the previous six equivalent periods. For example, if you choose “yesterday” and it was a Monday, the system will compare the metrics to the preceding six Mondays
- Filter the information as needed. Here are some examples:
- Use the filter `folder = sports` to analyze only A/B tests conducted on articles under the /sports section
- If you have a multi-site account, filter by host to focus on tests conducted on one of your domains
A/B tests of a specific article
You can analyze the individual historical performance of any article that has undergone an A/B test from the article detail screen. At any given time, review the tests conducted on that particular article, including the date, duration, the user who conducted the test, and the results.

- Navigate to the article detail screen of the article you wish to analyze
- Expand the “A/B Test” tab to access the data analysis
- Review the performance of headlines or images to identify the best-performing variants
How does Marfeel declare an A/B test winner?
Marfeel starts with a 99% statistical confidence threshold that decreases over time. A winner is declared when a variant outperforms the original by the required confidence margin, selecting the variant with the highest CTR. If no variant reaches at least 80% confidence before the test ends, the result is “no winner.” Each variant must also reach a minimum number of impressions: 200 for tests under one hour, or 100 for tests lasting one hour or longer.
Can I A/B test images and headlines separately?
Yes. You can keep images the same across variants to test only headlines, keep headlines the same to test only images, or combine different headlines and images in each variant.
What should I do when the original headline wins the A/B test?
If no variant shows a significant CTR increase over the original title or image, the topic itself may not interest concurrent users. Consider moving the article lower on the page or removing it to make room for more impactful stories.