Ad creative review using asset tagging + results

Link post

When you run ads, it’s not always clear which parts of the creative are driving results. I recently helped an org evaluate their Meta ad creative, and the analysis felt worthwhile relative to the time it took.

The basic approach was:

  1. Pull a report that breaks performance down by each image/video asset (asset-level breakdown) instead of by ad, then export it to CSV.

  2. Manually tag each asset using binary tags (yes/no) based on major creative levers, not minor design details. For example: human present, eye contact, CTA presence, urgency language.

  3. Use that tagged sheet to compare cost per result for creatives that have each element vs those that don’t.
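The comparison in step 3 can be sketched in a few lines of pandas. The column names and tags here are hypothetical placeholders (the real export and tag sheet will differ); the key point is that cost per result should be pooled within each group (total spend over total results), not averaged across per-asset ratios.

```python
import pandas as pd

# Hypothetical tagged sheet: one row per asset, with spend, results,
# and binary (0/1) creative tags. Real column names will differ.
df = pd.DataFrame({
    "asset_id":      ["a1", "a2", "a3", "a4"],
    "spend":         [500.0, 300.0, 450.0, 250.0],
    "results":       [50, 20, 60, 10],
    "human_present": [1, 0, 1, 0],
    "cta_present":   [1, 1, 0, 0],
})

def cost_per_result_by_tag(df, tag):
    """Pooled cost per result for assets with vs. without a given tag."""
    grouped = df.groupby(df[tag] == 1).agg(
        spend=("spend", "sum"),
        results=("results", "sum"),
    )
    # Pooled ratio per group: total spend / total results.
    return (grouped["spend"] / grouped["results"]).rename(tag)

for tag in ["human_present", "cta_present"]:
    print(cost_per_result_by_tag(df, tag))
```

Pooling before dividing keeps low-volume assets from dominating the comparison the way a naive mean of per-asset ratios would.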

I uploaded the CSV to ChatGPT to quickly calculate cost per result and summarize the findings, then spot-checked the math.

A few things I did to avoid misleading conclusions:

  • I focused on assets with enough volume so I wasn’t basing conclusions on tiny sample sizes.

  • I split the analysis around a major campaign change (budget structure) and emphasized patterns that held up in both periods.

  • I treated everything as correlation, not causation, until it can be properly A/B tested.
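The first two safeguards above (a volume floor and a pre/post split around the campaign change) can be sketched together. Again, the data, column names, and threshold are illustrative assumptions, not the org's actual figures:

```python
import pandas as pd

# Hypothetical data: one row per asset per period, where "period" flags
# whether the row falls before or after the budget-structure change.
df = pd.DataFrame({
    "asset_id": ["a1", "a1", "a2", "a2", "a3", "a3"],
    "period":   ["pre", "post"] * 3,
    "spend":    [400.0, 350.0, 60.0, 40.0, 500.0, 450.0],
    "results":  [40, 30, 2, 1, 55, 50],
})

MIN_RESULTS = 20  # arbitrary volume floor; tune to the account's scale

def cpr_by_period(df, min_results=MIN_RESULTS):
    """Cost per result per asset per period, dropping low-volume rows."""
    kept = df[df["results"] >= min_results].copy()
    kept["cpr"] = kept["spend"] / kept["results"]
    # One row per asset, one column per period, for side-by-side reading.
    return kept.pivot(index="asset_id", columns="period", values="cpr")

# Asset a2 falls below the volume floor and drops out entirely;
# patterns worth trusting are the ones that hold in both columns.
print(cpr_by_period(df))
```

Only patterns that show up in both the "pre" and "post" columns survive the campaign-change confound; anything that flips between periods is suspect.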

I’m curious if other orgs have done analyses like this that go beyond what’s readily available in an ad platform.