This is a very good writeup, thanks for this. Everything strikes me as correct on the merits of the experiment. I think the objection that we don’t know how long people watched it misses the mark, as you say, since we are interested in the effect of viewing an online ad, not of watching an entire video (it could become relevant if we try to extrapolate to contexts where people do watch the whole video).
As I’ve said elsewhere, I’m skeptical that the right approach is to do more such RCTs. I worry that we would have to spend extremely large sums of money on them. At a minimum, we should compare against other options, like investigations, and not try too hard to detect effect sizes that wouldn’t dominate those other options anyway.
On this note, what effect size are you using for power calculations? Is it the effect size found in the study? You probably want to power for a smaller effect size: the smallest effect such that MFA or another org would invest more or less in online ads on the basis of it (most likely, the effect that determines whether online ads are or are not competitive with investigations and corporate campaigns).
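To make the suggestion concrete, here is a minimal sketch of such a power calculation using statsmodels; the baseline rate and both lifts are made-up placeholders, not figures from the study:

```python
# Sketch: sample size needed to detect the smallest decision-relevant
# effect vs. the effect observed in a prior study. All numbers are
# hypothetical placeholders.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05        # assumed control-group rate of the outcome
observed_lift = 0.02   # hypothetical lift reported in the earlier study
decision_lift = 0.005  # smallest lift that would change funding decisions

analysis = NormalIndPower()
for label, lift in [("observed", observed_lift), ("decision-relevant", decision_lift)]:
    es = proportion_effectsize(baseline + lift, baseline)  # Cohen's h
    n = analysis.solve_power(effect_size=es, alpha=0.05, power=0.8)
    print(f"{label} lift {lift:.3f}: ~{n:,.0f} subjects per arm")
```

The point is just that required sample size scales roughly with the inverse square of the effect size, so powering for the decision-relevant effect rather than the observed one can multiply the study’s cost severalfold.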
It’s probably a good idea to consider the global amount of money being spent on an AR intervention when evaluating the cost of studying it. For example, how much money is being spent across the different AR orgs on FB ads? If a proper study costs $200k and only $500k a year is being spent globally, it’s hard to see the value proposition. If the total being spent annually is $20m, then a full-fledged RCT is probably in order.
Does anyone know of estimates of how much the AR movement as a whole is investing in different interventions? This might help prioritize which interventions to study first and how much to pay for those studies.
I have heard that farm animal welfare as a whole is in the $10m-$100m range, so I would be surprised if something like online ads were $20m a year. That said, it’s worth accounting for long-term effects. For example, if a $100k study showed that online ads don’t work, and only $200k a year is being spent on them, the study might look like a waste in its first year; but if over the next ten years 50% of online-ads funding moves to more effective interventions, it easily pays for itself.
Additionally, if something is proven to work, the share of total AR funding that goes to it could grow well past what it gets now. For example, if strong evidence showed that online ads work, they might get $500k a year instead of $200k, with less-proven interventions getting less.
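Pulling the last few comments together, here is a back-of-envelope value-of-information sketch; every figure (the budgets, the cost-effectiveness multipliers, the 50/50 prior) is an arbitrary placeholder, just to show how the arithmetic could run:

```python
# Back-of-envelope value of information for funding an RCT on online ads.
# All inputs are made-up placeholders, not estimates.
study_cost = 100_000      # cost of a well-powered RCT
annual_spend = 200_000    # current annual spend on online ads
horizon_years = 10

# Branch 1: the study shows ads don't work, and 50% of the budget moves
# to interventions assumed to be twice as cost-effective per dollar.
redirected = 0.5 * annual_spend * horizon_years
gain_if_null = redirected * (2.0 - 1.0)

# Branch 2: the study shows ads work; funding scales to $500k/yr, drawn
# from interventions assumed to be half as cost-effective per dollar.
scaled_up = (500_000 - annual_spend) * horizon_years
gain_if_positive = scaled_up * (1.0 - 0.5)

# Weight the branches by a (purely illustrative) 50/50 prior.
expected_gain = 0.5 * gain_if_null + 0.5 * gain_if_positive
print(f"expected impact gain ${expected_gain:,.0f} vs study cost ${study_cost:,}")
# -> expected impact gain $1,250,000 vs study cost $100,000
```

Under these made-up numbers the study beats its cost by an order of magnitude, but the conclusion is obviously only as good as the placeholders.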
Not to mention that the study itself is delivering the intervention to the treatment group, so the marginal cost of adding the control group for randomization is only a portion of the nominal outlay. If, say, a $200k study spends $150k buying ads that would have been bought anyway, the incremental research cost is closer to $50k.