Which threshold was that and how did you arrive at that conclusion? I don’t really know one way or another yet, but upgrading or downgrading confidence seems premature without concrete numbers.
The threshold was a 10% difference in animal product consumption between the experiment and the control group. I arrived at this conclusion because I thought that there was some chance that these ads would cause the experiment group to report a 10% or more decrease in animal product consumption when compared to the control group. Since the study didn’t detect a difference at this level I assign a lower probability to a change of this magnitude being present than I did previously.
A predicted change of 10% or more might have been overly optimistic, and I didn't have a great sense of the effectiveness of online ads prior to this experiment. The ads were targeted at what was thought to be the most receptive demographic, and those who click on these ads seem particularly predisposed to decreasing their animal product consumption. You're right though, upgrading or downgrading confidence might be premature without concrete numbers.
I think there are some other reasons why I seem to be updating in the negative direction on the effectiveness of online ads:
- I feel that my lower bound for the effectiveness of online ads also moved in the negative direction. I previously assigned next to no probability to the ads causing an increase in animal product consumption. However, the results seem to suggest that there may have been an increase in animal product consumption in the experiment group, so I have increased the probability I put on that outcome.
- ACE also seems to be updating in the negative direction.
- I did a very rough and simple calculation in this spreadsheet, assuming that the experiment group would have 1% of people reduce their animal product consumption by 10%, 1% convert to vegetarianism, and 0.1% convert to veganism. I don't put too much weight on this because I did these calculations after I had already somewhat committed to updating negatively in this post, which may have induced a bias towards producing negative results. Still, this suggests that my best guess was systematically too positive across the board.
On this last bullet point, I wonder if there is a way we could do a Bayesian analysis of the data: set our prior and then update it with the results from this experiment. It would be very interesting to see whether this would cause us to update.
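As a very rough sketch of what that Bayesian analysis could look like, here is a normal-normal conjugate update on the effect size (percentage-point reduction in animal product consumption). All the numbers below are hypothetical placeholders I made up for illustration, not the study's actual data; the prior and the noise levels would need to be argued for separately.

```python
def normal_update(prior_mean, prior_sd, obs_mean, obs_sd):
    """Posterior for a normal mean given a normal prior and a noisy
    observation (standard conjugate-normal formulas)."""
    prior_prec = 1.0 / prior_sd ** 2   # precision = 1 / variance
    obs_prec = 1.0 / obs_sd ** 2
    post_prec = prior_prec + obs_prec
    # Posterior mean is the precision-weighted average of prior and data.
    post_mean = (prior_prec * prior_mean + obs_prec * obs_mean) / post_prec
    post_sd = post_prec ** -0.5
    return post_mean, post_sd

# Hypothetical prior: expect a 10-point reduction, with wide uncertainty.
# Hypothetical study estimate: roughly zero effect, measured noisily.
post_mean, post_sd = normal_update(prior_mean=10.0, prior_sd=8.0,
                                   obs_mean=0.0, obs_sd=6.0)
print(post_mean, post_sd)  # posterior pulled well below the prior mean
```

With these made-up inputs the posterior mean drops from 10 points to about 3.6, which is the shape of update being described: a null-ish result doesn't prove zero effect, but it should drag an optimistic prior substantially downward.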
It seems unfair to deallocate money from online ads where studies are potentially inconclusive to areas where studies don’t exist, unless you have strong pre-existing reasons to distinguish those interventions as higher potential.
I think we agree that if the study is inconclusive it shouldn't cause us to change the allocation of resources to online ads. However, if the study causes an update in either direction about the effectiveness of online ads, that is a reason to change the allocation. I currently interpret the study as causing me to update in the negative direction on online ads. This means other interventions now appear relatively more effective compared to online advertising than they did under my prior views, which seems to be a reason to allocate somewhat more resources to those other interventions and somewhat fewer to online ads.