‘Conditional on positive’ results are less reliable because of the potential for differential selection, but that is still a bit interesting. (But it could be e.g., ’bigger push to get people to donate means you attract less interested people on average, so they respond with smaller amounts.)
The equivalence testing is close to what I meant (do you want to expand on/link those), but no, not quite the same.
Quickly, what I had in mind is a ‘Bayesian regression’. You input a model, priors over all parameters (perhaps ‘weakly informative’ priors centered at a 0 effect) and you can then compute the posterior belief for these parameters. R’s BRMS package is good for this. Then you can report ‘what share of the posterior falls into each of the categories I mentioned below’.
I’ll try to follow up on this more specifically, and perhaps share some code.
That’s very fair! I’m not familiar with the norms for EA Forum title posts. What do you think a better title would be?
I guess something that summarises your research results. But also I genuinely want your expert view on this.
Should we? I guess not, right?
In that case, a better title would probably be something like “Tell people why they should donate, not that they are morally obligated to.”*
I had a strong prior that telling people they were morally obligated to donate would not have a positive effect and would, if anything, backfire. So I have actually updated a bit in the other direction regarding the backfire effect.
However, given we have evidence that moral demandingness didn’t produce any positive outcomes, I would currently advise against using demanding appeals and instead stick to moral arguments (which may even be underused given how effective they are).
That said, further research is needed, as there isn’t much of a literature at the moment.
*Can I change the title?
You can change the title, though I actually think I was being a bit snotty to pull you up on it.
But you don’t want to imply that the morally demanding argument backfired either. Donations were higher in the morally demanding case, no?
So we should update our beliefs in that direction I think, even if you don’t have statistical power to “rule out” that this difference was due to chance.
Can you tell us: in a simple Bayesian updating model, what is the approximate posterior probability that the strong moral demandingness condition performed
equal to or worse than the regular moral argument?
no more than 10% better?
more than 10% better (1 minus the previous probability)?
more than 20% better?
In Experiment 1, conditional on donating, participants actually donated significantly less in the Moral Demandingness condition (but this didn’t replicate in Experiment 2).
Can you DM me about the model? I am happy to run that analysis. We ran mean equivalence tests to provide evidence of the bounds of the null result, but I believe what you are suggesting is quite different.
‘Conditional on positive’ results are less reliable because of the potential for differential selection, but that is still somewhat interesting. (It could be, e.g., that a bigger push to get people to donate attracts less interested people on average, so they respond with smaller amounts.)
The equivalence testing is close to what I meant (do you want to expand on/link those?), but no, not quite the same.
Quickly, what I had in mind is a ‘Bayesian regression’: you specify a model and priors over all parameters (perhaps ‘weakly informative’ priors centred at a zero effect), and can then compute the posterior belief for these parameters. R’s brms package is good for this. You can then report what share of the posterior falls into each of the categories I mentioned above.
I’ll try to follow up on this more specifically, and perhaps share some code.
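In the meantime, here is a minimal sketch of the posterior-share idea, using a simple conjugate normal-normal update rather than a full brms regression. All the numbers (effect estimate, standard error, prior width) are hypothetical placeholders, not the study’s actual estimates:

```python
import math

def norm_cdf(x: float, mean: float = 0.0, sd: float = 1.0) -> float:
    """Normal CDF via the error function (standard library only)."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def posterior_normal(d_hat: float, se: float,
                     prior_mean: float, prior_sd: float):
    """Conjugate normal-normal update: returns (posterior mean, posterior sd)."""
    prec = 1.0 / prior_sd**2 + 1.0 / se**2
    mean = (prior_mean / prior_sd**2 + d_hat / se**2) / prec
    return mean, math.sqrt(1.0 / prec)

# Hypothetical inputs -- NOT the study's estimates: an observed relative
# difference of +5 percentage points with a standard error of 6, and a
# weakly informative prior centred on a zero effect.
post_mean, post_sd = posterior_normal(d_hat=0.05, se=0.06,
                                      prior_mean=0.0, prior_sd=0.15)

# Share of the posterior falling into each of the categories above:
p_worse_or_equal = norm_cdf(0.0, post_mean, post_sd)       # <= 0% better
p_at_most_10 = norm_cdf(0.10, post_mean, post_sd)          # <= 10% better
p_more_than_10 = 1.0 - p_at_most_10                        # > 10% better
p_more_than_20 = 1.0 - norm_cdf(0.20, post_mean, post_sd)  # > 20% better

print(f"P(effect <= 0):  {p_worse_or_equal:.2f}")
print(f"P(effect > 10%): {p_more_than_10:.2f}")
print(f"P(effect > 20%): {p_more_than_20:.2f}")
```

A brms model would do the same thing with MCMC draws (e.g., the share of posterior draws above each threshold), but the logic of reporting posterior shares over pre-specified effect ranges is identical.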
Thanks David, that would be great! I’ll check to see if there is a way to run it in Stata, but if not I can just run it in R.