I think an important thing to consider with this study (as with most psychology-style experiments) is the generalisability/​external validity of the results, and in particular the extent to which the effects may only be short-lived and may primarily reflect things like demand characteristics and social desirability bias.
These results might not matter much if they just reflect the best ways to get people to give an extra dollar right after being shown some relevant text. What matters more is the best ways to get people to give substantially more, or to give moderate amounts in an ongoing way (even when they’re not being observed and haven’t seen some relevant text right beforehand).
And I think this is worth bearing in mind when we think about the value of arguments that may induce some degree of guilt. These results suggest that those sorts of arguments may work best for influencing people to give slightly more in this low-stakes and unusual setting (though personally Argument 12 didn’t seem very guilt-inducing to me). But it still seems plausible that those sorts of arguments don’t work especially well—or even work fairly badly—for leading to larger or more ongoing donations (e.g., because people get annoyed or stressed by these arguments and thus stop engaging over time or when no one’s looking).
That said, I have no specific evidence for that, and I’m not saying I think that’s especially likely. Anecdotally, it does seem that arguments that could induce some degree of guilt have been important in many EAs’ journeys into EA, and in many EAs’ current thinking. (That’s the case for me, for example.)