I suspect that it is a bad idea to publicly advocate this (though using it is fine). I’m not worried so much about moral licensing; rather, I think the amount of money being moved in this way is so tiny, relative to the amount of attention required in order to move it, that in a genuinely impact-focused discussion of possible ways to do good it would not even come up. I fear that bringing it up in association with EA gives a misleading impression of what the EA approach to prioritization looks like.
I think this is a valid concern, and certainly don’t think presenting ‘Amazon Smile is the sort of thing EAs do’ is particularly useful or accurate. To try to be slightly more clear about why I do think the mention is a useful starting point:
Full EA can be quite a lot to try to introduce to people all at once, even when those people already want to help.
Asking people to carefully consider how they make a specific donation is a gentle way in, at least to ‘soft EA’. (Giving games are another example of this.)
Amazon Smile is a specific donation that you can ask people to consider how they make. If they haven’t heard of it before, it’s likely that their net experience of hearing about it and setting it up will be positive (they are getting to donate to a charity with no downside, again rather like a giving game).
My hope is that this positive experience will make people more likely to consider where their donations go in future, and/or to respond positively to future things they hear about EA. I’m uncertain about how large the effects in each case will be, but don’t think they will be negative. I am concerned, however, about the effect of someone setting up Amazon Smile on the total amount that they donate in future, which I think will be negative if you ignore any potential introduction to EA. This means the probability of the exercise being positive depends on how likely you are to be able to use the conversation as a productive starting point.