One thing I like about offsetting is that it creates a more cooperative and inclusive EA community. For example, animal advocates might be less put off by meat-eating EAs who offset their consumption, and poverty reducers might be less concerned about long-termists making policy recommendations that (perhaps as a side effect) slow down AI progress (and thereby the escape from global poverty) if those long-termists also support some poverty interventions (especially when doing so is particularly cheap for them). In general, there seem to be significant gains from cooperation, and given repeated interaction, it’s fairly easy to actually move towards such outcomes, including by starting to cooperate unilaterally.
Of course, this is best achieved not through offsetting, but by thinking about whom we want to cooperate with and trying to advance their values as cost-effectively as possible.
Couldn’t one argue that offsetting harms that people outside EA care about counts, to some degree, as cooperating with mainstream people? In practice this often works through improved public relations or general trustworthiness rather than through explicit tit for tat. In any case, whether it is worthwhile depends on how costly the offsets are (in money and time) relative to the benefits.
Thanks, I agree. It still seems to me that a) mainstream people probably matter somewhat less than specific groups, and b) we should think about how mainstream people would like to be helped, which may or may not be through offsetting.
Good point.