It might be interesting to compare that to everyday environmentalism or everyday antispeciesism. EAs have already thought about these areas a fair bit and have said interesting things about them in the past.
In both of these areas, the following seems to be the case:
1. donating to effective nonprofits is probably the best way to help at this point,
2. some other actions look pretty good (avoiding unnecessary intercontinental flights and fuel-inefficient cars, eating a plant-based diet),
3. other actions make a negligibly small difference per unit of cost (unplugging your phone charger when you’re not using it, avoiding animal-based food additives),
4. there are some harder-to-quantify activities that could be very good or not (activism, advocacy, etc.),
5. there are some virtues that seem helpful for longer-term, lasting change (becoming more aware of how the products you consume are made and what their moral cost is, learning to see animals as individuals with lives worth protecting).
EAs are already thinking a lot about optimizing #1 by default, so perhaps the project of “everyday longtermism” could be about exploring whether given actions fall within #2, #3, or #4 (and what to do about #4), and what the virtues corresponding to #5 might look like.