Thank you for this summary!
One thought that struck me is that most of the objections seem most likely to come up in response to ‘GiveWell-style EA’.
I expect the objections that would be raised to a longtermist-first EA would be pretty different, though with some overlap. I’d be interested in any thoughts on what they would be.
I also (speculatively) wonder if a longtermist-first EA might ultimately do better with this audience. You can do a presentation that starts with climate change, and then point out that the lack of political representation for future generations is a much more general problem.
In addition, longtermist EAs favour hits-based giving, which makes it clear that policy change is among the best interventions while acknowledging that its effects are very hard to measure. That seems more palatable than an approach highly focused on measuring narrow metrics.
There might be a risk that some view the (very) long-run future as a “luxury problem”, and that focusing on that, rather than short-term problems in your own country, reveals your privilege. (That attitude may be particularly common concerning causes like AI risk.) My guess is that people are less likely to have such an attitude towards someone who is focusing on global poverty.
Longtermism isn’t just AI risk, but concern with AI risk is associated with an Elon Musk/technofuturist/technolibertarian/Silicon Valley idea cluster. Many progressives dislike some or all of those things and will judge AI alignment negatively as a result.
I wonder whether it’s a good or bad thing that AI alignment (of existing algorithms) is increasingly being framed as a social justice issue. Once you’ve talked about algorithmic bias, it seems less privileged to then say, “I’m very concerned about a future in which AI is given even more power.”
In talking to many Brown University students about EA (most of whom are very progressive), I have noticed that longtermist-first and careers-first EA outreach does better, and this seems to be because of the objections that come up in response to ‘GiveWell-style EA’.