Over the last ~6 months I’ve noticed a general shift amongst EA orgs towards justifying work on reducing risks from AI, bio, nukes, etc. less on the logic of longtermism and more on Global Catastrophic Risks (GCRs) directly… My guess is these changes are (almost entirely) driven by PR concerns about longtermism.
It seems worth flagging that whether these alternative approaches are actually better for PR (or outreach considered more broadly) is very uncertain. I’m not aware of any empirical work directly assessing this, even though the question seems clearly tractable empirically. Rethink Priorities has conducted some work in this vein (referenced by Will MacAskill here), but neither that work nor other private work we’ve completed was designed to address this question directly. I don’t think the answer is very clear a priori: there are lots of competing considerations, and anecdotally, when we have tested messaging for different orgs, the results are often surprising. Things are even more complicated when you consider how different approaches might land with different groups, as you mention.
We are seeking funding to conduct work that would actually investigate this question (here), as well as broader work on EA/longtermist message testing and on public attitudes towards EA/longtermism (for which I don’t have linkable applications).
I think this kind of research is valuable even if one is very sceptical of optimising for PR. Even if you don’t want to maximise persuasiveness, it’s still important to know how different groups understand (or misunderstand) your message.