We all know EAs and rationalists are anxious about getting involved in politics because of the motivated reasoning and soldier mindset that it takes to succeed there (https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer).
Would it work to have a stronger distinction in our minds between discourse, which should stay free of politics, and interventions, which can include, e.g., seeking political office or advocating for a ballot measure?
Since EA political candidacies are happening whether we all agree or not, maybe we should take measures to insulate the two. I like the "discourse vs. intervention" frame as a tool for doing that, either as a conversational signpost or possibly to silo conversations entirely. Maybe people involved in political campaigns should have to recuse themselves from meta discourse?
Relatedly, I’m a bit worried that EA involvement in politics may lead to an increased tendency for reputational concerns to swamp object-level arguments in many EA discussions; and for an increasing number of claims and arguments to become taboo. I think there’s already such a tendency, and involvement in politics could make it worse.
What’s so weird to me about this is that EA has the clout it does today because of these frank discussions. Why shouldn’t we keep doing that?
I’m in favor of not sharing infohazards but that’s about the extent of reputation management I endorse—and I think that leads to a good reputation for EA as honest!
I think the standard thing is for many orgs and cultures to start off open and transparent and move towards closedness and insularity. There are good object-level reasons for the former, and good object-level reasons for the latter, but taken as a whole, it might just be better viewed as a lifecycle thing rather than a matter of principled argument.
Open Phil is an unusually transparent and well-documented example in my mind (though perhaps this is changing again in 2022).
I can see good reasons for individual orgs to do that, but way fewer for EA writ large to do this. I’m with Rob Bensinger on this.
Agree there’s little reason for political candidates to comment on meta-EA. There would, however, be reasons for them to comment on EA analysis of public policies. Their experiences in politics might also have a bearing on big picture EA strategy, which would be a greyer area.