I think we should reject a binary framing of prioritising epistemic integrity vs prioritising social capital.
My take is:
Veering too far outside the Overton window too quickly makes it harder to have an impact, while staying right in the middle probably means you're having none. There is a sweet spot: close enough to the middle that your reputation stays intact and you are taken seriously, but far enough from it that you are still moving opinion.
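To make the sweet-spot claim a bit more concrete, here is a minimal toy sketch (my own illustration, with entirely made-up functional forms, nothing from the thread): if credibility decays with distance from the window's centre while the opinion shift you can cause grows with that distance, their product peaks at an interior point rather than at either extreme.

```python
import numpy as np

# Toy model: impact = credibility * opinion shift, both functions of how far
# you stand from the centre of the Overton window. Both functional forms are
# assumptions chosen purely for illustration.
distance = np.linspace(0, 1, 101)     # 0 = dead centre, 1 = far outside the window
credibility = np.exp(-3 * distance)   # assumption: reputation decays with distance
shift = distance                      # assumption: standing apart is what moves opinion
impact = credibility * shift

best = distance[np.argmax(impact)]
print(f"impact peaks at distance {best:.2f}, strictly between the extremes")
```

The exact peak depends entirely on the assumed curves; the only point is that under any tradeoff of this shape, the optimum is interior.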
In addition, I think EA's focus on philanthropy over politics is misguided; much of EA's long-term impact will come from influencing politics, for which good PR is very important.
I’d be very interested in seeing a more political wing of EA develop. If folks like me who don’t really think the AGI/longtermist wing is very effective can nonetheless respect it, I’m sure those who believe political action would be ineffective can tolerate it.
I'm not really in a position to start a wing like this myself (I'm currently in grad school for law and policy), but I might be able to contribute at some point in the future (that is, if I can be confident that I won't tank my professional reputation through guilt by association with racism).
I think it's unlikely (and probably not desirable) that "EA Parties" will form; it's more likely that EA ideas will gain influence within political parties across the spectrum.
I agree! When I say "wing" I mean something akin to "AI risk" or "global poverty", i.e., an EA cause area that specific people work on.
Although we seem to be on opposite sides of this divide, I think we arrived at a similar conclusion: there is an equilibrium between social capital and epistemic integrity that achieves the most total good, and EA should seek that point out.
We may have different priors as to where that point lies, but it is a useful shared framing that works towards answering the question.
Strongly agree. In addition, if we are truly at a hinge moment, as many claim, large political decisions are likely to be quite important.