While I’m not sure we’re using terms like “political” and “power” in the same way, this worry makes a lot of sense to me.
However, I think there is an opposite failure mode: mistakenly believing that because of one’s noble goals and attitudes one is immune to the vices of power, and can safely ignore the art of how to navigate a world that contains conflicting interests.
A key assumption from my perspective is that political and power dynamics aren’t something one can just opt out of. There is a reason why thinkers from Plato through Machiavelli to Carl Schmitt have insisted that politics is a separate domain that merits special attention (and I’m saying this as someone who is not particularly sympathetic to any of these three on the object level). [ETA: Actually I’m not sure Plato says that, and I’m confused about why I included him originally. In a sense he may suggest the opposite view, since he sometimes compares the state to the individual.]
Internally, community members with influence over financial or social capital have power over those whose projects depend on such capital. There certainly are different views on how this capital is best allocated, and at least for practical purposes I don’t think these are purely empirical disagreements; they also involve ‘brute differences in interests’.
Externally, EAs have power over beneficiaries when they choose to help some but not others. And a lot of EA projects bear on the interests of EA-external actors, who form a complex network of partly conflicting, partly aligned interests and differing amounts of power over each other. Perhaps most drastically, a lot of EA thought around AI risk is about how to best influence how essentially the whole world will be reshaped (if not an outright plan for how to essentially take over the world).
Therefore, I think we will need to deal with ‘politics’ anyway, and we will attract people who are motivated by seeking power anyway. Non-EA political structures and practice contain a lot of accumulated wisdom on how to navigate conflicting interests while limiting damage from negative-sum interactions, on how to keep the power of individual actors in check, and on how to shape incentives in such a way that power-seeking individuals make prosocial contributions in their pursuit of power. (E.g. my prior is that any head of government in a democracy is at least partly motivated by pursuing power.)
To be clear, I think there are significant problems with these non-EA practices. (Perhaps most notably negative x-risk externalities from international competition.) And if EA can contribute technological or other innovations that help with reducing these problems, I’m all for it.
Yet overall I feel like I more often see EAs make the mistake of naively thinking they can ignore their externally imposed entanglement in political and power dynamics, and that there is nothing to be learned from established ways of reining in and shaping these dynamics (perhaps because they view established practice and institutions largely as a morass of corruption and incompetence one is better off steering clear of). E.g. some significant problems I’ve seen at EA orgs could have been avoided by sticking more closely to standard advice, such as having a functional board that provides accountability for org leadership.
My best guess is that, on the margin, it would be good to attract more people with a common-sense perspective on politics and power-seeking, as opposed to people who lack the ability or willingness to understand how power operates in the world and how best to navigate it. If rebranding to “Global Priorities” would have that effect (which I’m less confident in than you are), then I’d count that as a reason for rebranding (though I doubt it would be among the top 5 most important reasons for or against).