Hi Dustin :)
FWIW I also don’t particularly understand the normative appeal of democratizing funding within the EA community. It seems to me like the common normative basis for democracy would tend to argue for democratizing control of resources in a much broader way, rather than within the self-selected EA community. I think epistemic/efficiency arguments for empowering more decision-makers within EA are generally more persuasive, but wouldn’t necessarily look like “democracy” per se and might look more like more regranting, forecasting tournaments, etc.
Also, the (normative, rather than instrumental) arguments for democratisation in political theory are very often based on the idea that states coerce or subjugate their members, and so the only way to justify (or eliminate) this coercion is through something like consent or agreement. Here we find ourselves in quite a radically different situation.
It seems like the critics would claim that EA is, if not coercing or subjugating, at least substantially influencing something like the world population in a way that meets the criteria for democratisation. This seems to be the claim in arguments about billionaire philanthropy, for example. I’m not defending or vouching for that claim, but I think whether we are in a sufficiently different situation may be contentious.
This argument seems fair to apply to CEA’s funding decisions, since those influence the community, but I do not think that I, as a self-described EA, have more justification to decide over bed net distribution than the people of Kenya who are directly affected.
Yes, that seems right.
In the political theory context, that argument would be seen as too weak a basis for democratisation; if mere influence were enough, powerful states would have to enfranchise everyone in the world and form a global democracy. In this context it is also too strong, since it implies global democratic control of EA funds, not community control.
I guess I would think that if one wants to argue for democracy as an intrinsic good, that would get you global democracy (and global control of EA funds), and it’s practical and instrumental considerations (which, anyway, are all the considerations in my view) that bite against it.
This is a great point, Alexander. I suspect some people, like ConcernedEAs, believe the specific ideas are superior in some way to what we do now, and it’s just convenient to give them a broad label like “democratizing”. (At Asana, we’re similarly “democratizing” project management!)
Others seem to believe democracy is intrinsically superior to other forms of governance; I’m quite skeptical of that, though I agree with tylermjohn that it is often the best way to avoid specific kinds of abuse and coercion. Perhaps in our context there might be more targeted solutions along those lines, like an appeals board for conflict-of-interest (COI) or retaliation claims. The formal power might still lie with OP, but we would have strong soft reasons for wanting to defer.
In the meantime, I think the forum serves that role, and from my POV we seem reasonably responsive to it? Esp. the folks with high karma.
I probably should have been clearer in my first comment that my interest in democratizing the decisions more was quite selfish: I don’t like having the responsibility, even when I’m largely deferring it to you (which itself is a decision).
My guess is that the current non-democratic EA institutions have serious flaws, and democratic replacement institutions would have even more serious flaws, and it’s still worth trying the democratic institutions (in parallel to the current ones) because 2 flawed structures are better than 1. (For example, because the democratic institutions fund important critical work that the current institutions do not.)
I think this likely depends on who else is funding work in a given area, and what the other funders’ flaws/blind spots are. For instance, if the democratic EA alternative has many of the same flaws/blind spots as the larger funders in a cause area, diverting resources from current EA efforts would likely lead to worse outcomes in the cause area as a whole.
Yeah, I definitely agree with this!
An idea I’ve been kicking around in my head for a while is ‘someone should found an organization that investigates what existing humans’ moral priorities are’ - like, if there were a world democracy, what would it vote for?
An idea for a limited version of this within EA could be representatives for interest groups or nations. E.g., the Future Design movement suggests that in decision-making bodies, there should be some people whose role is to advocate for the interests of future generations. There could similarly be a mechanism where (eg) animals got a certain number of votes through human advocates.
GiveWell did some of this research in 2019 (summary, details):
“We provided funding and guidance to IDinsight, a data analytics, research, and advisory organization, to survey about 2,000 people living in extreme poverty in Kenya and Ghana in 2019 about how they value different outcomes.”
Oh awesome! I’ll check that out.
I think there are two aspects that make “the EA community” a good candidate for who should make decisions:
1. The need to balance between “getting all perspectives by involving the entire world” and “making sure it’s still about doing the most good possible”. This would involve much less vetting for value alignment than the current state, but still some. I’m not sure it’s the best point on the scale, but I think it might be better than where we are currently.
1.1. Another thought about this: maybe we ought to fix the problem where “value alignment” is, as the other post argues, actually interpreted much more narrowly than agreeing about “doing the most good”.
2. The fact that EA is, in the end, a collaborative project and not a corporation. It seems wrong and demotivating to me that EAs have to compete and take big risks on themselves individually to try to have a say in the project they’re still expected to participate in.
2.1. Maybe a way for funders to test this is to ask yourselves: if there weren’t an EA community, would your plans still work as you expect them to? If not, then I think the community ought to also have some say in making decisions.