I like the analogy, and I found myself agreeing with many of your suggestions, but I think there is a danger here: it portrays EAs as functioning very individualistically, and that could ultimately be ineffective.
Imagine a market where many of the buyers want to make a cake (say the cake represents improving global health—other people may go to the market for other reasons, but many go for the same reason: in this case, cake). Each buyer only has a little money, though—not enough for all the ingredients—so each buys one ingredient, hoping that the others will buy the rest and that everyone is working from the same recipe. Inevitably, though, they have different ideas of the cake they want to make, and what you end up with is a mess of mismatched ingredients. The buyers DO get to eat something, but it isn’t cake. In the real world, this represents EAs targeting several individual health problems without actually working towards reducing global poverty and ill health as a whole.
Now imagine an alternative market where the buyers coordinate. They know they all want cake, so they discuss together what the best recipe would be. Then, when they have one, they organise who should buy what—and then together they are able to make a damn good cake. In the real world, this would mean tackling global poverty in a coordinated way, which involves addressing systemic change.
What I’m pointing out is that if the buyers were to coordinate they might be able to do far more good. I think this is currently a big problem in EA, that we don’t coordinate or think strategically enough. We focus on what individual donors can do, and thereby miss out on tackling the bigger problems. So, I like your analogy, but I don’t want people to think that what we ought to have is a classic free market with self-interested actors. Rather, we should have a free market with (at least some) actors working for (at least some) common good(s).
In global health, one challenge is that there are a massive number of players, each with their own agendas. You’ve got developing countries, Western governments, Gates, traditional NGOs, EA, and many other players besides.
Only a small fraction of the funding is EA-aligned, so it’s unclear how much benefit tighter EA coordination would bring. Moreover, my guess is that having so much of the EA funding routing through GiveWell has some coordinating effects (e.g., GW would likely know and react if two programs it recommended were duplicating efforts).
I think my argument will be even clearer if I talk about mitigating AI risk. Imagine if all AI safety orgs operated independently, even competing with each other. It would be (or arguably already is) a mess! There would be no ‘open letter’, just different people shouting separately. And surely AI safety could be advanced further if the existing orgs worked together better.
So yes, choice is good, but to some degree we are and should be working towards common goals.