Re: what goes wrong with the market metaphor: I mostly just think it raises all sorts of questions about whether the relevant assumptions hold for modelling this as an efficient market. Even if the answer is yes (and I’m skeptical), the fact that it pushes my (and seemingly other people’s) thoughts there isn’t ideal. It feels like a distraction from the core issue you’re pointing to.
I think this is probably better framed as a governance problem. You’re asking institutions that provide public goods to the “spokes” of EA not to pick favourites and to be responsive to the community. That point can be made well without reference to an EA market or perfect competition. I prefer the 1-2-3 phrasing in your reply.
Points taken. The reaction I’d have anticipated, if I’d just put it the way I do now, would have been:
(1) the point of EA is to do the most good
(2) we, those who run the central functions of EA, need to decide what that is in order to know what to do
(3) once we are confident of what “doing the most good” looks like, we should endeavour to push EA in that direction—rather than to be responsive to what others think, even if those others consider themselves parts of the EA community.
You might think it’s obvious that the central bits of EA should not, and would not, ‘pick favourites’, but doing so has been a more or less overt goal for years. The market metaphor provides a rationale for resisting that approach.
I think point 2 is highly questionable, though. Just from an information-aggregation point of view, it seems like we should want key public goods providers to be open to all ideas and to do rather little to filter or privilege particular ones. For example, the Forum should not elevate posts on animals or poverty or AI or whatever (and it doesn’t). I’ve been upset with 80k for this.
I think HLI provides a good example of how this should be done. If you want to push EA in a direction, do that as a spoke and try to sway people to your spoke. “Capturing” a central hub is not how this should be done. I think having a norm against this would be helpful.
That said, I also unfortunately do not think the market metaphor is going to be convincing to people. Concerns about monocultures and groupthink might be more persuasive, but again I don’t have very well-formed thoughts here. Still, if the goal of EA is to do the most good, and we think there might be a cause X out there, or we aren’t confident that we have the right mix of resources across cause areas, then there is real value in a norm where central public goods providers do not strongly advocate for specific causes.
Yeah, good points. You may well be right.