One thing I want to say is this: while I agree that EA would probably benefit from being more capitalist, I want to point out why a central authority will still be necessary.
The reason is that under capitalism, the downside is essentially capped: measured in profit, the worst a venture can do is lose its investment, while the upside is unbounded. Real-world harms are not capped this way. The risks of harm are as unbounded as the benefits, so efficiency is not always good. Differential progress is necessary, which is why we need at least a central authority to weed out dangerous ideas. Capitalism and markets are, by default, neutral about where progress goes.
Second, infohazards matter. In a free market, all information, including infohazards, is shared freely, because those sharing it don't personally bear the cost. For example, insights into AI progress would likely be infohazardous.
So free markets will still need a central authority that can stop net-negative ideas.
I agree that open competition could lead to bad dynamics, but you absolutely don't need a central authority for this; you just need a set of groups trusted to have high epistemic and infohazard standards. Within the "core EA" world, I'll note that we have Openphil, Givewell, FTX Foundation, Survival and Flourishing Fund, Founders Pledge, and the various EA Funds (LTFF, Infrastructure, etc.), and I'd be shocked if we couldn't easily add another dozen, though I'd be happier if they had somewhat more explicitly divergent strategies rather than overlapping to a significant extent. So we do need authorities who are responsible, but no, they don't need to be centralized.