Thank you so much for this post. Your example of capitalism points the way. I plan to write a post soon suggesting that individuals following their personal passions for where to do good could lead to just this kind of effective distribution of altruistic energy that is globally optimized even though apparently less efficient at the local level.
One thing I do want to say: I agree that EA would probably benefit from being more capitalist, but I want to point out why a central authority will still be necessary.
The reason is that in capitalism, the downside is essentially capped: as measured by profit, the worst you can do is lose your investment, while the upside is unbounded. Altruistic endeavors are not like this. The potential harms are unbounded, not just the benefits, so efficiency is not always good. Differential progress is necessary, hence the need for at least a central authority to weed out dangerous ideas. Capitalism and markets are, by default, neutral about where progress goes.
Second, infohazards matter. In a free market, all information, including infohazards, is shared freely, because the sharers don’t personally bear the cost. Insights into AI progress, for example, would likely be infohazardous.
So a free market will need a central authority that can stop net-negative ideas.
I agree that open competition could lead to bad dynamics, but you absolutely don’t need a central authority for this; you just need a set of groups trusted to have high epistemic and infohazard standards. Within the “core EA” world, I’ll note that we have Open Philanthropy, GiveWell, the FTX Foundation, the Survival and Flourishing Fund, Founders Pledge, and the various EA Funds (the LTFF, the Infrastructure Fund, etc.), and I’d be shocked if we couldn’t easily add another dozen—though I’d be happier if they had somewhat more explicitly divergent strategies, rather than overlapping to a significant extent. So yes, we need authorities who are responsible, but no, they don’t need to be centralized.
That seems wrong, at least in the naïve form you suggest. Yes, I often encourage people to consider what they are interested in and enjoy doing, since those are critical inputs into effectiveness. But I’ve been concerned that too many people are failing at the rationalist virtues of actually evaluating the evidence and changing their minds, so I don’t think following passions alone is going to be helpful.