Thanks for sharing this, Michael! I briefly discussed the idea with some of my coworkers, and we aren’t sure the argument goes through:
The arguments for free markets I usually hear are things like: if you make some assumptions about the market participants (e.g. perfect competition), then you can prove that the equilibrium price is optimal in some sense, and therefore distorting the market moves you from optimality. I think there is some metaphorical similarity between EA and a market, but it’s not clear to me that the assumptions of these theorems are actually satisfied by EA.[1]
Maybe more importantly though: the theorems usually show optimality for market participants, but EA is not optimizing for EAs; we are optimizing for EA’s beneficiaries. These people do not participate in the EA “market,” and I don’t know of any reason to think that market efficiency within EA would necessarily result in their welfare being correctly priced. And if there is already some market distortion, removing other market distortions might not help.
While I agree that the market metaphor has some significant limitations here, I think there’s a separate set of arguments for free(ish) markets that is based more on experience than on theorems. In many cases, such markets work well on the whole at achieving the ends to which market participants are working (which are admittedly usually self-interested ends). And they also often result in participants creating value for people they do not even consciously intend to benefit (as in the Adam Smith quote).
I’d also suggest that goodness-of-fit here should be evaluated in a relative sense. In light of experience, I’d submit that the base rate of more-centrally-controlled charitable governance structures effectively optimizing for the charitable endeavor’s stated beneficiaries is pretty low. One could adjust that base rate upward based on a conclusion that the people running the centralized governance structures in EA are more capable/selfless/suitable than the median people running other charitable endeavors. However, if one did so, one would likely think the EA community is also more capable/selfless/suitable than the median group of people making decisions in decentralized charitable governance structures. That would call for applying an upward adjustment to the base rate of decentralized governance approaches working well for stated beneficiaries, prior to evaluating the proposal presented in this post.
the base rate of more-centrally-controlled charitable governance structures effectively optimizing for the charitable endeavor’s stated beneficiaries is pretty low.
I would be interested in your data set here; this doesn’t seem obvious to me.[1]
I assume you mean to say that more centrally controlled charities are worse; if you’re just saying that the base rate amongst both centrally and non-centrally controlled charities is low, then I agree.
I assume you mean to say that more centrally controlled charities are worse; if you’re just saying that the base rate amongst both centrally and non-centrally controlled charities is low, then I agree.
I am just saying the latter, without asserting that either the centralized or decentralized base rate is higher. My reference to low base rates among centrally-controlled charities was an attempt to explain that Michael’s market metaphor could have some significant limitations and yet could still be superior to alternative governance approaches.
My own view (noted in a separate comment) is that the nature of the specific community infrastructure function plays a significant role in whether I would predict a centralized vs. decentralized approach to work better.
but it’s not clear to me that the assumptions of these theorems are actually satisfied by EA
Definitely not. A small sample of the obstacles to applying the welfare theorems here:
- Certain “trades” have very large negative externalities due to the risk of reputational harm.
- The largest few funders have huge amounts of market power.
- We’re extremely far from perfect information. “Consumers” don’t even have good knowledge of their own utility functions in most cases.
- EA is not a complete system of markets—the vast majority of possible charitable interventions are not on offer at any given time.
- Perhaps most importantly: “sellers” aren’t profit-maximizers.
But even if EA did approximate a perfectly competitive market, that would imply very little about its effectiveness. The first welfare theorem says that perfect competition gets you a Pareto-efficient outcome, but Pareto-efficient outcomes can be almost arbitrarily bad. “The king owns literally everything” is Pareto-efficient. The second welfare theorem is where all the oomph comes from: given perfect competition, you can reach any point on the Pareto-frontier—by redistributing resources and then letting the market reequilibrate. But EA is not a state and cannot carry out redistribution, so this gets us nothing.
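The “almost arbitrarily bad” point is easy to check in a toy model (my own sketch, not from the comment): with monotone preferences over a fixed endowment, every allocation that wastes nothing is Pareto-efficient, including the maximally unequal one.

```python
# Toy illustration: two agents share a fixed endowment of 10 identical units.
# Utility is simply "units owned" (monotone preferences), so any allocation
# that wastes nothing is Pareto-efficient -- including "A owns everything".

def pareto_dominates(x, y):
    """x dominates y if everyone is at least as well off and someone strictly better off."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

ENDOWMENT = 10
allocations = [(a, ENDOWMENT - a) for a in range(ENDOWMENT + 1)]

def is_pareto_efficient(x):
    return not any(pareto_dominates(y, x) for y in allocations)

# The maximally unequal full allocation is Pareto-efficient...
assert is_pareto_efficient((10, 0))
# ...whereas an allocation that wastes a unit is not (someone could be made
# better off at no one's expense by handing out the wasted unit).
assert not is_pareto_efficient((9, 0))
```

Efficiency here says nothing about distribution, which is exactly why the second welfare theorem’s redistribution step carries the normative weight.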
Hello Ben, and thanks for this! As I said in my comment to Fin Moorhouse below, I’m not sure what difference it makes that market participants are buying for others in the EA market, but for themselves in the normal market. Can you spell out what you take to be the relevant feature, and what sort of ‘market distortions’ it specifically justifies? In both the EA and normal markets, people are trying to get the ‘best value’, but will disagree over what that is and how it is to be achieved.
If the concern is about externalities, that seems to strongly count against intervening in the EA market. In normal markets, people don’t account for externalities, that is, their effects on others. But in the EA market, people are explicitly trying to do the most good: they are looking to do the things that have the best result when you account for all the effects on everyone; in economics jargon, they are trying to internalise all those externalities themselves. Hence, in the EA market (distinctively from any other market!) there are no clear grounds for intervening on the basis of externalities.
Hi Michael, unfortunately it is late where I am, so the clarity of my comment may suffer; but I feel that if I do not answer now I may forget later. So, with perfect being the enemy of the good, I hope to produce a good-enough intuition pump for my disagreements:
An example of a market where the buyer buys for others is the healthcare market, where insurers, hospitals, doctors, and patients all exist: patients buy insurance, insurers pay hospitals, and hospitals pay doctors (in the US, doctors may work as small sole traders within the hospital, like a shop in a mall). As a result, a lot of market failure happens (moral hazard, adverse selection). In this case, you would have to model each seller and buyer as perfectly informed and perfectly able to communicate the needs of those they help, which is problematic. I know how much I need something, so I buy it; but here, a fund needs to guess how much donors wanted something improved, while a charity guesses how much improvement the recipients got, and then they meet in the middle? Troublesome; perhaps in this case you just pay the smartest charity people (à la Charity Entrepreneurship) and trust them to do the best they can, instead of having them spend energy competing with others to prove their worth.
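The “guessing in the middle” worry can be sketched as a noisy relay (my toy model with made-up noise levels, not anything from the thread): each intermediary between donor and beneficiary observes only a noisy version of the upstream signal, so fidelity to the donor’s intent degrades with chain length.

```python
# Toy sketch: a donor's true valuation is retransmitted through a chain of
# intermediaries (fund, charity, ...), each adding independent Gaussian noise.
# We estimate the mean squared error of the final signal by simulation.

import random

random.seed(0)

def relay(value, hops, noise=1.0, trials=20_000):
    """Mean squared error of `value` after `hops` noisy retransmissions."""
    total = 0.0
    for _ in range(trials):
        signal = value
        for _ in range(hops):
            signal += random.gauss(0.0, noise)  # each intermediary adds noise
        total += (signal - value) ** 2
    return total / trials

direct = relay(10.0, hops=1)   # donor deals with the charity directly
chained = relay(10.0, hops=3)  # donor -> fund -> charity -> recipient
assert chained > direct        # more intermediaries, worse fidelity
```

With independent noise the error variance grows linearly in the number of hops, which is one way to formalise why longer donor-to-beneficiary chains make “correct pricing” harder.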
This brings in the problem of market distortions through advertising: whichever charity spends more on looking good to buyers can get more than an equivalent one that does not, so the equilibrium tends to drift towards “advertising”. This can take the form of all sorts of signalling, which creates noise.
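The advertising dynamic can be sketched numerically (my illustrative numbers, not the commenter’s): if donations are split in proportion to ad spend, unilateral advertising pays off for the advertiser even though it is pure deadweight, so identical charities are pushed towards an outcome that delivers less to beneficiaries overall.

```python
# Toy sketch: two identical charities split a fixed donor pool in proportion
# to advertising spend. Ads raise a charity's share but reach no beneficiary,
# so mutual advertising wastes funds -- a prisoner's-dilemma-like structure.

POOL = 100.0     # total donations available
BASELINE = 0.05  # visibility a charity gets even with zero advertising

def program_output(my_ad, other_ad):
    """Funds reaching beneficiaries for a charity spending fraction `my_ad` on ads."""
    share = (BASELINE + my_ad) / (2 * BASELINE + my_ad + other_ad)
    return POOL * share * (1.0 - my_ad)

# If neither advertises, each charity delivers 50 to beneficiaries.
assert abs(program_output(0.0, 0.0) - 50.0) < 1e-9

# Unilaterally advertising is profitable for the advertiser...
assert program_output(0.2, 0.0) > program_output(0.0, 0.0)

# ...but if both advertise, beneficiaries get less in total than with no ads.
assert 2 * program_output(0.2, 0.2) < 2 * program_output(0.0, 0.0)
```

The exact numbers are arbitrary; the point is only that share-stealing expenditure can be individually rational while collectively wasteful.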
Good is hard to measure, and most things that are hard to measure have markets that end up in very inefficient equilibria (healthcare, education, public transport) and are, in many cases, better off being centrally regulated (to an extent).
Counting on the people in the market to care about externalities, as you do above, passes the buck, and is actually a vulnerability of the system: people who arrive and do not care about externalities would then have better-looking numbers. Also, humans are bad at noticing all of their externalities: I would hardly expect an AI safety researcher to be good at considering how large the ecological footprint of their solution is, or even to think about doing so. Instead, a regulatory body can set standards that have to be met, making it easier for sellers to know what they need in order to compete on the market. The free market is bad at solving this.
Hope these make sense and serve as discussion points for further thinking! Let me know your thoughts; I am curious whether this makes you update away from your position, or whether you had already considered these points in ways I did not fully grasp.