Hi Michael, unfortunately it is late where I am, so the clarity of my comment may suffer, but I feel that if I do not answer now I may forget later. With perfect being the enemy of the good, I hope these serve as good-enough intuition pumps for my disagreements:
An example of a market where the buyer buys for others is healthcare, where insurers, hospitals, doctors, and patients all coexist: patients buy insurance, insurers pay hospitals, and hospitals pay doctors (in the US, doctors may work as small sole traders within the hospital, like a shop in a mall). As a result, a lot of market failure happens (moral hazard, adverse selection). In your case, you would have to assume that each seller and buyer is perfectly informed and perfectly able to communicate the needs of those they help, which is problematic. In an ordinary market, I know how much I need something, so I buy it; here, a fund has to guess how much donors wanted something improved, while a charity guesses how much improvement the recipients actually got, and then they meet in the middle? Troublesome. Perhaps in this case you should just pay the smartest charity people (a la Charity Entrepreneurship) and trust them to do the best they can, instead of having them spend energy competing with others to prove their worth.
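To make the information problem concrete, here is a minimal sketch (all noise levels and numbers are invented, not taken from any real data) of how estimation error compounds along the chain: the charity estimates recipient benefit with noise, the fund estimates the charity's value with further noise on top, and the fund then funds whichever of two interventions looks better. Even modest noise at each link makes the fund pick the genuinely worse option a substantial fraction of the time:

```python
import random

random.seed(0)

def noisy(x, sd):
    # An observer sees the true value plus independent Gaussian noise.
    return x + random.gauss(0, sd)

n = 10_000
misallocations = 0
for _ in range(n):
    # True impact of two competing interventions (unknown to everyone).
    a, b = random.uniform(0, 1), random.uniform(0, 1)
    # Charity estimates recipient benefit with noise; the fund then
    # estimates the charity's value with further noise on top of that.
    fund_a = noisy(noisy(a, 0.3), 0.3)
    fund_b = noisy(noisy(b, 0.3), 0.3)
    # The fund backs whichever looks better; count when it backs the worse one.
    if (fund_a > fund_b) != (a > b):
        misallocations += 1

print(f"fund backs the worse intervention {misallocations / n:.0%} of the time")
```

With direct buyers (one noisy observation instead of two stacked ones) the misallocation rate drops, which is the intuition for why adding intermediaries to the chain is costly.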
This brings in the problem of market distortions through advertising: whichever charity spends more on looking good to buyers can raise more than an equivalent one that does not, so the equilibrium drifts towards advertising. This can take the form of all sorts of signalling, which creates noise.
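As a toy illustration of that pressure (all figures invented): suppose two charities are equally effective per programme dollar, but donors can only see advertising, so funding follows ad spend. The heavier advertiser can then come out ahead, and even deliver more total impact, despite wasting more of each dollar raised:

```python
budget = 100.0   # each charity's current budget, split between ads and programmes

def funding_share(ad_a, ad_b):
    # Donors observe only advertising; funding splits in proportion to ad spend.
    return ad_a / (ad_a + ad_b)

# Charity A spends 40% of its budget on advertising, charity B only 5%.
ad_a, ad_b = 0.40 * budget, 0.05 * budget
share_a = funding_share(ad_a, ad_b)

# Next round's donor pool, and the impact actually delivered per dollar raised.
raised_a, raised_b = 1000 * share_a, 1000 * (1 - share_a)
impact_a = raised_a * (1 - 0.40)   # only 60 cents of A's dollar reaches programmes
impact_b = raised_b * (1 - 0.05)   # 95 cents of B's dollar reaches programmes

print(f"A raises {raised_a:.0f} and delivers {impact_a:.0f} of impact")
print(f"B raises {raised_b:.0f} and delivers {impact_b:.0f} of impact")
```

The less efficient charity dominates simply because visibility, not impact, determines funding, so every charity is pushed to match its rival's ad spend.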
Good is hard to measure, and most things that are hard to measure end up with markets in very inefficient equilibria (healthcare, education, public transport); in many cases they are better off being centrally regulated, to an extent.
Counting on the people in the market caring about externalities, as you do above, passes the buck and is actually a vulnerability of the system: participants who do not care about externalities would then have better-looking numbers. Also, humans are bad at noticing all of their externalities; I would hardly expect an AI safety researcher to be good at estimating the ecological footprint of their solution, or even to think of doing so. Instead, a regulatory body can set standards that have to be met, making it easier for sellers to know what they need in order to compete on the market. The free market is bad at solving this.
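The vulnerability reduces to simple arithmetic (numbers invented for illustration): if the visible score does not subtract externalities, the seller who ignores them reports the better number even when the true net value is identical:

```python
# Two hypothetical sellers with the same budget.
budget = 100

# The careful seller spends part of the budget mitigating an externality;
# the careless seller spends everything on measurable output.
mitigation_cost = 30
careful_output = budget - mitigation_cost    # 70 units produced, no harm caused
careless_output = budget                     # 100 units produced, 30 units of harm

# The market score counts only measured output, so the careless seller wins...
assert careless_output > careful_output

# ...even though net value (output minus harm imposed on third parties) is equal.
careful_net = careful_output
careless_net = careless_output - 30
print(careful_output, careless_output, careful_net, careless_net)  # 70 100 70 70
```

A regulator mandating the mitigation for everyone removes the scoring advantage of ignoring the harm, which is the point of the standards-based alternative.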
Hope these make sense and serve as discussion points for further thinking! Let me know your thoughts; I am curious whether this makes you update away from your position, or whether you had already thought of this in ways I did not fully grasp.