Thanks for this. Reading this, and other comments, I don’t think I’ve managed to convey what I think could and should be distinctive about effective altruism. Let me try again!
In normal markets, people seek out the best value for themselves.
In effective altruism (as I’m conceiving of it) people seek out the best value for others. In both cases, people can and will have different ideas of what ‘value’ means in practice; and in the EA market, people may also disagree over who the relevant ‘others’ are.
Both of these contrast with the ‘normal’ charity world, where people seek out value for others, but there is little implicit or explicit attempt to seek out the best value for others; it’s not something people have in mind. A major contribution of EA thinking is to point this out.
The normal market and EA worlds thus have something in common that distinguishes them from the regular charity world. The point of the post is to think about how, given this commonality, the EA market should be structured to achieve the best outcomes for its participants; my claim is that, given this similarity, the presumption should be that the EA market and the normal market run along similar lines.
If it helps, try to momentarily forget everything you know about the actual EA movement and ask “Okay, if we wanted to design a ‘maximum altruist marketplace’ (MAM), a place people come to shop around for the best ways to use resources to help others, how would we do that?” Crucially, in the MAM, just like in a regular market, you don’t want to assume that you, as the social planner, have a better idea of what people want than they do themselves. That’s the Hayekian point (with apologies, I think you’ve got the wrong end of the stick here!).
Pace you and Ben West above, I don’t think the fact that people are aiming at value for others, rather than themselves, invalidates the analogy. There seems to be a background assumption of “well, because people are buying for others in the MAM, and they don’t really know what others want, we (the social planner) should intervene”. But notice you can say the same thing in normal markets: “people don’t really know what they want, so we (the social planner) should intervene”. Yet we are very reluctant to intervene in the latter case. So, presumably, we should be reluctant here too.
Of course, we do think it’s justified to intervene in normal markets to some degree (e.g. alcohol sales restricted by age), but each intervention needs its own justification; not all interventions are warranted. The conversation I would like to have regarding the MAM is about which interventions are justified, and why.
I get the sense we’re slightly speaking past each other. I am (1) claiming that the maximum altruist marketplace should exist, and then (2) suggesting how EA could move closer to that. It seems you, and maybe some others, are not sold on the value of (1): you’d rather focus on advocating for particular outcomes, and are indifferent to (1). Note it’s analogous to someone saying “look, I don’t care whether there’s a free market: I just want my company to be really successful.”
I can understand that many people won’t care whether a maximum altruist marketplace exists. I, for one, would like it to exist; it seems an important public good. I’d also like the central parts of the EA movement to fulfil that role, as they seem best placed to do it. If the EA movement (or, rather, its central parts) ends up promoting very particular outcomes, then it loses much of what appeared to be distinctive about it, and it looks more like the rest of the charity world.
Thanks for the response.
You point out that both in markets and in EA (at least its idealised version), people are deliberately seeking out the most value for themselves or others, in contrast to much of the charity world, where people don’t tend to think of what they’re doing as seeking out the most value for themselves or others. That sounds roughly right, but I don’t think it follows that EA is best imagined or idealised as a kind of market. Though I’m not suggesting you claim that it does follow.
It also seems worth pointing out that in some sense there are literal markets for ‘normal charity’ interventions, like the different options I can choose from to sponsor a cute animal as a Christmas gift for someone. And these are markets where people are in some sense choosing the best or most ‘valuable’ deal (insofar as I might compare charities, and those charities will do various things to vie for my donation). I think this shows that the “is this a market?” test alone does not necessarily delineate your idealised version of EA from ‘normal charity’. Again, I’m not suggesting you make that exact claim, but I think it’s worth getting clear on.
Instead, as you suggest, it’s what the market is in that matters — in the case of EA we want a market for “things that do the most good”. You could construe this as a difference in the preferences of the buyers, where the preferences of EA donors are typically more explicitly consequentialist / welfarist / cosmopolitan than donors to other kinds of charity. So I guess your claim is not that being a market in charitable interventions would make EA distinctive, but rather that it is or should be a particular kind of market where the buyers want to do the most good. Is that a fair summary of your view?
If so, I think I’m emphasising that, descriptively, the “...doing the most good” part may be more distinctive of the EA project than the “EA is a market for...” part. Normatively, I take you to want EA to be more like a competitive market, and there I think there are certainly features of competitive markets that seem good to move towards, but I’m also hesitant to make the market analogy the central guide to how EA should change.
A couple of other points:
I still don’t think the Hayekian motivation for markets carries over to the EA case, at least not as you’ve made the pitch. My (possibly poorly remembered) understanding was that markets are a useful way to aggregate information about individuals’ preferences and affordances via the price discovery mechanism. It’s true that the EA system as a whole (hopefully) discovers things about what the best ways to help people are, but not through the mechanism of price discovery! In fact, I’d say the way it uncovers information is much the same as how a planner could uncover it: by commissioning research, etc. Maybe I’m missing something here.[1]
I agree that the fact people are aiming at value for others doesn’t invalidate the analogy. Indeed, people buy things for other people in normal markets very often.
On your point about intervention, I guess I’m confused about what it means to ‘intervene’ in the market for doing the most good, and who is the ‘we’ doing the intervening (who presumably are neither funder nor org). Like, what is the analogy to imposing taxes or subsidies, and what is the entity imposing them?
You characterise my view as being indifferent on whether EA should be more like a market, and in favour of advocating for particular causes. I’d say my view is more that I’m just kinda confused about exactly what the market analogy prescribes, and as such I’m wary of using the market metaphor as a guide. I’d probably endorse some of the things you say it recommends.
However, I strongly agree that if EA just became a vehicle for advocating a fixed set of causes from now on, then it would lose a very major part of what makes it distinctive. Part of what makes EA distinctive is the set of features that identifies those causes: a culture of open discussion and curiosity, norms around good epistemic practice, a relatively meritocratic job market, and a willingness on the part of orgs, funders, and individuals to radically reassess their priorities in light of new evidence. Those things have much in common with free markets, but I don’t think we need the market analogy to see their merit.
Another disanalogy might be that price discovery works through an adversarial relationship where (speaking loosely) buyers care about output for money and sellers care about money for input. In the EA case, though, buyers care about altruistic value per dollar, while sellers (e.g. orgs) don’t care about profit: they often also care about altruistic value per dollar. So what is the analogous price discovery mechanism?