Moderate long-run EA doesn’t look close to having fully formed ideas to me, and therefore it seems to me a strange way to introduce people to EA more generally.
you’ll want to make investments in technology
I don’t understand this. Is there an appropriate research fund to donate to? Or are we talking about profit-driven capital spending? Or just going into applied science research as part of an otherwise unremarkable career?
and economic growth
Who knows how to make economies grow?
This will mean better global institutions, smarter leaders, more social science
What is a “better” global institution, and is there any EA writing on plans to make any such institutions better? (I don’t mean this to come across as entirely critical—I can imagine someone being a bureaucrat or diplomat at the next WTO round or something. I just haven’t seen any concrete ideas floated in this direction. Is there a corner of EA websites that I’m completely oblivious to? A Facebook thread that I missed (quite plausible)?)
I have even less idea of how you plan to make better politicians win elections.
More social science I can at least understand: more policy-relevant knowledge --> hopefully better policy-making.
Underlying some of what you write is, I think, the idea that political lobbying or activism (?) could be highly effective. Or maybe going into the public service to craft policy. And that might well be right, and it would perhaps put this wing of EA, should it develop, comfortably within the sort of common-sense ideas that you say it would. (I say “perhaps” because the most prominent policy idea I see in EA discussions—I might be biased because I agree with and read a lot of it—is open borders, which is decidedly not mainstream.)
But overall I just don’t see where this hypothetical introduction to EA is going to go, at least until the Open Philanthropy Project has a few years under its belt.
Is there an appropriate research fund to donate to? Or are we talking about profit-driven capital spending? Or just going into applied science research as part of an otherwise unremarkable career? Who knows how to make economies grow? What is a “better” global institution, and is there any EA writing on plans to make any such institutions better?
I’d also find it helpful to know the answers to these questions. In particular, to compare like with like, it would be interesting to know how advocates of long-run focused interventions would recommend spending a thousand dollars rather than funding, say, bednet distribution.
This is a key action-relevant question for me and others. I’ve asked quite a few people, but haven’t yet heard an answer that I’ve personally been impressed by. I also haven’t been given many specific charities or interventions, which leaves the argument in the realm of intellectually interesting theory rather than concrete practicality. Of course this isn’t to say that there aren’t any, which is why I ask! (I have made an effort to ask quite a few far-future focused people though.)
(I know some people advocate saving your money until a good opportunity comes up. Paul has an interesting discussion of this here.)
I agree, and I’d add that what I see as one of the key ideas of effective altruism, that people should give substantially more than is typical, is harder to get off the ground in this framework. Singer’s pond example, for all its flaws, makes the case for giving a lot quite salient, in a way that I don’t think general considerations about maximizing the impact of your philanthropy in the long term are going to.
That’s true, though you can just present the best short-run thing as a compelling lower bound rather than an all considered answer to what maximizes your impact.
To clarify, I was defining the different forms of EA more along the lines of ‘how they evaluate impact’, rather than which specific projects they think are best.
Short-run focused EA focuses on evaluating short-run effects.
Long-run focused EA also tries to take account of long-run effects.
Extreme long-run EA combines a focus on long-run effects with other unintuitive positions such as a focus on specific xrisks. Moderate long-run EA doesn’t.
The point of moderate long-run EA is that it’s much less clear which interventions are best by these standards.
I wasn’t trying to say that moderate long-run EA should focus on promoting economic growth and building better institutions, just that these are valuable outcomes, and it’s pretty unclear that we should prefer malaria nets (which were mainly selected on the basis of short-run immediate impact) to other efforts to do good that are widely pursued by smart altruists outside of the EA community.
A moderate long-run EA could even think that malaria nets are the best thing (at least for money, if not human capital), but they'll be more uncertain and give greater emphasis to the flow-through effects.
Yes, moderate long-run EA is more uncertain and doesn’t have “fully formed” answers—but that’s the situation we’re actually in.
EAs haven't been as substantially involved in science funding, but it's a pretty common target for philanthropy. And many people invest in technology, or pursue careers in technology, in the interests of making the world better. My best guess is that these activities have a significantly larger medium-term humanitarian impact than aid. I think this is a common view amongst intellectuals in the US. We probably all agree that it's not a clear case either way.
The story with social science, political advocacy, etc., is broadly similar to the story with technology, though I think it’s less likely to be as good as poverty alleviation (or at least the case is more speculative).
Note that, e.g., spending money to influence elections is a pretty common activity; it seems weird to be so skeptical. And while open borders is very speculative, immigration liberalization isn't. I think the prevailing wisdom is that immigration liberalization is good for welfare, and there are many other technocratic policies in the same boat, where you'd expect money to be helpful.
It seems like this comes down to a distinction between effective altruism, meaning altruism which is effective, and EA referring to a narrower group of organizations and ideas. I am more interested in the former, which may account for my different view on this point. The point of the introduction also depends on who you are talking to and why (I mostly talk with people whose main impact on the world is via their choice of research area, rather than charitable donations; maybe that means I’m not the target audience here).
It seems like this comes down to a distinction between effective altruism, meaning altruism which is effective, and EA referring to a narrower group of organizations and ideas.
I’m happy to go with your former definition here (I’m dubious about putting the label ‘altruism’ onto something that’s profit-seeking, but “high-impact good things” are to be encouraged regardless). My objection is that I haven’t seen anyone make a case that these long-term ideas are cost-effective. e.g.,
My best guess is that these activities have a significantly larger medium term humanitarian impact than aid. I think this is a common view amongst intellectuals in the US. We probably all agree that it’s not a clear case either way.
Has anyone tried to make this case, discussing the marginal impact of an extra technology worker? We’d agree that as a whole, scientific and technological progress are enormously important, and underpin the poverty-alleviation work that we’re comparing these longer-term ideas to. But, e.g., if you go into tech and help create a gadget, and in an alternative world some sort of similar gadget gets released a little bit later, what is your impact?
The answer to that last question might be large in expectation-value terms (there’s a small probability of you making a profoundly different sort of transformative gadget), but I’d like to see someone try to plug some numbers in before it becomes the main entry point for Effective Altruism.
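To show what "plugging some numbers in" might look like, here is the bare structure of such an estimate. Every figure below is a hypothetical placeholder chosen for illustration, not a number anyone has actually defended:

```python
# Back-of-envelope expected-value sketch for a marginal tech career.
# All numbers are hypothetical placeholders for illustration only.

# Scenario 1: your gadget is merely released earlier than it otherwise
# would have been, so your counterfactual impact is the value of a
# short speed-up, not the value of the gadget itself.
annual_benefit = 10_000_000   # hypothetical social value of the gadget per year ($)
speedup_years = 0.1           # you advance the release by ~5 weeks
speedup_value = annual_benefit * speedup_years

# Scenario 2: a small chance you create something genuinely
# transformative that would not otherwise have existed.
p_transformative = 1e-4       # hypothetical probability
transformative_value = 1e9    # hypothetical social value ($)
tail_value = p_transformative * transformative_value

expected_value = speedup_value + tail_value
print(f"speed-up term:        ${speedup_value:,.0f}")
print(f"transformative term:  ${tail_value:,.0f}")
print(f"total expected value: ${expected_value:,.0f}")
```

The sketch makes the crux visible: with made-up inputs like these, the answer is dominated by whichever term you feel entitled to assume, which is exactly why I'd want to see someone defend actual inputs before this becomes the main entry point.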
Note that e.g. spending money to influence elections is a pretty common activity, it seems weird to be so skeptical.
When Ben wrote "smarter leaders", I interpreted it as some sort of qualitative change in the politicians we elect—a dream that would involve changing political party structures so that people good at playing internal power games aren't rewarded, and instead we get a choice of more honest, clever, and dedicated candidates. If, on the other hand, electing smarter leaders means donating to your preferred party's or candidate's get-out-the-vote campaign… well, I would like to see the cost-effectiveness estimate.
(Ben might also be referring to EAs going into politics themselves, and… fair enough. I doubt it'll apply to more than a small minority of EAs, but he only spent a small minority of his post writing about it.)
there are many other technocratic policies in the same boat, where you’d expect money to be helpful.
I think this is reasonable, and expectation-value impact estimates should be fairly tractable here, since policy wonks have often done cost-benefit analyses (leaving only the question of how much marginal donated dollars can shift the probability of a policy being enacted).
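The shape of such an estimate is simple; the published cost-benefit analysis supplies one factor and the only new guess needed is the probability shift. All numbers here are hypothetical placeholders:

```python
# Sketch of a policy-advocacy cost-effectiveness guesstimate.
# All inputs are hypothetical placeholders for illustration only.

net_benefit = 500_000_000    # policy's net benefit, from an existing cost-benefit analysis ($)
campaign_budget = 2_000_000  # hypothetical total advocacy spending ($)
delta_probability = 0.01     # guessed shift in enactment probability from that spending

# Expected benefit per donated dollar, assuming your marginal dollar
# shifts the probability in proportion to its share of the budget.
ev_per_dollar = net_benefit * delta_probability / campaign_budget
print(f"expected benefit per dollar: ${ev_per_dollar:.2f}")
```

The proportionality assumption is crude (returns to advocacy spending are presumably diminishing), but it isolates the one input, delta_probability, that nobody has yet estimated for me.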
Overall I still feel like these ideas, as EA ideas, are in an embryonic stage, since they lack cost-effectiveness guesstimates.