I don’t know Paul’s reply, but my guess is it would be similar to the argument he made about cash transfers: There might be some short-term multiplier, but in the long run the growth tends to diffuse out, because there are limits to really high growth being sustained for a really long time. This view would be similar to my own skepticism about the “haste consideration” ( http://www.utilitarian-essays.com/haste-consideration-revisited.html ).
Brian_Tomasik
Counterfactual credit assignment
Nice post, Bernadette. You make a good case that some people may need to have children to be most effective. My guess is the situation depends heavily on the individual, although assessing which type of individual you are may not be easy. (You might think you don’t want kids and then change your mind, or you might think you need kids for a brief time, after which the need fades.)
You could outsource the costs if that’s something you’re inclined to do. I would probably feel guilty about doing so myself.
The opportunity costs can vary a lot based on what your alternative career might have been (e.g., for a would-have-been CEO they’re much bigger than for a would-have-been librarian), as well as whether the parenting time comes from your existing leisure time or your existing work time.
5% above inflation seems reasonable if you invest in stocks, unless you think (as some do) that stock markets will systematically have lower returns in the future than they did in the past. I don’t see why a risk-free rate would be appropriate, since stocks aren’t so risky that they cause problems in most practical situations.
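As a rough illustration of why the assumed real rate matters so much here (the 1% and 5% figures below are placeholder assumptions for the sake of the example, not forecasts):

```python
# Illustrative only: compare how $10,000 compounds over 30 years under a
# risk-free-like real rate (~1%) versus an equity-like real rate (~5%).
# Both rates are assumptions chosen for the example.

def future_value(principal: float, real_rate: float, years: int) -> float:
    """Compound `principal` at an annual inflation-adjusted `real_rate` for `years`."""
    return principal * (1 + real_rate) ** years

principal = 10_000
years = 30

risk_free = future_value(principal, 0.01, years)  # ~ $13,478
equity = future_value(principal, 0.05, years)     # ~ $43,219

print(f"Risk-free (~1% real): ${risk_free:,.0f}")
print(f"Equity (~5% real):    ${equity:,.0f}")
```

Over 30 years the equity-like rate ends up more than three times higher than the risk-free rate, which is why the choice of discount rate swings opportunity-cost estimates like these so dramatically.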
Bernadette makes some good criticisms, and I’ve updated my piece in response. I now put the opportunity-cost figure around $100K of present value, which seems low but not obviously too low if much of the parenting time is not “work.” I also changed the opportunity cost from being about extra income to being about how much you value what you would have been doing instead.
Individual differences seem big here. While there are some people like Bernadette and Julia who give extensive thought to this issue, there are others who don’t actually care about kids as much but just go along with social custom, spousal pressure, or the results of carelessness with birth control. It’s this latter group that my piece is mainly intended to speak to. I don’t know the relative proportions of different types of people in the population.
Thanks for those notes. :)
http://economics.mit.edu/files/637 says the US Social Security Administration used a 7% real rate of return, but the paper goes on to explain why that seems too high.
https://en.wikipedia.org/wiki/Equity_premium_puzzle says the equity premium for stocks “is generally accepted to be in the range of 3–7% in the long-run.” That piece lists reasons to deny an equity premium, similar to those you enumerate, but it also says “most mainstream economists agree that the evidence [for an equity premium] shows substantial statistical power.” I don’t know enough to evaluate this debate without further investigation, but your concerns about biases seem significant.
Thanks for the post. :)
It’s far from obvious that short-term human development is a good metric for far-future trajectories. Indeed, some believe the opposite. I’m personally extremely ambivalent on the matter ( http://foundational-research.org/publications/differential-intellectual-progress-as-a-positive-sum-project/#economic-growth ).
As Nick Bostrom says in “Astronomical Waste,” what matters is the safety / wisdom with which we approach the future, not the speed. A lot of arguing needs to be done to say that speeding human development in the short run improves the safety of the future. I personally expect that many interventions are much better than human development for the far future, and short-term helping of humans may not be a very good proxy at all.
I agree that short-term helping of animals is also not a great proxy for long-term helping of animals, though the two may correlate because of memetic side effects. Memes might help make human development good for the far future too, though probably the effect is less than for animals because it’s already widely accepted that human suffering matters.
Fair enough about citing others who claim it’s a positive correlation. :)
The idea that the quality of the far future is strongly influenced in a compounding fashion by human empowerment strikes me as a rather specific and controversial model. From the outside, it looks like anchoring to human-poverty charities. If I were to come up with a list of variables to push on that I thought would causally improve the far future, Third-world poverty or economic growth probably wouldn’t make the top 10.
Of course, other variables that I would care about (e.g., degree of international cooperation, good governance, philosophical sophistication, quality of discourse, empathy, etc.) might happen to correlate well with poverty reduction or growth, but causation matters. Even if the welfare of elderly patients is correlated with a good far future, working to improve the welfare of elderly people probably isn’t the best place to push.
The discussion of where to push to make the far future better seems to me inadequately discussed, with different people assuming their own particular views. (Hence part of the importance of GPP. :) )
“that many short term welfare benefits to humans are likely to compound in a way that means that the size of short term benefit tracks the size of long term benefit.”
I think this is actually controversial in the EA community. My impression is that Eliezer Yudkowsky and Luke Muehlhauser would disagree with it, as would I. Others who support the view are likely to acknowledge that it’s non-obvious and could be mistaken. Many forms of short-term progress may increase long-term risks.
Hi Owen :)
Animal welfare can be about more than promoting empathy. For one thing, it’s about promoting empathy for nonhumans, which is a somewhat different thing from promoting empathy wholesale (which usually means being nicer to other people). Secondly, animal welfare as a case study can raise a number of important ethical issues, such as naturalistic fallacies, welfare- vs. rights-based empathy, how far we think sentience extends, how to weigh minds of different complexities, population ethics, and lots more.
Also, animal welfare is quite sticky, which means it could be a good way to draw people in to these issues and get them excited about them.
I agree that, e.g., veg outreach is not the very best way to help animals. I think talking explicitly about things like wild-animal suffering and digital sentients in the future can be better, which is why I focus on those. But veg outreach is probably not vastly worse, and it can be a good donation suggestion for mainstream donors who are weirded out by far-future ideas.
As far as: “I do think that optimising for long-term animal welfare is not the best place to stop in picking an instrumental goal, because it’s quite hard to see how things affect it.” I don’t agree with this, depending how broadly we define “animal.” It seems likely to me that most of the sentience of the far future will reside in non-human-like creatures (robots, sentient subroutines, simulated insects, etc.), and most of the far-future-related things I write about are relevant to improving long-term “animal” welfare in that sense.
I happen to agree that promoting empathy (for animals) is probably better than promoting welfare directly, but a devil’s advocate might point out that beliefs often follow actions, and maybe directly changing people’s practices toward animals would be a more concrete way to change values.
I think whether there is a long-term society at all is relatively hard to change, except maybe in the case of AI risk. I think our expected influence through values is not obviously smaller and may be larger than our expected influence through whether there is a future, especially for non-mainstream values. This is doubly true if you’re a negative utilitarian, since for NUs there aren’t feasible ways to decrease the probability of a future ( http://foundational-research.org/publications/how-would-catastrophic-risks-affect-prospects-for-compromise/ ), and doing so isn’t nice to other value systems ( http://foundational-research.org/publications/reasons-to-be-nice-to-other-value-systems/ ), so you have to focus on improving the quality of the future. By the same token, it’s nicer for non-NUs to focus on improving the quality of the future (which is something NUs can support) than on making the future more likely (which is something NUs oppose).
Great points. :) There was a discussion on Felicifia in 2012 about the value of empathy vs. the value of feeling moral duty: http://felicifia.org/viewtopic.php?f=7&t=492&start=40#p7120 David Brooks argued that feelings without follow-through aren’t very useful. Likewise, it’s often said that Buddhist monks have immense empathy, but how often do you see them lobbying for more humane policies or something? Probably by “empathy” what Owen had in mind was more substantive empathy, like a culture of feeling and acting on compassion for powerless creatures.
If the Joneses are donating a bunch to charity, then keeping up with them could be great. :) Things like The Giving Pledge seem promising for this reason, because they suggest to billionaires that if you want higher status, you should donate a lot.
These are useful considerations, Toby. :)
Other reasons to do (at least some) direct work sooner:
1. In order to build a movement, you have to have something to build the movement around. If you do actually interesting research, you can attract people who are interested in that research. If you just talk about doing research, you attract people who like to talk about research. I really think there’s something to be said for just tackling something that looks important, trying to do a good job, and seeing who joins you and where it ends up, rather than thinking meta-meta about how best to go about it for a long time. That said, I also see high value in thinking hard for a long time, but I contend that you need both together, to bounce ideas off each other, rather than only sitting in an armchair for 10 years. This ties into the next point...
2. Doing concrete research can teach you things no amount of abstract theorizing would have. It’s like the philosophy behind agile development: Rather than making a grand plan, try some stuff, see how it works, get acquainted with the situation on the ground, and then figure out where to go next. I think it’s useful to get a little bit of deep knowledge of a topic in addition to more shallow knowledge, in order to calibrate your picture of things. It’s similar to the reason philosophy courses have you actually read Plato and Hobbes rather than just reading other people talk about them. You get a special kind of understanding by seeing things up close.
3. Lots of things could happen between now and later. Your movement might disband. You might lose interest. You might decide you want to spend time on something non-altruism related. And so on. It can be good to take advantage of what you have when you have it.
Finally, a last point that can go either way depending on the circumstances:
4. Comparative advantage: If you’re an awesome AI researcher, you should probably do direct AI work, not movement-building, and the opposite if you’re an awesome evangelist.
Sorry, I see you already mentioned a few of these points in the piece.
Yes, I think part of the reason to get hands-on is instrumental, but I think the direct value of doing so is relevant too. Eventually someone has to do the work, and while I do think the value of an EA’s labor is often higher than that of other smart people, I don’t think it’s vastly higher. At some point, somebody needs to do the work. I think it’s often good to try some stuff now, see how the situation looks, and then keep working on the more promising areas. That investigation work is not wasted if it’s shared publicly. As long as you don’t get mired too long in a highly narrow focus, you should be ok.
Nice post. It’s also worth noting that this version of the far-future argument appeals even to negative utilitarians, strongly anti-suffering prioritarians, Buddhists, antinatalists, and others who, for reasons other than a person-affecting view, don’t think it’s important to create new lives.
I also think even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future. The most likely so-called “extinction” event in my mind is human replacement by AIs, but AIs would be their own life forms with their own complex galaxy-colonization efforts, so I think work on AI issues should be considered part of “changing the direction of the future” rather than “making sure there is a future”.
It’s great you’re thinking about these issues.
I agree that AGI safety is plausibly the dominating consideration regarding takeoff speed. Thus, whether one wants a faster or slower takeoff depends on whether one wants safe AGI (which is not a completely trivial question, http://foundational-research.org/robots-ai-intelligence-explosion/#Would_a_human_inspired_AI_or_rogue_AI_cause_more_suffering , though I think it’s likely safe AI is better for most human values).
And yes, neuromorphic AGI seems likely to be safer both because it may be associated with a slow takeoff and because we understand how humans work, how to balance power with them, and so on. Arbitrary AGIs with alien motivational and behavioral systems are more unpredictable. In the long run, if you want goal preservation, you probably need AGI that’s different from the human brain, but goal preservation is arguably less of a concern in the short run; knowledge of how to do goal preservation will come with greater intelligence. In any case, neuromorphic AGIs are much more likely to have human-like values than arbitrary AGIs. We don’t worry that much about goal preservation with subsequent generations of humans because they’re pretty similar to us (though old conservatives are often upset with the moral degeneration of society caused by young people).
I agree that multipolar power dynamics could be bad, because this might lead to arms races and conflict relative to a quick monopoly by one group. On the other hand, it might allow for more representation by different parties.
Overall, I think the odds of a fast takeoff are sufficiently low that I’m not convinced it makes sense to focus on fast-takeoff work (even if some such exploration is worthwhile). There may be important first-mover advantages to shaping how society approaches slow takeoffs, and if slow takeoff is sufficiently probable, those may dominate in impact. In any case, the fast-slow distinction is not binary, and maybe the best place to focus is on scenarios where human-level AI takes over on a time scale of a few years. (Timescales of months, days, or hours strike me as pretty improbable, unless, say, Skynet gets control of nuclear weapons.)
Thanks for this post. :) I wrote a similar comment here.
Singer is criticized for spending tens of thousands of dollars on his ailing mother, but if he hadn’t done so, he would have been condemned as cold-hearted and cruel.
I think the “overhead ratio” heuristic is moderately rational because:

1. There are some real scam charities that skim a lot off the top. (Not quite a charity, but a recent example is the claim that Sarah Palin’s PAC only uses 3% of its funds for its intended purpose.)

2. If you have limited time and are only donating small amounts, it’s potentially not worth the effort to look up detailed information, so evaluating based on what’s available may be better than nothing. (At least you reduce your chance of donating to a scam that way.)

3. The work of many charities is hard to quantify. Development aid is probably among the easiest, but how do you quantify MIRI’s cost-effectiveness and compare it with other charities in its league? If two charities both seem to do good work per employee, but one charity spends more of its funds on its core employees, that charity is the better deal.
I agree it makes sense to wait in your position. If you are filing taxes, you’ll probably be taking the standard deduction this year. So you wouldn’t save on taxes by donating now, but you would save on taxes if you postponed the donations for a year when you itemize.
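A toy sketch of that bunching logic, with made-up numbers (the standard deduction, marginal rate, and donation amounts below are assumptions for illustration, not actual tax figures):

```python
# Hypothetical sketch of "bunching" donations to exceed the standard deduction.
# All figures are assumed for illustration, not real tax parameters.

STANDARD_DEDUCTION = 12_000  # assumed standard deduction
MARGINAL_RATE = 0.25         # assumed marginal tax rate
ANNUAL_DONATION = 8_000      # assumed yearly giving

def tax_savings(itemized_deductions: float) -> float:
    """Extra tax saved relative to simply taking the standard deduction,
    assuming donations are the only itemized deduction."""
    return max(0, itemized_deductions - STANDARD_DEDUCTION) * MARGINAL_RATE

# Donate every year: $8,000 never exceeds the standard deduction, so no extra savings.
yearly = 2 * tax_savings(ANNUAL_DONATION)   # 0.0 over two years

# Bunch two years of donations into one tax year: $16,000 itemized once.
bunched = tax_savings(2 * ANNUAL_DONATION)  # (16,000 - 12,000) * 0.25 = 1000.0

print(f"Savings donating yearly: ${yearly:,.0f}")
print(f"Savings when bunching:   ${bunched:,.0f}")
```

Under these assumed numbers, postponing and bunching the donations saves $1,000 in tax that donating the same total spread across two years would not.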
I also agree with the general sentiment of figuring out more what the landscape looks like. As an analogy, some people begin college thinking they know what they’ll major in; many of them end up changing their minds by the middle of sophomore year. Same thing with donations. Even after many years thinking about this topic, I’m still making significant updates to my assessments as I learn more.
That said, I don’t think it’s likely that one charity differs from another by more than 1000 times, except in rare cases (http://utilitarian-essays.com/robustness-against-uncertainty.html#why-even-out). However, charities can probably differ by 10-100 times, and this still makes it really important to think more about where to give.
Great post, Peter. :)