Feel free to message me on here.
What if the mother wasn’t there (say she is no longer alive) and it was just the dying baby? The only thing the baby would say is “wah wah wah” which is neither an argument for its own welfare nor the welfare of anyone else.
(I’m trying to demonstrate that the ability to speak up for yourself shouldn’t be a criterion in determining the strength of your moral rights...).
Generally I think that those in richer countries are going to shape the future, not those in poorer countries, so I’m not sure I agree with you about “wise decision processes” rising to the top if we end extreme poverty.
For example, if we create AI that causes an existential catastrophe, that is going to be the fault of people in richer countries.
Another example: I am concerned about risks of lock-in, which could enable mass suffering to persist for a very long time. E.g. we spread to the stars while factory farming is still widespread and so end up spreading factory farming too. Or we create digital sentience while we still don’t really care about non-human sentience and so end up creating vast amounts of digital suffering. I can’t see how ending poverty in lower-income countries is going to reduce these risks, which, if they happen, will be the fault of those in richer countries. Furthermore, ending factory farming seems important to widen the moral circle and reduce these risks.
What is the argument for health and development interventions being best from a long-term perspective?
I think animal welfare work is underrated from a long-term perspective. There is a risk that we lock in values that don’t give adequate consideration to non-human sentience, which could enable mass suffering to persist for a very long time. E.g. we spread to the stars while factory farming is still widespread and so end up spreading factory farming too. Or we create digital sentience while we still don’t really care about non-human sentience and so end up creating vast amounts of digital suffering. I think working to end factory farming is one way to widen the moral circle and prevent these moral catastrophes from occurring.
It is commonly assumed that most interventions fall prey to the “washing-out hypothesis”: the impact of an intervention becomes less significant as time goes on, meaning the effects of actions in the near future matter more than their long-term consequences. In other words, over time, the differences between the outcomes of various actions tend to fade or “wash out.” So in practice most people would assume the long-term impact of something like medical research is, in expectation, zero.
Longtermists aim to avoid “washing out”. One way is to find interventions that steer between “attractor states”. For example, extinction is an attractor state in that, once humans go extinct, they will stay that way forever (assuming humans don’t re-evolve). Non-extinction is also an attractor state, although to a lesser extent. Increasing the probability of achieving the better attractor state (probably non-extinction by a large margin, if we make certain foundational assumptions) has high expected value that stretches into the far future. This is because the persistence of the attractor states allows the expected value of reducing extinction risk not to “wash out” over time.
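To make the structure of that argument concrete, here’s a toy expected-value calculation. The numbers are purely illustrative assumptions of mine, not estimates from the paper:

```python
# Toy illustration of why attractor-state persistence stops expected value
# from "washing out". All numbers are invented; none come from the paper.

future_welfare_if_no_extinction = 1e15  # assumed total welfare of a long, non-extinct future
delta_extinction_probability = 1e-9     # assumed reduction in extinction probability

# Because non-extinction persists indefinitely, the whole future's welfare
# stays in play, so this product does not decay over time.
expected_value = delta_extinction_probability * future_welfare_if_no_extinction
print(expected_value)  # 1000000.0 "welfare units" in expectation
```

Even a tiny probability shift multiplies against a vast, persistent payoff, which is the core of the argument.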
This is all explained better in the paper The Case for Strong Longtermism, which I would recommend reading.
At the risk of getting off topic from the core question, which interventions do you think are most effective in ensuring we thrive in the future with better cooperative norms? I don’t think it’s clear that these would be EA global health interventions. I would think boosting innovation and improving institutions are more effective.
Also boosting economic growth would probably be better than so-called randomista interventions from a long-term perspective.
What do you think about people who do go through with suicide? These people clearly thought their suffering outweighed any happiness they experienced.
I’m not sure how I have stigmatised any particular response.
Thank you for justifying your vote for global health!
One counterargument to your position is that, with the same amount of money, one can help significantly more non-human animals than humans. Check out this post. An estimated 1.1 billion chickens are helped by broiler and cage-free campaigns in a given year. Each dollar can help an estimated 64 chickens, for a total of 41 chicken-years of life.
This contrasts with the roughly $5,000 needed to save a human life through top-ranked GiveWell charities.
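Putting the two figures side by side, here’s a rough back-of-the-envelope sketch using only the numbers cited above (which are themselves uncertain estimates):

```python
# Back-of-the-envelope comparison using the figures cited above.
# The underlying estimates are uncertain; this just shows the arithmetic.

cost_per_human_life = 5_000        # USD, top-ranked GiveWell charities
chickens_helped_per_dollar = 64    # estimate from the linked post
chicken_years_per_dollar = 41      # estimate from the linked post

# What the cost of saving one human life buys on the animal welfare side:
print(cost_per_human_life * chickens_helped_per_dollar)  # 320000 chickens helped
print(cost_per_human_life * chicken_years_per_dollar)    # 205000 chicken-years of life
```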
Personally I would gain more value from knowing why people would prefer $100m to go to global health over animal welfare (or vice versa) than from knowing whether they would prefer this. This is partly because it already seems clear that the forum (which isn’t even a representative sample of EAs) leans towards animal welfare over global health.
So if my comment incentivises people to comment more but vote less, then that is fine by me. Of course, my comment may not incentivise people to comment more, in which case I apologise.
Interesting to note that, as it stands, there isn’t a single comment on the debate week banner in favour of global health. There are votes for global health (13 in total at the time of writing), but no comments backing up those votes. I’m sure this will change, but I still find it interesting.
One possible reason is that the arguments for global health > animal welfare are often speciesist, and people don’t really want to admit that they are speciesist, but I’m admittedly not certain of this.
In a nutshell: there is more suffering to address among non-human animals, and it is a more neglected area.
What we need is an argument for why it would be good in expectation, compared to all these other cause areas.
Yeah, the strong longtermism paper elucidates this argument. I also provide a short sketch of the argument here. At its core is the expected vastness of the future, which allows longtermism to beat other areas. The argument for “normal” longtermism (i.e. not “strong”) has pretty much the same structure.
Future well-being does matter, but focusing on existential risk doesn’t necessarily lead to greater future well-being. It leads to humans being alive. If the future is filled with human suffering, then a focus on existential risk could be one of the worst focus areas.
Yes, that’s true. Again, we’re dealing with expectations, and most people expect the future to be good if we manage not to go extinct. But it’s also worth noting that reducing extinction risk is just one class of existential risk reduction. If you think the future will be bad, you can work to improve the future conditional on us being alive or, in theory, you can work to make us go extinct (but this is of course a bit out there). Improving the future conditional on us being alive might involve tackling climate change, improving institutions, or aligning AI.
And, to reiterate, while we focus on these areas to some extent now, I don’t think we focus on them as much as we would in a world where society at large accepts longtermism.
The analysis I linked to isn’t conclusive on longtermism being the clear winner if one only considers the short term. Under certain assumptions it won’t be the best. Therefore, considering only the short term, many may choose not to give to longtermist interventions. Indeed, this is what we see in the EA movement, where global health still reigns supreme as the highest-priority cause area.
What most longtermist analysis does is argue that, if you consider the far future, longtermism becomes the clear winner (e.g. here). In short, significantly more value is at stake in reducing existential risk because now you care about enabling far-future beings to live and thrive. If longtermism is the clear winner, then we shouldn’t see a movement that clearly prioritises global health; we should see a movement that clearly prioritises longtermist causes. This would be a big shift from the status quo.
As for your final point, I think I understand what you / the authors were saying now. I don’t think we have no idea what the far-future effects of interventions like medical research are. We can make a general argument that it will be good in expectation because it will help us deal with future disease, which will help us reduce future suffering. Could that be wrong? Sure, but we’re just talking about expected value. With longtermist interventions, the argument is that the far-future effects are significantly positive and large in expectation. The simplest explanation is that future well-being matters, so reducing extinction risk seems good because we increase the probability of there being some welfare in the future rather than none.
existential risk can be justified without reference to the far future
This is pretty vague. If existential risk reduction is roughly on par with other cause areas, then we would be justified in giving some amount of resources to it. If it is orders of magnitude more important, then we should greatly prioritise it over other areas (at least on the current margin). So factoring in the far future does seem to be very consequential here.
FWIW I think it’s pretty unclear that something like reducing existential risk should be prioritised just based on near-term effects (e.g. see here). So I think factoring in that future people may value being alive, and won’t want to be disempowered, can shift the balance towards reducing existential risk.
If future people don’t want to be alive, they can in theory go extinct (this is the option value argument for reducing existential risk). The idea that future generations will want to be disempowered is pretty barmy, but again, they can disempower themselves if they want to, so it seems good to at least give them the option.
Hey, did you ever look into the Moral Consequences of Economic Growth book?
I’m not sure I buy the claim that “We are not in a position to predict the best actions for the far future”.
estimating the impact of current actions on medical research in 10,000 years—or even millions of years—is beyond our capacity.
I would say the following would, in expectation, boost medical research even millions of years from now:
Not going extinct or becoming disempowered: if you’re extinct or completely disempowered, you can’t do medical research (and of course well-being would be zero or low!).
Investing in medical research now: if we invest in such research now, we bring forward progress. So, in theory, in millions of years we would be ahead of where we would have been if we had not invested now. If there’s a point at which medical research plateaus, then we would just reach that plateau earlier and have more time to enjoy the highest possible level of medical knowledge (a simple way to formalise this is sketched below).
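One way to formalise the “bringing progress forward” intuition (my own notation, just a sketch): let $f(t)$ be the level of medical knowledge at time $t$, and suppose investing now shifts the whole curve forward by $d$ years. The total gain over a long horizon $T$ is

$$\text{Gain} = \int_0^T \big[ f(t+d) - f(t) \big] \, dt \;\ge\; 0,$$

which is positive whenever $f$ is increasing. If $f$ plateaus at some maximum level $F_{\max}$, we reach the plateau $d$ years earlier and gain roughly $d \cdot F_{\max}$ in extra value (minus the comparatively small value of the first $d$ years).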
We cannot reliably predict what future generations will value.
They will probably value:
Being alive: another argument for not going extinct.
Having the ability to do what they want: another argument for not becoming permanently disempowered, and for not having a totalitarian regime control the world (e.g. through superintelligent AI).
Minimising suffering: OK, maybe they will like suffering, who knows, but in my mind that would mean things have gone very wrong. Assuming they want to minimise suffering, we should try to, for example, ensure factory farming does not spread to other planets and therefore persist for millennia. Or advocate for the moral status of digital minds.
It’s slightly odd this paper argues that:
The urgency of addressing existential risks does not depend on the far future; the importance of avoiding these risks is clear when focusing on the present and the next few generations.
But then also says:
Focusing on the far future comes at a cost to addressing present-day needs and crises, such as health issues and poverty.
I’m left uncertain whether the authors are in favour of spending to address existential risk, which would of course leave less money to address present-day suffering due to health issues and poverty.
Yes, but if I were to ask my non-EA friends what they give to (if they give to anything at all), they would say things like local educational charities, soup kitchens, animal shelters etc. I do think EA generally has more of a focus on saving lives.
Agreed!