global health and development will almost permanently be a pressing cause area
Really? You don’t think there will come a time, perhaps in the next few centuries, when pretty much everyone lives above an absolute poverty line and has access to basic health resources? Our World in Data shows the progress that has been made in reducing extreme poverty, and whilst there is more work to do, saying that global health and development will “permanently” be a pressing cause area seems an exaggeration to me.
Also, if you’re a longtermist now, you’ll probably be a longtermist later even if we do reduce existential risks from AI, bio, etc. There are other longtermist interventions that we are aware of, such as improving values, investing for the future, and economic growth.
I think when we reach the point where everyone lives above a certain poverty line and has access to basic health resources, the global distribution of wealth will still be very inefficient for maximising welfare, and redistributing resources from the globally richest to the globally poorest / increasing consumption of the poorest will still be one of the best available options to improve aggregate welfare.
Sidenote—but I think it’s better for debate to view this as disagreement, not exaggeration. I also don’t entirely agree with total utilitarianism or longtermism, if that makes my point of view easier to understand.
I think when we reach the point where everyone lives above a certain poverty line and has access to basic health resources, the global distribution of wealth will still be very inefficient for maximising welfare
Agreed.
redistributing resources from the globally richest to the globally poorest / increasing consumption of the poorest will still be one of the best available options to improve aggregate welfare.
I highly doubt this. Longtermism aside, I find it very hard to accept that redistribution of resources in a world where everyone lives above a certain poverty line would be anywhere near as pressing as reducing animal suffering.
I think it’s better for debate to view this as disagreement, not exaggeration.
Fair enough. “Permanently” is a very strong word though so I guess I disagree with its usage.
The quote was “almost permanently,” which I took to mean something like: of sufficient permanence that, for purposes of the topic of the post (focusing on cause areas vs. focusing on EA, which I read as being about the medium run), we can assume that global health and development will remain a pressing cause area (although possibly relatively less pressing than a new cause area, per point four).
I don’t think that’s inconsistent with a view that GHD probably won’t be a pressing cause area in, say, 250 years. Knowing whether it will or won’t doesn’t materially affect my answer to the question posed in the original post. (My ability to predict 250 years into the future is poor, so I have low confidence about the importance of GHD at that time. Or just about anything else, for that matter.)
Anyway, I wonder if part of the disagreement is that we’re talking about somewhat different things.
I might be wrong but I think “almost” was an addition and not there originally. It still reads weirdly to me.
From the follow-on comments I think freedomandutility expects GHD to be a top cause area beyond 250 years from now. I doubt this and even now I put GHD behind reducing animal suffering and longtermist areas so there does seem to be some disagreement here (which is fine!).
EDIT: actually I am wrong, because I quoted the word “almost” in my original comment. It still reads weirdly to me.
I also think that most future worlds in which humanity has its act together enough to have solved global economic and health security are worlds in which it has also managed to solve a bunch of other problems, which makes cause prioritization in this hypothetical future difficult.
I wouldn’t view things like “basic health resources” in absolute terms. As global prosperity and technology increase, the bar for what I’d consider basic health services rises as well. That’s true, to a lesser extent, of poverty more generally.
Sure, but there is likely to be diminishing marginal utility of health resources. At a certain point, access to health resources will be such that it will be very hard to argue that boosting it further would be a more pressing priority than, say, reducing animal suffering (and I happen to think the latter is more pressing now anyway).
One could say much the same thing about almost any cause, though—such as “investing for the future, economic growth” at the end of your comment. The diminishing marginal returns that likely apply to global health in a world with 10x as many resources will generally be applicable there too.
Different causes have different rates at which marginal utility diminishes. Some are huge problems so we are unlikely to have even nearly solved them in a 10x richer world (e.g. wild animal suffering?) and others can just absorb loads of money.
Investing for the future is one such example—we can invest loads of money with the hope that one day it can be used to do a very large amount of good.
Also, in a world where we are 10x richer I’d imagine reducing total existential risk to permanently minuscule levels (existential security) will still be a priority. This will likely take loads of effort and I’d imagine there is likely to always be more we can do (even better institutions, better safeguards etc.). Furthermore, in a world where we are all rich, ensuring safety becomes even more important because we would be preserving a really good world.