I agree that the “longtermism”/“near-termism” split is a bad description of the real divides in EA. However, I think your proposed replacements could end up imposing one worldview on a movement that is usually a broad tent.
You might not think speciesism is justified, but there are plenty of philosophers who disagree. If someone cares about saving human lives, without caring overmuch whether those people go on to be productive, should they be shunned from the movement?
I think the advantage of a label like “Global health and development” is that it doesn’t require a super specific worldview: you make your own assumptions about what you value, then you can decide for yourself whether GHD works as a cause area for you, based on the evidence presented.
If I were picking categories, I’d simply make them more specific, and then further subdivide each into “speculative” and “grounded” based on its level of weirdness:
Grounded GHD would be malaria nets; speculative GHD would be, like, charter cities or something.
Grounded non-human suffering reduction would be reducing factory farming; speculative non-human suffering reduction would be reducing shrimp suffering.
Grounded x-risk/catastrophe reduction would be pandemic prevention; speculative x-risk/catastrophe reduction would be malevolent hyper-powered AI attacks.
To clarify: I’m definitely not recommending “shunning” anyone. I agree it makes perfect sense to continue to refer to particular cause areas (e.g. “global health & development”) by their descriptive names, and anyone may choose to support them for whatever reasons.
I’m specifically addressing the question of how Open Philanthropy (or other big funders) should think about “Worldview Diversification” for purposes of having separate funding “buckets” for different clusters of EA cause areas.
This task does require taking some sort of stand on which “worldviews” are sufficiently warranted to be worth funding, with real money that could otherwise have been used elsewhere.
Especially for a dominant funder like OP, I think there is great value in legibly communicating its honest beliefs. Based on what it has been funding in GH&D, at least historically, it places great value on saving lives as ~an end unto itself, not as a means of improving long-term human capacity. My understanding is that its usual evaluation metrics in GH&D have reflected that (and historic heavy dependence on GiveWell is clearly based on that). Coming up with some sort of alternative rationale that isn’t the actual rationale doesn’t feel honest, transparent, or . . . well, open.
In the end, Open Phil recommends grants out of Dustin and Cari’s large bucket of money. If their donors want to spend X% on saving human lives, it isn’t OP’s obligation to backsolve a philosophical rationale for that preference.
I’m suggesting that they should change their honest beliefs. They’re at liberty to burn their money too, if they want. But the rest of us are free to try to convince them that they could do better. This is my attempt.
I upvoted this comment for the second half about categories, but this part didn’t make much sense to me:
I think the advantage of a label like “Global health and development” is that it doesn’t require a super specific worldview: you make your own assumptions about what you value, then you can decide for yourself whether GHD works as a cause area for you, based on the evidence presented.
I can imagine either speciesism or anti-speciesism being considered a “specific” worldview; likewise person-affecting ethics or total ethics, and likewise pure time discounting or longtermism. So I don’t think the case for GHD is obviously less worldview-specific than the case for any other cause area, but maybe there’s some sense of the word “specific” you have in mind that I haven’t thought of.
Moreover (and again, I’m not sure what you’re saying, so I’m not sure this is relevant), even once you’ve decided that GHD is good for you, your philosophical and moral commitments will continue to influence which specific GHD interventions seem worthwhile, and you’ll continue to disagree with other people in GHD on philosophical grounds. For example:
whether creating new lives is good, or only saving existing lives,
how saving children under 5 compares with saving the lives of adults,
how tolerant you are of paternalism (influencing the choices of others) vs. insisting on autonomy and self-determination.