It’s a great idea but the devil is definitely in the details. You get at much of this, but maybe underestimate the challenge, especially for things like ‘getting customization without overwhelming people and dissolving the impact’.
For the traditional global development stuff this could be somewhat tractable; CGD’s Commitment to Development Index (the CDI) took a stab at something like this at the country level.
… Although even there, there are still loads of moral-weight and ‘moonshot project’ issues to consider.
But going further, to include animals, x-risk/s-risk, longtermism, and cause prioritization itself, opens up huge cans of worms. We may be able to debate this stuff intelligently, but to outsiders it might look like a huge black box, or “that’s like, just your opinion, man”.
A big challenge, but it seems worth attempting to me. And the process of trying to do this, and of engaging wider audiences, seems valuable in itself.
Is the CGD’s Commitment to Development Index an expression of policy and spending quantifications within different sectors? I also wonder whether ‘Commitment’ is the right name for it, since some countries are advantaged differently. For example, because income levels differ, nations have limited control over how much they can spend, in absolute terms, on international peacekeeping, one of the three subcomponents of the Security component.
This index can, but does not have to, motivate countries to optimize for an ‘ideal approach.’ That can be valuable when countries make better choices as a result (e.g. realizing they get a better score if they divest from the arms trade and invest in fishing alternatives), but harmful when the intent of the index differs from the effect of the calculation (e.g. if shifting large sums from subsidized organic agriculture to the arms trade scored favorably under the heading of ‘peacekeeping’).
A similar consideration applies to this list: there should be no way to ‘trick’ the index. One way to address this is to enable adjustments alongside, or based on, feedback. But that, together with the index then being perceived as less absolute, can enable partiality based on convenience or on strategically attracting donors (for example, utility monsters could be weighted down if they would make someone look far better than everyone else, or up, if there is a large ‘utility monster’ donor who could be drawn to EA by their listing). This, in turn, could be addressed by inviting the top-ranked people to discuss possible improvements to the metrics, so that impartiality considerations get implemented as a courtesy they extend to each other, maybe. Just some thoughts.
Yeah, I’ve lately been considering just three options for moral weights: ‘humans only’, ‘including animals’, and ‘longtermist’, with the first two being implicitly neartermist.
It seems like we don’t need ‘longtermist with humans only’ and ‘longtermist including animals’ because if things go well the bulk of the beings that exist in the long run will be morally relevant (if they weren’t we would have replaced them with more morally relevant beings).
But even within ‘humans only’ (say, weighted by ‘probability of existing’ … or counting only those sure to exist), there are still difficult moral parameters, such as:
suffering vs happiness
death of a being with a strong identity vs suffering
death of babies vs children vs adults
(Similar questions ‘within animals’ too).
Agreed. I guess my intuition is that using WALYs for humans+animals (scaled for brain complexity), humans only, and longtermist beings will be a decent enough approximation for maybe 80% of EAs and over 90% of the general public. Not that it’s the ideal metric for these people, but good enough that they’d treat the results as pretty important if they knew the calculations were done well.
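To make that concrete, here is a minimal sketch of a brain-complexity-scaled WALY calculation; every weight and per-dollar figure below is an invented placeholder, not an actual estimate:

```python
# Minimal sketch: WALYs per dollar under two of the moral-weight options.
# All numbers are placeholders purely for illustration.

# Assumed brain-complexity scaling factors (hypothetical, not real estimates).
BRAIN_COMPLEXITY = {"human": 1.0, "chicken": 0.05}

def walys_per_dollar(welfare_gains, weights):
    """welfare_gains: species -> unweighted welfare-adjusted life years per dollar donated."""
    return sum(weights.get(species, 0.0) * gain for species, gain in welfare_gains.items())

# A hypothetical charity that mostly helps chickens, with a small human spillover.
charity = {"human": 0.002, "chicken": 1.5}

humans_only = walys_per_dollar(charity, {"human": 1.0})          # 0.002
humans_and_animals = walys_per_dollar(charity, BRAIN_COMPLEXITY)  # ~0.077

print(f"humans only:       {humans_only} WALYs/$")
print(f"including animals: {humans_and_animals} WALYs/$")
```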
Do you mean all three separately (humans, animals, potential people) or trying to combine them in the same rating?
My impression was that keeping them separate could work, but that if you combine them, ‘one of the three will overwhelm the others’.
If you do a linear weighting, this is expected. But one approach to worldview diversification is to normalize within each worldview before combining.
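For example, a rough sketch of what ‘normalize, then combine’ could look like, assuming made-up cost-effectiveness numbers and an equal split of credence across the three worldviews (both are placeholders, not claims about the real values):

```python
# Illustrative sketch: normalize each worldview's cost-effectiveness scale before combining,
# so one worldview's much larger raw numbers don't dominate the combined rating.

# Hypothetical cost-effectiveness of one donation, in each worldview's own units (e.g. WALYs/$).
raw_scores = {"humans_only": 0.02, "including_animals": 3.0, "longtermist": 5000.0}

# Hypothetical best-available cost-effectiveness within each worldview, same units.
best_in_worldview = {"humans_only": 0.025, "including_animals": 10.0, "longtermist": 20000.0}

# Placeholder credences on each worldview; a real index would elicit or vary these.
credence = {"humans_only": 1/3, "including_animals": 1/3, "longtermist": 1/3}

# Linear weighting of raw scores: the longtermist term swamps everything else.
linear = sum(credence[w] * raw_scores[w] for w in raw_scores)

# Normalized: score each donation relative to the best option *within* its worldview, then combine.
normalized = sum(credence[w] * raw_scores[w] / best_in_worldview[w] for w in raw_scores)

print(f"linear combination:     {linear:.1f}")   # dominated by the longtermist number
print(f"normalized combination: {normalized:.2f}")  # each worldview contributes on a 0-1 scale
```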
Linch, that sounds like a reasonable approach. I think something like that could work.
Ultimately, I guess a high value per dollar would then be assigned to anyone who donated ~‘maximally, our-best-guess impactfully’ in any of the three categories,
… and then the value would scale ~linearly with the amount donated ‘max-impactfully’ to any of the three categories.
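A minimal sketch of that scoring rule, with hypothetical cost-effectiveness figures (the function and all numbers are illustrative, not a worked-out proposal):

```python
# Illustrative: a donor's score scales linearly with the amount given, scaled by how close
# their chosen charity is to the best-guess top charity within the worldview they selected.

# Hypothetical cost-effectiveness (in each worldview's own units per dollar) of the assumed
# best-guess top charity for each of the three categories.
BEST_GUESS_TOP = {"humans_only": 0.025, "including_animals": 10.0, "longtermist": 20000.0}

def donor_score(amount_donated, charity_cost_effectiveness, worldview):
    """Value per dollar relative to the top charity in the donor's worldview, times amount."""
    relative_impact = charity_cost_effectiveness / BEST_GUESS_TOP[worldview]
    return amount_donated * relative_impact

# A donor giving $10,000 to a charity at the worldview's best-guess frontier gets the full score;
# the same amount to a charity at half that cost-effectiveness gets half the score.
print(donor_score(10_000, 10.0, "including_animals"))  # 10000.0
print(donor_score(10_000, 5.0, "including_animals"))   # 5000.0
```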
It might be somewhat difficult to explain this to a smart popular audience, but I suspect it’s doable.
My suspicion is that there will only be a very narrow and “lucky” range of moral and belief parameters for which the three cause areas have cost-effectiveness within the same order of magnitude.
But I should dig into this.