Some questions; you do not have to reply to most.
1) Have you considered showing, in addition to one's absolute impact, their relative impact (e.g. relative to their income or capacity)? Such lists can be inspirational and insightful: they could show, for example, that many extremely poor people compete with Bill Gates while other extremely poor people do not. This could motivate cooperation on inclusive global development, which can make everyone involved in this area feel great and make others want to join.
2) Since this is a lifetime ranking, younger people would be disadvantaged. Would you apply Bayesian updating?
3) Why are existing lists of large philanthropists not more popular? Are there regional, cause-area, or industry lists that are better known? Can they be aggregated? What are their current effects, and how could EA make them better at doing good, such as motivating people to donate or to develop market solutions that are more sustainable in the long term?
4) Are you considering the counterfactual impact of people's alternative spending?
5) How will you make sure you do not overlook anyone, while not discouraging people who are motivated precisely by not being recognized for their charity? (If you start excluding them, maybe they will not stop donating but will opt into being recognized?) Have you considered effects on networks, e.g. where donating is a norm but no single individual donates a large sum? Is there a way to highlight such networks without bias from the number of people involved?
6) Why would you include net worth? It seems inconsistent with the other columns, which celebrate donating rather than gaining status within the socioeconomic framework. Status lies in impact, right?
7) I would also suggest removing the amount donated so far/in a given year. There should be intrinsic motivation, and trust that donors keep up with their pledges or have valid reasons when they do not. If this figure is public, it looks like an obligation, which can be demotivating. Instead, I would keep only an updated impact estimate of each pledge, maintained by external experts, possibly after consultations with the donor. If the experts are sufficiently cool and serious, everyone will be excited to increase their expected impact value, perhaps even asking how they can do better.
8) I would either enable granular filtering, almost by output or intermediate outcome, or no filtering at all, with everything converted to some sentience-adjusted, wellbeing-adjusted life year. If you want to compare who makes the most impact in animal welfare, for example, you have to measure which charity frees the most chickens from cages, or supports policies or alternatives with the same effect for confined farm animals. If you just filter by cause area, then $1b to cute-puppy-mill awareness counts the same as $1b to effective, dynamic policy advocacy and development grounded in animals' experiences.
The metric I have in mind measures 'active neural complexity' (Being You: A New Science of Consciousness, Chapter 2) and multiplies it by a weighted adjustment in wellbeing. I suggest prioritizing the elimination of suffering by weighting suffering with an exponent of 2.5. Externalities should of course be considered, so one's contributions to changes in total weighted wellbeing in the universe should always be estimated, projected into the indefinite future. That can seem like a challenge, but if institutional inertia is taken into account, and the very-long-term effects of destructive actions are known well while those of constructive actions are known little, it can be possible. Here not only donations but also decisions should be counted.
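A toy sketch of the weighting I have in mind, where all names and numbers are my own hypothetical choices, not an established metric:

```python
def weighted_wellbeing(neural_complexity, wellbeing_delta, suffering_exponent=2.5):
    """Toy score: active neural complexity times a change in wellbeing,
    with negative changes (suffering) amplified super-linearly.
    Both inputs and the exponent are purely illustrative."""
    if wellbeing_delta < 0:
        # A loss of 2 wellbeing units weighs 2 ** 2.5 (about 5.66),
        # so suffering dominates an equal-sized gain.
        return -neural_complexity * (abs(wellbeing_delta) ** suffering_exponent)
    return neural_complexity * wellbeing_delta
```

For example, `weighted_wellbeing(1.0, -2)` comes out much more negative than `weighted_wellbeing(1.0, 2)` is positive, which is the asymmetry I am suggesting.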
9) Are you considering the counterfactual wealth and impact capacity people could have developed, within what they perceive as their free will?
10) Would you wait to heavily publicize this list until it is possible to make impact cool with the public, in competition with media that capture attention by other means (shaming, fear, impulsive appeal, ...)? Or are you planning to draft it in a way that helps bring that environment about?
11) Yes, I suggested to GWWC a while ago that they should enable people to showcase their donations, and I keep mentioning it occasionally. Is this already happening? Maybe bragging about donations is not consistent with the spirit of the community. But a pilot list could include just a few volunteers disclosing their donations (especially more speculative ones) and arguing for their impact. Others could comment, and organizations could submit alternative impact calculations.
This could provide valuable feedback on even more impactful donation options, and support 'donation specializations' within EA. Additional donors could then use a donations ITN framework to research their best philanthropic investment options. Feedback prior to donating could also be valuable.
12) Do you know of Charity Navigator's Impact Unit (previously ImpactMatters) list of top charities within causes? Maybe the organization would not like it, but one could list top effective donors by multiplying the amount donated by the unit cost-effectiveness they list (for some charities they list cost per unit of output, such as a tonne of CO2 sequestered).
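The arithmetic I mean is just "amount donated divided by listed cost per unit of output", then rank. A minimal sketch with invented donors and unit costs (not Charity Navigator's actual data):

```python
# Hypothetical data: (donor, amount donated in $, listed cost per unit of output in $).
# A "unit of output" could be e.g. one tonne of CO2 sequestered. All figures invented.
donations = [
    ("Donor A", 1_000_000, 25.0),
    ("Donor B", 5_000_000, 500.0),
    ("Donor C", 250_000, 10.0),
]

def units_of_output(amount, cost_per_unit):
    """Estimated units of output a donation buys at the listed unit cost."""
    return amount / cost_per_unit

# Rank donors by estimated output rather than raw dollars donated.
ranking = sorted(donations, key=lambda d: units_of_output(d[1], d[2]), reverse=True)
```

Note how Donor C's $250k outranks Donor B's $5m here, because the unit cost differs by 50x; that is the whole point of ranking by output instead of amount.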
13) Trying to invite 'external' people by highlighting them on 'our' list can be ineffective. Either they are already effective, and so probably already using at least some EA-related resources or funding programs, in which case there is little need to invite them to list a profile somewhere; or some would want to make the list strategic, including some people who would pay the most attention if included and others if excluded, depending on the shifting relationships among billionaires. The list would then either have to be well planned in advance, considering possible scenarios of credibly changing metrics (though if billionaires simply discuss improvements to the impact index under standards of impartiality, it can work); or be somewhat (tacitly) transparent about its objective of advancing a discourse; or not be done at all.
14) But the list could also discourage some people. Imagine you were number 3 or 4 in your local group and you realize you are number 135,000: maybe you forget malaria, or your local group, before you start realizing that systemic change cannot happen if you just throw nets and money at the problem. So, in addition to considering impact relative to one's capacity and counterfactuals, and including non-financial impact, I would conduct a network analysis of decisionmaking. This can make people more comfortable and more eager to join EA. Anyone can be competitive; they just have to have high positive impact.
15) Will you consider the counterfactual of whether someone is preventing others from doing what they do better?
16) Would the entire list be more interesting as something like a 4D vector space? It could still be comprehensible, just slightly challenging, and colorful imagery attracts attention. I would also make it VR-compatible, because exploring this space of philanthropy with a headset could be more engaging.
17) Before you spend many resources: what is the metric? Is it the QALY? The WALY? Some other metric we have yet to come up with or discover, such as the prevalent spirit of virtuous progress? Are there any conditions (progress yes but no suffering; or wellbeing yes and no suffering but no specific drugs; etc.)?
18) Would you be interested in spending some amount on researching the complex interactions of the millions of charities in the world and their donors, to see what support increases efficiency the most, considering donor interactions, the impact of counterfactual spending, and each charity's existing capital, which makes the marginal cost of the various programs complementary in systemic change differ?
19) How do billionaires get into EA? I trust that fewer and fewer will be interested in big numbers, and more and more in sincere impact. An impact list should therefore primarily facilitate cooperation in making impact, while also, of course, highlighting some billionaires by its structure.
20) The rankings have to be publicly acceptable. This is also a check on the narratives EA uses, e.g. to attract donors. For example, if the public is skeptical about AI safety, then sound arguments about the effects of AI safety work should be made understandable.
21) How would you engage representatives of the public groups and networks who have to buy into this list in creating it, without introducing partiality biases? Actual learning could prevent any perception of arrogance.
Would public participation discourage large donors, who would then perceive less exclusivity, less of something special developed for them by special people? Either the public representatives have to be special, e.g. in their ability to think about impact, or this list should not be public-public, but more like EA-networks public. Then again, that could lower epistemics in EA, or improve them, depending on the calculation.
22) How are you going to include impact from market-based innovations, coordination, efficiency gains, and policy negotiations?
23) Will you differentiate situations where real income increases from those where it does not (where only redistribution takes place)?
To answer your questions:
A lot of the community's funds, really at least $100m in the first year, should go toward creating the environment that would make this idea seem plausible, aiming for sincerity, collaboration, and an impartial, thoughtful definition of the metrics and calculations. If instead you hire, say, 5 people for $100k, pay for some newspaper articles, and popularize the list as 'who gives the most bednets', you end up with a world literally polluted by bednets (or with AI safety issues), and with everyone thinking it is quite shameful to think about things like systemic change or cooperation toward it.
I would sum the contributions of all this project's sub-parts toward a better impact trajectory for EA, and update this sum as sub-parts occur and as alternatives appear within EA that can achieve the same objective in a different way. It is an updated difference of two integrals. I would use my weighted WALY, but I am biased.
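Spelling out the 'difference of two integrals' as I mean it (the notation is mine, just a sketch):

```latex
% W_with, W_without: expected weighted-wellbeing trajectories of EA
% with and without the project; Delta is re-estimated at each time t
% as sub-parts occur and as alternatives appear.
\Delta(t) = \int_{t}^{\infty} W_{\mathrm{with}}(s)\,ds
          - \int_{t}^{\infty} W_{\mathrm{without}}(s)\,ds
```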
In short, you should include externalities even in an MVP. Also, consider making an actual spreadsheet.
Test it voluntarily with EAs. Do not publish it via Vox. That would dig the hole of 'more bednets and OpenAI' non-thinking even deeper: as presented, people could experience negative emotions when seeing the list, such as fear, powerlessness/threat of submission, or anger, which reduce critical thinking abilities, even within the environment the list itself would create.