The crosspost above does not link to the original post. The URL should be typed into the link field above.
Despite not even having publicly launched, I have back-to-back monthly promising projects lined up, each with significant estimated impact, each with higher impact than my upper bound estimates of my ability to earn via for-profit founding (my next highest career option).
How did you determine this? Did you explicitly quantify the impact of the promising projects in terms of money donated to GiveWell’s top charities or similar?
Another example is when AIM [Ambitious Impact] created the metric of SADs [suffering-adjusted days], which is now used not only by AIM but also across the animal welfare space.
Could you elaborate on which organisations use SADs? I am only aware of Animal Charity Evaluators (ACE) using them in their charity evaluations.
I am particularly excited about time-bound projects that take between 30 and 300 hours, especially projects that create a common good. By this, I mean outcomes that benefit multiple philanthropic actors in the ecosystem. One example might be creating an external evaluation system for a single foundation but publishing the methods and strategies so that multiple other foundations can also use them.
What do you think about decreasing the uncertainty in welfare comparisons across species as a common good project? I think much more research on that is needed to conclude which interventions robustly increase welfare. I do not know of any intervention which robustly increases welfare, due to potentially dominant uncertain effects on soil animals and microorganisms. Even neglecting these, I believe there is lots of room to change funding decisions as a result of more research in this area. I understand AIM, ACE, maybe the Animal Welfare Fund (AWF), and Coefficient Giving (CG) sometimes use, for robustness checks, the (expected) welfare ranges Rethink Priorities (RP) initially presented, or the ones in Bob Fischer’s book, as if they were within a factor of 10 of the right estimates (such that the true values could be 10 % to 10 times as large). However, I can easily see much larger differences. For example, the estimate in Bob’s book for the welfare range of shrimps is 8.0 % that of humans, but I would say one reasonable best guess (though not the only one) is 10^-6, the ratio between the number of neurons of shrimps and humans.
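As a rough sanity check of the neuron-count ratio mentioned above, here is a minimal sketch. The neuron counts are my assumptions, not figures from the comment: roughly 10^5 neurons for a shrimp and roughly 8.6 × 10^10 for a human, both commonly cited ballpark estimates.

```python
# Assumed ballpark neuron counts (not from the original comment):
shrimp_neurons = 1e5     # ~10^5, a commonly cited rough figure for shrimp
human_neurons = 8.6e10   # ~86 billion, a commonly cited rough figure for humans

# Ratio of shrimp to human neuron counts.
ratio = shrimp_neurons / human_neurons
print(f"{ratio:.1e}")  # prints 1.2e-06, i.e. on the order of 10^-6
```

Under these assumed counts, the ratio is about 10^-6, which is roughly 5 orders of magnitude below the 8.0 % welfare-range estimate, illustrating how far apart the two guesses are.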
Thanks for crossposting this, Joey.