I don’t have any advice but just wanted to say that I think it’s really cool that you asked for advice here.
Question: How funding constrained do you feel like the Animal Welfare Fund is? Do you feel like you get to make essentially every grant you think you’d reasonably want to make or are there more awesome grants you would’ve made if only the fund had raised more money?
Question: How funding constrained do you feel like the Meta Fund is? Do you feel like you get to make essentially every grant you think you’d reasonably want to make or are there more awesome grants you would’ve made if only the fund had raised more money?
Question: How funding constrained do you feel like the Long-Term Future Fund is? Do you feel like you get to make essentially every grant you think you’d reasonably want to make or are there more awesome grants you would’ve made if only the fund had raised more money?
Yep! Fixed! Thanks!
Can you elaborate more about what you mean?
Corporate cage-free campaigns aren’t considered among the “near-termist human-centric OpenPhil grants”… they’re instead in a separate “animal-inclusive” granting bucket that will be evaluated later.
A more relevant curve for nuclear weapons might be “TNT equivalents” or “cost per ton TNT equivalent”.
This is very exciting—it is really great to see another large-scale grantmaker with a public application process.
Nitpick: There is a typo in the choices for “What would you likely do if we decided not to fund your project?” in the application.
I’m not a good person to ask about that… I’d reach out to email@example.com and firstname.lastname@example.org.
It’s worth noting that this post uses data from the Effective Altruism Survey, which is different from the GWWC pledge dashboard. I don’t think the GWWC pledge dashboard has been used yet to calculate any retention statistics.
(Quick sidenote: regarding “if we categorize all taxa as a likely yes”… it sounds like you’re using “taxa” to mean the features/rows, but “taxa” refers to groupings of animals. Sorry if the term is a bit unfamiliar.)
Hey Gavin, we (at Rethink Priorities) would be interested in exploring funding for this. Would you be able to reach out to email@example.com and we can discuss next steps?
In this analysis, I’m looking specifically at people who do report donations, and whose reported donations are clearly inconsistent with keeping the pledge. “Quiet pledge-keepers” who do not report any data would not be included in this analysis, because they would not be reporting data to the EA Survey. So the phenomenon I report here cannot be mere quiet pledge-keeping.
As for the point about the “total number of people ceasing reporting donations (and very likely ceasing keeping the pledge)”, which refers to a GWWC analysis that may itself be affected by quiet pledge-keeping: it is impossible to know to what degree people keep the pledge but fail to report it. My intuition, shared by others, is that people who don’t reply to GWWC’s emails asking whether they are keeping the pledge are likely not keeping it, but I agree there can be exceptions (such as maybe you), and that perhaps there really are a lot of quiet pledge-keepers.
I’m personally much more skeptical of the work. Most notably, the paper’s results are suspiciously clear-cut—all of the treated ants showed the mirror test behavior and none of the control ants did. Mirror self-recognition tests are generally not this straightforward, and it is common, even with chimps, for some tested subjects not to pass. Unless ants are potentially smarter than chimps, I don’t think we should expect results this clear. Instead, this suggests something in the study may have gone wrong.
Also, Max Carpendale, then a contract researcher with Rethink Priorities, pointed out to me that one of the authors of the paper, Marie-Claire Cammaerts, has a history of making dubious claims outside mainstream scientific consensus. For example, she has published research suggesting that cell phone radiation and Wi-Fi are damaging to human health, for which she has been criticized for doing bad science.
Note that it is possible to do longitudinal analysis with the EA Survey, and we have done some in the past (such as for retention, GWWC pledge keeping, and changes in cause preferences). I’d be happy to help walk people through how they can do their own longitudinal analyses and what the relevant caveats are.
I checked and ~22% of GWWC members* did not donate more than 5% of their income in 2017, so even assuming taxes accounted for a large portion of the issue, there are still a lot of people who are not reporting data consistent with keeping the GWWC pledge.
*This analysis was limited to people who (a) took the 2018 EA Survey, (b) reported having taken the GWWC pledge, (c) reported income and donation data, (d) are non-students, (e) have income >$10K, and (f) reported joining GWWC prior to 2017. N=253.
Also of note is that the SlateStarCodex 2017 survey offered an EA identification question in addition to a mental health inventory (David links to the 2019 SSC Survey, which sadly does not have the EA identification question). We benchmarked the 2017 EA Survey against the SSC 2017 survey and found the EA populations to be mostly similar.
I’d strongly urge the OP, or anyone interested in this topic, to dig into the SSC 2017 survey’s EA population and investigate its mental health statistics… I imagine the likelihood of oversampling people with mental health concerns would be lower (though not nonexistent) there.