Last week there was a post, “The Future Might Not Be So Great,” that made points similar to those in this post.
This may be an issue for other EA organisations. For instance, New Harvest recently called an “emergency town hall” because of “this recent economic downturn,” saying they need to shift “from a growth mindset to survival mode” and that “staff cuts are unavoidable.”
That seems right, but I might be more inclined to push back against this kind of norm. I find on Reddit that I can be quite straightforward and brief, and people don’t downvote based on their interpretation of the feelings of the commenter. I would like to encourage that sort of norm on the EAF, rather than norms that (as I see it, though I could be wrong) reward excessive positivity towards the community’s established views.
That sounds right to me and seems consistent with my original comment.
Thank you. I was just estimating PhDs based on their bios.
That sounds broadly correct, but just for clarification, my question was about capacity-building impact, not current spending and research output. For example, RP funding contributes to the research experience of their staff, and RP staff might be considerably less likely to stay in the animal welfare cause area than researchers at other animal charities. So there might be more spillover of this long-term impact than is reflected in the current budget breakdown.
This is especially likely if RP itself shifts its funding allocation in the future.
RP seems to err more towards quantity over quality of research than other organizations do. Is this your impression as well? Is this a conscious decision? Do you think other EA research organizations should also steer in that direction, or does it reflect RP’s niche?
For example, the Global Priorities Institute seems to prioritize high-quality research that will help build momentum for longtermist work in academia, such as journal articles published by PhDs. Compare that to RP’s large volume of blog-post research and, I believe, only one PhD on staff (edit: according to MichaelStJules, there are 2 PhDs on staff). Of course, peer-reviewed journal work by academically trained researchers might not necessarily reflect higher quality, so I know this hinges on one’s view of the various metrics of “quality” we have available.
How do you think about your role as a research organization working across different cause areas?
Personally, I have considered donating to Rethink Priorities. But I care a lot about capacity-building within organizations, so I tend to donate to other animal welfare EA research organizations such as Animal Charity Evaluators and Sentience Institute. My impression is that while RP is currently focused on animal welfare, a substantial part of the impact of my donations might spill over into cause areas that are personally less of a priority to me, such as x-risk and global poverty.
Of course there may be benefits to working across different cause areas, such as the ability to learn methodologies and data from one issue that have relevance to another. So it’s not at all clear how this shakes out, even for supporters who focus on one cause area. What do you think?
> There is also more optimism about farm animal lives coming from farmers, who are more familiar with them than anyone else.
I believe this familiarity is a much weaker factor than the bias farmers have to think of themselves as ethical and to justify the industry they work in.
Thank you for the comment. I didn’t reply because I had hoped other Animal Welfare Fund representatives would respond to the substance of the concern (concentration of power). I don’t think we need critics of ACE on the fund committee. I simply believe it would be beneficial to have less concentration of funding in the two entities of ACE and OpenPhil. I believe this is a concern even if one believes ACE and OpenPhil are competent.
Quite a few people in the animal welfare and EA spaces are concerned that the two parties ACE and OpenPhil, i.e., ACE staff and Lewis Bollard, control the vast majority of funding in the EAA space, and a very large portion of funding in the farm animal space as a whole.
I had hoped that expanding the Animal Welfare Fund to a committee would address this concern, but 3 of the 4 members are with either ACE or OpenPhil. This seems especially disappointing given criticisms of ACE in the EAA community: 1, 2, and 3.
Why were more non-ACE/non-OpenPhil members not added, and are there plans to diversify in the future?
Thank you for the explanation. I still believe the 2017 and 2018 animal welfare and global poverty line-ups left a lot to be desired, but those years might have been better than 2016 at least in the choice of keynote speaker.
Maybe there could be more transparency regarding the advisory board, because without knowing those details, I don’t know how to evaluate the situation. Given CEA’s history, I do worry that the advisory board may favor people with close ties to CEA rather than providing meaningful representation from those fields. But I can’t be confident in that without knowing the details.
This EA Forum post might be a really good example of how EAs interested in blogging and research can support the Open Philanthropy Project. If you have any other ideas for topics like this, Lewis, sharing them could help other EAs help you in other ways.
On the topic of Effective Altruism Global, I’m not just concerned about the lower representation of non-x-risk cause areas, but also about the speaker selection within those cause areas. In 2016, for example, the main animal welfare speaker was a parrot intelligence researcher who seemed, I’m sorry to say, uninformed about animal welfare, even the welfare of birds. I think the animal welfare speakers over the years have been selected more for looking cool to the organizers (who didn’t know much about animal welfare) and/or for increasing speaker demographic diversity (not that diversity is a bad thing, but it’s unhelpful to pursue it in just one cause area), rather than for being leading experts on EA and animal welfare.
I think thoughtful, rationality-focused people (not just EAs, but even, say, young software engineers) can often outperform the average “expert,” with expertise measured by traditional credentials like having a PhD. Many biases pervade academia and other fields (e.g. publication bias, status quo bias, publish-or-perish incentives), and thoughtful people have often done far more than traditional experts to understand and overcome them. They also get the benefit of entering a field without as many preconceptions and personal investments, allowing them to synthesize the literature in a less biased way.
I don’t have many examples on hand (and would really appreciate it if someone else could provide them), but I feel there’s a solid track record of thoughtful, rationality-focused people disagreeing strongly with traditional experts and being vindicated. Only two come to mind right now:
One is Eliezer Yudkowsky, a self-educated blogger, who advocated for a focus on safety that most traditional AI experts thought was crazy; the traditional AI community has since shifted heavily towards Yudkowsky’s position.
Another is the superforecasters discussed by Phil Tetlock, who did very well at predicting future events (e.g. whether there would be a civil war in a certain country) while traditional experts did little better than chance.
For what it’s worth, I do agree that’s where most of the value comes from, though I think it’s much lower than the value of comparably empirical, bold writing, at least for this example.
While I see some value in detailing commonly held positions like this post does, and I think this post is well written, I want to flag my concern that it seems like a great example of a lot of effort going into content that nobody really disagrees with. This sort of heavily qualified armchair writing doesn’t seem like a very cost-effective use of EA resources, and I worry we do a lot of it, partly because it’s easy to do and draws far more positive social reinforcement than empirical, bold writing tends to get.
I agree with the caveat that the $333 figure is much less worrisome if it’s due to a high number of students or people working for nonprofits.
There are many factors going into that issue, but I think the biggest are the bottlenecks within the pipeline that brings money from OP to individual donation opportunities. Most directly, OP has a limited staff and a lot of large, important grants to manage. They often don’t have the spare attention, time, or energy to solicit, vet, and manage funding to the many individuals and small organizations that need funding.
LTFF and other grantmakers have similar issues. The general idea is just that there are many inefficiencies in the grantmaker → ??? → grantee market. The market is especially inefficient for funding opportunities that are small (because the fixed costs of granting remain high) and weird (because the downside risk is magnified for large grantmakers).
Worse, I hear that a big issue is that every funder asks this same question, “Why aren’t you already funded by [funders that are not me]?”, of new ventures that lack existing personal connections to the big funders, which means they never get off the ground.