Thanks for your comment, and I understand your frustration. I'm still figuring out how to communicate the specifics of why I feel strongly that misapplying the neglectedness heuristic (using it as a shortcut to avoid investigating whether investment in an area is warranted) has led to a lot of lost potential impact. And yes, US politics is, in my opinion, a central example. But I also think there are many others I'm not aware of, which brings me to the broader (meta) point I wanted to emphasize in the above take.
I wanted to focus on the case for more independent thinking, and on how little cross-cause prioritization work there seems to be in the EA world, rather than on convincing the community of my current beliefs. I did initially include object-level takes on prioritization in the comment (I may make another quick take about them soon), but I removed them to keep the focus on the meta issue.
My guess is that many community members implicitly assume that cross-cause prioritization work is done frequently and rigorously enough to account for important changes in the world, and that the takeaways get communicated well enough for EA resources to be allocated effectively. I don't think this is the case. If it is happening, the takeaways don't seem to be communicated widely. I don't know of a longtermist GiveWell alternative for donations. I don't see much rigorous cross-cause prioritization analysis from Open Phil, 80K, or on the EA Forum to inform how to most impactfully spend time. And the community's priorities have stayed surprisingly consistent over the years, despite many large changes in AI, politics, and more.
Given how important cross-cause prioritization is, and how difficult it is to do well, I think EAs should feel empowered to contribute to that discourse regularly, so we can all understand the world better and make better decisions.