I really like the Open Philanthropy Project’s way of thinking about this problem:
https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy
The short version (as I understand it):
- Split assumptions about the world and target metrics into distinct "buckets."
- Allocate in two steps: within each bucket, allocate on that bucket's own metric; across buckets, allocate separately using other sorts of heuristics.
(If you like watching videos rather than reading blog posts, Holden also discussed this approach in his fireside chat at EAG 2018: San Francisco.)
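To make the two-step idea concrete, here is a minimal sketch in Python. The bucket names, scores, and weights are entirely made up for illustration; they are not Open Phil's actual categories or numbers.

```python
# Hypothetical sketch of the two-step "bucket" allocation described above.
# All names, scores, and weights are illustrative placeholders.

def allocate(total_budget, bucket_weights, buckets):
    """Step 1 (inter-bucket): split the budget across buckets using
    heuristic weights, since buckets don't share a common metric.
    Step 2 (intra-bucket): within each bucket, rank opportunities on
    that bucket's own metric and direct the bucket's budget to the
    top-scoring one."""
    allocation = {}
    for name, opportunities in buckets.items():
        bucket_budget = total_budget * bucket_weights[name]
        # Intra-bucket comparison is apples-to-apples: same metric.
        best = max(opportunities, key=lambda opp: opp["score"])
        allocation[name] = (best["name"], bucket_budget)
    return allocation

buckets = {
    "near-term": [{"name": "Charity A", "score": 9.1},
                  {"name": "Charity B", "score": 7.4}],
    "long-term": [{"name": "Org C", "score": 6.0},
                  {"name": "Org D", "score": 8.2}],
}
# Inter-bucket weights come from worldview judgment calls,
# not from comparing the buckets' incompatible metrics.
weights = {"near-term": 0.6, "long-term": 0.4}

print(allocate(100.0, weights, buckets))
```

The point of the structure is that the `max` comparison only ever happens between opportunities judged on the same metric; the cross-worldview disagreement is isolated in the `weights` dictionary.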
Disclosure: I copyedited a draft of this post, and I do contract work for CEA more generally.
I don’t think that longtermism is a consensus view in the movement.
The 2017 EA Survey found more respondents naming poverty as the top priority than AI and non-AI far-future work combined. Similarly, AMF and GiveWell received by far the most donations in 2016, according to the same survey. While I agree that someone can be a longtermist and still think practical considerations favor near-term work for now, I don't find that a very compelling explanation for these survey results.
As a first-pass heuristic, I think EA leadership would guess community-held views correctly more often if they held the belief "the modal EA-identifying person cares most about alleviating suffering that is happening in the world right now."