Thanks for this update! Two questions…
When all the sponsored projects have been spun out, will EV continue to exist? If so, what will it do?
“I plan to share other non-privileged information on lessons learned in the aftermath of FTX and encourage others to share their reflections as well.” Do you have an estimated timeline for this?
For me, it’s been stuff like:
People (generally those who prioritize AI) describing global poverty as a “rounding error”.
From late 2017 to early 2021, effectivealtruism.org (the de facto landing page for EA) had at least 3 articles on longtermist/AI causes (all listed above the single animal welfare article), but none on global poverty.
The EA Grants program granted ~16x more money to longtermist projects than to global poverty and animal welfare projects combined. [Edit: this statistic only refers to the first round of EA Grants, the only round for which grant data has been published.]
The EA Handbook 2.0 heavily emphasized AI relative to global poverty and animal welfare. As one EA commented: “By page count, AI is 45.7% of the entire causes sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn’t count toward the page count), more than half the article was dedicated to why we might not choose this cause area, with much of that space also focused on far-future of humanity. I’d find it hard for anyone to read this and not take away that the community consensus is that AI risk is clearly the most important thing to focus on.” When changes to the handbook were promised in response to this type of criticism, those changes were then deprioritized and the Handbook wasn’t updated for years.
80k’s “top recommended problems” skew heavily longtermist; global health and poverty and factory farming don’t make the cut and are instead listed as “other pressing problems”.
For the Community Building Grants program, “The primary metric used to assess grants at the end of the first year is the number of group members who apply for internships or graduate programs in priority areas and reach at least the interview stage… We used the 80,000 Hours list of priority paths as the basis for our list of accredited roles, but expanded it to be somewhat broader.” Since 80k’s priority paths are predominantly longtermist, groups were in large part evaluated by how many members they steered into longtermist jobs or studies.
GPI’s research agenda focuses on, and essentially assumes, longtermism.
CFAR’s website went from emphasizing things like “turning cognitive science into cognitive practice” to things like “we are focused on the existential win… we see AI safety as especially key here”.
Etc.