If you had to choose just three long-termist efforts as the highest expected value, which would you pick and why?
(Speaking for myself and not others on the team, etc)
At a very high level, I think I have mostly “mainstream longtermist EA” views here, and my current best guess would be that AI Safety, existential biosecurity, and cause prioritization (broadly construed) are the highest EV efforts to work on overall, object-level.
This does not necessarily mean that marginal progress on these things is the best use of additional resources, or that these are the most cost-effective efforts to work on, of course.
This is not a satisfying answer, but right now I think the longtermist effort with the highest expected value is spending time trying to figure out what longtermist efforts we should prioritize.
I also think we should spend a lot more resources on figuring out whether and how much we can expect to reliably influence the long-term future, as this could have a lot of impact on our strategy (such as becoming less longtermist, or more focused on broad longtermism or patient longtermism, etc.).
I don’t have a third thing yet, but we are aiming to do both of these projects within Rethink Priorities.
(Just my personal views, as always)
Roughly in line with Peter’s statement that “I think the longtermist effort with the highest expected value is spending time trying to figure out what longtermist efforts we should prioritize”, I recently argued (with some caveats and uncertainties) that marginal longtermist donations will tend to be better used to support “fundamental” rather than “intervention” research. On what those terms mean, I wrote:
It’s most useful to distinguish intervention research from fundamental research based on whether the aim is to:
better understand, design, and/or prioritise among a small set of specific, already-identified intervention options, or
better understand aspects of the world that may be relevant to a large set of intervention options (more)
See that post’s “Key takeaways” for the main arguments for and against that overall position of mine.
I think I’d also argue that marginal longtermist research hours (not just donations) will tend to be better used to support fundamental rather than intervention research. (But here personal fit becomes quite important.) And I think I’d also currently tend to prioritise “fundamental” research over non-research interventions, but I haven’t thought about that as much and didn’t discuss it in the post.
So the highest-EV-on-the-current-margin efforts I’d pick would probably be in the “fundamental research” category.
Of course, these are all just general rules, and the value of different fundamental research efforts, intervention research efforts, and non-research efforts will vary greatly.
In terms of specific fundamental research efforts I’m currently personally excited about, these include analyses, from a longtermist perspective, of:
totalitarianism/dystopias,
world government (see also),
civilizational collapse and recovery,
“the long reflection”, and/or
long-term risks from malevolent actors
Basically, those things seem like variables that might (or might not!) matter a great deal, and (as far as I’m aware) haven’t yet been looked into much from a longtermist perspective. So I expect there could be some valuable low-hanging fruit there.
Maybe if I had to pick just three, I’d bundle the first two together, and then stamp my feet and say “But I want four!”
(I have more thoughts on this that I may write about later. See also this and this. And again, these are just my personal, current views.)