How do you decide whether something belongs to the longtermism department (i.e., whether it’ll affect the long-term future)?
We haven’t had to make many fine-grained decisions, so the question hasn’t come up often enough to merit a formal decision procedure. The trickiest call so far was how to classify research aimed at understanding and mitigating the negative effects of climate change. The main considerations were “how do our stakeholders classify this work?” and “what is the probability of this issue leading to human extinction within the century?” Both of those considerations led us to place climate change work in our “global health and development” portfolio.
This year we’ve made an intentional decision to focus nearly all of our longtermist work on AI, because we assess AI risk as unusually large and urgent even relative to other existential risks. We will revisit this decision in future years, and to be clear, it does not mean we think other people shouldn’t work on non-AI x-risks or on longtermist work unrelated to x-risk.