I think this is a great question. The lack of clear, demonstrable progress in reducing existential risks, and the difficulty of making and demonstrating any progress, makes me very skeptical of longtermism in practice.
I think shifting focus from tractable, measurable issues like global health and development to issues that—while critical—are impossible to reliably affect, might be really bad.
I don’t think that a lack of concrete/legible examples of existential risk reduction so far should make us move to other cause areas.
The main reason is that it is unsurprising for a movement to take a while to properly get going. I haven’t researched this, but it seems plausible that movements typically start with a period of raising awareness and growing the number of people working on the problem (a period I think we are still in) before achieving really concrete wins. The longtermist movement is a new one, made up mostly of young people who have reoriented their careers but generally haven’t yet reached positions senior enough to effect real change.
If you actually buy into the longtermist argument, then why give up now? It seems unreasonable to me to conclude, from the fact that we haven’t yet achieved concrete change, that we are very unlikely ever to do so.
I don’t think that a lack of concrete/legible examples of existential risk reduction so far should make us move to other cause areas.
Perhaps not, but if a movement is happy to use estimates like “our X-risk is 17% this century” to justify working on existential risks and call it the most important thing you can do with your life, but cannot measure how their work actually decreases this 17% figure, they should at the very least reconsider whether their approach is achieving their stated goals.
The longtermist movement is a new one, made up mostly of young people who have reoriented their careers but generally haven’t yet reached positions senior enough to effect real change.
I think this is misleading, because longtermism has been part of EA since close to its very beginning, and many senior leaders in EA are longtermists. It’s true that global health as a field is older than AI safety, but given EA GHD isn’t taking credit for things that happened before EA existed, like eradicating smallpox, I don’t know if newness is actually the “main reason”.
If you actually buy into the longtermist argument, then why give up now? It seems unreasonable to me to conclude, from the fact that we haven’t yet achieved concrete change, that we are very unlikely ever to do so.
You might buy into the longtermist argument at a general level (“future lives matter”, “the future is large”, “we can affect the future”), but update on some of the details, such that you now think planning for and affecting the far future is much more intractable or premature than you previously thought. Otherwise, are you saying there’s nothing that could happen that would change your mind on whether longtermism was a good use of EA resources?
but cannot measure how their work actually decreases this 17% figure, they should at the very least reconsider whether their approach is achieving their stated goals.
I’m not sure how it’s even theoretically possible to measure reductions in existential risk. An existential catastrophe is something that can only happen once. Without being able to observe a reduction in the incidence of an event, I don’t think you can “measure” a reduction in its risk. I do, on the other hand, think it’s fair to say that increasing awareness of existential risk reduces total existential risk, even if I’m not sure by exactly how much.
I’d imagine concrete/legible actions to reduce existential risk will probably come in the form of policy change, and I don’t think EAs have, for the most part, yet entered influential policy positions. Please do say what other actions you would count as concrete/legible, though, as that is open to interpretation.
It’s true that global health as a field is older than AI safety, but given EA GHD isn’t taking credit for things that happened before EA existed, like eradicating smallpox, I don’t know if newness is actually the “main reason”.
Sorry I’m not really sure what you’re saying here.
Otherwise, are you saying there’s nothing that could happen that would change your mind on whether longtermism was a good use of EA resources?
This is a good question. I think the best arguments against longtermism are:
1. That longtermism is fanatical, and that fanaticism is not warranted.
On balance I am not convinced by this objection, as I don’t think longtermism is fanatical and am unsure whether fanaticism is actually a problem. But further research here might sway me.
2. That we might simply be clueless about the impacts of our actions.
At the moment I don’t think we are, and if cluelessness is a big issue, it is very likely to be an issue for neartermist cause areas as well, and even for EA altogether.
I don’t mind admitting that it seems unlikely that I will change my mind on longtermism. If I do, I’d imagine it will be on account of one of the two arguments above.
I’m not sure how it’s even theoretically possible to measure reductions in existential risk. Without being able to observe a reduction in the incidence of an event, I don’t think you can “measure” a reduction in its risk.
I disagree. What do you think the likelihood of a civilization-ending event from engineered pandemics is, and what do you base this forecast on?
I’d imagine concrete/legible actions to reduce existential risk will probably come in the form of policy change
What % of longtermist $ and FTEs do you think are being spent on trying to influence policy versus technical or technological solutions? (I would consider many of these as concrete + legible)
Sorry I’m not really sure what you’re saying here.
That was me trying to steelman your “longtermism is new” justification for the lack of concrete/legible wins, by thinking about clearer ways in which longtermism differs from neartermist causes, which requires looking outside the EA space.
I disagree. What do you think the likelihood of a civilization-ending event from engineered pandemics is, and what do you base this forecast on?
As I said, I don’t think one can “measure” the probability of an existential catastrophe. One can estimate it through considered judgment of the relevant arguments, but I am not inclined to do so, and I don’t think anyone else should be either. Any such probability would be somewhat arbitrary and open to reasonable disagreement. What I am willing to do is say things like “existential risk is non-negligible” and “we can meaningfully reduce it”. These claims are easier to defend, and they are all we really need to justify working on reducing existential risk.
What % of longtermist $ and FTEs do you think are being spent on trying to influence policy versus technical or technological solutions? (I would consider many of these as concrete + legible)
No idea. But even if the answer is a lot and we haven’t made much progress, this doesn’t lead me away from longtermism, mainly because the stakes are so high and we’re still relatively new to all this, so I expect us to get more effective over time, especially as people actually reach influential policy roles.
That was me trying to steelman your “longtermism is new” justification for the lack of concrete/legible wins, by thinking about clearer ways in which longtermism differs from neartermist causes, which requires looking outside the EA space.
This may be because I’m slightly hungover but you’re going to have to ELI5 your point here!