I think this is a great question. The lack of clear, demonstrable progress in reducing existential risks, and the difficulty of making and demonstrating any progress, makes me very skeptical of longtermism in practice.
I think shifting focus from tractable, measurable issues like global health and development to issues that, while critical, are impossible to reliably affect might be really bad.
I don't think that a lack of concrete/legible examples of existential risk reduction so far should make us move to other cause areas.
The main reason is that it's unsurprising for a movement to take a while to properly get going. I haven't researched this, but it seems unsurprising to me that movements typically start with a period of increasing awareness / a growing number of people working in the movement (a period I think we are currently still in) before achieving really concrete wins. The longtermist movement is a new one with mostly young people who have reoriented their careers but generally haven't yet reached senior enough positions to effect real change.
If you actually buy into the longtermist argument, then why give up now? It seems unreasonable to me to conclude that, because we haven't yet achieved concrete change, we are very unlikely to ever do so in the future.
I don't think that a lack of concrete/legible examples of existential risk reduction so far should make us move to other cause areas.
Perhaps not, but if a movement is happy to use estimates like "our X-risk is 17% this century" to justify working on existential risk and to call it the most important thing you can do with your life, but cannot measure how its work actually decreases this 17% figure, it should at the very least reconsider whether its approach is achieving its stated goals.
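For what it's worth, the arithmetic linking a per-century figure like 17% to an annual hazard is simple to state, even though none of the inputs are measurable. A minimal sketch in Python, assuming a constant annual hazard rate (an assumption made purely for illustration, not something the estimate itself claims):

```python
# Convert a per-century risk into the implied constant annual risk.
# The 17% figure comes from the example above; the constant-hazard
# assumption is made purely for illustration.
p_century = 0.17
p_annual = 1 - (1 - p_century) ** (1 / 100)
print(f"implied annual risk: {p_annual:.5f}")  # ~0.00186, i.e. ~0.19% per year
```

Detecting a change in a figure that small, year over year, is part of why "measuring" progress here is so hard.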
The longtermist movement is a new one with mostly young people who have reoriented their careers but generally haven't yet reached senior enough positions to effect real change.
I think this is misleading, because:
1. Longtermism has been part of EA since close to its very beginning, and many senior leaders in EA are longtermists.
2. It's true that AI safety as an area is newer than global health, but given that EA GHD isn't taking credit for things that happened before EA existed, like eradicating smallpox, I don't know if this is actually the "main reason".
If you actually buy into the longtermist argument, then why give up now? It seems unreasonable to me to conclude that, because we haven't yet achieved concrete change, we are very unlikely to ever do so in the future.
You might buy into the longtermist argument at a general level ("future lives matter", "the future is large", "we can affect the future") but update on some of the details, such that you think planning for and affecting the far future is much more intractable or premature than you previously thought. Otherwise, are you saying there's nothing that could happen that would change your mind on whether longtermism was a good use of EA resources?
but cannot measure how its work actually decreases this 17% figure, it should at the very least reconsider whether its approach is achieving its stated goals.
I'm not sure how it's even theoretically possible to measure reductions in existential risk. An existential catastrophe is something that can only happen once. Without being able to observe a reduction in the incidence of an event, I don't think you can "measure" a reduction in risk. I do, on the other hand, think it's fair to say that increasing awareness of existential risk reduces total existential risk, even if I'm not sure by exactly how much.
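To make the one-shot problem concrete, here is a minimal sketch (every probability below is a made-up placeholder, not a real risk estimate): for a repeatable event you can measure a risk reduction from incidence data, but for a one-shot event there is no incidence to observe.

```python
import random

random.seed(0)

# Every probability below is a made-up placeholder, purely for illustration.

def observed_rate(p_event: float, trials: int) -> float:
    """Simulate independent trials and return the observed event rate."""
    return sum(random.random() < p_event for _ in range(trials)) / trials

# Repeatable event (e.g. annual outbreaks): a risk reduction is measurable
# by comparing incidence before and after an intervention.
before = observed_rate(p_event=0.20, trials=100_000)
after = observed_rate(p_event=0.15, trials=100_000)
print(f"repeatable event, measured reduction: {before - after:.3f}")

# One-shot event (an existential catastrophe): there is only ever one
# trial, and the event ends observation, so there is no incidence data
# from which to measure a reduction. Any claimed reduction has to come
# from a model of the risk, not from observed frequencies.
```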
I'd imagine concrete/legible actions to reduce existential risk will probably come in the form of policy change, and I don't think EAs have, for the most part, yet entered influential policy positions. Please do say what other actions you would count as concrete/legible, though, as that is open to interpretation.
It's true that AI safety as an area is newer than global health, but given that EA GHD isn't taking credit for things that happened before EA existed, like eradicating smallpox, I don't know if this is actually the "main reason".
Sorry, I'm not really sure what you're saying here.
Otherwise, are you saying there's nothing that could happen that would change your mind on whether longtermism was a good use of EA resources?
This is a good question. I think the best arguments against longtermism are:
1. That longtermism is fanatical, and that fanaticism is not warranted.
On balance I am not convinced by this objection, as I don't think longtermism is fanatical, and I am unsure whether fanaticism is a problem. But further research here might sway me.
2. That we might simply be clueless about the impacts of our actions.
At the moment I don't think we are, and I think that if cluelessness is a big issue, it is very likely to be an issue for neartermist cause areas as well, and even for EA altogether.
I don't mind admitting that it seems unlikely that I will change my mind on longtermism. If I do, I'd imagine it will be on account of one of the two arguments above.
I'm not sure how it's even theoretically possible to measure reductions in existential risk. Without being able to observe a reduction in the incidence of an event, I don't think you can "measure" a reduction in risk.
I disagree. What do you think the likelihood of a civilization-ending event from engineered pandemics is, and what do you base this forecast on?
I'd imagine concrete/legible actions to reduce existential risk will probably come in the form of policy change
What % of longtermist $ and FTEs do you think is being spent on trying to influence policy versus technical or technological solutions? (I would consider many of the latter concrete and legible.)
Sorry, I'm not really sure what you're saying here.
That was me trying to steelman your appeal to "longtermism is new" as the explanation for the lack of concrete/legible wins, by thinking of clearer ways in which longtermism differs from neartermist causes, and that requires looking outside the EA space.
I disagree. What do you think the likelihood of a civilization-ending event from engineered pandemics is, and what do you base this forecast on?
As I say, I don't think one can "measure" the probability of existential risk. I think one can estimate it through considered judgment of the relevant arguments, but I am not inclined to do so, and I don't think anyone else should be so inclined either: any such probability would be somewhat arbitrary and open to reasonable disagreement. What I am willing to do is say things like "existential risk is non-negligible" and "we can meaningfully reduce it". These claims are easier to defend, and they are all we really need to justify working on reducing existential risk.
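As a toy illustration of why those weaker claims are all the argument needs, consider the expected-value arithmetic (every number below is an assumed placeholder, not anyone's actual estimate):

```python
# Toy expected-value arithmetic; every figure is an assumed placeholder.
future_lives = 1e15      # assumed number of future lives at stake
risk_reduction = 1e-6    # assumed absolute reduction in existential risk

expected_lives = future_lives * risk_reduction
print(f"expected future lives preserved: {expected_lives:,.0f}")
# Even a tiny, unmeasurable reduction dominates in expectation, which is
# why "non-negligible" and "meaningfully reducible" can carry the argument
# without a precise point estimate.
```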
What % of longtermist $ and FTEs do you think is being spent on trying to influence policy versus technical or technological solutions? (I would consider many of the latter concrete and legible.)
No idea. Even if the answer is a lot and we haven't made much progress, this doesn't lead me away from longtermism, mainly because the stakes are so high and we're still relatively new to all this, so I expect us to get more effective over time, especially as we actually get people into influential policy roles.
That was me trying to steelman your appeal to "longtermism is new" as the explanation for the lack of concrete/legible wins, by thinking of clearer ways in which longtermism differs from neartermist causes, and that requires looking outside the EA space.
This may be because I'm slightly hungover, but you're going to have to ELI5 your point here!