I disagree. What do you think the likelihood of a civilization-ending event from engineered pandemics is, and what do you base this forecast on?
As I say, I don’t think one can “measure” the probability of existential risk. One can estimate it through considered judgment of the relevant arguments, but I’m not inclined to do so, and I don’t think anyone else should be either. Any such probability would be somewhat arbitrary and open to reasonable disagreement. What I am willing to do is say things like “existential risk is non-negligible” and “we can meaningfully reduce it”. These claims are easier to defend, and they are all we really need to justify working on reducing existential risk.
What % of longtermist $ and FTEs do you think are being spent on trying to influence policy versus on technical or technological solutions? (I would count many of these as concrete + legible.)
No idea. Even if the answer is a lot and we haven’t made much progress, that doesn’t lead me away from longtermism, mainly because the stakes are so high and we’re still relatively new to all this, so I expect us to get more effective over time, especially as we actually get people into influential policy roles.
That was me trying to steelman your “longtermism is new” justification for the lack of concrete/legible wins by thinking of clearer ways in which longtermism differs from neartermist causes, and that requires looking outside the EA space.
This may be because I’m slightly hungover, but you’re going to have to ELI5 your point here!