Thanks for running the survey; I’m looking forward to seeing the results!
I’ve filled out the form, but I find some of the potential arguments problematic. It could be worth seeing how persuasive others find these arguments, but I would be hesitant to promote arguments that don’t seem robust. In general, I think more disjunctive arguments work well.
For example (being somewhat nitpicky):
Everyone you know and love would suffer and die tragically.
Some existential catastrophes could happen painlessly and quickly.
We would destroy the universe’s only chance at knowing itself...
Aliens (maybe!) or (much less likely, imo) another intelligent species evolving on Earth could also come to know the universe.
There are co-benefits to existential risk mitigation: prioritizing these risks means building better healthcare infrastructure, better defense against climate change, etc.
It seems that work on biorisk prevention does involve “building better healthcare infrastructure”, but it may be misleading to characterise it this way, since I imagine people think of something different when they hear that term. There are also drawbacks to some (proposed) existential risk mitigation interventions.
Thanks a lot for your thoughtful feedback!
I share the hesitancy around promoting arguments that don’t seem robust. To keep the form short, I only tried to communicate the thrust of the arguments. There are stronger and more detailed versions of most of them, which I plan on using. In the cases you pointed to:
Some existential catastrophes could indeed happen rather painlessly. But some could also happen painfully, so while the argument is perhaps not all-encompassing, I think it still stands. Nevertheless, I’ll change it to something more like “you and everyone you know and love will die prematurely.”
Other intelligent life is definitely a possibility, but even if it’s a reality, I think we can still consider ourselves cosmically significant. I’ll use a less objectionable version of this argument, like “… destroy what could be the universe’s only chance…”
I got the co-benefits argument from this paper, which lists a bunch of co-benefits of GCR work, one of which I could swap in for the “better healthcare infrastructure” bit. I’ll try to get a few more opinions on this.
In any case, thanks again for your comment—I hadn’t considered some of the objections you pointed out!