Good question. To be honest, it was just me intuiting the chance that all of the premises and exemptions are true, which maybe cashes out to your first option. I’m happy to use a conventional measure, if there’s a convention on here.
Would also invite people who disagree to comment.
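To make the first point concrete, here is a rough sketch of what "the chance that all of the premises are true" could cash out to, assuming (heroically) that the premises are roughly independent; the labels A_i and the 0.8 figure are made up for illustration, not anything from the post.

```latex
% Toy decomposition (my framing): credences in premises A_1, ..., A_n,
% assumed roughly independent, multiplied into an overall credence.
\[
  P(\text{conclusion}) \;\approx\; \prod_{i=1}^{n} P(A_i)
\]
% For example, five premises each held at credence 0.8 give
% 0.8^5 \approx 0.33 overall, so fairly confident premises can
% still leave a modest joint probability.
```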
something like “extinction is less than 1% likely, not because...”
Interesting. This neatly sidesteps Ord’s argument (about low extinction probability implying proportionally higher expected value), which I just added above.
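For what it’s worth, here is the toy version of that proportionality point as I read it; the constant per-century hazard r and per-century value v are my own simplification, not Ord’s framing.

```latex
% Toy model (my reconstruction): constant per-century extinction
% probability r, value v for each century humanity survives.
\[
  E[\text{future value}] \;=\; v \sum_{k=1}^{\infty} (1-r)^{k} \;=\; v\,\frac{1-r}{r}
\]
% Eliminating only this century's risk removes one factor of (1-r) from
% every term of the sum, giving v/r, so the expected gain is
\[
  \frac{v}{r} \;-\; v\,\frac{1-r}{r} \;=\; v,
\]
% independent of r: a lower risk estimate makes the surviving future
% proportionally more valuable, so the value of removing the risk holds up.
```

In other words, halving your estimate of r roughly doubles the expected length of the future, so the expected value of removing this century’s risk stays about the same.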
Another objection I missed, which I think is the clincher inside EA, is a kind of defensive empiricism, e.g. Jeff Kaufman:
“I’m much more skeptical than most people I talk to, even most people in EA, about our ability to make progress without good feedback. This is where I think the argument for x-risk is weakest: how can we know if what we’re doing is helping...?”
I take this very seriously; it’s why I focus on the ML branch of AI safety. If there is a response to this (excellent) philosophy, it might be that it’s equivalent to risk aversion (the bad kind) somehow. Not sure.