Strong upvote. This is a fantastic post, and I wish that people who downvoted it had explained their reasoning, because I don't see any big flaws.
I don't necessarily agree with everything written here, and I don't think the argument would suffice to convince people outside of EA, but we need more content like this, which:
Cites a lot of philosophers who aren't commonly cited in EA (good for two reasons: non-EA philosophers are the vast majority of philosophers and presumably have many good ideas, including on areas we care about; citing a wider range of philosophers makes EA work look a lot more credible)
Carefully points out a lot of uncertainties and points that could be made against the argument. I hadn't previously put a name to the difference between "honoring" and "promoting", but I suspect that many if not most people's objections to focusing on X-risk probably take this form if you dig deep enough.
Includes a summary and a confidence level.
A couple of things I wish had been different:
I don't know what "confidence level" means, given the wide range of ways a person could "agree" with the argument. Is this your estimate of the chance that a given person's best bet is to give to whatever X-risk organization they think is best, as long as they aren't in one of your groups? Your estimate of how solid your own argument is, where 100% is "logically perfect" and 0% is "no evidence whatsoever"? Something else?
The formatting is off in some places, which doesn't impact readability too much but can be tricky in a post that uses so many different ways of organizing info (quotes, bullets, headings, etc.). One specific improvement would be to replace your asterisk footnotes with numbers [1] so that it's easier to find them and not mix them up with bullet points.
Aside from the honor/promote distinction, I think the most common objection to this from someone outside of EA might be something like "extinction is less than 1% likely, not because the world isn't dangerous but because I implicitly trust other people to handle that sort of thing, and prefer to focus on local issues that are especially important to myself, my family, and my community".

[1] Like this.
Good question. To be honest, it was just me intuiting the chance that all of the premises and exemptions are true, which maybe cashes out to your first option. I'm happy to use a conventional measure, if there's a convention on here.
Would also invite people who disagree to comment.
something like "extinction is less than 1% likely, not because..."
Interesting. This neatly sidesteps Ord's argument (about low extinction probability implying proportionally higher expected value), which I just added above.
Another objection I missed, which I think is the clincher inside EA, is a kind of defensive empiricism, e.g. Jeff Kaufman:
I'm much more skeptical than most people I talk to, even most people in EA, about our ability to make progress without good feedback. This is where I think the argument for x-risk is weakest: how can we know if what we're doing is helping...?
I take this very seriously; it's why I focus on the ML branch of AI safety. If there is a response to this (excellent) philosophy, it might be that it's equivalent to risk aversion (the bad kind) somehow. Not sure.