Although their arguments are reasonable, my big problem with this is that these guys are so motivated that I find it hard to read what they write in good faith.
People who are heavily invested in arguing for slowing down AI development, or for decreasing catastrophic risk from AI, like many in the effective altruism community, will also be happier if they succeed in getting more resources to pursue their goals. However, I believe it is better to assess arguments on their own merits, although I agree with the article's title that doing so is difficult. I am not aware of any empirical quantitative estimate of the risk of human extinction from transformative AI.
I would consider driving people to delusion and suicide, killing people for self-preservation and even Hitler the man himself to be at least a somewhat “alien” style of evil.
I agree those actions are alien in the sense that they deviate a lot from what typical people do. However, I think they provide practically negligible evidence about the risk of human extinction.