Why is there so much more talk about the existential risk from AI as opposed to the amount by which individuals (e.g. researchers) should expect to reduce these risks through their work?
The second number seems much more decision-guiding for individuals than the first. Is the main reason that it's much harder to estimate? If so, why?
Here is an attempt by Jordan Taylor: Expected ethical value of a career in AI safety (which you can plug your own numbers into).
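To make the "second number" concrete, here is a minimal toy Fermi estimate of the kind such a model produces. The numbers below are my own illustrative assumptions, not Jordan Taylor's figures; the point is just the structure of the calculation.

```python
# Toy Fermi estimate: expected absolute reduction in existential risk
# attributable to one researcher's career. All inputs are assumed values.

p_doom = 0.10                      # assumed total existential risk from AI
frac_avertable_by_safety = 0.3     # share of that risk the safety field could plausibly avert
n_equivalent_researchers = 10_000  # assumed effective size of the field over the relevant period
relative_productivity = 1.0        # your expected productivity vs. the average researcher

individual_risk_reduction = (
    p_doom * frac_avertable_by_safety * relative_productivity / n_equivalent_researchers
)
print(f"Expected absolute reduction in x-risk: {individual_risk_reduction:.1e}")
# ~3e-6 with these inputs -- the kind of decision-guiding number the question asks about.
```

Each factor here is highly uncertain, which may be part of why estimates of the individual-level number are discussed so much less than estimates of the total risk.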