FWIW my impression of the EA community’s position is that we need to build safe AI, not that we need to stop AI development altogether (although some may hold this view).
Stopping AI development altogether misses out on all the benefits from AI, which could genuinely be extensive and could include helping us with other very pressing problems (global health, animal welfare etc.).
I do think one can do a tremendous amount of good at OpenAI, and a tremendous amount of harm. I am in favor of roles at AI companies being on the 80,000 Hours job board so that the former is more likely.
JackM—these alleged ‘tremendous’ benefits are all hypothetical and speculative.
Whereas the likely existential risks from ASI have been examined in detail by thousands of serious people, and polls show that most people, both inside and outside the AI industry, are deeply concerned about them.
This is why I think it’s deeply unethical for 80k Hours to post jobs to work on ASI within AI companies.
I share your concern about x-risk from ASI; that's why I want safety-aligned people in these roles rather than people who aren't concerned about the risks.
There are genuine proposals for how to align ASI, so I think it's possible, even if I'm not sure what the chances are. The most promising proposals involve using advanced AI to assist with oversight, interpretability, and recursive alignment tasks, eventually building a feedback loop where aligned systems help align more powerful successors.
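To make the "feedback loop" idea concrete, here is a toy sketch. Everything in it (the `Model` class, the function names, the numbers) is a hypothetical placeholder, not any real alignment pipeline; it only illustrates the structure of the argument, namely that each generation is trained under oversight from the previous, already-aligned generation, so alignment is inherited, imperfectly, up the chain.

```python
# Toy, purely illustrative sketch of a "recursive alignment" feedback loop.
# All names and numbers are hypothetical placeholders, not any real API or method.

from dataclasses import dataclass


@dataclass
class Model:
    capability: float   # how powerful the model is
    alignment: float    # how well its behaviour matches intended goals (0..1)


def train_with_oversight(successor: Model, overseer: Model) -> Model:
    """Train a more capable successor while an already-aligned model assists
    with oversight, interpretability checks, and red-teaming."""
    # Assumption for illustration: oversight quality is bounded by the
    # overseer's own alignment, so alignment is (imperfectly) inherited.
    successor.alignment = min(1.0, overseer.alignment * 0.99)
    return successor


def recursive_alignment(generations: int) -> Model:
    # Start from today's partially aligned systems (numbers are made up).
    model = Model(capability=1.0, alignment=0.95)
    for _ in range(generations):
        candidate = Model(capability=model.capability * 2, alignment=0.0)
        model = train_with_oversight(candidate, overseer=model)
    return model


if __name__ == "__main__":
    final = recursive_alignment(generations=5)
    print(f"capability={final.capability:.0f}, alignment={final.alignment:.3f}")
```

The point of the sketch is just that the scheme stands or falls on how much alignment leaks away at each step: if the inheritance factor is close to 1 the loop can scale, and if it isn't, alignment erodes as capability grows.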
I don’t agree that the benefits are speculative, by the way. DeepMind has already won the Nobel Prize in Chemistry for its work on protein folding.
EDIT: 80,000 Hours also doesn’t seem to promote all roles, only those which contribute to safety, which seems reasonable to me.