I see! Yes, I agree that more public “buying time” interventions (e.g. outreach) could be net negative. However, for the average person entering AI safety, I think there are less risky “buying time” interventions that are more useful than technical alignment.
I think this post should probably be edited so that “focus on low-risk interventions first” appears in bold in the first sentence, right next to the pictures. The most careless readers (possibly like me...) are exactly the ones who will see that and not read the current caveats.
That said, if you seriously considered any big outreach effort, you’d be well able to assess the risk on your own. I think people should still keep a strong prior toward action for anything that looks promising to them. : )