I don’t suppose you could clarify your comment?
You write: “I don’t really see what to do about it yet”, but then you provide a bunch of suggestions: “I think focusing outreach on groups that are more likely to start working on AI safety makes sense. Focusing outreach in circles of ML researchers makes sense. Encouraging EAs currently working in other areas to go work in alignment or AI governance makes sense.”
Do you mean that you don’t think these are very likely to work, but they’re the best plan you’ve got? Or do you mean something else?
I think these are all valuable, but not much more valuable in a world with short timelines. What I wanted to express is that I am not sure how we should change our approach in a world with short timelines. So I think these ideas are net positive, but I’m uncertain whether they amount to much of an update.