I think this line of reasoning may be misguided, at least if taken in a particular direction. If the AI Safety community loudly talks about there being a significant chance of AGI within 10 years, then this will hurt the AI Safety community’s reputation when 10 years later we’re not even close. It’s important that we don’t come off as alarmists. I’d also imagine that the argument “1% is still significant enough to warrant focus” won’t resonate with a lot of people. If we really think the chances in the next 10 years are quite small, I think we’re better off (at least for PR reasons) talking about how there’s a significant chance of AGI in 20-30 years (or whatever we think), and how solving the problem of safety might take that long, so we should start today.
Makes sense – I think the optics question is pretty separate from the “what’s our actual best-guess?” question.