What is the upshot of this? Is this for new audiences to read? It seems like the most straightforward application of it is futures betting, not positively influencing the future.
Perhaps you’re indicating that the money will run out if frontier AI doesn’t become self-sustaining by 2030? Maybe we can do something to make that more likely?
Because I do struggle to see how this helps.
It’s the first chapter in a new guide about how to help make AI go well (aimed at new audiences).
I think it’s generally important for people who want to help to understand the strategic picture.
Plus, in my experience, the thing most likely to make people take AI risk more seriously is believing that powerful AI might happen soon.
I appreciate that talking about this could also wake more people up to AGI, but I expect the guide overall will boost the safety talent pool proportionally a lot more than the pool of people speeding up AI.
(And in the long term I think it’s also better to be open about my actual thinking rather than try to control the message to that degree, and a big part of the case in favour, in my mind, is that it might happen soon.)