As in, your crux is that the probability of AGI within the next 50 years is less than 10%?
I’m essentially deeply uncertain about how to answer this question, in a true ‘Knightian Uncertainty’ sense, and I don’t know how much it makes sense to use subjective probability calculus here. It also depends heavily on what we mean by AGI, though. I find many of the arguments I’ve seen to be either a) deference to the subjective probabilities of others or b) extrapolation of straight lines on graphs, neither of which I find very convincing. (I think your arguments seem stronger and more grounded, fwiw.)
I think from an x-risk perspective it is quite hard to beat AI risk even on pretty long timelines.
I think this can hold, but it holds not just in light of particular facts about AI progress now, but also in light of various strong philosophical beliefs about value, about what future AI would be like, and about what the future would look like after the invention of said AI. You may have strong arguments for these, but I find that many arguments for the overwhelming importance of AI Safety do a poor job of grounding them, especially in light of the compelling interventions to do good that exist in the world right now.
It also depends heavily on what we mean by AGI, though.
I’m happy to do timelines to the singularity and operationalize this with “we have the technological capacity to pretty easily build projects as impressive as a Dyson sphere”.
(Or 1000x electricity production, or whatever.)
In my view, this likely adds only a moderate number of years (3-20, depending on how various details go).