Pair this with the EA concern that we should attend to the counterfactual impact of our actions, and that there are opportunities to do good right here and now,[3] and it shouldn't be a primary EA concern.
As in, your crux is that the probability of AGI within the next 50 years is less than 10%?
I think from an x-risk perspective it is quite hard to beat AI risk even on pretty long timelines. (Where the main question is bio risk and what you think about (likely temporary) civilizational collapse due to nuclear war.)
It’s pretty plausible that on longer timelines technical alignment/safety work looks weak relative to other stuff focused on making AI go better.
As in, your crux is that the probability of AGI within the next 50 years is less than 10%?
I’m essentially deeply uncertain about how to answer this question, in a true ‘Knightian Uncertainty’ sense, and I don’t know how much it makes sense to use subjective probability calculus. It also depends heavily on what we mean by AGI, though. I find many of the arguments I’ve seen to be (a) deference to the subjective probabilities of others or (b) extrapolation of straight lines on graphs, neither of which I find highly convincing. (I think your arguments seem stronger and more grounded, fwiw.)
I think from an x-risk perspective it is quite hard to beat AI risk even on pretty long timelines.
I think this can hold, but it holds not just in light of particular facts about AI progress now, but also in light of various strong philosophical beliefs about value, what future AI would be like, and how the future would go after the invention of said AI. You may have strong arguments for these, but I find that many arguments for the overwhelming importance of AI Safety do a poor job of grounding them, especially in light of the compelling interventions to do good that exist in the world right now.
It also depends heavily on what we mean by AGI, though.
I’m happy to do timelines to the singularity and operationalize this with “we have the technological capacity to pretty easily build projects as impressive as a Dyson sphere”.
(Or 1000x electricity production, or whatever.)
On my view, this likely adds only a moderate number of years (3-20, depending on how various details go).