Couldn’t automating most human decisions before AGI make AGI catastrophes more likely when AGI does come? We’d trust AI more, be more likely to use it in more applications, and give it more options to break through.
Or maybe, with more experience with pre-AGI AI, we’ll trust AI less and work harder on security, which could reduce AI risk overall?
My guess is that a success might look more like:
1. We use software and early AI to become more wise/intelligent.
2. That wisdom/intelligence helps people realize how to make a good/safe plan for AGI. Maybe this means building it very slowly; maybe it means delaying it indefinitely.
Couldn’t automating most human decisions before AGI make AGI catastrophes more likely when AGI does come?
To be clear, this is “automating most of the decisions we make now”; we’ll still be making plenty of decisions, just different ones. Less “which dentist should I visit?”, and more, possibly, “how do we make sure AI goes well?”
Automating most human decisions looks a whole lot like us being able to effectively think faster and better. My guess is that this will be great, though, as with other wisdom and intelligence interventions, there are risks. If AI companies think faster and better, and this doesn’t get them to realize how important safety is, that would be an issue. On the other hand, we might just need EA groups to think faster/better for us to actually save the world.
We’d trust AI more, be more likely to use it in more applications, and give it more options to break through.
It’s possible, but the benefits are real too. I don’t think “trust AI more” will be a major factor, though “give it more options to break through” technically might be.
Much of decision automation doesn’t have to be ML-based; a lot of it can look much more like traditional software.
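To make that concrete with a toy, entirely hypothetical sketch: the kind of everyday decision mentioned earlier (“which dentist should I visit?”) can be automated with explicit, hand-written rules rather than any ML. The weights and fields here are made up for illustration.

```python
# Hypothetical sketch: automating a routine decision with plain
# rule-based scoring -- traditional software, no ML involved.

def pick_dentist(dentists):
    """Return the dentist with the highest score under simple, explicit rules."""
    def score(d):
        s = 0.0
        s += d["rating"] * 2.0           # weight reviews heavily
        s -= d["distance_km"] * 0.5      # penalize travel distance
        if d["accepts_insurance"]:
            s += 3.0                     # bonus if insurance is accepted
        return s
    return max(dentists, key=score)

candidates = [
    {"name": "A", "rating": 4.5, "distance_km": 2.0, "accepts_insurance": True},
    {"name": "B", "rating": 4.9, "distance_km": 12.0, "accepts_insurance": False},
]
print(pick_dentist(candidates)["name"])  # prints A
```

The point isn’t this particular rule set; it’s that many “decisions” reduce to legible scoring or lookup logic that is auditable in a way ML models often aren’t.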
The internet might be a good analogy. Its introduction created a big attack vector for AI, but it also allowed people to talk about AI safety and realize it was a thing. My guess is that the internet was a pretty big win in expectation.
Or maybe, with more experience with pre-AGI AI, we’ll trust AI less and work harder on security, which could reduce AI risk overall?
The question of “should we use a lot of AI soon, to understand it better and optimize it?” is an interesting one, but I think a bit out of scope for this piece. I think we’d pursue decision automation for benefits other than “to try out AI”.
Or maybe, if we can discover how to use “primitive” AI usefully enough, we decide we never need AGI. (This is an immediate reaction, not something I have ever thought about in detail.)