Couldn't automating most human decisions before AGI make AGI catastrophes more likely when AGI does come? We'll trust AI more and would be more likely to use it in more applications, or give it more options to break through.
Or, maybe with more work with pre-AGI AI, we'll trust AI less and work harder on security, which could reduce AI risk overall?
Or maybe, if we can discover how to use "primitive" AI usefully enough, we decide we never need AGI.
(This is an immediate reaction, not something I have ever thought about in detail.)
My guess is that a success might look more like:
1. We use software and early AI to become more wise/intelligent.
2. That wisdom/intelligence helps people realize how to make a good/safe plan for AGI. Maybe this means building it very slowly, maybe it means delaying it indefinitely.
Couldn't automating most human decisions before AGI make AGI catastrophes more likely when AGI does come?
To be clear, it's "automating most of the decisions we make now", but we'll still be making plenty of decisions, just different ones. Less "what dentist should I visit?" and more of other things, possibly "how do we make sure AI goes well?".
Automating most human decisions looks a whole lot like us being able to effectively think faster and better. My guess is that this will be great, though, as with other wisdom and intelligence interventions, there are risks. If AI companies think faster and better, and this doesn't get them to realize how important safety is, that would be an issue. On the other hand, we might just need EA groups to think faster and better for us to actually save the world.
We'll trust AI more and would be more likely to use it in more applications, or give it more options to break through.
It's possible, but the benefits are really there too. I don't think "trust AI more" will be a major factor, but "give it more options to break through" might technically be.
Much of decision automation doesn't have to be ML-based; the rest looks much more like traditional software.
The internet might be a good example. Its introduction created a big attack vector for AI, but it also allowed people to talk about AI safety and realize it was a thing. My guess is that the internet was a pretty big win in expectation.
Or, maybe with more work with pre-AGI AI, we'll trust AI less and work harder on security, which could reduce AI risk overall?
The question of "should we use a lot of AI soon, to understand it better and optimize it?" is an interesting one, but I think a bit out of scope for this piece. I think we'd do decision automation for benefits other than "to try out AI".