To keep it simple, let’s assume that someone figures out how to make AI completely safe. Personally, I think such an assumption is absurd, but for the sake of discussion let’s grant it for the moment.
Why would such a success be meaningful if the knowledge explosion continues to generate ever more, ever larger powers, at an ever accelerating rate, without limit?
I keep asking this in every other thread to illustrate that....
We don’t have an answer to this.
I think once we know how to align a really powerful AI and we create it, we can use it to create good policies and systems that prevent other, misaligned AIs from emerging and gaining more knowledge and intelligence than the aligned one.
Tic-tac-toe is a solved game. We are (or easily can be) intelligence-complete for tic-tac-toe (h/t Jonathan Yan for this concept). For playing tic-tac-toe, no further gains in intelligence matter, and if tic-tac-toe is the entire game, that’s where selection pressure for higher intelligence ends.
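To make “solved” concrete, here is a minimal sketch (in Python, with the board encoding and function names chosen purely for illustration): an exhaustive minimax search over every tic-tac-toe position, which confirms that perfect play from both sides is a draw. Once this lookup exists, no amount of extra intelligence improves your play.

```python
# Minimal sketch: exhaustive minimax over tic-tac-toe, illustrating "solved game".
# Board is a tuple of 9 cells: 'X', 'O', or ' '. X moves first.

from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with `player` to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0
    results = []
    for i in (i for i, cell in enumerate(board) if cell == ' '):
        nxt = list(board)
        nxt[i] = player
        results.append(value(tuple(nxt), 'O' if player == 'X' else 'X'))
    return max(results) if player == 'X' else min(results)

print(value(tuple(' ' * 9), 'X'))  # prints 0: perfect play is a draw
```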
A gas in a box is a dynamic equilibrium. You can model it as a gas in a box, use some general laws to predict its behaviour, and that game is imperceptibly close to being solved. Gains from further intelligence at this game are negligible, and may not even be worth the cost. Do not fool yourself into thinking that meaningful selection pressure for higher intelligence will continue forever just because there are 10^100 atoms in the sky.
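As a toy illustration of “general laws” doing nearly all the work (the specific numbers below are assumed for illustration, not from the original comment): the ideal gas law predicts the macrostate of the box without tracking any of the ~10^23 individual atoms, and tracking them would buy you almost nothing.

```python
# Minimal sketch: predicting the macrostate of a gas in a box from a general law
# (ideal gas law, P = nRT / V), ignoring every individual atom.

R = 8.314          # gas constant, J/(mol*K)
n = 1.0            # amount of gas, mol (assumed)
T = 293.15         # temperature, K (room temperature, assumed)
V = 0.0224         # volume of the box, m^3 (assumed)

P = n * R * T / V  # predicted pressure, Pa
print(f"Predicted pressure: {P / 1000:.1f} kPa")
```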
Gains from generalisation (i.e. what you call “knowledge explosion”) do not always scale faster than gains from specialisation and market segmentation. A population of foxes may grow exponentially given a rabbit overhang, but not forever.
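The fox/rabbit point is just the standard logistic picture: growth looks exponential while the population is far below the carrying capacity set by the prey supply, then flattens out. A toy sketch, with all parameter values assumed for illustration:

```python
# Minimal sketch: logistic growth. Near-exponential while far below the
# carrying capacity ("rabbit overhang"), then the gains flatten out.

r = 0.5      # intrinsic growth rate per generation (assumed)
K = 1000.0   # carrying capacity set by the prey supply (assumed)
N = 10.0     # initial fox population (assumed)

for t in range(30):
    N += r * N * (1 - N / K)   # discrete logistic update
    if t % 5 == 0:
        print(f"t={t:2d}  N={N:7.1f}")
# Early steps multiply N by roughly (1 + r); later steps barely move it at all.
```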
I’m not saying I know anything, I’m just saying I don’t see why I should aim to die with dignity just yet.
[Edit: When I say “do not fool yourself”, I’m not attacking anyone. I didn’t realise how this looked before now. I mean it as “here’s a general rule for us all that I’m sure we’ll agree on, but I’m saying it anyway to emphasise the point” or something.]