[Question] What is the nature of humans' general intelligence and its implications for AGI?
Humans seem to have some form of generality. We seem capable of solving a large range of problems, and people who are capable in one area tend to be more capable in general. However, the nature of this generality is important. There are at least two options that I've thought of:
1) A general intelligence is intrinsically better at solving problems.
2) A general intelligence is better at solving problems in general because it can absorb social information about problems, and society holds information about solving lots of different problems.
Option 2 is the one I lean towards, as it fits the evidence: humans spent a long time in the Stone Age with the same general cognitive architecture, but we can now solve a much larger set of problems because of education and broad access to information.
The difference is important because it has implications for solving novel problems (ones not already solved by society today). If the form of generality we can build is all about absorbing social information, there is no guarantee that it can go beyond that social knowledge in a principled way. Conceptual leaps to new understanding might require immense amounts of luck and so be slow to accumulate. ASIs might be the equivalent of us stuck in the Stone Age, at least to start with.
Are people thinking about these kinds of issues when considering timelines and existential-risk likelihoods?
Already crossposted to LessWrong.