In Bostromian AI safety, people often talk about human-level intelligence, defined roughly as mental performance exceeding that of humans on all tasks humans care about. Has anyone tried to sketch out subsets of human abilities that would still be sufficient to make a software system highly disruptive? This could be developed into a stronger and more specific claim than Bostrom’s. For example, a system with ‘just’ a superhuman ability to observe and model economies, or to discover new optimization algorithms, might be almost as concerning as a more vaguely defined ‘human-level intelligence’ AGI.