If narrow super-intelligence is competent at almost all the things we would trust AI to do, such that a switch to AGI is expensive, risky, and offers only a low marginal benefit, then we wouldn’t need to worry about ‘missing out’ if we ban AGI. From a glance at the decision-tree, it seems better to explore narrow AI fully first, so that we can see how much value is actually left on the table for AGI to yield us.
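To make that glance at the decision-tree concrete, here is a minimal sketch of the comparison. Every number in it is an illustrative assumption I made up for the example, not a measurement; the point is only the shape of the trade-off:

```python
# Toy decision-tree comparison: ban AGI vs. pursue AGI.
# All payoffs and probabilities below are illustrative assumptions.

# Value captured by fully exploring narrow AI (normalized baseline).
narrow_ai_value = 1.00

# Hypothetical extra value AGI adds *beyond* narrow AI (the "margin"),
# if narrow AI already covers almost everything we'd trust AI to do.
agi_extra_value = 0.05

# Hypothetical probability that pursuing AGI goes catastrophically
# wrong, and the cost if it does.
p_catastrophe = 0.10
catastrophe_cost = -10.0

ev_ban_agi = narrow_ai_value
ev_pursue_agi = (narrow_ai_value + agi_extra_value) * (1 - p_catastrophe) \
                + catastrophe_cost * p_catastrophe

print(f"EV(ban AGI):    {ev_ban_agi:+.3f}")
print(f"EV(pursue AGI): {ev_pursue_agi:+.3f}")
# With a low margin and nonzero risk, pursuing AGI only wins if the
# extra value is large relative to the catastrophe term.
```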
Additionally, I expect AGI to be possible within the next 5 years. (You can hold me to that prediction!) Looking back over the past five years at the recent progress toward generalization from few examples, equivariance, and formulating & testing symbolic expressions, we might already be close to the necessary algorithms. And with companies like Cerebras offering orders of magnitude more energy-efficient compute in the next year, human-brain-scale networks seem to be on the doorstep already.
[[Tangent of Details: GPT-3 and the like are ~1% of the network scale of a human brain, and Cerebras’ chip will support AI up to 20% larger than such a ‘human’ connectome. You might be tempted to claim “neurons are more complex”, yet the proficiencies GPT-3 demonstrates with only 1% of our scale betray the argument for biological superiority. AI is satisfied with 16-bit precision, for example. Our brains are heavily redundant and jumbly, so out-performing us might take much less effort. Heck, GPT-3-level performance is now possible with a 25x smaller network: “0.04% of a human brain”, yet it works about as well as we do.]]
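A quick back-of-the-envelope check of those ratios, taking the figures above at face value. The synapse count implied by the “~1%” claim is an assumption, and published estimates vary by an order of magnitude or more, so treat this as a sanity check on the arithmetic, not the biology:

```python
# Back-of-the-envelope scale comparison, using the figures quoted above.
# "Brain scale" here treats synapse count as analogous to parameter
# count, which is itself a contested assumption.

gpt3_params = 175e9            # GPT-3's published parameter count

# If GPT-3 is ~1% of a human brain's scale, the implied synapse count:
implied_brain_synapses = gpt3_params / 0.01   # ~1.75e13 (17.5 trillion)
# (Common estimates run ~1e14 to ~1e15 synapses, so "1%" is on the
#  generous end; the qualitative point survives either way.)

# A network 25x smaller than GPT-3, as claimed above:
small_net = gpt3_params / 25
print(f"25x-smaller net: {small_net:.1e} params = "
      f"{small_net / implied_brain_synapses:.2%} of the implied brain scale")
# -> 7.0e9 params = 0.04% of the implied brain scale
```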
So, a narrow AI that uses 1/100th the compute can usually do the task fine; GPT-3 was already writing convincing poetry. If someone can choose between a single AGI and a hundred narrow AIs, they’ll probably choose the latter: it lets you do 100x more work per second, and swapping networks in and out of memory still lets you utilize a myriad of task-specific AIs. Those narrow AIs will be easier to train AND verify, as well.
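As a sketch of what “swapping between the networks loaded in memory” could look like in practice. The class and the `load_model` hook are hypothetical names I’m inventing for illustration, not any particular framework’s API:

```python
from collections import OrderedDict

class NarrowModelPool:
    """Keeps a bounded number of task-specific models resident in
    memory, loading and evicting them LRU-style. `load_model` is a
    stand-in for whatever call actually deserializes the weights."""

    def __init__(self, load_model, max_resident=4):
        self.load_model = load_model      # e.g. a torch.load wrapper
        self.max_resident = max_resident
        self.resident = OrderedDict()     # task name -> loaded model

    def get(self, task):
        if task in self.resident:
            self.resident.move_to_end(task)       # mark recently used
        else:
            if len(self.resident) >= self.max_resident:
                self.resident.popitem(last=False)  # evict least-recent
            self.resident[task] = self.load_model(task)
        return self.resident[task]

# Usage: route each request to the narrow model trained for that task.
# pool = NarrowModelPool(load_model=my_loader)
# answer = pool.get("poetry")(prompt)
```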
Let’s ban AGI, because I don’t think it’d help much, anyway!