Isn’t Elon Musk’s OpenAI basically operating under this assumption? His main goal seems to be making sure AGI is distributed broadly so that no one group with evil intentions controls it. Bostrom responded that this might be a bad idea, since AGI could be quite dangerous, and we similarly don’t want to give nukes to everyone so that they’re “democratized.”
Multi-agent outcomes seem like a possibility to me, but I think the alignment problem is still quite important. If none of the AGIs have human values, I’d assume we’re very likely screwed, while we might not be if some of them do.
For WBE, I’d assume the most important things for its “friendliness” are that we upload people who are virtuous, and our ability and willingness to find “brain tweaks” that increase traits like compassion.
If you’re interested, here’s a paper I published where I argued that we will probably create WBE by around 2060 if we don’t get AGI through other means first:
https://www.degruyter.com/view/j/jagi.2013.4.issue-3/jagi-2013-0008/jagi-2013-0008.xml
“Industry and academia seem to be placing much more effort into even the very speculative strains of AI research than into emulation.”
Actually, I’m gonna somewhat disagree with that statement. Very little research is done on advancing AI towards AGI, while a large portion of neuroscience research and a decent amount of nanotechnology research (billions of dollars per year between the two) are clearly pushing us towards the ability to do WBE, even if that’s not the reason that research is being conducted right now.
Yes, but I mean they’re not trying to figure out how to do it safely and ethically. The ethics/safety worries are 90% focused on what we have today, and 10% focused on superintelligence.