Is Superintelligence Here Already?
https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf
This paper, produced by the Future of Humanity Institute, is fairly heavy for me to digest, but I think it reaches conclusions similar to a profound concern of mine:
- “Intelligence” does not necessarily need to have anything to do with “our” type of intelligence, where we steadily build on historic knowledge; indeed, this approach naturally falls prey to preferring “hedgehogs” (as opposed to “foxes”, in the hedgehogs v foxes comparison in Tetlock’s “Superforecasting”), who are worse than random at predicting the future;
- With the latest version of AlphaZero, which quickly reached superintelligent levels with no human intervention in three different game domains, we have to face the uncomfortable truth that AI has already far surpassed our own level of intelligence;
- that corporations, as legal persons with profit maximisation at their core (a value orthogonal to the values that cause humanity to thrive), could rapidly become extremely dominant with this type of AI used across all the tasks they are required to perform;
- that this represents a real, deep, and potentially existential threat that the EA community should take extremely seriously; it is also at the core of the increasingly systemic failure of politics;
- that this is particularly difficult for the EA community to accept, given the high status it places on intellectual capability (and status is a key driver in our limbic brain, so it will constantly play tricks on us);
- but that unless EAs are far more intelligent than Kasparov, Sedol, and all the others who play these games, this risk should be taken very seriously;
- that potentially the prime purpose of politics should thus be to ensure that corporations act in a way that is value-aligned with the communities they serve, with international coordination as necessary;
- I will give £250 to a charity chosen by the first person who can identify a flaw in my argument that is not along the lines of “you are too stupid to understand”.