Is Superintelligence Here Already?

https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf

This paper, produced by the Future of Humanity Institute, is fairly heavy for me to digest, but I think it reaches conclusions similar to a profound concern I have:

- “Intelligence” does not necessarily need to have anything to do with “our” type of intelligence, where we steadily build on historic knowledge; indeed this approach naturally falls prey to preferring “hedgehogs” (as opposed to “foxes”, in the hedgehogs vs. foxes comparison from Tetlock’s Superforecasting), who are worse than random at predicting the future;

- With the latest version of AlphaZero, which quickly reached superintelligent levels with no human intervention in three different game domains (chess, shogi, and Go), we have to face the uncomfortable truth that AI has already far surpassed our own level of intelligence.

- that corporations, as legal persons with profit maximisation at their core (a value orthogonal to the values that cause humanity to thrive), could rapidly become extremely dominant with this type of AI applied across all the tasks they are required to perform.

- this represents a real, deep, and potentially existential threat that the EA community should take extremely seriously. It is also at the core of the increasingly systemic failure of politics.

- that this is particularly difficult for the EA community to accept, given the high status they place on their intellectual capabilities (and status is a key driver in our limbic brain, so it will constantly play tricks on us).

- but that unless EAs are far more intelligent than Kasparov and Lee Sedol and all those who play these games, this risk should be taken very seriously.

- that potentially the prime purpose of politics should thus be to ensure that corporations act in a way that is value-aligned with the communities they serve, including international coordination as necessary.

- I will give £250 to a charity of the choice of the first person who can identify a flaw in my argument that is not along the lines of “you are too stupid to understand”.