There’s a lot of value in having an AI safety orthodoxy for coordination purposes; there’s also a lot of value in this sort of heterodox criticism of the orthodoxy. Thanks for posting.
One additional area of orthodoxy that I think could use more critique is the community’s views on consciousness. A few thoughts here (plus comments): http://effective-altruism.com/ea/14t/principia_qualia_blueprint_for_a_new_cause_area/
Also: nobody seems to be seriously looking into the state of AI safety and x-risk memes inside China. Whether a different ‘availability cascade’ is developing there seems hugely important and understudied.