X-risks could take many forms – a meteor crash, catastrophic global warming, plague – but the one that effective altruists like to worry about most is the ‘intelligence explosion’: artificial intelligence taking over the world and destroying humanity. Their favoured solution is to invest more money in AI research. Thus the humanitarian logic of effective altruism leads to the conclusion that more money needs to be spent on computers: why invest in anti-malarial nets when there’s a robot apocalypse to halt? It’s no surprise that effective altruism is popular in Silicon Valley: PayPal founder Peter Thiel, Skype developer Jaan Tallinn and Tesla CEO Elon Musk are all major financial supporters of x-risk research.* Who doesn’t want to believe that their work is of overwhelming humanitarian significance?
This paragraph was frustrating to read. X-risk-concerned EAs don’t want more money invested in AI research per se. Rather, they want to see more money invested in AI safety research in particular. X-risk-concerned EAs are, if anything, bearish on spending money to advance generic AI research. Also, none of Thiel, Tallinn, or Musk is an AI researcher, so I don’t see why we should think they were attracted to x-risks because it’s an idea that gave their work “overwhelming humanitarian significance”.
And I’m frustrated that the author seems to think it’s adequate to dismiss the idea of AI risk with what ultimately amounts to an ad hominem attack: “AI risk worries these people, but of course they are the sort of people who would be worried, so there’s no need to investigate further”. I actually think the opposite argument applies. Given their history of creating and funding breakthrough technology, I would expect Musk and Thiel to be, if anything, techno-utopians. So if they are worried about some future tech, that should make us sit up.
That’s a very good point. Promoting concern for AI safety is potentially against the interests of many people involved in the technology industry.
Yeah, that bit is a totally disingenuous misunderstanding of what these people are doing.