I think it’s also easy to make a case that longtermist efforts have increased existential risk from artificial intelligence, given that the money and talent that grew some of the biggest hype machines in AI (DeepMind, OpenAI) came from longtermist places.
It’s possible that EA has shaved a couple of counterfactual years off the time to catastrophic AGI, compared to a world where the community wasn’t working on it.
Can you say more about which longtermist efforts you’re referring to?
I think a case can be made, but I don’t think it’s an easy (or clear) case.
My current impression is that Yudkowsky’s and Bostrom’s writings about AGI inspired the creation of OpenAI and DeepMind. And I believe FTX invested a lot in Anthropic, and Open Philanthropy invested a little (in relative terms) in OpenAI. Since then, EAs have made both capabilities advances and safety advances, and I don’t think it’s particularly clear which outweighs the other.
It seems unclear to me what the sign of these effects is. Like, maybe no one thinks about AGI for decades. Or maybe, 3-5 years after Yudkowsky starts thinking about AGI, someone much less safety-concerned starts thinking about AGI, and we get a world with AGI labs that are much less concerned about safety than the status quo.
I’m not advocating for this position, but I’m using it to illustrate how the case seems far-from-easy.
Is most of the AI capabilities work here causally downstream of Superintelligence, even if Superintelligence may have been (heavily?) influenced by Yudkowsky? Both Musk and Altman recommended Superintelligence, although Altman has also directly said that Yudkowsky has accelerated timelines the most:
https://twitter.com/elonmusk/status/495759307346952192?lang=en
https://blog.samaltman.com/machine-intelligence-part-1
https://twitter.com/sama/status/1621621724507938816
If things had stayed within the LW/rationalist/EA community, that might have been best. If Yudkowsky hadn’t written about AI, then there might not be much of an AI safety community at all now (it might just be MIRI quietly hacking away at it, and most of MIRI seems to have given up now), and doom would be more likely, just later. Someone had to write about AI safety publicly to build the community, but writing and promoting a popular book on the topic is much riskier, because you bring it to the attention of less careful people, including entrepreneurial types.
I guess they could have tried to keep the public writing limited to academia, but the AI research community has been pretty dismissive of AI safety, so it might have been too hard to build the community that way.
Did Superintelligence have a dramatic effect on people like Elon Musk? I can imagine Elon getting involved without it. That involvement might have been even more harmful (e.g. starting an AGI lab with zero safety concerns).
Here’s one notable quote about Elon (source), who started college over 20 years before Superintelligence:
In college, he thought about what he wanted to do with his life, using as his starting point the question, “What will most affect the future of humanity?” The answer he came up with was a list of five things: “the internet; sustainable energy; space exploration, in particular the permanent extension of life beyond Earth; artificial intelligence; and reprogramming the human genetic code.”
Overall, causality is multifactorial and tricky to analyze, so concepts like “causally downstream” can be misleading.
(Nonetheless, I do think it’s plausible that publishing Superintelligence was a bad idea, at least in 2014.)