I don’t think you’re alone at all. EY and other prominent rationalists (like LW webmaster Habryka) have been saying for quite a while that they believe EA has been net-negative for human survival; EleutherAI’s Connor Leahy recently released the strongly EA-critical Compendium, which has been praised by several leading longtermists, notably FLI’s Max Tegmark; and Anthropic’s recent antics, like calling for recursive self-improvement to beat China, are definitely souring people in those spaces who were previously unconvinced about OP. From personal conversations, I can tell you PauseAI in particular is increasingly hostile to EA leadership.
I don’t think Eliezer Yudkowsky and the rationalists should be throwing stones here. Sam Altman himself claimed that “eliezer has IMO done more to accelerate AGI than anyone else”. They’ve spent decades trying to convince people of the miraculous powers of AI, and now are acting shocked that this motivated people to try to build it.
Well, they’re not claiming the moral high ground; they can consistently say that EA has been net negative and that they themselves have also been net negative for human survival.
Yeah, IIRC EY does consider himself to have been net-negative overall so far, hence the whole “death with dignity” spiral. But I don’t think one can claim his role has been more negative than OPP/GV’s decision to bankroll OpenAI and Anthropic (at least setting aside the indirect consequences of his having influenced the development of EA in the first place).