The main thing that I doubt is that Sam knew at the time that he was gifting the board to doomers. Ilya was a loyalist and non-doomer when appointed. Elon was, I guess, some mix of doomer and loyalist at the start. Given how AIS worries generally increased in SV circles, more likely than not some of D’Angelo, Hoffman, and Hurd moved toward the “doomer” pole over time.
Ilya has always been a doomer AFAICT; he was just loyal to Altman personally, who recruited him to OA. (I can tell you that when I spent a few hours chatting with him in… 2017 or something? a very long time ago, anyway—I don’t remember him dismissing the dangers or being pollyannaish.) ‘Superalignment’ didn’t come out of nowhere, nor did Ilya being put in charge of it surprise anyone. Elon was… not loyal to Altman, but appeared content to largely leave oversight of OA to Altman until he had one of his characteristic mood changes, got frustrated, and tried to take over. In any case, he surely counted as a doomer by the time Zilis was added to the board as his proxy. D’Angelo likewise seems, in his few public quotes, to have been consistently concerned about the danger.
A lot of people have indeed moved towards the ‘doomer’ pole, but much of that has been about timelines: AI doom in 2060 looks and feels a lot different from AI doom in 2027.
Hmm, OK. Back when I met Ilya, around 2018, he was radiating excitement that his next idea would create AGI, and didn’t seem sensitive to safety worries. I also thought it was “common knowledge” that his interest in safety increased substantially between 2018 and 2022, and that’s why I was unsurprised to see him in charge of superalignment.
Re Elon-Zilis, all I’m saying is that it looked to Sam like the seat would belong to someone loyal to him at the time the seat was created.
You may well be right about D’Angelo and the others.
Hm, maybe it was common knowledge in some areas? I just always took him to be concerned. There’s not really any contradiction between being excited about your short-term work and worried about long-term risks. Fooling yourself about your current idea is an important skill for a researcher. (You ever hear the joke about Geoff Hinton? He suddenly solves how the brain works, at long last, and euphorically tells his daughter; she replies: “Oh Dad—not again!”)
Just judging from his Twitter feed, I got the weak impression that D’Angelo is somewhat enthusiastic about AI; I didn’t catch any concerns about existential safety.