(Importantly, from my understanding, this isn’t OpenAI being evil or anything like that—OpenAI would love to hire more alignment researchers, but there just aren’t many great researchers out there focusing on this problem.)
Thank you for emphasizing that you’re not implying OpenAI is evil just because some of its practices may be inadequate. I feel like I shouldn’t have to thank you for that, but I do anyway, to underscore how backwards the thinking and discourse in the AI safety/alignment community often is when a pall of fear and paranoia is cast over all AI capabilities researchers.
During a recent conversation about AI alignment with a few others, when I casually criticized some particular practice at the AGI labs, I too felt the need to clarify that I didn’t mean to imply Sundar Pichai or Sam Altman is evil because of it.
I don’t even remember now what that criticism was, or whether the conversation was last week or the week before. It hasn’t stuck in my mind because it didn’t feel that important: it was an offhand comment about a relatively minor mistake the AGI labs are making, tangential to the main argument I was trying to make.
Yet it’s telling that I felt the need to clarify, even in a private conversation, that I wasn’t implying OpenAI or DeepMind is evil. It’s telling that you felt the need to do the same in this post. That is a sign of a serious problem in the mindset of at least a minority of the AI safety/alignment community.
Another thing that stands out about this post is that you’ve made the effort to explain why you consider the various approaches to alignment inadequate. That distinguishes it from other posts like it from the last year. Others who’ve tried to get across the same point you’re making, instead of explaining their disagreements, have generally alleged that almost everyone else in the entire field of AI alignment is literally insane.
That’s not helpful, for a few reasons: the claim is almost certainly false; it would be hard to make a more intellectually lazy or unconvincing argument; and it amounts to a bold, senseless attempt to, arguably, dehumanize hundreds of one’s peers.
This isn’t just a negligible error from somebody recognized as part of a hyperbolic fringe of the AI safety/alignment community. It’s direly counterproductive when it comes from leading rationalists like Eliezer Yudkowsky and Oliver Habryka, who wield great influence in their own right and are taken very seriously by hundreds of other people. Thank you for writing this post as a corrective to the kind of mistake that many of your close allies have been making too.