Thanks for the feedback. I tried to do both. I think the doomerism levels are so intense right now that they need to be balanced out with a bit of inspiration.
I worry that the doomerism levels are so high that EAs will be frozen into inaction and non-EAs will take over from here. This is the default outcome, I think.
On one hand, as I got at in this comment, I'm more ambivalent than you about whether it'd be worse for non-EAs to take more control over the trajectory of AI alignment.
On the other hand, one reason why I'm ambivalent about effective altruists (or rationalists) retaining that level of control is that I'm afraid the doomerism may become an endemic or terminal disease for the EA community. AI alignment might be refreshed if many of the effective altruists currently staffing the field were replaced. So, thank you for pointing that out too. I expressed a similar sentiment in this comment, though I was more specific because I felt it was important to explain just how bad the doomerism has been getting.
Others who’ve tried to get across the same point [Leopold is] making have, instead of explaining their disagreements, generally alleged that almost everyone else in the entire field of AI alignment is literally insane.
That’s not helpful for a few reasons. Such a claim is probably not true. It’d be hard to make a more intellectually lazy or unconvincing argument. And it amounts to a bold, senseless attempt to, arguably, dehumanize hundreds of one’s peers.
This isn’t just a negligible error from somebody recognized as part of a hyperbolic fringe of the AI safety/alignment community. It’s direly counterproductive when it comes from leading rationalists, like Eliezer Yudkowsky and Oliver Habryka, who wield great influence in their own right and are taken very seriously by hundreds of other people.