Going to say something seemingly unpopular in a tone that usually gets downvoted, but I think it needs to be said anyway:
This stat is why I still have hope: 100,000 capabilities researchers vs 300 alignment researchers.
Humanity has not tried to solve alignment yet.
There’s no cavalry coming—we are the cavalry.
I am sympathetic to fears of a new alignment researcher being net negative, and I think plausibly the entire field has, so far, been net negative, but guys, there are 100,000 capabilities researchers now! One more is a drop in the bucket.
If you’re still on the sidelines, go post that idea that’s been gathering dust in your Google Docs for the last six months. Go fill out that fundraising application.
We’ve had enough fire alarms. It’s time to act.
My gut reaction when reading this comment:
This comment looks like it’s written in an attempt to “be inspirational”, not an attempt to share a useful insight or ask a question.
I hope this doesn’t sound unkind. I recognise that there can be value in being inspirational, but it’s not what I’m looking for when I’m reading these comments.
Thanks for the feedback. I tried to do both. I think the doomerism levels are so intense right now that they need to be balanced out with a bit of inspiration.
I worry that the doomer levels are so high that EAs will be frozen into inaction and non-EAs will take over from here. This is the default outcome, I think.
On one hand, as I got at in this comment, I’m more ambivalent than you about whether it’d be worse for non-EAs to take more control over the trajectory of AI alignment.
On the other hand, one reason I’m ambivalent about effective altruists (or rationalists) retaining that level of control is that I’m afraid doomerism may become an endemic or terminal disease for the EA community. The field of AI alignment might be refreshed if many of the effective altruists currently staffing it were replaced. So, thank you for pointing that out too. I expressed a similar sentiment in this comment, though I was more specific because I felt it was important to explain just how bad the doomerism has been getting.
Others who’ve tried to get across the same point [Leopold is] making have, instead of explaining their disagreements, generally alleged that almost everyone else in the entire field of AI alignment is literally insane.
That’s not helpful, for a few reasons. Such a claim is probably not true. It’d be hard to make a more intellectually lazy or unconvincing argument. And it amounts to a bold, senseless attempt to, arguably, dehumanize hundreds of one’s peers.
This isn’t just a negligible error from somebody recognized as part of a hyperbolic fringe in the AI safety/alignment community. It’s direly counterproductive when it comes from leading rationalists, like Eliezer Yudkowsky and Oliver Habryka, who wield great influence in their own right and are taken very seriously by hundreds of other people.
Your first comment at the top was better; it seems you were inspired. What in the entire universe of possibilities could be wrong with being inspirational?...the entire EA movement is hoping to inspire people to give and act toward the betterment of humankind...before any good idea can be implemented, there must be something to inspire a person to stand up and act. Wow, your mindset is so off of human reality. Is this an issue of posts vs. comments?...who cares if someone adds original material in comments, it’s a conversation. Humans are not data in a test tube...the human spirit is another way of saying “inspired human”...when inspired humans think, good things can happen. It is the evil of banality that is so frightening. Uninspired intellect is probably what will kill us all if it’s digital.
Sanjay, I just realized you were the top comment, and now I notice that I feel confused, because your comment directly inspired me to express my views in a tone that was more opinionated and less hedgy.
I appreciate (no, I *love*) EA’s truth-seeking culture, but I wish it were more OK to add a bit of Gryffindor to balance out the Ravenclaw.
It’s ambiguous who this “we” is. It obscures the fact that there are overlapping and distinct communities within AI alignment as an umbrella movement. There have also been increasing concerns that a couple of the communities serving as nodes in that network, namely rationality and effective altruism, are becoming more trouble than they’re worth. This has been coming from effective altruists and rationalists themselves.
I’m aware of, and have been part of, increasingly frequent conversations about how AI safety and alignment, as a movement/community/whatever, shouldn’t just “divorce” EA or rationality, but can and should become more autonomous and independent from them.
What that implies for ‘the cavalry’ is, first, that much of the standing cavalry is more trouble than it’s worth. It might be prudent to discard and dismiss much of the existing cavalry.
Second, the AI safety/alignment community gaining more control over its own trajectory may provide an opportunity to rebuild the cavalry for the better. AI alignment as a field could become more attractive to those who, at this point, understandably find it off-putting because of its association with EA and the rationality community.
AI safety and AI alignment, freed of the baggage of EA and rationality, could bring fresh ranks into the cavalry to replace the standing ranks still causing so many problems.