One way to collect some “answers” to this is to observe the behavior of organizations that do a lot of work on this problem. For example:
Open Philanthropy still makes grants in several different longtermist areas
80,000 Hours lists biorisk as a priority on the same “level” as AI alignment
FLI is using their massive Vitalik Buterin donation to fund PhD and postdoctoral fellowships, which suggests that “drop everything” doesn’t mean e.g. “drop out of school” or “start thinking exclusively about alignment even if it tanks your grades”
Biorisk needs people who are good with numbers and computers. EA community building needs people who are good with computers (there’s a lot of good software to be built, websites to be designed, etc.)
To keep the scope of my analysis limited, I’m not even going to mention the dozens of other priorities that someone with the right skills + interests might be better off pursuing. But at the least, it doesn’t seem to be the consensus that “crunch time” is here, even among people who think about the problem quite a lot.
That said, I would never turn away anyone who wants to work on alignment, and I think anyone with related skills should strongly consider it as an area to explore + take seriously. That’s the pitch I’d be making if I were in college, alongside messages like (paraphrased a lot):
“This seems like a good way to end up working on something historically significant, in a way that probably won’t happen if you join Facebook instead.”
“If you want to do this, there’s a good chance you’ll have unbelievable access to mentorship and support from top people in the field, which… probably won’t happen if you join Facebook instead.” (As a non-programmer, I don’t know whether this is true, but I’d guess that it is.)
Of course, some organizations and people that do a lot of work on this problem would say that it is, in fact, crunch time. If someone decides to explore the area, “it’s crunch time” is a hypothesis they should consider. I just don’t think it should be their default assumption, or your default pitch.