One way to collect some “answers” to this is to observe the behavior of organizations that do a lot of work on this problem. For example:
Open Philanthropy still makes grants in several different longtermist areas
80,000 Hours lists biorisk as a priority on the same “level” as AI alignment
FLI is using their massive Vitalik Buterin donation to fund PhD and postdoctoral fellowships, which seems to imply that “drop everything” doesn’t mean e.g. “drop out of school” or “start thinking exclusively about alignment even if it tanks your grades”
Biorisk needs people who are good with numbers and computers. EA community building needs people who are good with computers (there’s a lot of good software to be built, websites to be designed, etc.)
To keep the scope of my analysis limited, I’m not even going to mention the dozens of other priorities that someone with the right skills + interests might be better off pursuing. But there at least doesn’t seem to be a consensus that “crunch time” is here, even among people who think about the problem quite a lot.
That said, I would never turn away anyone who wants to work on alignment, and I think anyone with related skills should strongly consider it as an area to explore + take seriously. That’s the pitch I’d be making if I were in college, alongside messages like (paraphrased a lot):
“This seems like a good way to end up working on something historically significant, in a way that probably won’t happen if you join Facebook instead.”
“If you want to do this, there’s a good chance you’ll have unbelievable access to mentorship and support from top people in the field, which… probably won’t happen if you join Facebook instead.” (As a non-programmer, I don’t know whether this is true, but I’d guess that it is.)
Of course, some organizations and people that do a lot of work on this problem would say that it is, in fact, crunch time. If someone decides to explore the area, “it’s crunch time” is a hypothesis they should consider. I just don’t think it should be their default assumption, or your default pitch.