If you give me $250,000, I will work on AI alignment independently for a year. (Serious offer!) I’m not Terence Tao or Von Neumann, but I did graduate with a master’s degree in CS/ML from Oxford, so I have at least some knowledge of ML. I think getting people like me to work on it is more realistic and potentially higher reward, because you can get people who will actually work on it, rather than banking on the slim chance of getting Terence Tao on board, and you can have a greater number of hands to complete the tedious work and experiments that need to be done.
I think for that money you’re going to need to prove that you’re worth it—can you link to any of your work? Also, as per my note at the top of the OP, I think there basically isn’t time to spin up an alignment career now, so unless you are a genius or already have some novel insights into the problem, I’m not very hopeful that your work could make a difference at this late stage. I’m more excited about people pivoting to work on getting a global AGI moratorium in place ASAP. Once we have that, then we can focus on a “Manhattan Project” for alignment.