On LessWrong, jacquesthibs asks:

If someone wants to become a grantmaker (perhaps with an AI risk focus) for an organization like LTFF, what do you think they should be doing to increase their odds of success?

On LessWrong, Lauro said:

IMO a good candidate is anything that is object-level useful for X-risk mitigation. E.g. technical alignment work, AI governance / policy work, biosecurity, etc.

To add to that, I'd expect practice with communication and reasoning transparency, and a broad (not just deep) understanding of other work in your cause area, to be quite helpful. Also, to the extent that this is trainable, it's probably good to model yourself as training to become a high-integrity and reasonably uncompromising person now, since integrity failures "on the job" are very costly. My thoughts on who could make a good LTFF fund chair might also be relevant.