As someone who is not an AI safety researcher, I’ve always had trouble knowing where to donate if I wanted to reduce x-risk specifically from AI. I think I would have donated a considerably larger share of my donations to AI safety over the past 10 years if something like an AI Safety Metacharity had existed. The Nuclear Threat Initiative tends to be my go-to for x-risk donations, but lately I’m more worried about AI specifically. I’m open to being pitched on where to give for AI safety.
Regarding the model, I think it’s good to flesh things out like this, so thank you for undertaking the exercise. I had a bit of a play with the model, and one thing that stood out to me is that the impact of an AI safety professional at different percentiles doesn’t seem to depend on the ideal size of the field, which doesn’t seem right (I may be missing something). Shouldn’t the marginal impact of one AI safety professional be lower if it turned out the ideal size of the AI safety workforce were 10 million rather than 100,000? A toy sketch of my intuition is below.
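To make that intuition concrete, here’s a minimal sketch. It assumes a saturating-returns curve where the field’s total impact approaches a fixed ceiling as the workforce approaches its ideal size; that functional form and the numbers (`I_MAX`, `CURRENT_SIZE`) are my own illustrative assumptions, not what your model actually uses.

```python
# Toy sketch, not the actual model: assume total impact saturates as the
# workforce n approaches its ideal size N:
#   impact(n) = I_MAX * (1 - exp(-n / N))
# The marginal impact of one extra professional is then roughly
#   (I_MAX / N) * exp(-n / N), which shrinks as the ideal size N grows.

import math

I_MAX = 1.0           # total achievable impact, normalised (assumed)
CURRENT_SIZE = 1_000  # rough current AI safety workforce (illustrative)

def marginal_impact(current_size: int, ideal_size: int) -> float:
    """Approximate marginal impact of one additional professional."""
    return (I_MAX / ideal_size) * math.exp(-current_size / ideal_size)

for ideal in (100_000, 10_000_000):
    print(f"ideal size {ideal:>10,}: "
          f"marginal impact ≈ {marginal_impact(CURRENT_SIZE, ideal):.2e}")
```

Under these assumptions, the marginal professional comes out roughly 100x less impactful when the ideal workforce is 10 million rather than 100,000, which is why I’d expect the percentile-impact figures to move with the ideal size.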
Luke, you’ve been so strong at the helm of GWWC for so long that I’m often guilty of thinking of you and GWWC as synonymous (that’s a compliment, I swear!). Well done on the amazing work you’ve done, and enjoy a well-deserved break. I can’t wait to see what you do next.