I’m a little surprised by your perspective. My impression is that Open Phil, EA Infrastructure, FTX, Future Flourishing, etc. are all eagerly funding AI safety work. Who else do you imagine funding this space that isn’t already doing so?
Also, a bunch of EA community organizers are pushing AI risk substantially harder as a cause area now than they did five years ago (e.g. 80k, many university groups).
If you’re worried about short timelines, shouldn’t the push be to transition people from meta work on community building to object-level work directly on alignment?
Thanks for sharing your thoughts! Let me know if I misunderstood something.
> My impression is that Open Phil, EA Infrastructure, FTX, Future Flourishing, etc. are all eagerly funding AI safety work. Who else do you imagine funding this space that isn’t already doing so?
There’s a reason why companies often have multiple brands instead of one: it lets you reach more people. If you created an AI Safety Movement Building Fund and there was literally no difference between it and the EA Infrastructure Fund (same people evaluating and everything), you would still get more applications, because lots of people make snap judgments based on a name. (Though in retrospect I’m feeling much less confident about this idea, because movement builders are disproportionately likely to already know what funding opportunities are available.)
> If you’re worried about short timelines, shouldn’t the push be to transition people from meta work on community building to object-level work directly on alignment?
If I were more confident in short timelines, then I would be more supportive of this. I would say I’m more worried about short timelines (25-70% chance) than confident in them. Another reason to be wary of this strategy is that most of our survival probability may lie in worlds with longer timelines.