OP has sent very mixed messages around AI safety. They’ve provided surprisingly little funding / support for technical AI safety in the last few years (perhaps one full-time grantmaker?), but they seem to have provided more support for AI safety community building / recruiting.
Yeah, I find myself very confused by this state of affairs. Hundreds of people are being funneled through the AI safety community-building pipeline, but there’s little funding for them to work on things once they come out the other side.[1]
As well as being suboptimal from the viewpoint of preventing existential catastrophe, this also just seems kind of common-sense unethical. Like, all these people (most of whom are bright-eyed youngsters) are being told that they can contribute, if only they skill up, and then they later find out that that’s not the case.
These community-building graduates can, of course, try going the non-philanthropic route—i.e., apply to AGI companies or government institutes. But there are major gaps in what those organizations are working on, in my view, and they also can’t absorb so many people.
Yeah, I think this setup has been incredibly frustrating downstream. I’d hope that people from OP with relevant knowledge could publicly reflect on this, but my quick impression is that some of the following factors were at play:
1. OP has had major difficulties/limitations around hiring in the last 5+ years. Some of this is lack of attention, some is that there aren’t great candidates, some is a lack of ability. This affected some cause areas more than others. For whatever reason, they seemed to have more success hiring (and retaining talent) for community building than for technical AI safety.
2. I think there’s been some uncertainty / disagreement about how important / valuable current technical AI safety organizations are to fund. For example, I imagine that if this were a major priority for those in charge of OP, more could have been done.
3. OP management seems to be a bit in flux now. Lost Holden recently, hiring a new head of GCR, etc.
4. I think OP isn’t very transparent about explaining its limitations/challenges publicly.
5. I would flag that there are spots at Anthropic and DeepMind that we don’t need to fund, and that are still good fits for talent.
6. I think some of the Paul Christiano-connected orgs were considered a conflict of interest, given that Ajeya Cotra was the main grantmaker.
7. Given all of this, I think it would be really nice if people could at least provide warnings about this, i.e., make sure that people entering the field are strongly warned that the job market is very limited. But I’m not sure who feels responsible / well placed to do this.