I see it a bit differently.
> For example, it doesn’t seem like your project is at serious risk of defunding if you’re 20-30% more explicit about the risks you care about or what personally motivates you to do this work.
I suspect that most nonprofit leaders feel a great deal of funding insecurity. There are always neat new initiatives that a group would love to expand into, and managers hate the risk of potentially needing to fire employees. They’re often thinking about funding on the margins: either they are nervous about laying off a few employees, or they are hoping to expand into new areas.
> There are probably only about 200 people on Earth with the context x competence for OP to enthusiastically fund for leading on this work
I think there’s more competition for this funding than that suggests. OP covers a lot of ground. I could easily see them just allocating a bit more money to human welfare later on, for example.
> My wish here is that specific people running orgs and projects were made of tougher stuff re following funding incentives.
I think that the issue of incentives runs deeper than this. It’s not just a matter of leaders straightforwardly understanding the incentives and acting accordingly. It’s also that people start believing things that are convenient given those incentives, that leaders get chosen who seem like good fits for the funding situation, and so on. The people who really believe in other goals often get frustrated and leave.
I’d guess that the leaders of these orgs feel more aligned with the OP agenda than with the agenda you outline, for instance.
Agree on most of this too. I wrote too categorically about the risk of “defunding.” You will be on a shorter leash if you take your 20-30% independent-view discount. I was mostly saying that funding wouldn’t go to zero and crash your org.
I further agree on cognitive dissonance + selection effects.
Maybe the main disagreement is over whether OP is ~a fixed monolith. I know people there. They’re quite EA in my accounting, much like many of the leaders at grantees. There’s room in these joints. I think current trends are driven by “deference to the vibe” on both sides of the grant-making arrangement: everyone perceives plain speaking about values and motivations as cringe and counterproductive, and it thereby becomes the reality.
I’m sure org leaders and I have disagreements along these lines, but I think they’d also concede they’re doing a substantial amount of deliberate de-emphasis of what they regard as their terminal goals in service of something more instrumental. They probably do disagree with me that undoing this is best all-things-considered, but I wrote the post to convince them!
For what it’s worth, I find some of what’s said in this thread quite surprising.
Reading your post, I saw you describing two dynamics:
1. Principles-first EA initiatives are being replaced by AI safety initiatives.
2. AI safety initiatives founded by EAs, which one would naively expect to remain x-risk focused, are becoming safety-washed (e.g., your BlueDot example).
I understood @Ozzie’s first comment on funding to be about dynamic 1. But then your subsequent discussion with Ozzie seems to also point to funding as explaining dynamic 2.[1]
While Open Phil has opinions within AI safety that have alienated some EAs—e.g., heavy emphasis on pure ML work[2]—my impression was that they are very much motivated by ‘real,’ x-risk-focused AI safety concerns, rather than things like discrimination and copyright infringement. But it sounds like you might actually think that OP-funded AI safety orgs are feeling pressure from OP to be less about x-risk? If so, this is a major update for me, and one that fills me with pessimism.
For example, you say, “[OP-funded orgs] bow to incentives to be the very-most-shining star by OP’s standard, so they can scale up and get more funding. I would just make the trade off the other way: be smaller and more focused on things that matter.”
[2] At the expense of, e.g., more philosophical approaches.
I think OP and grantees are synced up on x-risk (or at least GCRs) being the terminal goal. My issue is that their instrumental goals seem to involve a lot of de-emphasizing that focus to expand reach/influence/status/number of allies, in ways that I worry lend themselves to mission/value drift.
Yeah, I broadly agree with Mjreard here.
The BlueDot example seems different from what I was pointing at.
I would flag that the lack of EA funding power sometimes makes x-risk less of a focus for orgs.
Like, some groups might not trust that OP/SFF will continue to support them, and then do whatever they think they need to in order to attract other money, and this often is at odds with x-risk prioritization.
(I clearly see this as an issue with the broader world, not with OP/SFF.)