I do have a lot of respect for the Open Phil team; I just think they are making some critical mistakes, which is fully compatible with respecting them.
The most straightforward reason is that Open Phil seemingly does not want to fund any AI policy org that explicitly prioritizes x-risk reduction, nor any org that works against AI companies, whereas I want to fund orgs that do both of those things. So, even putting neglectedness aside, Open Phil funding an AI policy org is evidence that the org is following a strategy I don't expect to be effective. That said, this consideration ended up not really being a factor in my decision-making, because it's screened off by looking at what orgs are actually doing (I don't need heuristics for interpreting orgs' activities if I can look at the activities themselves).
Sorry, my intention wasn't to imply that you didn't respect them; I agree that it is consistent to both respect and disagree.
Re the rest of your comment, my understanding of what you meant is as follows:
You think the most effective strategies for reducing AI x-risk are explicitly blacklisted by Open Phil. Therefore, Open Phil funding an org is strong evidence that it doesn't follow those strategies. This doesn't necessarily mean the org's work is neutral or negative in impact, but it's evidence against it being one of your top picks. Further, this is a heuristic rather than a hard rule, and you took the time for a shallow investigation into some orgs funded by Open Phil anyway, at which point the heuristic is screened off and can be ignored.
Is this a correct summary?
It's an approximately correct summary, except that it overstates my confidence. AFAICT Open Phil hasn't explicitly blacklisted any x-risk strategies, and I would take Open Phil funding as weak-to-moderate evidence, not strong evidence.
Thanks for clarifying! I somewhat disagree with your premises, but agree this is a reasonable position given them.