As a rule of thumb, I don’t want to fund anything Open Philanthropy has funded. Not because Open Phil funding something means it doesn’t have room for more funding, but because I believe (credence: 80%) that Open Philanthropy has bad judgment on AI policy (as explained in this comment by Oliver Habryka and the reply by Akash; I have similar beliefs, but they explain it better than I do).
This seems like a bizarre position to me. Sure, maybe you disagree with them (I personally have a fair amount of respect for the OpenPhil team and their judgement, but whatever, I can see valid reasons to criticise), but to consider their judgement not just irrelevant, but actively such strong negative evidence as to make an org not worth donating to, seems kinda wild. Why do you believe this? Reversed stupidity is not intelligence. Is the implicit model that all of x-risk-focused AI policy is pushing on some 1D spectrum, such that EVERY org in the two camps is actively working against the other camp? That doesn’t seem true to me.
I would have a lot more sympathy with an argument that, e.g., other kinds of policy work are comparatively neglected, so OpenPhil funding it is a sign that it’s less neglected.
I do have a lot of respect for the Open Phil team; I just think they are making some critical mistakes, which is fully compatible with respecting them.
The most straightforward reason is that Open Phil seemingly does not want to fund any AI policy org that explicitly prioritizes x-risk reduction, and doesn’t want to fund any org that works against AI companies, and I want to fund orgs that do both of those things. So, even putting neglectedness aside, Open Phil funding an AI policy org is evidence that the org is following a strategy that I don’t expect to be effective. That said, this consideration ended up not really being a factor in my decision-making because it’s screened off by looking at what orgs are actually doing (I don’t need to use heuristics for interpreting orgs’ activities if I look at their actual activities).
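(To spell out the “screened off” point a little, as an illustrative formalization rather than anything precise: write E for “the org follows a strategy I expect to be effective”, A for the org’s observed activities, and F for “Open Phil funds the org”. The claim is roughly the conditional independence

P(E | A, F) = P(E | A),

i.e. once I’ve conditioned on what the org actually does, learning that Open Phil funds it adds no further information, so the heuristic drops out of the decision.)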
Sorry, my intention wasn’t to imply that you didn’t respect them; I agree that it is consistent to both respect and disagree.
Re the rest of your comment, my understanding of what you meant is as follows:
You think the most effective strategies for reducing AI x-risk are explicitly blacklisted by OpenPhil. Therefore, OpenPhil funding an org is strong evidence that it doesn’t follow those strategies. This doesn’t necessarily mean that the org’s work is neutral or negative impact, but it is evidence against it being one of your top things. Further, this is a heuristic rather than a confident rule, and you made the time for a shallow investigation into some orgs funded by OpenPhil anyway, at which point the heuristic is screened off and can be ignored.
Is this a correct summary?
It’s an approximately correct summary, except that it overstates my confidence. AFAICT Open Phil hasn’t explicitly blacklisted any x-risk strategies, and I would take Open Phil funding as weak to moderate evidence, not strong evidence.
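(In odds terms, purely as an illustrative gloss and not a calculation I’ve actually done: seeing Open Phil funding F should multiply the odds that an org follows a strategy I don’t expect to be effective by the likelihood ratio

P(F | strategy I don’t expect to be effective) / P(F | strategy I do expect to be effective),

and I’m saying I take that ratio to be modestly above 1, not enormous.)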
Thanks for clarifying! I somewhat disagree with your premises, but agree this is a reasonable position given them.
“Reversed stupidity is not intelligence” is a surprisingly insightful wee concept that I hadn’t heard of before. Had a look at the stuff on LessWrong about it and found it helpful, thanks!