This level of support for centralization and deferral is really unusual. I actually don’t know of any community besides EA that endorses it. I’m aware it’s a common position within effective altruism, but the arguments for it haven’t been worked out in detail anywhere I know of.
“Keep in mind that many things you might want to fund are in scope of an existing fund, including even small grants for things like laptops. You can just recommend they apply to these funds. If they don’t get any money, I’d guess there were better options you would have missed but should have funded first. You may also be unaware of ways it would backfire, and the reason something doesn’t get funded is because others judge it to be net negative.”
I genuinely don’t think there is any evidence (besides some theory-crafting around the unilateralist’s curse) that this level of second-guessing yourself and deferring is effective. Please keep in mind the history of the EA Funds: several funds basically never disbursed their money, and the fund managers explicitly said they didn’t have time. Of course things can improve, but this level of deferral is really extreme given the community’s history.
Suffice to say, I don’t think further centralizing resources is good, nor is making things more bureaucratic. I’m also not sure there is actually very much risk of the ‘unilateralist’s curse’ unless you are being extremely careless, and I trust most EAs to be at least as careful as the leadership. Probably the most dangerous thing you could possibly fund is AI capabilities. Open Phil gave $30M to OpenAI, and the community has been pretty accepting of AI capabilities work. This is way more dangerous than anything I would consider funding!
“Probably the most dangerous thing you could possibly fund is AI capabilities. Open Phil gave $30M to OpenAI, and the community has been pretty accepting of AI capabilities work. This is way more dangerous than anything I would consider funding!”
Ya, I guess I wouldn’t have funded them myself in Open Phil’s position, but I’m probably missing a lot of context. I think they did this to try to influence OpenAI to take safety more seriously, by getting Holden on their board. Pretty expensive for a board seat, though, and lots of potential downside with unrestricted funding. From their grant writeup:
We expect the primary benefits of this grant to stem from our partnership with OpenAI, rather than simply from contributing funding toward OpenAI’s work. While we would also expect general support for OpenAI to be likely beneficial on its own, the case for this grant hinges on the benefits we anticipate from our partnership, particularly the opportunity to help play a role in OpenAI’s approach to safety and governance issues.
FWIW, I trust the judgement of Open Phil in animal welfare and the EA Animal Welfare Fund a lot. See my long comment here.
Luke from Open Phil on net negative interventions in AI safety (maybe AI governance specifically): https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism#6yFEBSgDiAfGHHKTD
If I had to guess, I would predict Luke is more careful than various other EA leaders (mostly because of Luke’s ties to Eliezer). But you can look at the observed behavior of Open Phil/80K/etc., and I don’t think they are behaving as carefully as I would endorse with respect to the most dangerous possible topic (besides maybe gain-of-function research, which EA would not fund). It doesn’t make sense to write leadership a blank check. But it also doesn’t make sense to worry about the ‘unilateralist’s curse’ when deciding whether to buy your friend a laptop!