One issue is that decentralised grant-making could increase the risk that net-negative projects get funded, per the logic of the unilateralist’s curse. That risk probably varies with cause area and type of project.
My hunch is that many people have a bit of an intuitive bias against centralised funding; e.g. because it conjures up images of centralised bureaucracies (cf. the reference to the USSR) or appears elitist. I think that in the end it’s a tricky empirical question and that the hypothesis that relatively centralised funding is indeed best shouldn’t be discarded prematurely.
I should also say that how centralised or coordinated grant-makers are isn’t just a function of how many grant-makers there are, but also of how much they communicate with each other. There might be ways of getting many of the benefits of decentralisation while reducing the risks, e.g. by the right kinds of coordination.
Right, but the unilateralist’s curse is just a pro tanto reason against dispersed funding. It points to false positives (funding projects that shouldn’t get funded), which have to be weighed against the false negatives of centralised funding (not funding projects that should get funded). It’s not obvious a priori which cost is larger.
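To make that trade-off concrete, here is a toy sketch of the curse’s false-positive arithmetic (the error rate and funder counts are purely illustrative assumptions, not estimates):

```python
# Toy model: each of n independent funders mistakes a net-negative
# project for a good one with probability p. The chance that at least
# one of them funds it is 1 - (1 - p)^n, which grows quickly with n.
p = 0.05  # assumed per-funder error rate (illustrative only)
for n in (1, 5, 20):
    print(f"{n} funders: {1 - (1 - p) ** n:.2f}")
# -> 1 funders: 0.05
# -> 5 funders: 0.23
# -> 20 funders: 0.64
```

The false negatives of centralisation (good projects a single funder wrongly rejects) don’t appear in this formula at all, which is why the comparison is ultimately empirical.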
Yes, but it was a consideration not mentioned in the OP, so it seemed worth mentioning.
To be honest, the overall grantmaking ecosystem (including outside EA) is not so centralized that people can’t get funding for possibly net-negative ideas elsewhere, especially if they have already put work in, have a handful of connections, or are working in a “sexy” cause area like AI that even some random UHNWI (ultra-high-net-worth individual) would take an interest in.
Given that, I don’t think keeping grantmaking very centralized reduces risk enough to be worth defending on that metric alone. And frankly, sweeping risky applications under the rug and hoping they disappear because they weren’t funded (by you, that one time) seems like a terrible strategy. I’m not sure that is what is effectively happening, but if it is:
I propose a two-part protocol within the grantmaking ecosystem to reduce downside risk:
1. Overt feedback from grantmakers when they think a project is potentially net-negative.
2. Going a step further, EA could employ someone whose role is to actively dissuade applicants from a risky idea, or to help mitigate a project’s risks if the applicants affirm they will keep trying.
Imagine, as an applicant, receiving an email saying:
“Hello [Your Name],
Thank you for your grant application. We are sorry to bear the bad news that we will not be funding your project. We commend you on the effort you have already put in, but we are concerned that following through may carry serious risks, and we want to strongly encourage you to consider other options.
We have CC’ed [name of unilateralist’s curse expert with domain expertise], a specialist in cases like these who contracts with various foundations. They would be willing to have a call with you about why your idea may be too risky to move forward with. If this email has not already convinced you, we hope you will consider scheduling a call via their [calendly] for more details and ideas, including potential risk mitigation.
We also recommend you apply for 80k coaching [here]. They may be able to point you toward roles that are just as good a fit for you, or better, but without the big downside risk and with community support. You can list us as a recommender on your coaching application.
We hope you do not take this too personally, as this is not an uncommon reason to withhold funding (hopefully evidenced by the resources in place for such cases), and we hope to see you continue putting your skills toward altruistic efforts.
Best,
[Name of Grantmaker]”
Should I write a quick EA Forum post on this two-part idea? (Basically I’d copy-paste this comment and add a couple of paragraphs.) Is there a better idea?
I realize that email will strike some as dramatic, but it wouldn’t have to be sent in every “cursed” case. I’m sure many applications are fairly random ideas. I imagine a grantmaker could tell from the applicants’ resumes and social positioning how likely the founding team is to keep trying to start or perpetuate the project.
I think giving this type of feedback when warranted also reflects well on EA. It makes EA seem less of an ivory tower/billionaire hobby and more of a conversational and collaborative movement.
*************************************
The above is a departure from the point of the post. FWIW, I do think the EA grantmaking ecosystem is so centralized that people with potentially good ideas stemming from a somewhat different framework than that of typical EA grantmakers will struggle to get funding elsewhere. I agree that decentralizing grantmaking to some extent is important, and I have my reasoning here.
tl;dr please write that post
I’m very strongly in favor of this level of transparency. My co-founder Max has been doing some work along those lines in coordination with CEA’s community health team. But if I understand correctly, they’re not that up front about why they’re reaching out. Being more “on the nose” about it, paired with a clear signal of support, would be great, because these people are usually well-meaning and can struggle to parse ambiguous signals. Of course, that’s a question of qualified manpower (arguably our most limited resource), but we shouldn’t let our limited capacity for immediate implementation stand in the way of inching ever closer to our ideal norms.
I completely agree with this, actually. I think concern over the unilateralist’s curse is a great argument in favour of keeping funding central, at least in many areas. I also don’t feel particularly confident that attempts to spread out or democratize funding would actually lead to net-better projects.
But I do think there is a strong argument in favour of experimenting with other types of grantmaking, seeing as we have identified weaknesses in the current form which could potentially be alleviated.
I think the unilateralist’s curse can be avoided if we keep our experiments with other types of grantmaking out of hazardous funding domains.
Actually, a simple (but perhaps not easy) way to reduce the risks of funding bad projects in a decentralized system would be to have a centralized team screen out obviously bad projects. For example, in the case of quadratic funding, prospective projects would first be vetted to filter out clearly bad projects. Then, anyone using the platform would be able to direct matching funds to whichever of the approved projects they like. As an analogy, Impact CoLabs is a decentralized system for matching volunteers to projects, but it has a centralized screening process with somewhat rigorous vetting criteria.
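A minimal sketch of what that could look like, using the textbook quadratic funding formula and ignoring matching-pool normalization (the project names, donation figures, and approved list are all made up):

```python
from math import sqrt

# Centralized screening step: a vetting team maintains the approved
# list, so clearly bad projects never reach the matching round.
APPROVED = {"A", "B"}  # hypothetical output of the screening team

def quadratic_match(contributions):
    """Textbook quadratic funding: a project's matched total is
    (sum of square roots of its contributions)^2, and the matching
    pool pays the difference over what was directly contributed."""
    return sum(sqrt(c) for c in contributions) ** 2 - sum(contributions)

donations = {"A": [100, 100, 100], "B": [300], "C": [300]}
for name, amounts in donations.items():
    if name not in APPROVED:
        continue  # screened out before the decentralized allocation
    print(name, quadratic_match(amounts))
# A gets 600 in matching (three donors), B gets 0 (one donor),
# and C never reaches the crowd at all.
```

The screen only controls eligibility; the crowd still decides the allocation among approved projects, which keeps the decentralization benefit while capping the unilateralist downside.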
(Just saying that I did a lot of the vetting for CoLabs, and I think it would be better if our screening were totally transparent instead of hidden, though I don’t speak for the entire team.)
Yes! Exactly!
If you want a system to counter the unilateralist’s curse, then design a system with the goal of countering the unilateralist’s curse. Don’t rely on an unintended side effect of a coincidental system design.
I don’t think there is a negative bias against centralised funding in the EA network.
I’ve discussed funding with quite a few people, and my experience is that EAs like experts and efficiency, which matches well with centralised funding, at least in theory. I’ve never heard anyone compare it to the USSR or the like before.
Even this post is not against centralised funding. The author is just arguing that any system has blind spots, and that we should have other systems too.
While it’s definitely a potential issue, I don’t think it’s a guaranteed one. For example, in a more distributed grantmaking system, grantmakers could agree not to fund projects where there is consensus about potential harms, while each still funding projects that align with their specific worldview: projects other funders may not be interested in but also don’t believe carry significant downside risks. That structure was part of the initial design intent of the first EA Angel Group (not to be confused with the EA Angel Group that is currently operating).
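Here is a minimal sketch of that kind of harm-consensus rule (the threshold, flag counts, and project names are illustrative assumptions, not the group’s actual mechanism):

```python
def blocked(harm_flags, n_members, threshold=0.5):
    """harm_flags: number of members flagging the project as potentially
    harmful. The project is off-limits for everyone once the flagged
    share reaches the threshold (an illustrative parameter)."""
    return harm_flags / n_members >= threshold

# Funding stays individual: each member backs whatever fits their own
# worldview, but only among projects that survive the harm veto.
harm_flags = {"X": 1, "Y": 4}  # flags received, out of 5 members
fundable = [p for p, flags in harm_flags.items() if not blocked(flags, 5)]
print(fundable)  # ['X'] -- Y is vetoed by consensus, X stays open
```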
Yes, cf. my ending:
I see, just pointing out a specific example for readers! You mention the “hypothesis that relatively centralised funding is indeed best shouldn’t be discarded prematurely.” Do you think it’s concerning that EA hasn’t (to my understanding) tried decentralized funding at any scale?
I haven’t studied EA grant-making in detail so can’t say with any confidence, but if you ask me I’d say I’m not concerned, no.
Isn’t there a very considerable potential opportunity cost in not trying out funding systems that could vastly outperform the current one?
Obviously there is a big opportunity cost to not trying something that could vastly outperform something we currently do—that’s more or less true by definition. But the question is whether we could (or rather—whether there is a decent chance that we would) see such a vast outperformance.
There’s evidence to suggest that decentralized decision-making can outperform centralized decision-making, for example with prediction markets and crowdsourcing. I think it’s problematic in general to assume that centralized thinking and institutions are better than decentralized ones, especially if that reasoning is based on the status quo. I was asking this series of questions because, by describing centralized funding as a “hypothesis,” I thought you would support testing other hypotheses by default.
I don’t think there’s evidence that either centralised or decentralised decision-making is in general better than the other. It has to be decided on a case-by-case basis.
I think this discussion is too abstract and that to determine whether EA grant-making should be more decentralised one needs to get into way more empirical detail. I just wanted to raise a consideration the OP didn’t mention in my top-level comment.
I agree! I was trying to highlight that, because we’re not sure whether centralized funding is better, it would be a high priority to test other mechanisms, especially if there’s reason to believe they could produce significantly different outcomes.
One idea I have:
Instead of increasing the number of grantmakers, which would increase the number of independent altruistic agents and thus the risks from the unilateralist’s curse, we could work on ways for our existing grantmakers to have different blind spots. The simplest approach would be to recruit grantmakers from different countries, academic backgrounds, etc.
That being said, I am still in favour of a greater number of grantmakers, but in areas unrelated to AI safety and biosecurity, where the risks from the unilateralist’s curse are much smaller: global health, development, farmed animal welfare, promoting evidence-based policy, promoting liberal democracy, etc.