I have a pretty strong view that I don’t fully trust any single person’s judgment (including my own), and that aggregating judgments (through discussion and voting) has been super helpful for the EAIF’s, Animal Welfare Fund’s (AWF’s), and especially the Long-Term Future Fund’s (LTFF’s) overall judgment ability in the past. E.g., I can recall a bunch of (in my view) net-negative grants that didn’t end up being made thanks to this sort of aggregation, and also some that were made despite my reservations, where it ultimately turned out that I was wrong.
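To illustrate the underlying intuition, here is a toy simulation (a minimal sketch with made-up numbers, not a model of how any of the funds actually vote): if each grantmaker’s assessment of a grant is independently noisy, the average of several assessments tends to land closer to the truth than any single assessment does.

```python
# Toy illustration only: hypothetical "true value" and noise levels, chosen
# for demonstration. Not a description of the EA Funds' actual process.
import random
import statistics

random.seed(0)
TRUE_VALUE = 1.0      # hypothetical true value of a grant
NOISE_SD = 1.0        # hypothetical spread of an individual grantmaker's error
N_GRANTMAKERS = 5
N_TRIALS = 10_000

solo_errors, pooled_errors = [], []
for _ in range(N_TRIALS):
    scores = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_GRANTMAKERS)]
    # Error if we rely on one person's judgment:
    solo_errors.append(abs(scores[0] - TRUE_VALUE))
    # Error if we aggregate (here: simple averaging) across several people:
    pooled_errors.append(abs(statistics.mean(scores) - TRUE_VALUE))

print("mean error, single grantmaker:", round(statistics.mean(solo_errors), 3))
print("mean error, average of five:  ", round(statistics.mean(pooled_errors), 3))
```

In this toy setup the aggregated estimate has roughly half the average error of a single estimate; real grant evaluations are of course correlated and messier, which is part of why discussion (not just averaging votes) matters.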
I have also heard through the grapevine that previous experiments in this direction didn’t go very well (mostly in that the ‘potential benefits’ you listed didn’t really materialize; I don’t think anything bad happened). Edit: I don’t give a lot of weight to this, though; I think perhaps there’s a model that works better than what has been tried in the past.
I also think that more discussion between grantmakers seems useful for improving judgment over the longer term. I think the LTFF has good judgment partly because it has discussed a lot of disagreements that generalize to other cases, exchanged a lot of models/gears, etc.
For these reasons, I’m fairly skeptical of any approach that gives a single person full discretion over some funding, and would prefer a process with more engagement with a broader range of opinions from other grantmakers. (Edit: Though others disagree somewhat, and will hopefully share their views as well.)
Our current solution is to appoint guest managers instead, as elaborated on here: https://forum.effectivealtruism.org/posts/ek5ZctFxwh4QFigN7/ea-funds-has-appointed-new-fund-managers
Appointing guest managers takes quite a lot of time, so I’m not sure how many we will have in the future.
Another idea that I think would be interesting is to implement your suggestion with teams of potential grantmakers (rather than individuals), like the Oxford Prioritisation Project. Again, it would take some capacity to oversee, but it could be quite promising. If someone applied for a grant for a project like this, I’d be quite interested in funding it.