I haven’t looked much into this, but I’m essentially wondering whether simple, uniform promotion of EA Funds would undermine the capacity of community members in, say, the upper quartile of rationality/commitment to build robust idea-sharing and collaboration networks.
In other words, would it decrease their collective intelligence when it comes to solving cause-selection problems? I’m really interested in practical insights on improving the collective intelligence of a community (please send me links: remmeltellenis[at]gmail.dot.com).
My earlier comment seems related to this:
Put simply, I wonder whether going for a) centralisation would make the ‘system’ fragile, because EA donors would be less inclined to build up their awareness of big risks. For individual donors who approach cause-selection with rigour and epistemic humility, I can see b) being antifragile. But for those approaching it amateurishly/sloppily, it makes sense to me that they’re much better off handing over their money and employing their skills elsewhere.
(Btw, I admire your openness to improving analysis here.)