As much as I admire the care that has been put into EA Funds (e.g. the ‘Why might you choose not to donate to this fund?’ heading for each fund), this sentence came across as ‘too easy’ to me. To be honest, it made me wonder whether the analysis was self-critical enough (I admit to having scanned it), as I’d be surprised if the trusted people you spoke with couldn’t think of any significant risks. I also don’t think a ‘largely positive’ reception is a good indicator.
I agree. This was a mistake on my part. I was implicitly thinking about some of the recent feedback I’d read on Facebook and was not thinking about responses to the initial launch post.
I agree that it’s not fair to say that the criticisms have been predominantly about website copy. I’ve changed the relevant section in the post to include links to some of the concerns we received in the launch post.
I’d like to develop some content for the EA Funds website that goes into potential harms of EA Funds that are separate from the question of whether EA Funds is the best option right now for individual donors. Do you have a sense of what concerns seem most compelling or that you’d particularly like to see covered?
I forgot to add a disclosure here (to reveal a potential bias):
I’m working on the EA Community Safety Net project with other committed people, which just started on 31 March. We’re now shifting direction from focusing on peer insurance against income loss to building a broader peer-funding platform in a Slack Team that also includes project funding and loans.
It will likely fail to become a thriving platform that hosts multiple financial instruments given the complexities involved and the past project failures I’ve seen on .impact. Having said that, we’re aiming high and I’m guessing there’s a 20% chance that it will succeed.
I’d especially be interested in hearing people’s thoughts on how to structure the application form (i.e. the criteria for the project framework) to reduce Unilateralist’s Curse scenarios as much as possible (and other stupid things we could cause as entrepreneurial creators moving away from the status quo).
Is there actually a list of ‘bad strategies naive EAs could think of’ where there’s a consensus amongst researchers that one party’s decision to pursue one of them would create systemic damage on an expected-value basis? A short checklist based on surveys (that I could go through before making an important decision) would be really useful to me.
Come to think of it: I’ll start with a quick Facebook poll in the general EA group. That sounds useful for compiling an initial list.
Any other opinions on preventing risks here are really welcome. I’m painfully aware of my ignorance here.
I haven’t looked much into this, but basically I’m wondering if simple, uniform promotion of EA Funds would undermine the capacity of community members in, say, the upper quartile of rationality/commitment to build robust idea-sharing and collaboration networks.
In other words, whether it would decrease their collective intelligence pertaining to solving cause-selection problems. I’m really interested in getting practical insights on improving the collective intelligence of a community (please send me links: remmeltellenis[at]gmail.dot.com).
My earlier comment seems related to this:
Put simply, I wonder if going for a) centralisation would make the ‘system’ fragile because EA donors would be less inclined to build up their awareness of big risks. For those individual donors who’d approach cause selection with rigour and epistemic humility, I can see b) being antifragile. But for those approaching it amateurishly/sloppily, it makes sense to me that they’re much better off handing over their money and employing their skills elsewhere.
(Btw, I admire your openness to improving analysis here.)