A particular risk here is that coordination is one of the costliest things to fail at.
I’m happy to encourage new EAs to tackle a random research project, or to attempt the sort of charity entrepreneurship that, well, Charity Entrepreneurship seems to encourage.
I’m much more cautious about encouraging people to build infrastructure for the EA community when the project only works if the infrastructure is both high quality and everyone gets on board with it at the same time. People seem too prone to focus on the second part.
Every time someone tries to coordinate around a new piece of infrastructure and the project flops, it makes people less enthusiastic about trying the next piece of coordination infrastructure. (I think there’s a variation on this for hierarchical leadership.)
But I’m fairly excited about things like AI Safety Camp, i.e., building new infrastructure hubs that existing infrastructure doesn’t rely on until they’ve been vetted.
(It’s still important to make sure something like AI Safety Camp is done well, because if it’s done poorly at scale it can produce a confusing morass of training tools of questionable quality. This isn’t a warning not to try it, just to be careful when you do.)
Interesting, this definitely seems possible. Are there any examples of EA projects that failed, resulting in less enthusiasm for EA projects generally?