I’m confused by the strong negative reaction to this comment. I guess it’s about the CoGi funding, on which it does sound like I was wrong. But it seems to be true that there’s no option to directly apply for funding for a new project (NickLaing mentions the GH funding circle, but they completed one round last year and their website doesn’t currently suggest there will be any more).
I think this helps explain the decline of GHD described in the OP: AIM’s charity list notwithstanding, no-one in the movement is incentivised to come up with practical ideas in the field.
>I think it’s historically pretty incorrect that the grounding in cost-effectiveness is what made EA good
FWIW, the reasons you’re giving here are closely related to the reasons why I’m sceptical that modern AI-focused EA is in fact as good. I don’t think it’s unreasonable to support AI safety work, but I think it throws away most of the epistemics that could make EA a long-term robustly positive influence. EA’s original tagline was ‘using evidence and reason’, but the extreme AI safety focus seems to drop the ‘evidence’ part.
To believe you should focus on AI safety, you need to believe all of:

- short timelines;
- either
  - no trend of convergence between intelligence and morality, or
  - that such convergence wouldn’t matter, or wouldn’t be enough to avoid moral disaster;
- either
  - long timelines on other GCRs, or
  - that other GCRs don’t really matter to humanity’s long-term prospects;
- zero discounting on future people;
- that a flourishing human future is +EV;
- that trying to improve average welfare in a flourishing future is less good than trying to increase the probability of a flourishing future;
- reasonable confidence that AI safety work has learned from its past mistakes and will be reliably +EV;
- that there won’t be a sufficient public shift towards AI safety to make marginal work on it low-leverage;
- that you personally have more comparative advantage working on AI safety than on any other cause;

and surely some further assumptions I’ve missed, plus many ways to further unpack these premises. To advocate work on AI safety as the primary EA cause, you need to believe that the final bullet applies to the majority of your audience.
But I think there’s plenty in that list of assumptions that’s easy to disagree with, and a lot of entangled assumptions whose entanglement, to my knowledge, hasn’t really been explored (e.g. I find it hard to credit both that there’s no convergence between intelligence and morality and that there’s a long-term equilibrium which is both stable and, in some nontrivial sense, positive or desirable).
So I semi-agree with the in-principle scepticism of @MichaelDickens’s original comment, while wondering whether in practice it might end up promoting causes that feel closer to what I view as the original spirit of the movement.
There are also some practical concerns in the OP that I think EA has dropped the ball on, such as building the sort of real community that would have retained greater support/membership over the years (my impression is that the substantial majority of EAs who joined the movement more than 6 or 7 years ago have largely disengaged from it).
So I guess I’m noncommittally hopeful that this becomes something valuable, and remains, as Euan said, symbiotic with EA. If it just gives people who would have been somewhat supportive but felt too constrained a way to stay engaged with an encouraging community, that seems like it could be high value.