>I think it’s historically pretty incorrect that the grounding in cost-effectiveness is what made EA good
FWIW the reasons you’re giving here are closely related to the reasons why I’m sceptical that modern AI-focused EA is in fact as good. I don’t think it’s unreasonable to support AI safety work, but I think it’s throwing away most of the epistemics that could make EA a long-term robustly positive influence. EA’s original tagline used to be ‘using evidence and reason’, but the extreme AI safety focus seems to drop the ‘evidence’ part.
To believe you should focus on AI safety, you need to believe all of:

- short timelines
- either
  - no trend of convergence between intelligence and morality, or
  - the view that such convergence wouldn't matter or wouldn't be enough to avoid moral disaster
- either
  - long timelines on other GCRs, or
  - other GCRs not really mattering to humanity's long-term prospects
- zero discounting on future people
- that a flourishing human future is +EV
- that trying to improve average welfare in a flourishing future is less good than trying to increase the probability of a flourishing future
- reasonable confidence that AI safety work has learned from its past mistakes and will be reliably +EV
- that there won't be a sufficient public shift towards AI safety to make it low enough leverage that less crowded causes would be better bets
- that you personally have more comparative advantage working on AI safety than on any other cause
and surely some further assumptions I’ve missed, and many ways to further unpack these premises. To advocate work on AI safety as the primary EA cause you need to believe that the final bullet applies to the majority of your audience.
But I think there’s plenty in that list of assumptions that’s easy to disagree with, and a lot of entangled assumptions whose entanglement to my knowledge hasn’t really been explored (e.g. I find it hard to credit both that there’s no convergence between intelligence and morality and that there’s a long term equilibrium which is both stable and in some nontrivial sense positive or desirable).
So I semi-agree with @MichaelDickens's original comment's in-principle scepticism, while wondering whether in practice int/a might end up promoting causes that feel closer to what I view as the original spirit of the movement.
There are also some practical concerns in the OP that I think EA has dropped the ball on, such as building the sort of real community that would have retained greater support/membership over the years (my impression is that the substantial majority of EAs who joined the movement more than 6 or 7 years ago have largely disengaged from it).
So I guess I'm noncommittally hopeful that this becomes something valuable—and remains, as Euan said, symbiotic with EA. If it just gives people who would have been somewhat supportive but felt too constrained a way to stay engaged with an encouraging community, that seems like it could be high value.
>FWIW the reasons you’re giving here are closely related to the reasons why I’m sceptical that modern AI-focused EA is in fact as good.
In hindsight I shouldn’t have used the phrase “what made EA good”, since by this point I’m skeptical about both the AI safety version and the “original spirit” of EA. I guess that makes me one of the people you’re describing who joined a long time ago (in my case, over a decade) and have now disengaged.
I do think that int/a is less likely than EA to be significantly harmful, and I’m excited about that. Whether or not it has a decent chance of actually doing something meaningful will depend a lot on the vision of the founders (and Euan in particular). Right now I’m not seeing what will prevent it from dissolving into the background of general hippie-adjacent things (kinda like a lot of the Game B and metacrisis stuff seems to have done). But we’ll see.